Gentoo Archives: gentoo-commits

From: "Tom Wijsman (tomwij)" <tomwij@g.o>
To: gentoo-commits@l.g.o
Subject: [gentoo-commits] linux-patches r2497 - in genpatches-2.6/trunk: . 3.10.7
Date: Thu, 29 Aug 2013 12:09:23
Message-Id: 20130829120912.B24412004C@flycatcher.gentoo.org
1 Author: tomwij
2 Date: 2013-08-29 12:09:12 +0000 (Thu, 29 Aug 2013)
3 New Revision: 2497
4
5 Added:
6 genpatches-2.6/trunk/3.10.7/
7 genpatches-2.6/trunk/3.10.7/0000_README
8 genpatches-2.6/trunk/3.10.7/1000_linux-3.10.1.patch
9 genpatches-2.6/trunk/3.10.7/1001_linux-3.10.2.patch
10 genpatches-2.6/trunk/3.10.7/1002_linux-3.10.3.patch
11 genpatches-2.6/trunk/3.10.7/1003_linux-3.10.4.patch
12 genpatches-2.6/trunk/3.10.7/1004_linux-3.10.5.patch
13 genpatches-2.6/trunk/3.10.7/1005_linux-3.10.6.patch
14 genpatches-2.6/trunk/3.10.7/1006_linux-3.10.7.patch
15 genpatches-2.6/trunk/3.10.7/1500_XATTR_USER_PREFIX.patch
16 genpatches-2.6/trunk/3.10.7/1700_enable-thinkpad-micled.patch
17 genpatches-2.6/trunk/3.10.7/1801_block-cgroups-kconfig-build-bits-for-BFQ-v6r2-3.10.patch
18 genpatches-2.6/trunk/3.10.7/1802_block-introduce-the-BFQ-v6r2-I-O-sched-for-3.10.patch1
19 genpatches-2.6/trunk/3.10.7/1803_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v6r2-for-3.10.0.patch1
20 genpatches-2.6/trunk/3.10.7/2400_kcopy-patch-for-infiniband-driver.patch
21 genpatches-2.6/trunk/3.10.7/2700_ThinkPad-30-brightness-control-fix.patch
22 genpatches-2.6/trunk/3.10.7/2900_dev-root-proc-mount-fix.patch
23 genpatches-2.6/trunk/3.10.7/4200_fbcondecor-0.9.6.patch
24 genpatches-2.6/trunk/3.10.7/4500_nouveau-video-output-control-Kconfig.patch
25 genpatches-2.6/trunk/3.10.7/4567_distro-Gentoo-Kconfig.patch
26 Log:
27 Import 3.10-13 (3.10.7 release) as 3.10.7 branch, to bring security fixes to stable.
28
29 Added: genpatches-2.6/trunk/3.10.7/0000_README
30 ===================================================================
31 --- genpatches-2.6/trunk/3.10.7/0000_README (rev 0)
32 +++ genpatches-2.6/trunk/3.10.7/0000_README 2013-08-29 12:09:12 UTC (rev 2497)
33 @@ -0,0 +1,116 @@
34 +README
35 +--------------------------------------------------------------------------
36 +This patchset is to be the 2.6 series of gentoo-sources.
37 +It is designed for cross-compatibility, fixes and stability, with performance
38 +and additional features/driver support being a secondary concern.
39 +
40 +Unless otherwise stated and marked as such, this kernel should be suitable for
41 +all environments.
42 +
43 +
44 +Patchset Numbering Scheme
45 +--------------------------------------------------------------------------
46 +
47 +FIXES
48 +1000-1400 linux-stable
49 +1400-1500 linux-stable queue
50 +1500-1700 security
51 +1700-1800 architecture-related
52 +1800-1900 mm/scheduling/misc
53 +1900-2000 filesystems
54 +2000-2100 networking core
55 +2100-2200 storage core
56 +2200-2300 power management (ACPI, APM)
57 +2300-2400 bus (USB, IEEE1394, PCI, PCMCIA, ...)
58 +2400-2500 network drivers
59 +2500-2600 storage drivers
60 +2600-2700 input
61 +2700-2900 media (graphics, sound, tv)
62 +2900-3000 other
63 +3000-4000 reserved
64 +
65 +FEATURES
66 +4000-4100 network
67 +4100-4200 storage
68 +4200-4300 graphics
69 +4300-4400 filesystem
70 +4400-4500 security enhancement
71 +4500-4600 other
72 +
73 +Individual Patch Descriptions:
74 +--------------------------------------------------------------------------
75 +Patch: 1000_linux-3.10.1.patch
76 +From: http://www.kernel.org
77 +Desc: Linux 3.10.1
78 +
79 +Patch: 1001_linux-3.10.2.patch
80 +From: http://www.kernel.org
81 +Desc: Linux 3.10.2
82 +
83 +Patch: 1002_linux-3.10.3.patch
84 +From: http://www.kernel.org
85 +Desc: Linux 3.10.3
86 +
87 +Patch: 1003_linux-3.10.4.patch
88 +From: http://www.kernel.org
89 +Desc: Linux 3.10.4
90 +
91 +Patch: 1004_linux-3.10.5.patch
92 +From: http://www.kernel.org
93 +Desc: Linux 3.10.5
94 +
95 +Patch: 1005_linux-3.10.6.patch
96 +From: http://www.kernel.org
97 +Desc: Linux 3.10.6
98 +
99 +Patch: 1006_linux-3.10.7.patch
100 +From: http://www.kernel.org
101 +Desc: Linux 3.10.7
102 +
103 +Patch: 1500_XATTR_USER_PREFIX.patch
104 +From: https://bugs.gentoo.org/show_bug.cgi?id=470644
105 +Desc: Support for namespace user.pax.* on tmpfs.
106 +
107 +Patch: 1700_enable-thinkpad-micled.patch
108 +From: https://bugs.gentoo.org/show_bug.cgi?id=449248
109 +Desc: Enable the mic mute LED on ThinkPads
110 +
111 +Patch: 1800_memcg-OOM-revert-ZFS-deadlock.patch
112 +From: https://bugs.gentoo.org/show_bug.cgi?id=462066
113 +Desc: Revert memcg patches that prevent OOM with too many dirty pages.
114 +
115 +Patch: 1801_block-cgroups-kconfig-build-bits-for-BFQ-v6r2-3.10.patch
116 +From: http://algo.ing.unimo.it/people/paolo/disk_sched/
117 +Desc: BFQ v6r2 patch 1 for 3.10: Build, cgroups and kconfig bits
118 +
119 +Patch: 1802_block-introduce-the-BFQ-v6r2-I-O-sched-for-3.10.patch1
120 +From: http://algo.ing.unimo.it/people/paolo/disk_sched/
121 +Desc: BFQ v6r2 patch 2 for 3.10: BFQ Scheduler
122 +
123 +Patch: 1803_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v6r2-for-3.10.0.patch1
124 +From: http://algo.ing.unimo.it/people/paolo/disk_sched/
125 +Desc: BFQ v6r2 patch 3 for 3.10: Early Queue Merge (EQM)
126 +
127 +Patch: 2400_kcopy-patch-for-infiniband-driver.patch
128 +From: Alexey Shvetsov <alexxy@g.o>
129 +Desc: Zero copy for infiniband psm userspace driver
130 +
131 +Patch: 2700_ThinkPad-30-brightness-control-fix.patch
132 +From: Seth Forshee <seth.forshee@×××××××××.com>
133 +Desc: ACPI: Disable Windows 8 compatibility for some Lenovo ThinkPads
134 +
135 +Patch: 2900_dev-root-proc-mount-fix.patch
136 +From: https://bugs.gentoo.org/show_bug.cgi?id=438380
137 +Desc: Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.
138 +
139 +Patch: 4200_fbcondecor-0.9.6.patch
140 +From: http://dev.gentoo.org/~spock
141 +Desc: Bootsplash successor by Michal Januszewski, ported by Jeremy (bug #452574)
142 +
143 +Patch: 4500_nouveau-video-output-control-Kconfig.patch
144 +From: Tom Wijsman <TomWij@g.o>
145 +Desc: Make DRM_NOUVEAU select VIDEO_OUTPUT_CONTROL; fixes bug #475748.
146 +
147 +Patch: 4567_distro-Gentoo-Kconfig.patch
148 +From: Tom Wijsman <TomWij@g.o>
149 +Desc: Add Gentoo Linux support config settings and defaults.
150
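The numbering above also gives the application order: the patches go on in ascending numeric order on top of a vanilla 3.10 tree, and the incremental linux-stable patches 1000-1006 each build on the previous one (as the Makefile SUBLEVEL hunks below show). A minimal shell sketch of applying such a directory by hand, assuming GNU patch and a hypothetical /usr/src/linux-3.10 tree (in practice the gentoo-sources ebuild takes care of this):

    # Sketch only: apply a genpatches-style directory in numeric order.
    # 0000_README is skipped because it does not match the *.patch* glob.
    cd /usr/src/linux-3.10
    for p in /path/to/3.10.7/[0-9]*.patch*; do
        echo "Applying ${p##*/}"
        patch -p1 -s -f < "$p" || { echo "Failed on $p" >&2; break; }
    done
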
151 Added: genpatches-2.6/trunk/3.10.7/1000_linux-3.10.1.patch
152 ===================================================================
153 --- genpatches-2.6/trunk/3.10.7/1000_linux-3.10.1.patch (rev 0)
154 +++ genpatches-2.6/trunk/3.10.7/1000_linux-3.10.1.patch 2013-08-29 12:09:12 UTC (rev 2497)
155 @@ -0,0 +1,511 @@
156 +diff --git a/MAINTAINERS b/MAINTAINERS
157 +index ad7e322..48c7480 100644
158 +--- a/MAINTAINERS
159 ++++ b/MAINTAINERS
160 +@@ -7667,6 +7667,7 @@ STABLE BRANCH
161 + M: Greg Kroah-Hartman <gregkh@×××××××××××××××.org>
162 + L: stable@×××××××××××.org
163 + S: Supported
164 ++F: Documentation/stable_kernel_rules.txt
165 +
166 + STAGING SUBSYSTEM
167 + M: Greg Kroah-Hartman <gregkh@×××××××××××××××.org>
168 +diff --git a/Makefile b/Makefile
169 +index e5e3ba0..b75cc30 100644
170 +--- a/Makefile
171 ++++ b/Makefile
172 +@@ -1,6 +1,6 @@
173 + VERSION = 3
174 + PATCHLEVEL = 10
175 +-SUBLEVEL = 0
176 ++SUBLEVEL = 1
177 + EXTRAVERSION =
178 + NAME = Unicycling Gorilla
179 +
180 +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
181 +index 260a919..5402c94 100644
182 +--- a/arch/x86/kvm/vmx.c
183 ++++ b/arch/x86/kvm/vmx.c
184 +@@ -3399,15 +3399,22 @@ static void vmx_get_segment(struct kvm_vcpu *vcpu,
185 + var->limit = vmx_read_guest_seg_limit(vmx, seg);
186 + var->selector = vmx_read_guest_seg_selector(vmx, seg);
187 + ar = vmx_read_guest_seg_ar(vmx, seg);
188 ++ var->unusable = (ar >> 16) & 1;
189 + var->type = ar & 15;
190 + var->s = (ar >> 4) & 1;
191 + var->dpl = (ar >> 5) & 3;
192 +- var->present = (ar >> 7) & 1;
193 ++ /*
194 ++ * Some userspaces do not preserve unusable property. Since usable
195 ++ * segment has to be present according to VMX spec we can use present
196 ++ * property to amend userspace bug by making unusable segment always
197 ++ * nonpresent. vmx_segment_access_rights() already marks nonpresent
198 ++ * segment as unusable.
199 ++ */
200 ++ var->present = !var->unusable;
201 + var->avl = (ar >> 12) & 1;
202 + var->l = (ar >> 13) & 1;
203 + var->db = (ar >> 14) & 1;
204 + var->g = (ar >> 15) & 1;
205 +- var->unusable = (ar >> 16) & 1;
206 + }
207 +
208 + static u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg)
209 +diff --git a/block/genhd.c b/block/genhd.c
210 +index 20625ee..cdeb527 100644
211 +--- a/block/genhd.c
212 ++++ b/block/genhd.c
213 +@@ -512,7 +512,7 @@ static void register_disk(struct gendisk *disk)
214 +
215 + ddev->parent = disk->driverfs_dev;
216 +
217 +- dev_set_name(ddev, disk->disk_name);
218 ++ dev_set_name(ddev, "%s", disk->disk_name);
219 +
220 + /* delay uevents, until we scanned partition table */
221 + dev_set_uevent_suppress(ddev, 1);
222 +diff --git a/crypto/algapi.c b/crypto/algapi.c
223 +index 6149a6e..7a1ae87 100644
224 +--- a/crypto/algapi.c
225 ++++ b/crypto/algapi.c
226 +@@ -495,7 +495,8 @@ static struct crypto_template *__crypto_lookup_template(const char *name)
227 +
228 + struct crypto_template *crypto_lookup_template(const char *name)
229 + {
230 +- return try_then_request_module(__crypto_lookup_template(name), name);
231 ++ return try_then_request_module(__crypto_lookup_template(name), "%s",
232 ++ name);
233 + }
234 + EXPORT_SYMBOL_GPL(crypto_lookup_template);
235 +
236 +diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
237 +index 037288e..46b35f7 100644
238 +--- a/drivers/block/nbd.c
239 ++++ b/drivers/block/nbd.c
240 +@@ -714,7 +714,8 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
241 + else
242 + blk_queue_flush(nbd->disk->queue, 0);
243 +
244 +- thread = kthread_create(nbd_thread, nbd, nbd->disk->disk_name);
245 ++ thread = kthread_create(nbd_thread, nbd, "%s",
246 ++ nbd->disk->disk_name);
247 + if (IS_ERR(thread)) {
248 + mutex_lock(&nbd->tx_lock);
249 + return PTR_ERR(thread);
250 +diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
251 +index d620b44..8a3aff7 100644
252 +--- a/drivers/cdrom/cdrom.c
253 ++++ b/drivers/cdrom/cdrom.c
254 +@@ -2882,7 +2882,7 @@ static noinline int mmc_ioctl_cdrom_read_data(struct cdrom_device_info *cdi,
255 + if (lba < 0)
256 + return -EINVAL;
257 +
258 +- cgc->buffer = kmalloc(blocksize, GFP_KERNEL);
259 ++ cgc->buffer = kzalloc(blocksize, GFP_KERNEL);
260 + if (cgc->buffer == NULL)
261 + return -ENOMEM;
262 +
263 +diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
264 +index fb65dec..591b6fb 100644
265 +--- a/drivers/cpufreq/cpufreq_stats.c
266 ++++ b/drivers/cpufreq/cpufreq_stats.c
267 +@@ -349,6 +349,7 @@ static int __cpuinit cpufreq_stat_cpu_callback(struct notifier_block *nfb,
268 +
269 + switch (action) {
270 + case CPU_ONLINE:
271 ++ case CPU_ONLINE_FROZEN:
272 + cpufreq_update_policy(cpu);
273 + break;
274 + case CPU_DOWN_PREPARE:
275 +diff --git a/drivers/power/charger-manager.c b/drivers/power/charger-manager.c
276 +index fefc39f..98de1dd 100644
277 +--- a/drivers/power/charger-manager.c
278 ++++ b/drivers/power/charger-manager.c
279 +@@ -450,7 +450,7 @@ static void uevent_notify(struct charger_manager *cm, const char *event)
280 + strncpy(env_str, event, UEVENT_BUF_SIZE);
281 + kobject_uevent(&cm->dev->kobj, KOBJ_CHANGE);
282 +
283 +- dev_info(cm->dev, event);
284 ++ dev_info(cm->dev, "%s", event);
285 + }
286 +
287 + /**
288 +diff --git a/drivers/scsi/osd/osd_uld.c b/drivers/scsi/osd/osd_uld.c
289 +index 0fab6b5..9d86947 100644
290 +--- a/drivers/scsi/osd/osd_uld.c
291 ++++ b/drivers/scsi/osd/osd_uld.c
292 +@@ -485,7 +485,7 @@ static int osd_probe(struct device *dev)
293 + oud->class_dev.class = &osd_uld_class;
294 + oud->class_dev.parent = dev;
295 + oud->class_dev.release = __remove;
296 +- error = dev_set_name(&oud->class_dev, disk->disk_name);
297 ++ error = dev_set_name(&oud->class_dev, "%s", disk->disk_name);
298 + if (error) {
299 + OSD_ERR("dev_set_name failed => %d\n", error);
300 + goto err_put_cdev;
301 +diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
302 +index c1c5552..6f6a1b4 100644
303 +--- a/drivers/scsi/sd.c
304 ++++ b/drivers/scsi/sd.c
305 +@@ -142,7 +142,7 @@ sd_store_cache_type(struct device *dev, struct device_attribute *attr,
306 + char *buffer_data;
307 + struct scsi_mode_data data;
308 + struct scsi_sense_hdr sshdr;
309 +- const char *temp = "temporary ";
310 ++ static const char temp[] = "temporary ";
311 + int len;
312 +
313 + if (sdp->type != TYPE_DISK)
314 +diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
315 +index 26e3a97..c52948b 100644
316 +--- a/drivers/tty/serial/8250/8250_pci.c
317 ++++ b/drivers/tty/serial/8250/8250_pci.c
318 +@@ -4797,10 +4797,6 @@ static struct pci_device_id serial_pci_tbl[] = {
319 + PCI_VENDOR_ID_IBM, 0x0299,
320 + 0, 0, pbn_b0_bt_2_115200 },
321 +
322 +- { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9835,
323 +- 0x1000, 0x0012,
324 +- 0, 0, pbn_b0_bt_2_115200 },
325 +-
326 + { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9901,
327 + 0xA000, 0x1000,
328 + 0, 0, pbn_b0_1_115200 },
329 +diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
330 +index 6464029..4476682 100644
331 +--- a/drivers/tty/tty_io.c
332 ++++ b/drivers/tty/tty_io.c
333 +@@ -1618,6 +1618,8 @@ static void release_tty(struct tty_struct *tty, int idx)
334 + tty_free_termios(tty);
335 + tty_driver_remove_tty(tty->driver, tty);
336 + tty->port->itty = NULL;
337 ++ if (tty->link)
338 ++ tty->link->port->itty = NULL;
339 + cancel_work_sync(&tty->port->buf.work);
340 +
341 + if (tty->link)
342 +diff --git a/fs/ceph/xattr.c b/fs/ceph/xattr.c
343 +index 9b6b2b6..be661d8 100644
344 +--- a/fs/ceph/xattr.c
345 ++++ b/fs/ceph/xattr.c
346 +@@ -675,17 +675,18 @@ ssize_t ceph_getxattr(struct dentry *dentry, const char *name, void *value,
347 + if (!ceph_is_valid_xattr(name))
348 + return -ENODATA;
349 +
350 +- spin_lock(&ci->i_ceph_lock);
351 +- dout("getxattr %p ver=%lld index_ver=%lld\n", inode,
352 +- ci->i_xattrs.version, ci->i_xattrs.index_version);
353 +
354 + /* let's see if a virtual xattr was requested */
355 + vxattr = ceph_match_vxattr(inode, name);
356 + if (vxattr && !(vxattr->exists_cb && !vxattr->exists_cb(ci))) {
357 + err = vxattr->getxattr_cb(ci, value, size);
358 +- goto out;
359 ++ return err;
360 + }
361 +
362 ++ spin_lock(&ci->i_ceph_lock);
363 ++ dout("getxattr %p ver=%lld index_ver=%lld\n", inode,
364 ++ ci->i_xattrs.version, ci->i_xattrs.index_version);
365 ++
366 + if (__ceph_caps_issued_mask(ci, CEPH_CAP_XATTR_SHARED, 1) &&
367 + (ci->i_xattrs.index_version >= ci->i_xattrs.version)) {
368 + goto get_xattr;
369 +diff --git a/fs/hpfs/map.c b/fs/hpfs/map.c
370 +index 4acb19d..803d3da 100644
371 +--- a/fs/hpfs/map.c
372 ++++ b/fs/hpfs/map.c
373 +@@ -17,7 +17,8 @@ __le32 *hpfs_map_bitmap(struct super_block *s, unsigned bmp_block,
374 + struct quad_buffer_head *qbh, char *id)
375 + {
376 + secno sec;
377 +- if (hpfs_sb(s)->sb_chk) if (bmp_block * 16384 > hpfs_sb(s)->sb_fs_size) {
378 ++ unsigned n_bands = (hpfs_sb(s)->sb_fs_size + 0x3fff) >> 14;
379 ++ if (hpfs_sb(s)->sb_chk) if (bmp_block >= n_bands) {
380 + hpfs_error(s, "hpfs_map_bitmap called with bad parameter: %08x at %s", bmp_block, id);
381 + return NULL;
382 + }
383 +diff --git a/fs/hpfs/super.c b/fs/hpfs/super.c
384 +index a0617e7..962e90c 100644
385 +--- a/fs/hpfs/super.c
386 ++++ b/fs/hpfs/super.c
387 +@@ -558,7 +558,13 @@ static int hpfs_fill_super(struct super_block *s, void *options, int silent)
388 + sbi->sb_cp_table = NULL;
389 + sbi->sb_c_bitmap = -1;
390 + sbi->sb_max_fwd_alloc = 0xffffff;
391 +-
392 ++
393 ++ if (sbi->sb_fs_size >= 0x80000000) {
394 ++ hpfs_error(s, "invalid size in superblock: %08x",
395 ++ (unsigned)sbi->sb_fs_size);
396 ++ goto bail4;
397 ++ }
398 ++
399 + /* Load bitmap directory */
400 + if (!(sbi->sb_bmp_dir = hpfs_load_bitmap_directory(s, le32_to_cpu(superblock->bitmaps))))
401 + goto bail4;
402 +diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
403 +index 1fab140..2c37442 100644
404 +--- a/fs/nfs/nfs4state.c
405 ++++ b/fs/nfs/nfs4state.c
406 +@@ -228,19 +228,8 @@ static int nfs41_setup_state_renewal(struct nfs_client *clp)
407 + return status;
408 + }
409 +
410 +-/*
411 +- * Back channel returns NFS4ERR_DELAY for new requests when
412 +- * NFS4_SESSION_DRAINING is set so there is no work to be done when draining
413 +- * is ended.
414 +- */
415 +-static void nfs4_end_drain_session(struct nfs_client *clp)
416 ++static void nfs4_end_drain_slot_table(struct nfs4_slot_table *tbl)
417 + {
418 +- struct nfs4_session *ses = clp->cl_session;
419 +- struct nfs4_slot_table *tbl;
420 +-
421 +- if (ses == NULL)
422 +- return;
423 +- tbl = &ses->fc_slot_table;
424 + if (test_and_clear_bit(NFS4_SLOT_TBL_DRAINING, &tbl->slot_tbl_state)) {
425 + spin_lock(&tbl->slot_tbl_lock);
426 + nfs41_wake_slot_table(tbl);
427 +@@ -248,6 +237,16 @@ static void nfs4_end_drain_session(struct nfs_client *clp)
428 + }
429 + }
430 +
431 ++static void nfs4_end_drain_session(struct nfs_client *clp)
432 ++{
433 ++ struct nfs4_session *ses = clp->cl_session;
434 ++
435 ++ if (ses != NULL) {
436 ++ nfs4_end_drain_slot_table(&ses->bc_slot_table);
437 ++ nfs4_end_drain_slot_table(&ses->fc_slot_table);
438 ++ }
439 ++}
440 ++
441 + /*
442 + * Signal state manager thread if session fore channel is drained
443 + */
444 +diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
445 +index 6cd86e0..582321a 100644
446 +--- a/fs/nfsd/nfs4xdr.c
447 ++++ b/fs/nfsd/nfs4xdr.c
448 +@@ -162,8 +162,8 @@ static __be32 *read_buf(struct nfsd4_compoundargs *argp, u32 nbytes)
449 + */
450 + memcpy(p, argp->p, avail);
451 + /* step to next page */
452 +- argp->p = page_address(argp->pagelist[0]);
453 + argp->pagelist++;
454 ++ argp->p = page_address(argp->pagelist[0]);
455 + if (argp->pagelen < PAGE_SIZE) {
456 + argp->end = argp->p + (argp->pagelen>>2);
457 + argp->pagelen = 0;
458 +diff --git a/include/linux/ceph/decode.h b/include/linux/ceph/decode.h
459 +index 379f715..0442c3d 100644
460 +--- a/include/linux/ceph/decode.h
461 ++++ b/include/linux/ceph/decode.h
462 +@@ -160,11 +160,6 @@ static inline void ceph_decode_timespec(struct timespec *ts,
463 + static inline void ceph_encode_timespec(struct ceph_timespec *tv,
464 + const struct timespec *ts)
465 + {
466 +- BUG_ON(ts->tv_sec < 0);
467 +- BUG_ON(ts->tv_sec > (__kernel_time_t)U32_MAX);
468 +- BUG_ON(ts->tv_nsec < 0);
469 +- BUG_ON(ts->tv_nsec > (long)U32_MAX);
470 +-
471 + tv->tv_sec = cpu_to_le32((u32)ts->tv_sec);
472 + tv->tv_nsec = cpu_to_le32((u32)ts->tv_nsec);
473 + }
474 +diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
475 +index 6b4890f..feaf0c7 100644
476 +--- a/include/linux/hugetlb.h
477 ++++ b/include/linux/hugetlb.h
478 +@@ -358,6 +358,17 @@ static inline int hstate_index(struct hstate *h)
479 + return h - hstates;
480 + }
481 +
482 ++pgoff_t __basepage_index(struct page *page);
483 ++
484 ++/* Return page->index in PAGE_SIZE units */
485 ++static inline pgoff_t basepage_index(struct page *page)
486 ++{
487 ++ if (!PageCompound(page))
488 ++ return page->index;
489 ++
490 ++ return __basepage_index(page);
491 ++}
492 ++
493 + #else /* CONFIG_HUGETLB_PAGE */
494 + struct hstate {};
495 + #define alloc_huge_page_node(h, nid) NULL
496 +@@ -378,6 +389,11 @@ static inline unsigned int pages_per_huge_page(struct hstate *h)
497 + }
498 + #define hstate_index_to_shift(index) 0
499 + #define hstate_index(h) 0
500 ++
501 ++static inline pgoff_t basepage_index(struct page *page)
502 ++{
503 ++ return page->index;
504 ++}
505 + #endif /* CONFIG_HUGETLB_PAGE */
506 +
507 + #endif /* _LINUX_HUGETLB_H */
508 +diff --git a/kernel/futex.c b/kernel/futex.c
509 +index b26dcfc..49dacfb 100644
510 +--- a/kernel/futex.c
511 ++++ b/kernel/futex.c
512 +@@ -61,6 +61,7 @@
513 + #include <linux/nsproxy.h>
514 + #include <linux/ptrace.h>
515 + #include <linux/sched/rt.h>
516 ++#include <linux/hugetlb.h>
517 +
518 + #include <asm/futex.h>
519 +
520 +@@ -365,7 +366,7 @@ again:
521 + } else {
522 + key->both.offset |= FUT_OFF_INODE; /* inode-based key */
523 + key->shared.inode = page_head->mapping->host;
524 +- key->shared.pgoff = page_head->index;
525 ++ key->shared.pgoff = basepage_index(page);
526 + }
527 +
528 + get_futex_key_refs(key);
529 +diff --git a/kernel/module.c b/kernel/module.c
530 +index cab4bce..fa53db8 100644
531 +--- a/kernel/module.c
532 ++++ b/kernel/module.c
533 +@@ -2927,7 +2927,6 @@ static struct module *layout_and_allocate(struct load_info *info, int flags)
534 + {
535 + /* Module within temporary copy. */
536 + struct module *mod;
537 +- Elf_Shdr *pcpusec;
538 + int err;
539 +
540 + mod = setup_load_info(info, flags);
541 +@@ -2942,17 +2941,10 @@ static struct module *layout_and_allocate(struct load_info *info, int flags)
542 + err = module_frob_arch_sections(info->hdr, info->sechdrs,
543 + info->secstrings, mod);
544 + if (err < 0)
545 +- goto out;
546 ++ return ERR_PTR(err);
547 +
548 +- pcpusec = &info->sechdrs[info->index.pcpu];
549 +- if (pcpusec->sh_size) {
550 +- /* We have a special allocation for this section. */
551 +- err = percpu_modalloc(mod,
552 +- pcpusec->sh_size, pcpusec->sh_addralign);
553 +- if (err)
554 +- goto out;
555 +- pcpusec->sh_flags &= ~(unsigned long)SHF_ALLOC;
556 +- }
557 ++ /* We will do a special allocation for per-cpu sections later. */
558 ++ info->sechdrs[info->index.pcpu].sh_flags &= ~(unsigned long)SHF_ALLOC;
559 +
560 + /* Determine total sizes, and put offsets in sh_entsize. For now
561 + this is done generically; there doesn't appear to be any
562 +@@ -2963,17 +2955,22 @@ static struct module *layout_and_allocate(struct load_info *info, int flags)
563 + /* Allocate and move to the final place */
564 + err = move_module(mod, info);
565 + if (err)
566 +- goto free_percpu;
567 ++ return ERR_PTR(err);
568 +
569 + /* Module has been copied to its final place now: return it. */
570 + mod = (void *)info->sechdrs[info->index.mod].sh_addr;
571 + kmemleak_load_module(mod, info);
572 + return mod;
573 ++}
574 +
575 +-free_percpu:
576 +- percpu_modfree(mod);
577 +-out:
578 +- return ERR_PTR(err);
579 ++static int alloc_module_percpu(struct module *mod, struct load_info *info)
580 ++{
581 ++ Elf_Shdr *pcpusec = &info->sechdrs[info->index.pcpu];
582 ++ if (!pcpusec->sh_size)
583 ++ return 0;
584 ++
585 ++ /* We have a special allocation for this section. */
586 ++ return percpu_modalloc(mod, pcpusec->sh_size, pcpusec->sh_addralign);
587 + }
588 +
589 + /* mod is no longer valid after this! */
590 +@@ -3237,6 +3234,11 @@ static int load_module(struct load_info *info, const char __user *uargs,
591 + }
592 + #endif
593 +
594 ++ /* To avoid stressing percpu allocator, do this once we're unique. */
595 ++ err = alloc_module_percpu(mod, info);
596 ++ if (err)
597 ++ goto unlink_mod;
598 ++
599 + /* Now module is in final location, initialize linked lists, etc. */
600 + err = module_unload_init(mod);
601 + if (err)
602 +diff --git a/mm/hugetlb.c b/mm/hugetlb.c
603 +index e2bfbf7..5cf99bf 100644
604 +--- a/mm/hugetlb.c
605 ++++ b/mm/hugetlb.c
606 +@@ -690,6 +690,23 @@ int PageHuge(struct page *page)
607 + }
608 + EXPORT_SYMBOL_GPL(PageHuge);
609 +
610 ++pgoff_t __basepage_index(struct page *page)
611 ++{
612 ++ struct page *page_head = compound_head(page);
613 ++ pgoff_t index = page_index(page_head);
614 ++ unsigned long compound_idx;
615 ++
616 ++ if (!PageHuge(page_head))
617 ++ return page_index(page);
618 ++
619 ++ if (compound_order(page_head) >= MAX_ORDER)
620 ++ compound_idx = page_to_pfn(page) - page_to_pfn(page_head);
621 ++ else
622 ++ compound_idx = page - page_head;
623 ++
624 ++ return (index << compound_order(page_head)) + compound_idx;
625 ++}
626 ++
627 + static struct page *alloc_fresh_huge_page_node(struct hstate *h, int nid)
628 + {
629 + struct page *page;
630 +diff --git a/mm/memcontrol.c b/mm/memcontrol.c
631 +index 1947218..fd79df5 100644
632 +--- a/mm/memcontrol.c
633 ++++ b/mm/memcontrol.c
634 +@@ -6303,8 +6303,6 @@ mem_cgroup_css_online(struct cgroup *cont)
635 + * call __mem_cgroup_free, so return directly
636 + */
637 + mem_cgroup_put(memcg);
638 +- if (parent->use_hierarchy)
639 +- mem_cgroup_put(parent);
640 + }
641 + return error;
642 + }
643 +diff --git a/net/ceph/auth_none.c b/net/ceph/auth_none.c
644 +index 925ca58..8c93fa8 100644
645 +--- a/net/ceph/auth_none.c
646 ++++ b/net/ceph/auth_none.c
647 +@@ -39,6 +39,11 @@ static int should_authenticate(struct ceph_auth_client *ac)
648 + return xi->starting;
649 + }
650 +
651 ++static int build_request(struct ceph_auth_client *ac, void *buf, void *end)
652 ++{
653 ++ return 0;
654 ++}
655 ++
656 + /*
657 + * the generic auth code decode the global_id, and we carry no actual
658 + * authenticate state, so nothing happens here.
659 +@@ -106,6 +111,7 @@ static const struct ceph_auth_client_ops ceph_auth_none_ops = {
660 + .destroy = destroy,
661 + .is_authenticated = is_authenticated,
662 + .should_authenticate = should_authenticate,
663 ++ .build_request = build_request,
664 + .handle_reply = handle_reply,
665 + .create_authorizer = ceph_auth_none_create_authorizer,
666 + .destroy_authorizer = ceph_auth_none_destroy_authorizer,
667
668 Added: genpatches-2.6/trunk/3.10.7/1001_linux-3.10.2.patch
669 ===================================================================
670 --- genpatches-2.6/trunk/3.10.7/1001_linux-3.10.2.patch (rev 0)
671 +++ genpatches-2.6/trunk/3.10.7/1001_linux-3.10.2.patch 2013-08-29 12:09:12 UTC (rev 2497)
672 @@ -0,0 +1,2395 @@
673 +diff --git a/Documentation/parisc/registers b/Documentation/parisc/registers
674 +index dd3cadd..10c7d17 100644
675 +--- a/Documentation/parisc/registers
676 ++++ b/Documentation/parisc/registers
677 +@@ -78,6 +78,14 @@ Shadow Registers used by interruption handler code
678 + TOC enable bit 1
679 +
680 + =========================================================================
681 ++
682 ++The PA-RISC architecture defines 7 registers as "shadow registers".
683 ++Those are used in RETURN FROM INTERRUPTION AND RESTORE instruction to reduce
684 ++the state save and restore time by eliminating the need for general register
685 ++(GR) saves and restores in interruption handlers.
686 ++Shadow registers are the GRs 1, 8, 9, 16, 17, 24, and 25.
687 ++
688 ++=========================================================================
689 + Register usage notes, originally from John Marvin, with some additional
690 + notes from Randolph Chung.
691 +
692 +diff --git a/Makefile b/Makefile
693 +index b75cc30..4336730 100644
694 +--- a/Makefile
695 ++++ b/Makefile
696 +@@ -1,6 +1,6 @@
697 + VERSION = 3
698 + PATCHLEVEL = 10
699 +-SUBLEVEL = 1
700 ++SUBLEVEL = 2
701 + EXTRAVERSION =
702 + NAME = Unicycling Gorilla
703 +
704 +diff --git a/arch/arm/boot/dts/imx23.dtsi b/arch/arm/boot/dts/imx23.dtsi
705 +index 73fd7d0..587ceef 100644
706 +--- a/arch/arm/boot/dts/imx23.dtsi
707 ++++ b/arch/arm/boot/dts/imx23.dtsi
708 +@@ -23,8 +23,12 @@
709 + };
710 +
711 + cpus {
712 +- cpu@0 {
713 +- compatible = "arm,arm926ejs";
714 ++ #address-cells = <0>;
715 ++ #size-cells = <0>;
716 ++
717 ++ cpu {
718 ++ compatible = "arm,arm926ej-s";
719 ++ device_type = "cpu";
720 + };
721 + };
722 +
723 +diff --git a/arch/arm/boot/dts/imx28.dtsi b/arch/arm/boot/dts/imx28.dtsi
724 +index 600f7cb..4c10a19 100644
725 +--- a/arch/arm/boot/dts/imx28.dtsi
726 ++++ b/arch/arm/boot/dts/imx28.dtsi
727 +@@ -32,8 +32,12 @@
728 + };
729 +
730 + cpus {
731 +- cpu@0 {
732 +- compatible = "arm,arm926ejs";
733 ++ #address-cells = <0>;
734 ++ #size-cells = <0>;
735 ++
736 ++ cpu {
737 ++ compatible = "arm,arm926ej-s";
738 ++ device_type = "cpu";
739 + };
740 + };
741 +
742 +diff --git a/arch/arm/boot/dts/imx6dl.dtsi b/arch/arm/boot/dts/imx6dl.dtsi
743 +index 5bcdf3a..62dc781 100644
744 +--- a/arch/arm/boot/dts/imx6dl.dtsi
745 ++++ b/arch/arm/boot/dts/imx6dl.dtsi
746 +@@ -18,12 +18,14 @@
747 +
748 + cpu@0 {
749 + compatible = "arm,cortex-a9";
750 ++ device_type = "cpu";
751 + reg = <0>;
752 + next-level-cache = <&L2>;
753 + };
754 +
755 + cpu@1 {
756 + compatible = "arm,cortex-a9";
757 ++ device_type = "cpu";
758 + reg = <1>;
759 + next-level-cache = <&L2>;
760 + };
761 +diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi
762 +index 21e6758..dc54a72 100644
763 +--- a/arch/arm/boot/dts/imx6q.dtsi
764 ++++ b/arch/arm/boot/dts/imx6q.dtsi
765 +@@ -18,6 +18,7 @@
766 +
767 + cpu@0 {
768 + compatible = "arm,cortex-a9";
769 ++ device_type = "cpu";
770 + reg = <0>;
771 + next-level-cache = <&L2>;
772 + operating-points = <
773 +@@ -39,18 +40,21 @@
774 +
775 + cpu@1 {
776 + compatible = "arm,cortex-a9";
777 ++ device_type = "cpu";
778 + reg = <1>;
779 + next-level-cache = <&L2>;
780 + };
781 +
782 + cpu@2 {
783 + compatible = "arm,cortex-a9";
784 ++ device_type = "cpu";
785 + reg = <2>;
786 + next-level-cache = <&L2>;
787 + };
788 +
789 + cpu@3 {
790 + compatible = "arm,cortex-a9";
791 ++ device_type = "cpu";
792 + reg = <3>;
793 + next-level-cache = <&L2>;
794 + };
795 +diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
796 +index a7b85e0..dc90203 100644
797 +--- a/arch/arm/include/asm/mmu_context.h
798 ++++ b/arch/arm/include/asm/mmu_context.h
799 +@@ -27,7 +27,15 @@ void __check_vmalloc_seq(struct mm_struct *mm);
800 + void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk);
801 + #define init_new_context(tsk,mm) ({ atomic64_set(&mm->context.id, 0); 0; })
802 +
803 +-DECLARE_PER_CPU(atomic64_t, active_asids);
804 ++#ifdef CONFIG_ARM_ERRATA_798181
805 ++void a15_erratum_get_cpumask(int this_cpu, struct mm_struct *mm,
806 ++ cpumask_t *mask);
807 ++#else /* !CONFIG_ARM_ERRATA_798181 */
808 ++static inline void a15_erratum_get_cpumask(int this_cpu, struct mm_struct *mm,
809 ++ cpumask_t *mask)
810 ++{
811 ++}
812 ++#endif /* CONFIG_ARM_ERRATA_798181 */
813 +
814 + #else /* !CONFIG_CPU_HAS_ASID */
815 +
816 +diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
817 +index 8c3094d..d9f5cd4 100644
818 +--- a/arch/arm/kernel/perf_event.c
819 ++++ b/arch/arm/kernel/perf_event.c
820 +@@ -569,6 +569,7 @@ perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
821 + return;
822 + }
823 +
824 ++ perf_callchain_store(entry, regs->ARM_pc);
825 + tail = (struct frame_tail __user *)regs->ARM_fp - 1;
826 +
827 + while ((entry->nr < PERF_MAX_STACK_DEPTH) &&
828 +diff --git a/arch/arm/kernel/smp_tlb.c b/arch/arm/kernel/smp_tlb.c
829 +index 9a52a07..a98b62d 100644
830 +--- a/arch/arm/kernel/smp_tlb.c
831 ++++ b/arch/arm/kernel/smp_tlb.c
832 +@@ -103,7 +103,7 @@ static void broadcast_tlb_a15_erratum(void)
833 +
834 + static void broadcast_tlb_mm_a15_erratum(struct mm_struct *mm)
835 + {
836 +- int cpu, this_cpu;
837 ++ int this_cpu;
838 + cpumask_t mask = { CPU_BITS_NONE };
839 +
840 + if (!erratum_a15_798181())
841 +@@ -111,21 +111,7 @@ static void broadcast_tlb_mm_a15_erratum(struct mm_struct *mm)
842 +
843 + dummy_flush_tlb_a15_erratum();
844 + this_cpu = get_cpu();
845 +- for_each_online_cpu(cpu) {
846 +- if (cpu == this_cpu)
847 +- continue;
848 +- /*
849 +- * We only need to send an IPI if the other CPUs are running
850 +- * the same ASID as the one being invalidated. There is no
851 +- * need for locking around the active_asids check since the
852 +- * switch_mm() function has at least one dmb() (as required by
853 +- * this workaround) in case a context switch happens on
854 +- * another CPU after the condition below.
855 +- */
856 +- if (atomic64_read(&mm->context.id) ==
857 +- atomic64_read(&per_cpu(active_asids, cpu)))
858 +- cpumask_set_cpu(cpu, &mask);
859 +- }
860 ++ a15_erratum_get_cpumask(this_cpu, mm, &mask);
861 + smp_call_function_many(&mask, ipi_flush_tlb_a15_erratum, NULL, 1);
862 + put_cpu();
863 + }
864 +diff --git a/arch/arm/kernel/smp_twd.c b/arch/arm/kernel/smp_twd.c
865 +index 90525d9..f6fd1d4 100644
866 +--- a/arch/arm/kernel/smp_twd.c
867 ++++ b/arch/arm/kernel/smp_twd.c
868 +@@ -120,7 +120,7 @@ static int twd_rate_change(struct notifier_block *nb,
869 + * changing cpu.
870 + */
871 + if (flags == POST_RATE_CHANGE)
872 +- smp_call_function(twd_update_frequency,
873 ++ on_each_cpu(twd_update_frequency,
874 + (void *)&cnd->new_rate, 1);
875 +
876 + return NOTIFY_OK;
877 +diff --git a/arch/arm/mach-shmobile/setup-emev2.c b/arch/arm/mach-shmobile/setup-emev2.c
878 +index 899a86c..1ccddd2 100644
879 +--- a/arch/arm/mach-shmobile/setup-emev2.c
880 ++++ b/arch/arm/mach-shmobile/setup-emev2.c
881 +@@ -287,14 +287,14 @@ static struct gpio_em_config gio3_config = {
882 + static struct resource gio3_resources[] = {
883 + [0] = {
884 + .name = "GIO_096",
885 +- .start = 0xe0050100,
886 +- .end = 0xe005012b,
887 ++ .start = 0xe0050180,
888 ++ .end = 0xe00501ab,
889 + .flags = IORESOURCE_MEM,
890 + },
891 + [1] = {
892 + .name = "GIO_096",
893 +- .start = 0xe0050140,
894 +- .end = 0xe005015f,
895 ++ .start = 0xe00501c0,
896 ++ .end = 0xe00501df,
897 + .flags = IORESOURCE_MEM,
898 + },
899 + [2] = {
900 +diff --git a/arch/arm/mach-shmobile/setup-r8a73a4.c b/arch/arm/mach-shmobile/setup-r8a73a4.c
901 +index c5a75a7..7f45c2e 100644
902 +--- a/arch/arm/mach-shmobile/setup-r8a73a4.c
903 ++++ b/arch/arm/mach-shmobile/setup-r8a73a4.c
904 +@@ -62,7 +62,7 @@ enum { SCIFA0, SCIFA1, SCIFB0, SCIFB1, SCIFB2, SCIFB3 };
905 + static const struct plat_sci_port scif[] = {
906 + SCIFA_DATA(SCIFA0, 0xe6c40000, gic_spi(144)), /* SCIFA0 */
907 + SCIFA_DATA(SCIFA1, 0xe6c50000, gic_spi(145)), /* SCIFA1 */
908 +- SCIFB_DATA(SCIFB0, 0xe6c50000, gic_spi(145)), /* SCIFB0 */
909 ++ SCIFB_DATA(SCIFB0, 0xe6c20000, gic_spi(148)), /* SCIFB0 */
910 + SCIFB_DATA(SCIFB1, 0xe6c30000, gic_spi(149)), /* SCIFB1 */
911 + SCIFB_DATA(SCIFB2, 0xe6ce0000, gic_spi(150)), /* SCIFB2 */
912 + SCIFB_DATA(SCIFB3, 0xe6cf0000, gic_spi(151)), /* SCIFB3 */
913 +diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
914 +index 2ac3737..eeab06e 100644
915 +--- a/arch/arm/mm/context.c
916 ++++ b/arch/arm/mm/context.c
917 +@@ -39,19 +39,43 @@
918 + * non 64-bit operations.
919 + */
920 + #define ASID_FIRST_VERSION (1ULL << ASID_BITS)
921 +-#define NUM_USER_ASIDS (ASID_FIRST_VERSION - 1)
922 +-
923 +-#define ASID_TO_IDX(asid) ((asid & ~ASID_MASK) - 1)
924 +-#define IDX_TO_ASID(idx) ((idx + 1) & ~ASID_MASK)
925 ++#define NUM_USER_ASIDS ASID_FIRST_VERSION
926 +
927 + static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
928 + static atomic64_t asid_generation = ATOMIC64_INIT(ASID_FIRST_VERSION);
929 + static DECLARE_BITMAP(asid_map, NUM_USER_ASIDS);
930 +
931 +-DEFINE_PER_CPU(atomic64_t, active_asids);
932 ++static DEFINE_PER_CPU(atomic64_t, active_asids);
933 + static DEFINE_PER_CPU(u64, reserved_asids);
934 + static cpumask_t tlb_flush_pending;
935 +
936 ++#ifdef CONFIG_ARM_ERRATA_798181
937 ++void a15_erratum_get_cpumask(int this_cpu, struct mm_struct *mm,
938 ++ cpumask_t *mask)
939 ++{
940 ++ int cpu;
941 ++ unsigned long flags;
942 ++ u64 context_id, asid;
943 ++
944 ++ raw_spin_lock_irqsave(&cpu_asid_lock, flags);
945 ++ context_id = mm->context.id.counter;
946 ++ for_each_online_cpu(cpu) {
947 ++ if (cpu == this_cpu)
948 ++ continue;
949 ++ /*
950 ++ * We only need to send an IPI if the other CPUs are
951 ++ * running the same ASID as the one being invalidated.
952 ++ */
953 ++ asid = per_cpu(active_asids, cpu).counter;
954 ++ if (asid == 0)
955 ++ asid = per_cpu(reserved_asids, cpu);
956 ++ if (context_id == asid)
957 ++ cpumask_set_cpu(cpu, mask);
958 ++ }
959 ++ raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
960 ++}
961 ++#endif
962 ++
963 + #ifdef CONFIG_ARM_LPAE
964 + static void cpu_set_reserved_ttbr0(void)
965 + {
966 +@@ -128,7 +152,16 @@ static void flush_context(unsigned int cpu)
967 + asid = 0;
968 + } else {
969 + asid = atomic64_xchg(&per_cpu(active_asids, i), 0);
970 +- __set_bit(ASID_TO_IDX(asid), asid_map);
971 ++ /*
972 ++ * If this CPU has already been through a
973 ++ * rollover, but hasn't run another task in
974 ++ * the meantime, we must preserve its reserved
975 ++ * ASID, as this is the only trace we have of
976 ++ * the process it is still running.
977 ++ */
978 ++ if (asid == 0)
979 ++ asid = per_cpu(reserved_asids, i);
980 ++ __set_bit(asid & ~ASID_MASK, asid_map);
981 + }
982 + per_cpu(reserved_asids, i) = asid;
983 + }
984 +@@ -167,17 +200,19 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
985 + /*
986 + * Allocate a free ASID. If we can't find one, take a
987 + * note of the currently active ASIDs and mark the TLBs
988 +- * as requiring flushes.
989 ++ * as requiring flushes. We always count from ASID #1,
990 ++ * as we reserve ASID #0 to switch via TTBR0 and indicate
991 ++ * rollover events.
992 + */
993 +- asid = find_first_zero_bit(asid_map, NUM_USER_ASIDS);
994 ++ asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
995 + if (asid == NUM_USER_ASIDS) {
996 + generation = atomic64_add_return(ASID_FIRST_VERSION,
997 + &asid_generation);
998 + flush_context(cpu);
999 +- asid = find_first_zero_bit(asid_map, NUM_USER_ASIDS);
1000 ++ asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
1001 + }
1002 + __set_bit(asid, asid_map);
1003 +- asid = generation | IDX_TO_ASID(asid);
1004 ++ asid |= generation;
1005 + cpumask_clear(mm_cpumask(mm));
1006 + }
1007 +
1008 +diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
1009 +index 9a5cdc0..0ecc43f 100644
1010 +--- a/arch/arm/mm/init.c
1011 ++++ b/arch/arm/mm/init.c
1012 +@@ -600,7 +600,7 @@ void __init mem_init(void)
1013 +
1014 + #ifdef CONFIG_SA1111
1015 + /* now that our DMA memory is actually so designated, we can free it */
1016 +- free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
1017 ++ free_reserved_area(__va(PHYS_OFFSET), swapper_pg_dir, 0, NULL);
1018 + #endif
1019 +
1020 + free_highpages();
1021 +diff --git a/arch/c6x/mm/init.c b/arch/c6x/mm/init.c
1022 +index a9fcd89..b74ccb5 100644
1023 +--- a/arch/c6x/mm/init.c
1024 ++++ b/arch/c6x/mm/init.c
1025 +@@ -18,6 +18,7 @@
1026 + #include <linux/initrd.h>
1027 +
1028 + #include <asm/sections.h>
1029 ++#include <asm/uaccess.h>
1030 +
1031 + /*
1032 + * ZERO_PAGE is a special page that is used for zero-initialized
1033 +diff --git a/arch/parisc/include/asm/special_insns.h b/arch/parisc/include/asm/special_insns.h
1034 +index d306b75..e150930 100644
1035 +--- a/arch/parisc/include/asm/special_insns.h
1036 ++++ b/arch/parisc/include/asm/special_insns.h
1037 +@@ -32,9 +32,12 @@ static inline void set_eiem(unsigned long val)
1038 + cr; \
1039 + })
1040 +
1041 +-#define mtsp(gr, cr) \
1042 +- __asm__ __volatile__("mtsp %0,%1" \
1043 ++#define mtsp(val, cr) \
1044 ++ { if (__builtin_constant_p(val) && ((val) == 0)) \
1045 ++ __asm__ __volatile__("mtsp %%r0,%0" : : "i" (cr) : "memory"); \
1046 ++ else \
1047 ++ __asm__ __volatile__("mtsp %0,%1" \
1048 + : /* no outputs */ \
1049 +- : "r" (gr), "i" (cr) : "memory")
1050 ++ : "r" (val), "i" (cr) : "memory"); }
1051 +
1052 + #endif /* __PARISC_SPECIAL_INSNS_H */
1053 +diff --git a/arch/parisc/include/asm/tlbflush.h b/arch/parisc/include/asm/tlbflush.h
1054 +index 5273da9..9d086a5 100644
1055 +--- a/arch/parisc/include/asm/tlbflush.h
1056 ++++ b/arch/parisc/include/asm/tlbflush.h
1057 +@@ -63,13 +63,14 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
1058 + static inline void flush_tlb_page(struct vm_area_struct *vma,
1059 + unsigned long addr)
1060 + {
1061 +- unsigned long flags;
1062 ++ unsigned long flags, sid;
1063 +
1064 + /* For one page, it's not worth testing the split_tlb variable */
1065 +
1066 + mb();
1067 +- mtsp(vma->vm_mm->context,1);
1068 ++ sid = vma->vm_mm->context;
1069 + purge_tlb_start(flags);
1070 ++ mtsp(sid, 1);
1071 + pdtlb(addr);
1072 + pitlb(addr);
1073 + purge_tlb_end(flags);
1074 +diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
1075 +index 65fb4cb..2e65aa5 100644
1076 +--- a/arch/parisc/kernel/cache.c
1077 ++++ b/arch/parisc/kernel/cache.c
1078 +@@ -440,8 +440,8 @@ void __flush_tlb_range(unsigned long sid, unsigned long start,
1079 + else {
1080 + unsigned long flags;
1081 +
1082 +- mtsp(sid, 1);
1083 + purge_tlb_start(flags);
1084 ++ mtsp(sid, 1);
1085 + if (split_tlb) {
1086 + while (npages--) {
1087 + pdtlb(start);
1088 +diff --git a/arch/parisc/lib/memcpy.c b/arch/parisc/lib/memcpy.c
1089 +index a49cc81..ac4370b 100644
1090 +--- a/arch/parisc/lib/memcpy.c
1091 ++++ b/arch/parisc/lib/memcpy.c
1092 +@@ -2,6 +2,7 @@
1093 + * Optimized memory copy routines.
1094 + *
1095 + * Copyright (C) 2004 Randolph Chung <tausq@××××××.org>
1096 ++ * Copyright (C) 2013 Helge Deller <deller@×××.de>
1097 + *
1098 + * This program is free software; you can redistribute it and/or modify
1099 + * it under the terms of the GNU General Public License as published by
1100 +@@ -153,17 +154,21 @@ static inline void prefetch_dst(const void *addr)
1101 + #define prefetch_dst(addr) do { } while(0)
1102 + #endif
1103 +
1104 ++#define PA_MEMCPY_OK 0
1105 ++#define PA_MEMCPY_LOAD_ERROR 1
1106 ++#define PA_MEMCPY_STORE_ERROR 2
1107 ++
1108 + /* Copy from a not-aligned src to an aligned dst, using shifts. Handles 4 words
1109 + * per loop. This code is derived from glibc.
1110 + */
1111 +-static inline unsigned long copy_dstaligned(unsigned long dst, unsigned long src, unsigned long len, unsigned long o_dst, unsigned long o_src, unsigned long o_len)
1112 ++static inline unsigned long copy_dstaligned(unsigned long dst,
1113 ++ unsigned long src, unsigned long len)
1114 + {
1115 + /* gcc complains that a2 and a3 may be uninitialized, but actually
1116 + * they cannot be. Initialize a2/a3 to shut gcc up.
1117 + */
1118 + register unsigned int a0, a1, a2 = 0, a3 = 0;
1119 + int sh_1, sh_2;
1120 +- struct exception_data *d;
1121 +
1122 + /* prefetch_src((const void *)src); */
1123 +
1124 +@@ -197,7 +202,7 @@ static inline unsigned long copy_dstaligned(unsigned long dst, unsigned long src
1125 + goto do2;
1126 + case 0:
1127 + if (len == 0)
1128 +- return 0;
1129 ++ return PA_MEMCPY_OK;
1130 + /* a3 = ((unsigned int *) src)[0];
1131 + a0 = ((unsigned int *) src)[1]; */
1132 + ldw(s_space, 0, src, a3, cda_ldw_exc);
1133 +@@ -256,42 +261,35 @@ do0:
1134 + preserve_branch(handle_load_error);
1135 + preserve_branch(handle_store_error);
1136 +
1137 +- return 0;
1138 ++ return PA_MEMCPY_OK;
1139 +
1140 + handle_load_error:
1141 + __asm__ __volatile__ ("cda_ldw_exc:\n");
1142 +- d = &__get_cpu_var(exception_data);
1143 +- DPRINTF("cda_ldw_exc: o_len=%lu fault_addr=%lu o_src=%lu ret=%lu\n",
1144 +- o_len, d->fault_addr, o_src, o_len - d->fault_addr + o_src);
1145 +- return o_len * 4 - d->fault_addr + o_src;
1146 ++ return PA_MEMCPY_LOAD_ERROR;
1147 +
1148 + handle_store_error:
1149 + __asm__ __volatile__ ("cda_stw_exc:\n");
1150 +- d = &__get_cpu_var(exception_data);
1151 +- DPRINTF("cda_stw_exc: o_len=%lu fault_addr=%lu o_dst=%lu ret=%lu\n",
1152 +- o_len, d->fault_addr, o_dst, o_len - d->fault_addr + o_dst);
1153 +- return o_len * 4 - d->fault_addr + o_dst;
1154 ++ return PA_MEMCPY_STORE_ERROR;
1155 + }
1156 +
1157 +
1158 +-/* Returns 0 for success, otherwise, returns number of bytes not transferred. */
1159 +-static unsigned long pa_memcpy(void *dstp, const void *srcp, unsigned long len)
1160 ++/* Returns PA_MEMCPY_OK, PA_MEMCPY_LOAD_ERROR or PA_MEMCPY_STORE_ERROR.
1161 ++ * In case of an access fault the faulty address can be read from the per_cpu
1162 ++ * exception data struct. */
1163 ++static unsigned long pa_memcpy_internal(void *dstp, const void *srcp,
1164 ++ unsigned long len)
1165 + {
1166 + register unsigned long src, dst, t1, t2, t3;
1167 + register unsigned char *pcs, *pcd;
1168 + register unsigned int *pws, *pwd;
1169 + register double *pds, *pdd;
1170 +- unsigned long ret = 0;
1171 +- unsigned long o_dst, o_src, o_len;
1172 +- struct exception_data *d;
1173 ++ unsigned long ret;
1174 +
1175 + src = (unsigned long)srcp;
1176 + dst = (unsigned long)dstp;
1177 + pcs = (unsigned char *)srcp;
1178 + pcd = (unsigned char *)dstp;
1179 +
1180 +- o_dst = dst; o_src = src; o_len = len;
1181 +-
1182 + /* prefetch_src((const void *)srcp); */
1183 +
1184 + if (len < THRESHOLD)
1185 +@@ -401,7 +399,7 @@ byte_copy:
1186 + len--;
1187 + }
1188 +
1189 +- return 0;
1190 ++ return PA_MEMCPY_OK;
1191 +
1192 + unaligned_copy:
1193 + /* possibly we are aligned on a word, but not on a double... */
1194 +@@ -438,8 +436,7 @@ unaligned_copy:
1195 + src = (unsigned long)pcs;
1196 + }
1197 +
1198 +- ret = copy_dstaligned(dst, src, len / sizeof(unsigned int),
1199 +- o_dst, o_src, o_len);
1200 ++ ret = copy_dstaligned(dst, src, len / sizeof(unsigned int));
1201 + if (ret)
1202 + return ret;
1203 +
1204 +@@ -454,17 +451,41 @@ unaligned_copy:
1205 +
1206 + handle_load_error:
1207 + __asm__ __volatile__ ("pmc_load_exc:\n");
1208 +- d = &__get_cpu_var(exception_data);
1209 +- DPRINTF("pmc_load_exc: o_len=%lu fault_addr=%lu o_src=%lu ret=%lu\n",
1210 +- o_len, d->fault_addr, o_src, o_len - d->fault_addr + o_src);
1211 +- return o_len - d->fault_addr + o_src;
1212 ++ return PA_MEMCPY_LOAD_ERROR;
1213 +
1214 + handle_store_error:
1215 + __asm__ __volatile__ ("pmc_store_exc:\n");
1216 ++ return PA_MEMCPY_STORE_ERROR;
1217 ++}
1218 ++
1219 ++
1220 ++/* Returns 0 for success, otherwise, returns number of bytes not transferred. */
1221 ++static unsigned long pa_memcpy(void *dstp, const void *srcp, unsigned long len)
1222 ++{
1223 ++ unsigned long ret, fault_addr, reference;
1224 ++ struct exception_data *d;
1225 ++
1226 ++ ret = pa_memcpy_internal(dstp, srcp, len);
1227 ++ if (likely(ret == PA_MEMCPY_OK))
1228 ++ return 0;
1229 ++
1230 ++ /* if a load or store fault occured we can get the faulty addr */
1231 + d = &__get_cpu_var(exception_data);
1232 +- DPRINTF("pmc_store_exc: o_len=%lu fault_addr=%lu o_dst=%lu ret=%lu\n",
1233 +- o_len, d->fault_addr, o_dst, o_len - d->fault_addr + o_dst);
1234 +- return o_len - d->fault_addr + o_dst;
1235 ++ fault_addr = d->fault_addr;
1236 ++
1237 ++ /* error in load or store? */
1238 ++ if (ret == PA_MEMCPY_LOAD_ERROR)
1239 ++ reference = (unsigned long) srcp;
1240 ++ else
1241 ++ reference = (unsigned long) dstp;
1242 ++
1243 ++ DPRINTF("pa_memcpy: fault type = %lu, len=%lu fault_addr=%lu ref=%lu\n",
1244 ++ ret, len, fault_addr, reference);
1245 ++
1246 ++ if (fault_addr >= reference)
1247 ++ return len - (fault_addr - reference);
1248 ++ else
1249 ++ return len;
1250 + }
1251 +
1252 + #ifdef __KERNEL__
1253 +diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
1254 +index c205035..d606463 100644
1255 +--- a/arch/x86/boot/compressed/eboot.c
1256 ++++ b/arch/x86/boot/compressed/eboot.c
1257 +@@ -992,18 +992,20 @@ static efi_status_t exit_boot(struct boot_params *boot_params,
1258 + efi_memory_desc_t *mem_map;
1259 + efi_status_t status;
1260 + __u32 desc_version;
1261 ++ bool called_exit = false;
1262 + u8 nr_entries;
1263 + int i;
1264 +
1265 + size = sizeof(*mem_map) * 32;
1266 +
1267 + again:
1268 +- size += sizeof(*mem_map);
1269 ++ size += sizeof(*mem_map) * 2;
1270 + _size = size;
1271 + status = low_alloc(size, 1, (unsigned long *)&mem_map);
1272 + if (status != EFI_SUCCESS)
1273 + return status;
1274 +
1275 ++get_map:
1276 + status = efi_call_phys5(sys_table->boottime->get_memory_map, &size,
1277 + mem_map, &key, &desc_size, &desc_version);
1278 + if (status == EFI_BUFFER_TOO_SMALL) {
1279 +@@ -1029,8 +1031,20 @@ again:
1280 + /* Might as well exit boot services now */
1281 + status = efi_call_phys2(sys_table->boottime->exit_boot_services,
1282 + handle, key);
1283 +- if (status != EFI_SUCCESS)
1284 +- goto free_mem_map;
1285 ++ if (status != EFI_SUCCESS) {
1286 ++ /*
1287 ++ * ExitBootServices() will fail if any of the event
1288 ++ * handlers change the memory map. In which case, we
1289 ++ * must be prepared to retry, but only once so that
1290 ++ * we're guaranteed to exit on repeated failures instead
1291 ++ * of spinning forever.
1292 ++ */
1293 ++ if (called_exit)
1294 ++ goto free_mem_map;
1295 ++
1296 ++ called_exit = true;
1297 ++ goto get_map;
1298 ++ }
1299 +
1300 + /* Historic? */
1301 + boot_params->alt_mem_k = 32 * 1024;
1302 +diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
1303 +index 3d88bfd..13e8935 100644
1304 +--- a/arch/x86/xen/time.c
1305 ++++ b/arch/x86/xen/time.c
1306 +@@ -36,9 +36,8 @@ static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
1307 + /* snapshots of runstate info */
1308 + static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate_snapshot);
1309 +
1310 +-/* unused ns of stolen and blocked time */
1311 ++/* unused ns of stolen time */
1312 + static DEFINE_PER_CPU(u64, xen_residual_stolen);
1313 +-static DEFINE_PER_CPU(u64, xen_residual_blocked);
1314 +
1315 + /* return an consistent snapshot of 64-bit time/counter value */
1316 + static u64 get64(const u64 *p)
1317 +@@ -115,7 +114,7 @@ static void do_stolen_accounting(void)
1318 + {
1319 + struct vcpu_runstate_info state;
1320 + struct vcpu_runstate_info *snap;
1321 +- s64 blocked, runnable, offline, stolen;
1322 ++ s64 runnable, offline, stolen;
1323 + cputime_t ticks;
1324 +
1325 + get_runstate_snapshot(&state);
1326 +@@ -125,7 +124,6 @@ static void do_stolen_accounting(void)
1327 + snap = &__get_cpu_var(xen_runstate_snapshot);
1328 +
1329 + /* work out how much time the VCPU has not been runn*ing* */
1330 +- blocked = state.time[RUNSTATE_blocked] - snap->time[RUNSTATE_blocked];
1331 + runnable = state.time[RUNSTATE_runnable] - snap->time[RUNSTATE_runnable];
1332 + offline = state.time[RUNSTATE_offline] - snap->time[RUNSTATE_offline];
1333 +
1334 +@@ -141,17 +139,6 @@ static void do_stolen_accounting(void)
1335 + ticks = iter_div_u64_rem(stolen, NS_PER_TICK, &stolen);
1336 + __this_cpu_write(xen_residual_stolen, stolen);
1337 + account_steal_ticks(ticks);
1338 +-
1339 +- /* Add the appropriate number of ticks of blocked time,
1340 +- including any left-overs from last time. */
1341 +- blocked += __this_cpu_read(xen_residual_blocked);
1342 +-
1343 +- if (blocked < 0)
1344 +- blocked = 0;
1345 +-
1346 +- ticks = iter_div_u64_rem(blocked, NS_PER_TICK, &blocked);
1347 +- __this_cpu_write(xen_residual_blocked, blocked);
1348 +- account_idle_ticks(ticks);
1349 + }
1350 +
1351 + /* Get the TSC speed from Xen */
1352 +diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
1353 +index 536562c..97c949a 100644
1354 +--- a/drivers/acpi/Makefile
1355 ++++ b/drivers/acpi/Makefile
1356 +@@ -43,6 +43,7 @@ acpi-y += acpi_platform.o
1357 + acpi-y += power.o
1358 + acpi-y += event.o
1359 + acpi-y += sysfs.o
1360 ++acpi-$(CONFIG_X86) += acpi_cmos_rtc.o
1361 + acpi-$(CONFIG_DEBUG_FS) += debugfs.o
1362 + acpi-$(CONFIG_ACPI_NUMA) += numa.o
1363 + acpi-$(CONFIG_ACPI_PROCFS_POWER) += cm_sbs.o
1364 +diff --git a/drivers/acpi/acpi_cmos_rtc.c b/drivers/acpi/acpi_cmos_rtc.c
1365 +new file mode 100644
1366 +index 0000000..84190ed
1367 +--- /dev/null
1368 ++++ b/drivers/acpi/acpi_cmos_rtc.c
1369 +@@ -0,0 +1,92 @@
1370 ++/*
1371 ++ * ACPI support for CMOS RTC Address Space access
1372 ++ *
1373 ++ * Copyright (C) 2013, Intel Corporation
1374 ++ * Authors: Lan Tianyu <tianyu.lan@×××××.com>
1375 ++ *
1376 ++ * This program is free software; you can redistribute it and/or modify
1377 ++ * it under the terms of the GNU General Public License version 2 as
1378 ++ * published by the Free Software Foundation.
1379 ++ */
1380 ++
1381 ++#include <linux/acpi.h>
1382 ++#include <linux/device.h>
1383 ++#include <linux/err.h>
1384 ++#include <linux/kernel.h>
1385 ++#include <linux/module.h>
1386 ++#include <asm-generic/rtc.h>
1387 ++
1388 ++#include "internal.h"
1389 ++
1390 ++#define PREFIX "ACPI: "
1391 ++
1392 ++ACPI_MODULE_NAME("cmos rtc");
1393 ++
1394 ++static const struct acpi_device_id acpi_cmos_rtc_ids[] = {
1395 ++ { "PNP0B00" },
1396 ++ { "PNP0B01" },
1397 ++ { "PNP0B02" },
1398 ++ {}
1399 ++};
1400 ++
1401 ++static acpi_status
1402 ++acpi_cmos_rtc_space_handler(u32 function, acpi_physical_address address,
1403 ++ u32 bits, u64 *value64,
1404 ++ void *handler_context, void *region_context)
1405 ++{
1406 ++ int i;
1407 ++ u8 *value = (u8 *)&value64;
1408 ++
1409 ++ if (address > 0xff || !value64)
1410 ++ return AE_BAD_PARAMETER;
1411 ++
1412 ++ if (function != ACPI_WRITE && function != ACPI_READ)
1413 ++ return AE_BAD_PARAMETER;
1414 ++
1415 ++ spin_lock_irq(&rtc_lock);
1416 ++
1417 ++ for (i = 0; i < DIV_ROUND_UP(bits, 8); ++i, ++address, ++value)
1418 ++ if (function == ACPI_READ)
1419 ++ *value = CMOS_READ(address);
1420 ++ else
1421 ++ CMOS_WRITE(*value, address);
1422 ++
1423 ++ spin_unlock_irq(&rtc_lock);
1424 ++
1425 ++ return AE_OK;
1426 ++}
1427 ++
1428 ++static int acpi_install_cmos_rtc_space_handler(struct acpi_device *adev,
1429 ++ const struct acpi_device_id *id)
1430 ++{
1431 ++ acpi_status status;
1432 ++
1433 ++ status = acpi_install_address_space_handler(adev->handle,
1434 ++ ACPI_ADR_SPACE_CMOS,
1435 ++ &acpi_cmos_rtc_space_handler,
1436 ++ NULL, NULL);
1437 ++ if (ACPI_FAILURE(status)) {
1438 ++ pr_err(PREFIX "Error installing CMOS-RTC region handler\n");
1439 ++ return -ENODEV;
1440 ++ }
1441 ++
1442 ++ return 0;
1443 ++}
1444 ++
1445 ++static void acpi_remove_cmos_rtc_space_handler(struct acpi_device *adev)
1446 ++{
1447 ++ if (ACPI_FAILURE(acpi_remove_address_space_handler(adev->handle,
1448 ++ ACPI_ADR_SPACE_CMOS, &acpi_cmos_rtc_space_handler)))
1449 ++ pr_err(PREFIX "Error removing CMOS-RTC region handler\n");
1450 ++}
1451 ++
1452 ++static struct acpi_scan_handler cmos_rtc_handler = {
1453 ++ .ids = acpi_cmos_rtc_ids,
1454 ++ .attach = acpi_install_cmos_rtc_space_handler,
1455 ++ .detach = acpi_remove_cmos_rtc_space_handler,
1456 ++};
1457 ++
1458 ++void __init acpi_cmos_rtc_init(void)
1459 ++{
1460 ++ acpi_scan_add_handler(&cmos_rtc_handler);
1461 ++}
1462 +diff --git a/drivers/acpi/acpica/hwxfsleep.c b/drivers/acpi/acpica/hwxfsleep.c
1463 +index 35eebda..09b06e2 100644
1464 +--- a/drivers/acpi/acpica/hwxfsleep.c
1465 ++++ b/drivers/acpi/acpica/hwxfsleep.c
1466 +@@ -240,12 +240,14 @@ static acpi_status acpi_hw_sleep_dispatch(u8 sleep_state, u32 function_id)
1467 + &acpi_sleep_dispatch[function_id];
1468 +
1469 + #if (!ACPI_REDUCED_HARDWARE)
1470 +-
1471 + /*
1472 + * If the Hardware Reduced flag is set (from the FADT), we must
1473 +- * use the extended sleep registers
1474 ++ * use the extended sleep registers (FADT). Note: As per the ACPI
1475 ++ * specification, these extended registers are to be used for HW-reduced
1476 ++ * platforms only. They are not general-purpose replacements for the
1477 ++ * legacy PM register sleep support.
1478 + */
1479 +- if (acpi_gbl_reduced_hardware || acpi_gbl_FADT.sleep_control.address) {
1480 ++ if (acpi_gbl_reduced_hardware) {
1481 + status = sleep_functions->extended_function(sleep_state);
1482 + } else {
1483 + /* Legacy sleep */
1484 +diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
1485 +index 31c217a..553527c 100644
1486 +--- a/drivers/acpi/device_pm.c
1487 ++++ b/drivers/acpi/device_pm.c
1488 +@@ -324,14 +324,27 @@ int acpi_bus_update_power(acpi_handle handle, int *state_p)
1489 + if (result)
1490 + return result;
1491 +
1492 +- if (state == ACPI_STATE_UNKNOWN)
1493 ++ if (state == ACPI_STATE_UNKNOWN) {
1494 + state = ACPI_STATE_D0;
1495 +-
1496 +- result = acpi_device_set_power(device, state);
1497 +- if (!result && state_p)
1498 ++ result = acpi_device_set_power(device, state);
1499 ++ if (result)
1500 ++ return result;
1501 ++ } else {
1502 ++ if (device->power.flags.power_resources) {
1503 ++ /*
1504 ++ * We don't need to really switch the state, bu we need
1505 ++ * to update the power resources' reference counters.
1506 ++ */
1507 ++ result = acpi_power_transition(device, state);
1508 ++ if (result)
1509 ++ return result;
1510 ++ }
1511 ++ device->power.state = state;
1512 ++ }
1513 ++ if (state_p)
1514 + *state_p = state;
1515 +
1516 +- return result;
1517 ++ return 0;
1518 + }
1519 + EXPORT_SYMBOL_GPL(acpi_bus_update_power);
1520 +
1521 +diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
1522 +index edc0081..80403c1 100644
1523 +--- a/drivers/acpi/ec.c
1524 ++++ b/drivers/acpi/ec.c
1525 +@@ -983,6 +983,10 @@ static struct dmi_system_id __initdata ec_dmi_table[] = {
1526 + ec_enlarge_storm_threshold, "CLEVO hardware", {
1527 + DMI_MATCH(DMI_SYS_VENDOR, "CLEVO Co."),
1528 + DMI_MATCH(DMI_PRODUCT_NAME, "M720T/M730T"),}, NULL},
1529 ++ {
1530 ++ ec_skip_dsdt_scan, "HP Folio 13", {
1531 ++ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
1532 ++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Folio 13"),}, NULL},
1533 + {},
1534 + };
1535 +
1536 +diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
1537 +index c610a76..63a0854 100644
1538 +--- a/drivers/acpi/internal.h
1539 ++++ b/drivers/acpi/internal.h
1540 +@@ -50,6 +50,11 @@ void acpi_memory_hotplug_init(void);
1541 + #else
1542 + static inline void acpi_memory_hotplug_init(void) {}
1543 + #endif
1544 ++#ifdef CONFIG_X86
1545 ++void acpi_cmos_rtc_init(void);
1546 ++#else
1547 ++static inline void acpi_cmos_rtc_init(void) {}
1548 ++#endif
1549 +
1550 + void acpi_sysfs_add_hotplug_profile(struct acpi_hotplug_profile *hotplug,
1551 + const char *name);
1552 +diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
1553 +index 27da630..14807e5 100644
1554 +--- a/drivers/acpi/scan.c
1555 ++++ b/drivers/acpi/scan.c
1556 +@@ -2040,6 +2040,7 @@ int __init acpi_scan_init(void)
1557 + acpi_pci_link_init();
1558 + acpi_platform_init();
1559 + acpi_lpss_init();
1560 ++ acpi_cmos_rtc_init();
1561 + acpi_container_init();
1562 + acpi_memory_hotplug_init();
1563 + acpi_dock_init();
1564 +diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
1565 +index 2b50dfd..b112625 100644
1566 +--- a/drivers/ata/ahci.c
1567 ++++ b/drivers/ata/ahci.c
1568 +@@ -291,6 +291,7 @@ static const struct pci_device_id ahci_pci_tbl[] = {
1569 + { PCI_VDEVICE(INTEL, 0x8d64), board_ahci }, /* Wellsburg RAID */
1570 + { PCI_VDEVICE(INTEL, 0x8d66), board_ahci }, /* Wellsburg RAID */
1571 + { PCI_VDEVICE(INTEL, 0x8d6e), board_ahci }, /* Wellsburg RAID */
1572 ++ { PCI_VDEVICE(INTEL, 0x23a3), board_ahci }, /* Coleto Creek AHCI */
1573 +
1574 + /* JMicron 360/1/3/5/6, match class to avoid IDE function */
1575 + { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
1576 +@@ -310,6 +311,7 @@ static const struct pci_device_id ahci_pci_tbl[] = {
1577 +
1578 + /* AMD */
1579 + { PCI_VDEVICE(AMD, 0x7800), board_ahci }, /* AMD Hudson-2 */
1580 ++ { PCI_VDEVICE(AMD, 0x7900), board_ahci }, /* AMD CZ */
1581 + /* AMD is using RAID class only for ahci controllers */
1582 + { PCI_VENDOR_ID_AMD, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
1583 + PCI_CLASS_STORAGE_RAID << 8, 0xffffff, board_ahci },
1584 +diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
1585 +index a70ff15..7b9bdd8 100644
1586 +--- a/drivers/ata/libahci.c
1587 ++++ b/drivers/ata/libahci.c
1588 +@@ -1560,8 +1560,7 @@ static void ahci_error_intr(struct ata_port *ap, u32 irq_stat)
1589 + u32 fbs = readl(port_mmio + PORT_FBS);
1590 + int pmp = fbs >> PORT_FBS_DWE_OFFSET;
1591 +
1592 +- if ((fbs & PORT_FBS_SDE) && (pmp < ap->nr_pmp_links) &&
1593 +- ata_link_online(&ap->pmp_link[pmp])) {
1594 ++ if ((fbs & PORT_FBS_SDE) && (pmp < ap->nr_pmp_links)) {
1595 + link = &ap->pmp_link[pmp];
1596 + fbs_need_dec = true;
1597 + }
1598 +diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
1599 +index 46b35f7..cf1576d 100644
1600 +--- a/drivers/block/nbd.c
1601 ++++ b/drivers/block/nbd.c
1602 +@@ -623,8 +623,10 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
1603 + if (!nbd->sock)
1604 + return -EINVAL;
1605 +
1606 ++ nbd->disconnect = 1;
1607 ++
1608 + nbd_send_req(nbd, &sreq);
1609 +- return 0;
1610 ++ return 0;
1611 + }
1612 +
1613 + case NBD_CLEAR_SOCK: {
1614 +@@ -654,6 +656,7 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
1615 + nbd->sock = SOCKET_I(inode);
1616 + if (max_part > 0)
1617 + bdev->bd_invalidated = 1;
1618 ++ nbd->disconnect = 0; /* we're connected now */
1619 + return 0;
1620 + } else {
1621 + fput(file);
1622 +@@ -743,6 +746,8 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
1623 + set_capacity(nbd->disk, 0);
1624 + if (max_part > 0)
1625 + ioctl_by_bdev(bdev, BLKRRPART, 0);
1626 ++ if (nbd->disconnect) /* user requested, ignore socket errors */
1627 ++ return 0;
1628 + return nbd->harderror;
1629 + }
1630 +
1631 +diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
1632 +index a17553f..7ec82f0 100644
1633 +--- a/drivers/dma/pl330.c
1634 ++++ b/drivers/dma/pl330.c
1635 +@@ -2485,10 +2485,10 @@ static void pl330_free_chan_resources(struct dma_chan *chan)
1636 + struct dma_pl330_chan *pch = to_pchan(chan);
1637 + unsigned long flags;
1638 +
1639 +- spin_lock_irqsave(&pch->lock, flags);
1640 +-
1641 + tasklet_kill(&pch->task);
1642 +
1643 ++ spin_lock_irqsave(&pch->lock, flags);
1644 ++
1645 + pl330_release_channel(pch->pl330_chid);
1646 + pch->pl330_chid = NULL;
1647 +
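
Note on the pl330 hunk above: it moves tasklet_kill() ahead of spin_lock_irqsave() because the tasklet handler takes pch->lock itself, and waiting for it to finish while holding that lock can deadlock. Below is a minimal userspace sketch of the same rule (stop the worker before grabbing the lock it uses), written with pthreads; every name in it is illustrative and nothing comes from the driver. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int stop;

/* Worker repeatedly takes 'lock', like the pl330 tasklet takes pch->lock. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int done = stop;
        pthread_mutex_unlock(&lock);
        if (done)
            return NULL;
    }
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);

    pthread_mutex_lock(&lock);
    stop = 1;
    pthread_mutex_unlock(&lock);

    /* Join (the tasklet_kill() analogue) *before* taking the lock for
     * teardown; joining while holding 'lock' could deadlock. */
    pthread_join(t, NULL);

    pthread_mutex_lock(&lock);
    /* ... free resources the worker used ... */
    pthread_mutex_unlock(&lock);
    puts("torn down without holding the worker's lock across the join");
    return 0;
}
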
1648 +diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
1649 +index feae88b..c7710b5 100644
1650 +--- a/drivers/hid/hid-apple.c
1651 ++++ b/drivers/hid/hid-apple.c
1652 +@@ -524,6 +524,12 @@ static const struct hid_device_id apple_devices[] = {
1653 + .driver_data = APPLE_HAS_FN | APPLE_ISO_KEYBOARD },
1654 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7A_JIS),
1655 + .driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
1656 ++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI),
1657 ++ .driver_data = APPLE_HAS_FN },
1658 ++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_ISO),
1659 ++ .driver_data = APPLE_HAS_FN | APPLE_ISO_KEYBOARD },
1660 ++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_JIS),
1661 ++ .driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
1662 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ANSI),
1663 + .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
1664 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ISO),
1665 +diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
1666 +index 264f550..402f486 100644
1667 +--- a/drivers/hid/hid-core.c
1668 ++++ b/drivers/hid/hid-core.c
1669 +@@ -1547,6 +1547,9 @@ static const struct hid_device_id hid_have_special_driver[] = {
1670 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7A_ANSI) },
1671 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7A_ISO) },
1672 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7A_JIS) },
1673 ++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI) },
1674 ++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_ISO) },
1675 ++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_JIS) },
1676 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ANSI) },
1677 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ISO) },
1678 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_JIS) },
1679 +@@ -2179,6 +2182,9 @@ static const struct hid_device_id hid_mouse_ignore_list[] = {
1680 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7A_ANSI) },
1681 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7A_ISO) },
1682 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7A_JIS) },
1683 ++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI) },
1684 ++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_ISO) },
1685 ++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING8_JIS) },
1686 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_FOUNTAIN_TP_ONLY) },
1687 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER1_TP_ONLY) },
1688 + { }
1689 +diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
1690 +index 38535c9..2168885 100644
1691 +--- a/drivers/hid/hid-ids.h
1692 ++++ b/drivers/hid/hid-ids.h
1693 +@@ -135,6 +135,9 @@
1694 + #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_JIS 0x023b
1695 + #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ANSI 0x0255
1696 + #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ISO 0x0256
1697 ++#define USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI 0x0291
1698 ++#define USB_DEVICE_ID_APPLE_WELLSPRING8_ISO 0x0292
1699 ++#define USB_DEVICE_ID_APPLE_WELLSPRING8_JIS 0x0293
1700 + #define USB_DEVICE_ID_APPLE_FOUNTAIN_TP_ONLY 0x030a
1701 + #define USB_DEVICE_ID_APPLE_GEYSER1_TP_ONLY 0x030b
1702 + #define USB_DEVICE_ID_APPLE_IRCONTROL 0x8240
1703 +diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
1704 +index d6fbb577..791f45d 100644
1705 +--- a/drivers/hv/ring_buffer.c
1706 ++++ b/drivers/hv/ring_buffer.c
1707 +@@ -32,7 +32,7 @@
1708 + void hv_begin_read(struct hv_ring_buffer_info *rbi)
1709 + {
1710 + rbi->ring_buffer->interrupt_mask = 1;
1711 +- smp_mb();
1712 ++ mb();
1713 + }
1714 +
1715 + u32 hv_end_read(struct hv_ring_buffer_info *rbi)
1716 +@@ -41,7 +41,7 @@ u32 hv_end_read(struct hv_ring_buffer_info *rbi)
1717 + u32 write;
1718 +
1719 + rbi->ring_buffer->interrupt_mask = 0;
1720 +- smp_mb();
1721 ++ mb();
1722 +
1723 + /*
1724 + * Now check to see if the ring buffer is still empty.
1725 +@@ -71,7 +71,7 @@ u32 hv_end_read(struct hv_ring_buffer_info *rbi)
1726 +
1727 + static bool hv_need_to_signal(u32 old_write, struct hv_ring_buffer_info *rbi)
1728 + {
1729 +- smp_mb();
1730 ++ mb();
1731 + if (rbi->ring_buffer->interrupt_mask)
1732 + return false;
1733 +
1734 +@@ -442,7 +442,7 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
1735 + sizeof(u64));
1736 +
1737 + /* Issue a full memory barrier before updating the write index */
1738 +- smp_mb();
1739 ++ mb();
1740 +
1741 + /* Now, update the write location */
1742 + hv_set_next_write_location(outring_info, next_write_location);
1743 +@@ -549,7 +549,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info, void *buffer,
1744 + /* Make sure all reads are done before we update the read index since */
1745 + /* the writer may start writing to the read area once the read index */
1746 + /*is updated */
1747 +- smp_mb();
1748 ++ mb();
1749 +
1750 + /* Update the read index */
1751 + hv_set_next_read_location(inring_info, next_read_location);
1752 +diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
1753 +index bf421e0..4004e54 100644
1754 +--- a/drivers/hv/vmbus_drv.c
1755 ++++ b/drivers/hv/vmbus_drv.c
1756 +@@ -434,7 +434,7 @@ static void vmbus_on_msg_dpc(unsigned long data)
1757 + * will not deliver any more messages since there is
1758 + * no empty slot
1759 + */
1760 +- smp_mb();
1761 ++ mb();
1762 +
1763 + if (msg->header.message_flags.msg_pending) {
1764 + /*
1765 +diff --git a/drivers/input/mouse/bcm5974.c b/drivers/input/mouse/bcm5974.c
1766 +index 2baff1b..4ef4d5e 100644
1767 +--- a/drivers/input/mouse/bcm5974.c
1768 ++++ b/drivers/input/mouse/bcm5974.c
1769 +@@ -88,6 +88,10 @@
1770 + #define USB_DEVICE_ID_APPLE_WELLSPRING7A_ANSI 0x0259
1771 + #define USB_DEVICE_ID_APPLE_WELLSPRING7A_ISO 0x025a
1772 + #define USB_DEVICE_ID_APPLE_WELLSPRING7A_JIS 0x025b
1773 ++/* MacbookAir6,2 (unibody, June 2013) */
1774 ++#define USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI 0x0291
1775 ++#define USB_DEVICE_ID_APPLE_WELLSPRING8_ISO 0x0292
1776 ++#define USB_DEVICE_ID_APPLE_WELLSPRING8_JIS 0x0293
1777 +
1778 + #define BCM5974_DEVICE(prod) { \
1779 + .match_flags = (USB_DEVICE_ID_MATCH_DEVICE | \
1780 +@@ -145,6 +149,10 @@ static const struct usb_device_id bcm5974_table[] = {
1781 + BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING7A_ANSI),
1782 + BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING7A_ISO),
1783 + BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING7A_JIS),
1784 ++ /* MacbookAir6,2 */
1785 ++ BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI),
1786 ++ BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING8_ISO),
1787 ++ BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING8_JIS),
1788 + /* Terminating entry */
1789 + {}
1790 + };
1791 +@@ -172,15 +180,18 @@ struct bt_data {
1792 + /* trackpad header types */
1793 + enum tp_type {
1794 + TYPE1, /* plain trackpad */
1795 +- TYPE2 /* button integrated in trackpad */
1796 ++ TYPE2, /* button integrated in trackpad */
1797 ++ TYPE3 /* additional header fields since June 2013 */
1798 + };
1799 +
1800 + /* trackpad finger data offsets, le16-aligned */
1801 + #define FINGER_TYPE1 (13 * sizeof(__le16))
1802 + #define FINGER_TYPE2 (15 * sizeof(__le16))
1803 ++#define FINGER_TYPE3 (19 * sizeof(__le16))
1804 +
1805 + /* trackpad button data offsets */
1806 + #define BUTTON_TYPE2 15
1807 ++#define BUTTON_TYPE3 23
1808 +
1809 + /* list of device capability bits */
1810 + #define HAS_INTEGRATED_BUTTON 1
1811 +@@ -400,6 +411,19 @@ static const struct bcm5974_config bcm5974_config_table[] = {
1812 + { SN_COORD, -150, 6730 },
1813 + { SN_ORIENT, -MAX_FINGER_ORIENTATION, MAX_FINGER_ORIENTATION }
1814 + },
1815 ++ {
1816 ++ USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI,
1817 ++ USB_DEVICE_ID_APPLE_WELLSPRING8_ISO,
1818 ++ USB_DEVICE_ID_APPLE_WELLSPRING8_JIS,
1819 ++ HAS_INTEGRATED_BUTTON,
1820 ++ 0, sizeof(struct bt_data),
1821 ++ 0x83, TYPE3, FINGER_TYPE3, FINGER_TYPE3 + SIZEOF_ALL_FINGERS,
1822 ++ { SN_PRESSURE, 0, 300 },
1823 ++ { SN_WIDTH, 0, 2048 },
1824 ++ { SN_COORD, -4620, 5140 },
1825 ++ { SN_COORD, -150, 6600 },
1826 ++ { SN_ORIENT, -MAX_FINGER_ORIENTATION, MAX_FINGER_ORIENTATION }
1827 ++ },
1828 + {}
1829 + };
1830 +
1831 +@@ -557,6 +581,9 @@ static int report_tp_state(struct bcm5974 *dev, int size)
1832 + input_report_key(input, BTN_LEFT, ibt);
1833 + }
1834 +
1835 ++ if (c->tp_type == TYPE3)
1836 ++ input_report_key(input, BTN_LEFT, dev->tp_data[BUTTON_TYPE3]);
1837 ++
1838 + input_sync(input);
1839 +
1840 + return 0;
1841 +@@ -572,9 +599,14 @@ static int report_tp_state(struct bcm5974 *dev, int size)
1842 +
1843 + static int bcm5974_wellspring_mode(struct bcm5974 *dev, bool on)
1844 + {
1845 +- char *data = kmalloc(8, GFP_KERNEL);
1846 + int retval = 0, size;
1847 ++ char *data;
1848 ++
1849 ++ /* Type 3 does not require a mode switch */
1850 ++ if (dev->cfg.tp_type == TYPE3)
1851 ++ return 0;
1852 +
1853 ++ data = kmalloc(8, GFP_KERNEL);
1854 + if (!data) {
1855 + dev_err(&dev->intf->dev, "out of memory\n");
1856 + retval = -ENOMEM;
1857 +diff --git a/drivers/net/wireless/iwlwifi/pcie/tx.c b/drivers/net/wireless/iwlwifi/pcie/tx.c
1858 +index c5e3029..48acfc6 100644
1859 +--- a/drivers/net/wireless/iwlwifi/pcie/tx.c
1860 ++++ b/drivers/net/wireless/iwlwifi/pcie/tx.c
1861 +@@ -576,10 +576,16 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
1862 +
1863 + spin_lock_bh(&txq->lock);
1864 + while (q->write_ptr != q->read_ptr) {
1865 ++ IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n",
1866 ++ txq_id, q->read_ptr);
1867 + iwl_pcie_txq_free_tfd(trans, txq);
1868 + q->read_ptr = iwl_queue_inc_wrap(q->read_ptr, q->n_bd);
1869 + }
1870 ++ txq->active = false;
1871 + spin_unlock_bh(&txq->lock);
1872 ++
1873 ++ /* just in case - this queue may have been stopped */
1874 ++ iwl_wake_queue(trans, txq);
1875 + }
1876 +
1877 + /*
1878 +@@ -927,6 +933,12 @@ void iwl_trans_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
1879 +
1880 + spin_lock_bh(&txq->lock);
1881 +
1882 ++ if (!txq->active) {
1883 ++ IWL_DEBUG_TX_QUEUES(trans, "Q %d inactive - ignoring idx %d\n",
1884 ++ txq_id, ssn);
1885 ++ goto out;
1886 ++ }
1887 ++
1888 + if (txq->q.read_ptr == tfd_num)
1889 + goto out;
1890 +
1891 +@@ -1103,6 +1115,7 @@ void iwl_trans_pcie_txq_enable(struct iwl_trans *trans, int txq_id, int fifo,
1892 + (fifo << SCD_QUEUE_STTS_REG_POS_TXF) |
1893 + (1 << SCD_QUEUE_STTS_REG_POS_WSL) |
1894 + SCD_QUEUE_STTS_REG_MSK);
1895 ++ trans_pcie->txq[txq_id].active = true;
1896 + IWL_DEBUG_TX_QUEUES(trans, "Activate queue %d on FIFO %d WrPtr: %d\n",
1897 + txq_id, fifo, ssn & 0xff);
1898 + }
1899 +diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/rf.c b/drivers/net/wireless/rtlwifi/rtl8192cu/rf.c
1900 +index 953f1a0..2119313 100644
1901 +--- a/drivers/net/wireless/rtlwifi/rtl8192cu/rf.c
1902 ++++ b/drivers/net/wireless/rtlwifi/rtl8192cu/rf.c
1903 +@@ -104,7 +104,7 @@ void rtl92cu_phy_rf6052_set_cck_txpower(struct ieee80211_hw *hw,
1904 + tx_agc[RF90_PATH_A] = 0x10101010;
1905 + tx_agc[RF90_PATH_B] = 0x10101010;
1906 + } else if (rtlpriv->dm.dynamic_txhighpower_lvl ==
1907 +- TXHIGHPWRLEVEL_LEVEL1) {
1908 ++ TXHIGHPWRLEVEL_LEVEL2) {
1909 + tx_agc[RF90_PATH_A] = 0x00000000;
1910 + tx_agc[RF90_PATH_B] = 0x00000000;
1911 + } else{
1912 +diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
1913 +index 826f085..2bd5985 100644
1914 +--- a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
1915 ++++ b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
1916 +@@ -359,6 +359,7 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
1917 + {RTL_USB_DEVICE(0x2001, 0x330a, rtl92cu_hal_cfg)}, /*D-Link-Alpha*/
1918 + {RTL_USB_DEVICE(0x2019, 0xab2b, rtl92cu_hal_cfg)}, /*Planex -Abocom*/
1919 + {RTL_USB_DEVICE(0x20f4, 0x624d, rtl92cu_hal_cfg)}, /*TRENDNet*/
1920 ++ {RTL_USB_DEVICE(0x2357, 0x0100, rtl92cu_hal_cfg)}, /*TP-Link WN8200ND*/
1921 + {RTL_USB_DEVICE(0x7392, 0x7822, rtl92cu_hal_cfg)}, /*Edimax -Edimax*/
1922 + {}
1923 + };
1924 +diff --git a/drivers/net/wireless/rtlwifi/rtl8723ae/sw.c b/drivers/net/wireless/rtlwifi/rtl8723ae/sw.c
1925 +index e4c4cdc..d9ee2ef 100644
1926 +--- a/drivers/net/wireless/rtlwifi/rtl8723ae/sw.c
1927 ++++ b/drivers/net/wireless/rtlwifi/rtl8723ae/sw.c
1928 +@@ -251,7 +251,7 @@ static struct rtl_hal_cfg rtl8723ae_hal_cfg = {
1929 + .bar_id = 2,
1930 + .write_readback = true,
1931 + .name = "rtl8723ae_pci",
1932 +- .fw_name = "rtlwifi/rtl8723aefw.bin",
1933 ++ .fw_name = "rtlwifi/rtl8723fw.bin",
1934 + .ops = &rtl8723ae_hal_ops,
1935 + .mod_params = &rtl8723ae_mod_params,
1936 + .maps[SYS_ISO_CTRL] = REG_SYS_ISO_CTRL,
1937 +@@ -353,8 +353,8 @@ MODULE_AUTHOR("Realtek WlanFAE <wlanfae@×××××××.com>");
1938 + MODULE_AUTHOR("Larry Finger <Larry.Finger@××××××××.net>");
1939 + MODULE_LICENSE("GPL");
1940 + MODULE_DESCRIPTION("Realtek 8723E 802.11n PCI wireless");
1941 +-MODULE_FIRMWARE("rtlwifi/rtl8723aefw.bin");
1942 +-MODULE_FIRMWARE("rtlwifi/rtl8723aefw_B.bin");
1943 ++MODULE_FIRMWARE("rtlwifi/rtl8723fw.bin");
1944 ++MODULE_FIRMWARE("rtlwifi/rtl8723fw_B.bin");
1945 +
1946 + module_param_named(swenc, rtl8723ae_mod_params.sw_crypto, bool, 0444);
1947 + module_param_named(debug, rtl8723ae_mod_params.debug, int, 0444);
1948 +diff --git a/drivers/parisc/lba_pci.c b/drivers/parisc/lba_pci.c
1949 +index 1f05913..19f6f70 100644
1950 +--- a/drivers/parisc/lba_pci.c
1951 ++++ b/drivers/parisc/lba_pci.c
1952 +@@ -613,6 +613,54 @@ truncate_pat_collision(struct resource *root, struct resource *new)
1953 + return 0; /* truncation successful */
1954 + }
1955 +
1956 ++/*
1957 ++ * extend_lmmio_len: extend lmmio range to maximum length
1958 ++ *
1959 ++ * This is needed at least on C8000 systems to get the ATI FireGL card
1960 ++ * working. On other systems we will currently not extend the lmmio space.
1961 ++ */
1962 ++static unsigned long
1963 ++extend_lmmio_len(unsigned long start, unsigned long end, unsigned long lba_len)
1964 ++{
1965 ++ struct resource *tmp;
1966 ++
1967 ++ pr_debug("LMMIO mismatch: PAT length = 0x%lx, MASK register = 0x%lx\n",
1968 ++ end - start, lba_len);
1969 ++
1970 ++ lba_len = min(lba_len+1, 256UL*1024*1024); /* limit to 256 MB */
1971 ++
1972 ++ pr_debug("LBA: lmmio_space [0x%lx-0x%lx] - original\n", start, end);
1973 ++
1974 ++ if (boot_cpu_data.cpu_type < mako) {
1975 ++ pr_info("LBA: Not a C8000 system - not extending LMMIO range.\n");
1976 ++ return end;
1977 ++ }
1978 ++
1979 ++ end += lba_len;
1980 ++ if (end < start) /* fix overflow */
1981 ++ end = -1ULL;
1982 ++
1983 ++ pr_debug("LBA: lmmio_space [0x%lx-0x%lx] - current\n", start, end);
1984 ++
1985 ++ /* first overlap */
1986 ++ for (tmp = iomem_resource.child; tmp; tmp = tmp->sibling) {
1987 ++ pr_debug("LBA: testing %pR\n", tmp);
1988 ++ if (tmp->start == start)
1989 ++ continue; /* ignore ourself */
1990 ++ if (tmp->end < start)
1991 ++ continue;
1992 ++ if (tmp->start > end)
1993 ++ continue;
1994 ++ if (end >= tmp->start)
1995 ++ end = tmp->start - 1;
1996 ++ }
1997 ++
1998 ++ pr_info("LBA: lmmio_space [0x%lx-0x%lx] - new\n", start, end);
1999 ++
2000 ++ /* return new end */
2001 ++ return end;
2002 ++}
2003 ++
2004 + #else
2005 + #define truncate_pat_collision(r,n) (0)
2006 + #endif
2007 +@@ -994,6 +1042,14 @@ lba_pat_resources(struct parisc_device *pa_dev, struct lba_device *lba_dev)
2008 + case PAT_LMMIO:
2009 + /* used to fix up pre-initialized MEM BARs */
2010 + if (!lba_dev->hba.lmmio_space.flags) {
2011 ++ unsigned long lba_len;
2012 ++
2013 ++ lba_len = ~READ_REG32(lba_dev->hba.base_addr
2014 ++ + LBA_LMMIO_MASK);
2015 ++ if ((p->end - p->start) != lba_len)
2016 ++ p->end = extend_lmmio_len(p->start,
2017 ++ p->end, lba_len);
2018 ++
2019 + sprintf(lba_dev->hba.lmmio_name,
2020 + "PCI%02x LMMIO",
2021 + (int)lba_dev->hba.bus_num.start);
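
Note on the lba_pci hunks above: extend_lmmio_len() widens the LMMIO window and then walks iomem_resource's children, pulling the new end back so it stops just before any neighbouring range. The clipping step on its own, over a plain array, as a standalone sketch; the struct, field names and addresses are made up for illustration.

#include <stdio.h>

struct range { unsigned long start, end; };

/* Clip 'end' so [start, end] does not overlap any entry in 'used',
 * ignoring the entry that begins at 'start' (ourself). */
static unsigned long clip_end(unsigned long start, unsigned long end,
                              const struct range *used, int n)
{
    for (int i = 0; i < n; i++) {
        if (used[i].start == start)   /* ignore ourself */
            continue;
        if (used[i].end < start || used[i].start > end)
            continue;                 /* no overlap */
        if (end >= used[i].start)
            end = used[i].start - 1;  /* stop just before the neighbour */
    }
    return end;
}

int main(void)
{
    struct range used[] = { { 0xf0000000, 0xf7ffffff },
                            { 0xfc000000, 0xfcffffff } };
    unsigned long end = clip_end(0xf0000000, 0xffffffff, used, 2);
    printf("clipped end: 0x%lx\n", end);   /* 0xfbffffff */
    return 0;
}
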
2022 +diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
2023 +index c93071d..a971a6f 100644
2024 +--- a/drivers/pci/iov.c
2025 ++++ b/drivers/pci/iov.c
2026 +@@ -92,6 +92,8 @@ static int virtfn_add(struct pci_dev *dev, int id, int reset)
2027 + pci_read_config_word(dev, iov->pos + PCI_SRIOV_VF_DID, &virtfn->device);
2028 + pci_setup_device(virtfn);
2029 + virtfn->dev.parent = dev->dev.parent;
2030 ++ virtfn->physfn = pci_dev_get(dev);
2031 ++ virtfn->is_virtfn = 1;
2032 +
2033 + for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
2034 + res = dev->resource + PCI_IOV_RESOURCES + i;
2035 +@@ -113,9 +115,6 @@ static int virtfn_add(struct pci_dev *dev, int id, int reset)
2036 + pci_device_add(virtfn, virtfn->bus);
2037 + mutex_unlock(&iov->dev->sriov->lock);
2038 +
2039 +- virtfn->physfn = pci_dev_get(dev);
2040 +- virtfn->is_virtfn = 1;
2041 +-
2042 + rc = pci_bus_add_device(virtfn);
2043 + sprintf(buf, "virtfn%u", id);
2044 + rc = sysfs_create_link(&dev->dev.kobj, &virtfn->dev.kobj, buf);
2045 +diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
2046 +index 70f10fa..ea37072 100644
2047 +--- a/drivers/pci/probe.c
2048 ++++ b/drivers/pci/probe.c
2049 +@@ -1703,12 +1703,16 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
2050 + bridge->dev.release = pci_release_bus_bridge_dev;
2051 + dev_set_name(&bridge->dev, "pci%04x:%02x", pci_domain_nr(b), bus);
2052 + error = pcibios_root_bridge_prepare(bridge);
2053 +- if (error)
2054 +- goto bridge_dev_reg_err;
2055 ++ if (error) {
2056 ++ kfree(bridge);
2057 ++ goto err_out;
2058 ++ }
2059 +
2060 + error = device_register(&bridge->dev);
2061 +- if (error)
2062 +- goto bridge_dev_reg_err;
2063 ++ if (error) {
2064 ++ put_device(&bridge->dev);
2065 ++ goto err_out;
2066 ++ }
2067 + b->bridge = get_device(&bridge->dev);
2068 + device_enable_async_suspend(b->bridge);
2069 + pci_set_bus_of_node(b);
2070 +@@ -1764,8 +1768,6 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
2071 + class_dev_reg_err:
2072 + put_device(&bridge->dev);
2073 + device_unregister(&bridge->dev);
2074 +-bridge_dev_reg_err:
2075 +- kfree(bridge);
2076 + err_out:
2077 + kfree(b);
2078 + return NULL;
2079 +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
2080 +index 7d68aee..df4655c 100644
2081 +--- a/drivers/pci/quirks.c
2082 ++++ b/drivers/pci/quirks.c
2083 +@@ -1022,6 +1022,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_IXP700_SATA, quirk
2084 + DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_IXP700_SATA, quirk_amd_ide_mode);
2085 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_HUDSON2_SATA_IDE, quirk_amd_ide_mode);
2086 + DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_HUDSON2_SATA_IDE, quirk_amd_ide_mode);
2087 ++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, 0x7900, quirk_amd_ide_mode);
2088 ++DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_AMD, 0x7900, quirk_amd_ide_mode);
2089 +
2090 + /*
2091 + * Serverworks CSB5 IDE does not fully support native mode
2092 +diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
2093 +index 966abc6..f7197a7 100644
2094 +--- a/drivers/pci/xen-pcifront.c
2095 ++++ b/drivers/pci/xen-pcifront.c
2096 +@@ -678,10 +678,9 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
2097 + if (!pcifront_dev) {
2098 + dev_info(&pdev->xdev->dev, "Installing PCI frontend\n");
2099 + pcifront_dev = pdev;
2100 +- } else {
2101 +- dev_err(&pdev->xdev->dev, "PCI frontend already installed!\n");
2102 ++ } else
2103 + err = -EEXIST;
2104 +- }
2105 ++
2106 + spin_unlock(&pcifront_dev_lock);
2107 +
2108 + if (!err && !swiotlb_nr_tbl()) {
2109 +@@ -848,7 +847,7 @@ static int pcifront_try_connect(struct pcifront_device *pdev)
2110 + goto out;
2111 +
2112 + err = pcifront_connect_and_init_dma(pdev);
2113 +- if (err) {
2114 ++ if (err && err != -EEXIST) {
2115 + xenbus_dev_fatal(pdev->xdev, err,
2116 + "Error setting up PCI Frontend");
2117 + goto out;
2118 +diff --git a/drivers/pcmcia/at91_cf.c b/drivers/pcmcia/at91_cf.c
2119 +index 01463c7..1b2c631 100644
2120 +--- a/drivers/pcmcia/at91_cf.c
2121 ++++ b/drivers/pcmcia/at91_cf.c
2122 +@@ -100,9 +100,9 @@ static int at91_cf_get_status(struct pcmcia_socket *s, u_int *sp)
2123 + int vcc = gpio_is_valid(cf->board->vcc_pin);
2124 +
2125 + *sp = SS_DETECT | SS_3VCARD;
2126 +- if (!rdy || gpio_get_value(rdy))
2127 ++ if (!rdy || gpio_get_value(cf->board->irq_pin))
2128 + *sp |= SS_READY;
2129 +- if (!vcc || gpio_get_value(vcc))
2130 ++ if (!vcc || gpio_get_value(cf->board->vcc_pin))
2131 + *sp |= SS_POWERON;
2132 + } else
2133 + *sp = 0;
2134 +diff --git a/drivers/rtc/rtc-rv3029c2.c b/drivers/rtc/rtc-rv3029c2.c
2135 +index 5032c24..9100a34 100644
2136 +--- a/drivers/rtc/rtc-rv3029c2.c
2137 ++++ b/drivers/rtc/rtc-rv3029c2.c
2138 +@@ -310,7 +310,7 @@ static int rv3029c2_rtc_i2c_set_alarm(struct i2c_client *client,
2139 + dev_dbg(&client->dev, "alarm IRQ armed\n");
2140 + } else {
2141 + /* disable AIE irq */
2142 +- ret = rv3029c2_rtc_i2c_alarm_set_irq(client, 1);
2143 ++ ret = rv3029c2_rtc_i2c_alarm_set_irq(client, 0);
2144 + if (ret)
2145 + return ret;
2146 +
2147 +diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
2148 +index 21a7e17..572d481 100644
2149 +--- a/drivers/tty/serial/pch_uart.c
2150 ++++ b/drivers/tty/serial/pch_uart.c
2151 +@@ -217,6 +217,7 @@ enum {
2152 + #define FRI2_64_UARTCLK 64000000 /* 64.0000 MHz */
2153 + #define FRI2_48_UARTCLK 48000000 /* 48.0000 MHz */
2154 + #define NTC1_UARTCLK 64000000 /* 64.0000 MHz */
2155 ++#define MINNOW_UARTCLK 50000000 /* 50.0000 MHz */
2156 +
2157 + struct pch_uart_buffer {
2158 + unsigned char *buf;
2159 +@@ -398,6 +399,10 @@ static int pch_uart_get_uartclk(void)
2160 + strstr(cmp, "nanoETXexpress-TT")))
2161 + return NTC1_UARTCLK;
2162 +
2163 ++ cmp = dmi_get_system_info(DMI_BOARD_NAME);
2164 ++ if (cmp && strstr(cmp, "MinnowBoard"))
2165 ++ return MINNOW_UARTCLK;
2166 ++
2167 + return DEFAULT_UARTCLK;
2168 + }
2169 +
2170 +diff --git a/drivers/usb/gadget/f_mass_storage.c b/drivers/usb/gadget/f_mass_storage.c
2171 +index 97666e8..c35a9ec 100644
2172 +--- a/drivers/usb/gadget/f_mass_storage.c
2173 ++++ b/drivers/usb/gadget/f_mass_storage.c
2174 +@@ -413,6 +413,7 @@ static int fsg_set_halt(struct fsg_dev *fsg, struct usb_ep *ep)
2175 + /* Caller must hold fsg->lock */
2176 + static void wakeup_thread(struct fsg_common *common)
2177 + {
2178 ++ smp_wmb(); /* ensure the write of bh->state is complete */
2179 + /* Tell the main thread that something has happened */
2180 + common->thread_wakeup_needed = 1;
2181 + if (common->thread_task)
2182 +@@ -632,6 +633,7 @@ static int sleep_thread(struct fsg_common *common)
2183 + }
2184 + __set_current_state(TASK_RUNNING);
2185 + common->thread_wakeup_needed = 0;
2186 ++ smp_rmb(); /* ensure the latest bh->state is visible */
2187 + return rc;
2188 + }
2189 +
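
Note on the two mass-storage hunks above: they pair an smp_wmb() in wakeup_thread() with an smp_rmb() in sleep_thread(), so the bh->state written before the wakeup flag is guaranteed to be visible once the flag is seen. The same publish/observe pairing expressed with C11 acquire/release atomics, as a standalone sketch rather than the gadget code; compile with -pthread.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;                     /* plain data, like bh->state */
static atomic_int wakeup_needed;        /* the flag the thread polls */

static void *consumer(void *arg)
{
    (void)arg;
    /* Wait for the flag with acquire semantics (the smp_rmb() side). */
    while (!atomic_load_explicit(&wakeup_needed, memory_order_acquire))
        ;
    printf("payload = %d\n", payload);  /* guaranteed to see 42 */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, consumer, NULL);

    payload = 42;                       /* write the data first */
    /* Publish with release semantics (the smp_wmb() side). */
    atomic_store_explicit(&wakeup_needed, 1, memory_order_release);

    pthread_join(t, NULL);
    return 0;
}
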
2190 +diff --git a/drivers/usb/host/ehci-omap.c b/drivers/usb/host/ehci-omap.c
2191 +index 16d7150..dda408f 100644
2192 +--- a/drivers/usb/host/ehci-omap.c
2193 ++++ b/drivers/usb/host/ehci-omap.c
2194 +@@ -187,6 +187,12 @@ static int ehci_hcd_omap_probe(struct platform_device *pdev)
2195 + }
2196 +
2197 + omap->phy[i] = phy;
2198 ++
2199 ++ if (pdata->port_mode[i] == OMAP_EHCI_PORT_MODE_PHY) {
2200 ++ usb_phy_init(omap->phy[i]);
2201 ++ /* bring PHY out of suspend */
2202 ++ usb_phy_set_suspend(omap->phy[i], 0);
2203 ++ }
2204 + }
2205 +
2206 + pm_runtime_enable(dev);
2207 +@@ -211,13 +217,14 @@ static int ehci_hcd_omap_probe(struct platform_device *pdev)
2208 + }
2209 +
2210 + /*
2211 +- * Bring PHYs out of reset.
2212 ++ * Bring PHYs out of reset for non PHY modes.
2213 + * Even though HSIC mode is a PHY-less mode, the reset
2214 + * line exists between the chips and can be modelled
2215 + * as a PHY device for reset control.
2216 + */
2217 + for (i = 0; i < omap->nports; i++) {
2218 +- if (!omap->phy[i])
2219 ++ if (!omap->phy[i] ||
2220 ++ pdata->port_mode[i] == OMAP_EHCI_PORT_MODE_PHY)
2221 + continue;
2222 +
2223 + usb_phy_init(omap->phy[i]);
2224 +diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
2225 +index fbf75e5..f2e57a1 100644
2226 +--- a/drivers/usb/host/xhci-mem.c
2227 ++++ b/drivers/usb/host/xhci-mem.c
2228 +@@ -369,6 +369,10 @@ static struct xhci_container_ctx *xhci_alloc_container_ctx(struct xhci_hcd *xhci
2229 + ctx->size += CTX_SIZE(xhci->hcc_params);
2230 +
2231 + ctx->bytes = dma_pool_alloc(xhci->device_pool, flags, &ctx->dma);
2232 ++ if (!ctx->bytes) {
2233 ++ kfree(ctx);
2234 ++ return NULL;
2235 ++ }
2236 + memset(ctx->bytes, 0, ctx->size);
2237 + return ctx;
2238 + }
2239 +diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
2240 +index df90fe5..93ad67e 100644
2241 +--- a/drivers/usb/host/xhci-plat.c
2242 ++++ b/drivers/usb/host/xhci-plat.c
2243 +@@ -179,6 +179,7 @@ static int xhci_plat_remove(struct platform_device *dev)
2244 +
2245 + usb_remove_hcd(hcd);
2246 + iounmap(hcd->regs);
2247 ++ release_mem_region(hcd->rsrc_start, hcd->rsrc_len);
2248 + usb_put_hcd(hcd);
2249 + kfree(xhci);
2250 +
2251 +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
2252 +index bd4323d..5dd857d 100644
2253 +--- a/drivers/usb/serial/option.c
2254 ++++ b/drivers/usb/serial/option.c
2255 +@@ -159,8 +159,6 @@ static void option_instat_callback(struct urb *urb);
2256 + #define NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_FULLSPEED 0x9000
2257 + #define NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_HIGHSPEED 0x9001
2258 + #define NOVATELWIRELESS_PRODUCT_E362 0x9010
2259 +-#define NOVATELWIRELESS_PRODUCT_G1 0xA001
2260 +-#define NOVATELWIRELESS_PRODUCT_G1_M 0xA002
2261 + #define NOVATELWIRELESS_PRODUCT_G2 0xA010
2262 + #define NOVATELWIRELESS_PRODUCT_MC551 0xB001
2263 +
2264 +@@ -730,8 +728,6 @@ static const struct usb_device_id option_ids[] = {
2265 + { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_MC547) },
2266 + { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_EVDO_EMBEDDED_HIGHSPEED) },
2267 + { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_HIGHSPEED) },
2268 +- { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G1) },
2269 +- { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G1_M) },
2270 + { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G2) },
2271 + /* Novatel Ovation MC551 a.k.a. Verizon USB551L */
2272 + { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_MC551, 0xff, 0xff, 0xff) },
2273 +diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
2274 +index bd794b4..c65437c 100644
2275 +--- a/drivers/usb/serial/qcserial.c
2276 ++++ b/drivers/usb/serial/qcserial.c
2277 +@@ -35,7 +35,13 @@ static const struct usb_device_id id_table[] = {
2278 + {DEVICE_G1K(0x04da, 0x250c)}, /* Panasonic Gobi QDL device */
2279 + {DEVICE_G1K(0x413c, 0x8172)}, /* Dell Gobi Modem device */
2280 + {DEVICE_G1K(0x413c, 0x8171)}, /* Dell Gobi QDL device */
2281 +- {DEVICE_G1K(0x1410, 0xa001)}, /* Novatel Gobi Modem device */
2282 ++ {DEVICE_G1K(0x1410, 0xa001)}, /* Novatel/Verizon USB-1000 */
2283 ++ {DEVICE_G1K(0x1410, 0xa002)}, /* Novatel Gobi Modem device */
2284 ++ {DEVICE_G1K(0x1410, 0xa003)}, /* Novatel Gobi Modem device */
2285 ++ {DEVICE_G1K(0x1410, 0xa004)}, /* Novatel Gobi Modem device */
2286 ++ {DEVICE_G1K(0x1410, 0xa005)}, /* Novatel Gobi Modem device */
2287 ++ {DEVICE_G1K(0x1410, 0xa006)}, /* Novatel Gobi Modem device */
2288 ++ {DEVICE_G1K(0x1410, 0xa007)}, /* Novatel Gobi Modem device */
2289 + {DEVICE_G1K(0x1410, 0xa008)}, /* Novatel Gobi QDL device */
2290 + {DEVICE_G1K(0x0b05, 0x1776)}, /* Asus Gobi Modem device */
2291 + {DEVICE_G1K(0x0b05, 0x1774)}, /* Asus Gobi QDL device */
2292 +diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
2293 +index 02fae7f..7fb054b 100644
2294 +--- a/fs/btrfs/ctree.c
2295 ++++ b/fs/btrfs/ctree.c
2296 +@@ -1089,7 +1089,8 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
2297 + btrfs_set_node_ptr_generation(parent, parent_slot,
2298 + trans->transid);
2299 + btrfs_mark_buffer_dirty(parent);
2300 +- tree_mod_log_free_eb(root->fs_info, buf);
2301 ++ if (last_ref)
2302 ++ tree_mod_log_free_eb(root->fs_info, buf);
2303 + btrfs_free_tree_block(trans, root, buf, parent_start,
2304 + last_ref);
2305 + }
2306 +@@ -1161,8 +1162,8 @@ __tree_mod_log_oldest_root(struct btrfs_fs_info *fs_info,
2307 + * time_seq).
2308 + */
2309 + static void
2310 +-__tree_mod_log_rewind(struct extent_buffer *eb, u64 time_seq,
2311 +- struct tree_mod_elem *first_tm)
2312 ++__tree_mod_log_rewind(struct btrfs_fs_info *fs_info, struct extent_buffer *eb,
2313 ++ u64 time_seq, struct tree_mod_elem *first_tm)
2314 + {
2315 + u32 n;
2316 + struct rb_node *next;
2317 +@@ -1172,6 +1173,7 @@ __tree_mod_log_rewind(struct extent_buffer *eb, u64 time_seq,
2318 + unsigned long p_size = sizeof(struct btrfs_key_ptr);
2319 +
2320 + n = btrfs_header_nritems(eb);
2321 ++ tree_mod_log_read_lock(fs_info);
2322 + while (tm && tm->seq >= time_seq) {
2323 + /*
2324 + * all the operations are recorded with the operator used for
2325 +@@ -1226,6 +1228,7 @@ __tree_mod_log_rewind(struct extent_buffer *eb, u64 time_seq,
2326 + if (tm->index != first_tm->index)
2327 + break;
2328 + }
2329 ++ tree_mod_log_read_unlock(fs_info);
2330 + btrfs_set_header_nritems(eb, n);
2331 + }
2332 +
2333 +@@ -1274,7 +1277,7 @@ tree_mod_log_rewind(struct btrfs_fs_info *fs_info, struct extent_buffer *eb,
2334 +
2335 + extent_buffer_get(eb_rewin);
2336 + btrfs_tree_read_lock(eb_rewin);
2337 +- __tree_mod_log_rewind(eb_rewin, time_seq, tm);
2338 ++ __tree_mod_log_rewind(fs_info, eb_rewin, time_seq, tm);
2339 + WARN_ON(btrfs_header_nritems(eb_rewin) >
2340 + BTRFS_NODEPTRS_PER_BLOCK(fs_info->tree_root));
2341 +
2342 +@@ -1350,7 +1353,7 @@ get_old_root(struct btrfs_root *root, u64 time_seq)
2343 + btrfs_set_header_generation(eb, old_generation);
2344 + }
2345 + if (tm)
2346 +- __tree_mod_log_rewind(eb, time_seq, tm);
2347 ++ __tree_mod_log_rewind(root->fs_info, eb, time_seq, tm);
2348 + else
2349 + WARN_ON(btrfs_header_level(eb) != 0);
2350 + WARN_ON(btrfs_header_nritems(eb) > BTRFS_NODEPTRS_PER_BLOCK(root));
2351 +diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
2352 +index ff40f1c..ba9690b 100644
2353 +--- a/fs/btrfs/send.c
2354 ++++ b/fs/btrfs/send.c
2355 +@@ -4579,6 +4579,41 @@ long btrfs_ioctl_send(struct file *mnt_file, void __user *arg_)
2356 + send_root = BTRFS_I(file_inode(mnt_file))->root;
2357 + fs_info = send_root->fs_info;
2358 +
2359 ++ /*
2360 ++ * This is done when we lookup the root, it should already be complete
2361 ++ * by the time we get here.
2362 ++ */
2363 ++ WARN_ON(send_root->orphan_cleanup_state != ORPHAN_CLEANUP_DONE);
2364 ++
2365 ++ /*
2366 ++ * If we just created this root we need to make sure that the orphan
2367 ++ * cleanup has been done and committed since we search the commit root,
2368 ++ * so check its commit root transid with our otransid and if they match
2369 ++ * commit the transaction to make sure everything is updated.
2370 ++ */
2371 ++ down_read(&send_root->fs_info->extent_commit_sem);
2372 ++ if (btrfs_header_generation(send_root->commit_root) ==
2373 ++ btrfs_root_otransid(&send_root->root_item)) {
2374 ++ struct btrfs_trans_handle *trans;
2375 ++
2376 ++ up_read(&send_root->fs_info->extent_commit_sem);
2377 ++
2378 ++ trans = btrfs_attach_transaction_barrier(send_root);
2379 ++ if (IS_ERR(trans)) {
2380 ++ if (PTR_ERR(trans) != -ENOENT) {
2381 ++ ret = PTR_ERR(trans);
2382 ++ goto out;
2383 ++ }
2384 ++ /* ENOENT means there's no transaction */
2385 ++ } else {
2386 ++ ret = btrfs_commit_transaction(trans, send_root);
2387 ++ if (ret)
2388 ++ goto out;
2389 ++ }
2390 ++ } else {
2391 ++ up_read(&send_root->fs_info->extent_commit_sem);
2392 ++ }
2393 ++
2394 + arg = memdup_user(arg_, sizeof(*arg));
2395 + if (IS_ERR(arg)) {
2396 + ret = PTR_ERR(arg);
2397 +diff --git a/fs/cifs/cifs_unicode.h b/fs/cifs/cifs_unicode.h
2398 +index 4fb0974..fe8d627 100644
2399 +--- a/fs/cifs/cifs_unicode.h
2400 ++++ b/fs/cifs/cifs_unicode.h
2401 +@@ -327,14 +327,14 @@ UniToupper(register wchar_t uc)
2402 + /*
2403 + * UniStrupr: Upper case a unicode string
2404 + */
2405 +-static inline wchar_t *
2406 +-UniStrupr(register wchar_t *upin)
2407 ++static inline __le16 *
2408 ++UniStrupr(register __le16 *upin)
2409 + {
2410 +- register wchar_t *up;
2411 ++ register __le16 *up;
2412 +
2413 + up = upin;
2414 + while (*up) { /* For all characters */
2415 +- *up = UniToupper(*up);
2416 ++ *up = cpu_to_le16(UniToupper(le16_to_cpu(*up)));
2417 + up++;
2418 + }
2419 + return upin; /* Return input pointer */
2420 +diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c
2421 +index 71436d1..f59d0d5 100644
2422 +--- a/fs/cifs/cifsencrypt.c
2423 ++++ b/fs/cifs/cifsencrypt.c
2424 +@@ -414,7 +414,7 @@ static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash,
2425 + int rc = 0;
2426 + int len;
2427 + char nt_hash[CIFS_NTHASH_SIZE];
2428 +- wchar_t *user;
2429 ++ __le16 *user;
2430 + wchar_t *domain;
2431 + wchar_t *server;
2432 +
2433 +@@ -439,7 +439,7 @@ static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash,
2434 + return rc;
2435 + }
2436 +
2437 +- /* convert ses->user_name to unicode and uppercase */
2438 ++ /* convert ses->user_name to unicode */
2439 + len = ses->user_name ? strlen(ses->user_name) : 0;
2440 + user = kmalloc(2 + (len * 2), GFP_KERNEL);
2441 + if (user == NULL) {
2442 +@@ -448,7 +448,7 @@ static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash,
2443 + }
2444 +
2445 + if (len) {
2446 +- len = cifs_strtoUTF16((__le16 *)user, ses->user_name, len, nls_cp);
2447 ++ len = cifs_strtoUTF16(user, ses->user_name, len, nls_cp);
2448 + UniStrupr(user);
2449 + } else {
2450 + memset(user, '\0', 2);
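
Note on the cifs hunks above: calc_ntlmv2_hash() now treats the converted user name as __le16 data, and UniStrupr() converts each character to CPU order, upper-cases it, and converts it back, instead of upper-casing raw little-endian values. A self-contained sketch of that round-trip; the helpers below are local stand-ins (toupper() in place of UniToupper()), not the kernel's byte-order API.

#include <ctype.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal little-endian <-> CPU helpers (local stand-ins, not kernel API). */
static uint16_t le16_to_cpu16(uint16_t v)
{
    const uint8_t *b = (const uint8_t *)&v;
    return (uint16_t)(b[0] | (b[1] << 8));
}
static uint16_t cpu_to_le16_(uint16_t v)
{
    uint16_t out;
    uint8_t *b = (uint8_t *)&out;
    b[0] = v & 0xff;
    b[1] = v >> 8;
    return out;
}

/* Upper-case a NUL-terminated array of little-endian 16-bit characters. */
static void le16_strupr(uint16_t *s)
{
    for (; *s; s++) {
        uint16_t c = le16_to_cpu16(*s);     /* to CPU order first */
        if (c < 128)
            c = (uint16_t)toupper(c);       /* stand-in for UniToupper() */
        *s = cpu_to_le16_(c);               /* back to on-the-wire order */
    }
}

int main(void)
{
    uint16_t name[] = { cpu_to_le16_('u'), cpu_to_le16_('s'),
                        cpu_to_le16_('e'), cpu_to_le16_('r'), 0 };
    le16_strupr(name);
    printf("%c%c%c%c\n", le16_to_cpu16(name[0]), le16_to_cpu16(name[1]),
           le16_to_cpu16(name[2]), le16_to_cpu16(name[3]));  /* USER */
    return 0;
}
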
2451 +diff --git a/fs/cifs/file.c b/fs/cifs/file.c
2452 +index 48b29d2..c2934f8 100644
2453 +--- a/fs/cifs/file.c
2454 ++++ b/fs/cifs/file.c
2455 +@@ -553,11 +553,10 @@ cifs_relock_file(struct cifsFileInfo *cfile)
2456 + struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);
2457 + int rc = 0;
2458 +
2459 +- /* we are going to update can_cache_brlcks here - need a write access */
2460 +- down_write(&cinode->lock_sem);
2461 ++ down_read(&cinode->lock_sem);
2462 + if (cinode->can_cache_brlcks) {
2463 +- /* can cache locks - no need to push them */
2464 +- up_write(&cinode->lock_sem);
2465 ++ /* can cache locks - no need to relock */
2466 ++ up_read(&cinode->lock_sem);
2467 + return rc;
2468 + }
2469 +
2470 +@@ -568,7 +567,7 @@ cifs_relock_file(struct cifsFileInfo *cfile)
2471 + else
2472 + rc = tcon->ses->server->ops->push_mand_locks(cfile);
2473 +
2474 +- up_write(&cinode->lock_sem);
2475 ++ up_read(&cinode->lock_sem);
2476 + return rc;
2477 + }
2478 +
2479 +diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
2480 +index 20efd81..449b6cf 100644
2481 +--- a/fs/cifs/inode.c
2482 ++++ b/fs/cifs/inode.c
2483 +@@ -558,6 +558,11 @@ cifs_all_info_to_fattr(struct cifs_fattr *fattr, FILE_ALL_INFO *info,
2484 + fattr->cf_mode &= ~(S_IWUGO);
2485 +
2486 + fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks);
2487 ++ if (fattr->cf_nlink < 1) {
2488 ++ cifs_dbg(1, "replacing bogus file nlink value %u\n",
2489 ++ fattr->cf_nlink);
2490 ++ fattr->cf_nlink = 1;
2491 ++ }
2492 + }
2493 +
2494 + fattr->cf_uid = cifs_sb->mnt_uid;
2495 +diff --git a/fs/ext3/namei.c b/fs/ext3/namei.c
2496 +index 692de13..cea8ecf 100644
2497 +--- a/fs/ext3/namei.c
2498 ++++ b/fs/ext3/namei.c
2499 +@@ -576,11 +576,8 @@ static int htree_dirblock_to_tree(struct file *dir_file,
2500 + if (!ext3_check_dir_entry("htree_dirblock_to_tree", dir, de, bh,
2501 + (block<<EXT3_BLOCK_SIZE_BITS(dir->i_sb))
2502 + +((char *)de - bh->b_data))) {
2503 +- /* On error, skip the f_pos to the next block. */
2504 +- dir_file->f_pos = (dir_file->f_pos |
2505 +- (dir->i_sb->s_blocksize - 1)) + 1;
2506 +- brelse (bh);
2507 +- return count;
2508 ++ /* silently ignore the rest of the block */
2509 ++ break;
2510 + }
2511 + ext3fs_dirhash(de->name, de->name_len, hinfo);
2512 + if ((hinfo->hash < start_hash) ||
2513 +diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
2514 +index d0f13ea..3742e4c 100644
2515 +--- a/fs/ext4/balloc.c
2516 ++++ b/fs/ext4/balloc.c
2517 +@@ -38,8 +38,8 @@ ext4_group_t ext4_get_group_number(struct super_block *sb,
2518 + ext4_group_t group;
2519 +
2520 + if (test_opt2(sb, STD_GROUP_SIZE))
2521 +- group = (le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block) +
2522 +- block) >>
2523 ++ group = (block -
2524 ++ le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) >>
2525 + (EXT4_BLOCK_SIZE_BITS(sb) + EXT4_CLUSTER_BITS(sb) + 3);
2526 + else
2527 + ext4_get_group_no_and_offset(sb, block, &group, NULL);
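
Note on the balloc hunk above: the group is now computed as (block - first_data_block) >> shift, where the old expression added the first data block instead; that goes wrong on layouts whose first data block is 1 (1 KiB block size). A worked example with illustrative numbers, taking blocks_per_group = 8192 so the shift is log2(8192) = 13.

#include <stdio.h>

int main(void)
{
    unsigned long long first_data_block = 1;   /* 1 KiB blocksize layout */
    unsigned long long block = 16384;          /* last block of group 1 */

    unsigned long long fixed = (block - first_data_block) >> 13;
    unsigned long long old   = (block + first_data_block) >> 13;

    printf("fixed formula: group %llu\n", fixed);  /* 1, correct */
    printf("old formula:   group %llu\n", old);    /* 2, off by one at the
                                                      group boundary */
    return 0;
}
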
2528 +diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
2529 +index bc0f191..e49da58 100644
2530 +--- a/fs/ext4/extents.c
2531 ++++ b/fs/ext4/extents.c
2532 +@@ -4659,7 +4659,7 @@ static int ext4_xattr_fiemap(struct inode *inode,
2533 + error = ext4_get_inode_loc(inode, &iloc);
2534 + if (error)
2535 + return error;
2536 +- physical = iloc.bh->b_blocknr << blockbits;
2537 ++ physical = (__u64)iloc.bh->b_blocknr << blockbits;
2538 + offset = EXT4_GOOD_OLD_INODE_SIZE +
2539 + EXT4_I(inode)->i_extra_isize;
2540 + physical += offset;
2541 +@@ -4667,7 +4667,7 @@ static int ext4_xattr_fiemap(struct inode *inode,
2542 + flags |= FIEMAP_EXTENT_DATA_INLINE;
2543 + brelse(iloc.bh);
2544 + } else { /* external block */
2545 +- physical = EXT4_I(inode)->i_file_acl << blockbits;
2546 ++ physical = (__u64)EXT4_I(inode)->i_file_acl << blockbits;
2547 + length = inode->i_sb->s_blocksize;
2548 + }
2549 +
2550 +diff --git a/fs/ext4/file.c b/fs/ext4/file.c
2551 +index b1b4d51..b19f0a4 100644
2552 +--- a/fs/ext4/file.c
2553 ++++ b/fs/ext4/file.c
2554 +@@ -312,7 +312,7 @@ static int ext4_find_unwritten_pgoff(struct inode *inode,
2555 + blkbits = inode->i_sb->s_blocksize_bits;
2556 + startoff = *offset;
2557 + lastoff = startoff;
2558 +- endoff = (map->m_lblk + map->m_len) << blkbits;
2559 ++ endoff = (loff_t)(map->m_lblk + map->m_len) << blkbits;
2560 +
2561 + index = startoff >> PAGE_CACHE_SHIFT;
2562 + end = endoff >> PAGE_CACHE_SHIFT;
2563 +@@ -457,7 +457,7 @@ static loff_t ext4_seek_data(struct file *file, loff_t offset, loff_t maxsize)
2564 + ret = ext4_map_blocks(NULL, inode, &map, 0);
2565 + if (ret > 0 && !(map.m_flags & EXT4_MAP_UNWRITTEN)) {
2566 + if (last != start)
2567 +- dataoff = last << blkbits;
2568 ++ dataoff = (loff_t)last << blkbits;
2569 + break;
2570 + }
2571 +
2572 +@@ -468,7 +468,7 @@ static loff_t ext4_seek_data(struct file *file, loff_t offset, loff_t maxsize)
2573 + ext4_es_find_delayed_extent_range(inode, last, last, &es);
2574 + if (es.es_len != 0 && in_range(last, es.es_lblk, es.es_len)) {
2575 + if (last != start)
2576 +- dataoff = last << blkbits;
2577 ++ dataoff = (loff_t)last << blkbits;
2578 + break;
2579 + }
2580 +
2581 +@@ -486,7 +486,7 @@ static loff_t ext4_seek_data(struct file *file, loff_t offset, loff_t maxsize)
2582 + }
2583 +
2584 + last++;
2585 +- dataoff = last << blkbits;
2586 ++ dataoff = (loff_t)last << blkbits;
2587 + } while (last <= end);
2588 +
2589 + mutex_unlock(&inode->i_mutex);
2590 +@@ -540,7 +540,7 @@ static loff_t ext4_seek_hole(struct file *file, loff_t offset, loff_t maxsize)
2591 + ret = ext4_map_blocks(NULL, inode, &map, 0);
2592 + if (ret > 0 && !(map.m_flags & EXT4_MAP_UNWRITTEN)) {
2593 + last += ret;
2594 +- holeoff = last << blkbits;
2595 ++ holeoff = (loff_t)last << blkbits;
2596 + continue;
2597 + }
2598 +
2599 +@@ -551,7 +551,7 @@ static loff_t ext4_seek_hole(struct file *file, loff_t offset, loff_t maxsize)
2600 + ext4_es_find_delayed_extent_range(inode, last, last, &es);
2601 + if (es.es_len != 0 && in_range(last, es.es_lblk, es.es_len)) {
2602 + last = es.es_lblk + es.es_len;
2603 +- holeoff = last << blkbits;
2604 ++ holeoff = (loff_t)last << blkbits;
2605 + continue;
2606 + }
2607 +
2608 +@@ -566,7 +566,7 @@ static loff_t ext4_seek_hole(struct file *file, loff_t offset, loff_t maxsize)
2609 + &map, &holeoff);
2610 + if (!unwritten) {
2611 + last += ret;
2612 +- holeoff = last << blkbits;
2613 ++ holeoff = (loff_t)last << blkbits;
2614 + continue;
2615 + }
2616 + }
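
Note on the ext4 hunks above: each one widens the block number to loff_t before shifting. When the left operand is a 32-bit type, the shift is evaluated in 32 bits and silently wraps once the byte offset passes 4 GiB. A short demonstration of the difference, with made-up numbers.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t block = 5000000;         /* block number whose byte offset > 4 GiB */
    unsigned blkbits = 12;            /* 4 KiB blocks */

    uint64_t wrong = block << blkbits;            /* shifted as 32-bit: wraps */
    uint64_t right = (uint64_t)block << blkbits;  /* widened first: correct  */

    printf("wrong: %llu\n", (unsigned long long)wrong);  /* 3300130816   */
    printf("right: %llu\n", (unsigned long long)right);  /* 20480000000  */
    return 0;
}
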
2617 +diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
2618 +index 3e2bf87..33331b4 100644
2619 +--- a/fs/ext4/inline.c
2620 ++++ b/fs/ext4/inline.c
2621 +@@ -1842,7 +1842,7 @@ int ext4_inline_data_fiemap(struct inode *inode,
2622 + if (error)
2623 + goto out;
2624 +
2625 +- physical = iloc.bh->b_blocknr << inode->i_sb->s_blocksize_bits;
2626 ++ physical = (__u64)iloc.bh->b_blocknr << inode->i_sb->s_blocksize_bits;
2627 + physical += (char *)ext4_raw_inode(&iloc) - iloc.bh->b_data;
2628 + physical += offsetof(struct ext4_inode, i_block);
2629 + length = i_size_read(inode);
2630 +diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
2631 +index d6382b8..ccbfbbb 100644
2632 +--- a/fs/ext4/inode.c
2633 ++++ b/fs/ext4/inode.c
2634 +@@ -1118,10 +1118,13 @@ static int ext4_write_end(struct file *file,
2635 + }
2636 + }
2637 +
2638 +- if (ext4_has_inline_data(inode))
2639 +- copied = ext4_write_inline_data_end(inode, pos, len,
2640 +- copied, page);
2641 +- else
2642 ++ if (ext4_has_inline_data(inode)) {
2643 ++ ret = ext4_write_inline_data_end(inode, pos, len,
2644 ++ copied, page);
2645 ++ if (ret < 0)
2646 ++ goto errout;
2647 ++ copied = ret;
2648 ++ } else
2649 + copied = block_write_end(file, mapping, pos,
2650 + len, copied, page, fsdata);
2651 +
2652 +@@ -4805,7 +4808,7 @@ int ext4_getattr(struct vfsmount *mnt, struct dentry *dentry,
2653 + struct kstat *stat)
2654 + {
2655 + struct inode *inode;
2656 +- unsigned long delalloc_blocks;
2657 ++ unsigned long long delalloc_blocks;
2658 +
2659 + inode = dentry->d_inode;
2660 + generic_fillattr(inode, stat);
2661 +@@ -4823,7 +4826,7 @@ int ext4_getattr(struct vfsmount *mnt, struct dentry *dentry,
2662 + delalloc_blocks = EXT4_C2B(EXT4_SB(inode->i_sb),
2663 + EXT4_I(inode)->i_reserved_data_blocks);
2664 +
2665 +- stat->blocks += (delalloc_blocks << inode->i_sb->s_blocksize_bits)>>9;
2666 ++ stat->blocks += delalloc_blocks << (inode->i_sb->s_blocksize_bits-9);
2667 + return 0;
2668 + }
2669 +
2670 +diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
2671 +index def8408..59c6750 100644
2672 +--- a/fs/ext4/mballoc.c
2673 ++++ b/fs/ext4/mballoc.c
2674 +@@ -4735,11 +4735,16 @@ do_more:
2675 + * blocks being freed are metadata. these blocks shouldn't
2676 + * be used until this transaction is committed
2677 + */
2678 ++ retry:
2679 + new_entry = kmem_cache_alloc(ext4_free_data_cachep, GFP_NOFS);
2680 + if (!new_entry) {
2681 +- ext4_mb_unload_buddy(&e4b);
2682 +- err = -ENOMEM;
2683 +- goto error_return;
2684 ++ /*
2685 ++ * We use a retry loop because
2686 ++ * ext4_free_blocks() is not allowed to fail.
2687 ++ */
2688 ++ cond_resched();
2689 ++ congestion_wait(BLK_RW_ASYNC, HZ/50);
2690 ++ goto retry;
2691 + }
2692 + new_entry->efd_start_cluster = bit;
2693 + new_entry->efd_group = block_group;
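
Note on the mballoc hunk above: it replaces a hard -ENOMEM failure with a retry loop, since this point sits inside ext4_free_blocks(), which is not allowed to fail; the code backs off briefly (cond_resched() plus congestion_wait()) and retries the small allocation. The same pattern in plain C, with nanosleep() standing in for the back-off; treat it as a sketch of the idea, not kernel code.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Allocate 'size' bytes, retrying with a short back-off instead of
 * failing; only suitable for paths where giving up is not an option. */
static void *alloc_nofail(size_t size)
{
    for (;;) {
        void *p = malloc(size);
        if (p)
            return p;
        /* Back off ~20 ms and retry, like congestion_wait(HZ/50). */
        struct timespec ts = { 0, 20 * 1000 * 1000 };
        nanosleep(&ts, NULL);
    }
}

int main(void)
{
    char *buf = alloc_nofail(64);
    snprintf(buf, 64, "allocated after possible retries");
    puts(buf);
    free(buf);
    return 0;
}
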
2694 +diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
2695 +index 6653fc3..ab2f6dc 100644
2696 +--- a/fs/ext4/namei.c
2697 ++++ b/fs/ext4/namei.c
2698 +@@ -918,11 +918,8 @@ static int htree_dirblock_to_tree(struct file *dir_file,
2699 + bh->b_data, bh->b_size,
2700 + (block<<EXT4_BLOCK_SIZE_BITS(dir->i_sb))
2701 + + ((char *)de - bh->b_data))) {
2702 +- /* On error, skip the f_pos to the next block. */
2703 +- dir_file->f_pos = (dir_file->f_pos |
2704 +- (dir->i_sb->s_blocksize - 1)) + 1;
2705 +- brelse(bh);
2706 +- return count;
2707 ++ /* silently ignore the rest of the block */
2708 ++ break;
2709 + }
2710 + ext4fs_dirhash(de->name, de->name_len, hinfo);
2711 + if ((hinfo->hash < start_hash) ||
2712 +diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
2713 +index b27c96d..49d3c01 100644
2714 +--- a/fs/ext4/resize.c
2715 ++++ b/fs/ext4/resize.c
2716 +@@ -1656,12 +1656,10 @@ errout:
2717 + err = err2;
2718 +
2719 + if (!err) {
2720 +- ext4_fsblk_t first_block;
2721 +- first_block = ext4_group_first_block_no(sb, 0);
2722 + if (test_opt(sb, DEBUG))
2723 + printk(KERN_DEBUG "EXT4-fs: extended group to %llu "
2724 + "blocks\n", ext4_blocks_count(es));
2725 +- update_backups(sb, EXT4_SB(sb)->s_sbh->b_blocknr - first_block,
2726 ++ update_backups(sb, EXT4_SB(sb)->s_sbh->b_blocknr,
2727 + (char *)es, sizeof(struct ext4_super_block), 0);
2728 + }
2729 + return err;
2730 +diff --git a/fs/ext4/super.c b/fs/ext4/super.c
2731 +index 94cc84d..6681c03 100644
2732 +--- a/fs/ext4/super.c
2733 ++++ b/fs/ext4/super.c
2734 +@@ -1684,12 +1684,6 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
2735 +
2736 + if (sbi->s_qf_names[GRPQUOTA])
2737 + seq_printf(seq, ",grpjquota=%s", sbi->s_qf_names[GRPQUOTA]);
2738 +-
2739 +- if (test_opt(sb, USRQUOTA))
2740 +- seq_puts(seq, ",usrquota");
2741 +-
2742 +- if (test_opt(sb, GRPQUOTA))
2743 +- seq_puts(seq, ",grpquota");
2744 + #endif
2745 + }
2746 +
2747 +@@ -3586,10 +3580,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
2748 + sbi->s_addr_per_block_bits = ilog2(EXT4_ADDR_PER_BLOCK(sb));
2749 + sbi->s_desc_per_block_bits = ilog2(EXT4_DESC_PER_BLOCK(sb));
2750 +
2751 +- /* Do we have standard group size of blocksize * 8 blocks ? */
2752 +- if (sbi->s_blocks_per_group == blocksize << 3)
2753 +- set_opt2(sb, STD_GROUP_SIZE);
2754 +-
2755 + for (i = 0; i < 4; i++)
2756 + sbi->s_hash_seed[i] = le32_to_cpu(es->s_hash_seed[i]);
2757 + sbi->s_def_hash_version = es->s_def_hash_version;
2758 +@@ -3659,6 +3649,10 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
2759 + goto failed_mount;
2760 + }
2761 +
2762 ++ /* Do we have standard group size of clustersize * 8 blocks ? */
2763 ++ if (sbi->s_blocks_per_group == clustersize << 3)
2764 ++ set_opt2(sb, STD_GROUP_SIZE);
2765 ++
2766 + /*
2767 + * Test whether we have more sectors than will fit in sector_t,
2768 + * and whether the max offset is addressable by the page cache.
2769 +diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
2770 +index 9545757..aaa1a3f 100644
2771 +--- a/fs/jbd2/journal.c
2772 ++++ b/fs/jbd2/journal.c
2773 +@@ -1318,6 +1318,7 @@ static int journal_reset(journal_t *journal)
2774 + static void jbd2_write_superblock(journal_t *journal, int write_op)
2775 + {
2776 + struct buffer_head *bh = journal->j_sb_buffer;
2777 ++ journal_superblock_t *sb = journal->j_superblock;
2778 + int ret;
2779 +
2780 + trace_jbd2_write_superblock(journal, write_op);
2781 +@@ -1339,6 +1340,7 @@ static void jbd2_write_superblock(journal_t *journal, int write_op)
2782 + clear_buffer_write_io_error(bh);
2783 + set_buffer_uptodate(bh);
2784 + }
2785 ++ jbd2_superblock_csum_set(journal, sb);
2786 + get_bh(bh);
2787 + bh->b_end_io = end_buffer_write_sync;
2788 + ret = submit_bh(write_op, bh);
2789 +@@ -1435,7 +1437,6 @@ void jbd2_journal_update_sb_errno(journal_t *journal)
2790 + jbd_debug(1, "JBD2: updating superblock error (errno %d)\n",
2791 + journal->j_errno);
2792 + sb->s_errno = cpu_to_be32(journal->j_errno);
2793 +- jbd2_superblock_csum_set(journal, sb);
2794 + read_unlock(&journal->j_state_lock);
2795 +
2796 + jbd2_write_superblock(journal, WRITE_SYNC);
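
Note on the jbd2 hunks above: jbd2_superblock_csum_set() moves into jbd2_write_superblock() itself, so the checksum is always computed over the buffer that is actually submitted and no call site can forget to refresh it. A toy illustration of sealing data at the single write path; the record layout and checksum are invented for the example.

#include <stdint.h>
#include <stdio.h>

struct record {
    char     data[24];
    uint32_t csum;                      /* covers 'data' only */
};

static uint32_t toy_csum(const void *p, size_t len)
{
    const uint8_t *b = p;
    uint32_t sum = 0;
    while (len--)
        sum = sum * 31 + *b++;
    return sum;
}

/* The single place a record hits storage: checksum immediately before the
 * write, so callers can update fields without re-sealing by hand. */
static int write_record(struct record *r, FILE *out)
{
    r->csum = toy_csum(r->data, sizeof(r->data));
    return fwrite(r, sizeof(*r), 1, out) == 1 ? 0 : -1;
}

int main(void)
{
    struct record r = { .data = "errno=-5" };
    FILE *out = fopen("/dev/null", "wb");

    if (!out)
        return 1;
    if (write_record(&r, out) == 0)
        printf("wrote record, csum %#x\n", r.csum);
    fclose(out);
    return 0;
}
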
2797 +diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
2798 +index 10f524c..e0c0bc2 100644
2799 +--- a/fs/jbd2/transaction.c
2800 ++++ b/fs/jbd2/transaction.c
2801 +@@ -517,10 +517,10 @@ int jbd2__journal_restart(handle_t *handle, int nblocks, gfp_t gfp_mask)
2802 + &transaction->t_outstanding_credits);
2803 + if (atomic_dec_and_test(&transaction->t_updates))
2804 + wake_up(&journal->j_wait_updates);
2805 ++ tid = transaction->t_tid;
2806 + spin_unlock(&transaction->t_handle_lock);
2807 +
2808 + jbd_debug(2, "restarting handle %p\n", handle);
2809 +- tid = transaction->t_tid;
2810 + need_to_start = !tid_geq(journal->j_commit_request, tid);
2811 + read_unlock(&journal->j_state_lock);
2812 + if (need_to_start)
2813 +diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
2814 +index 2e3ea30..5b8d944 100644
2815 +--- a/fs/ocfs2/xattr.c
2816 ++++ b/fs/ocfs2/xattr.c
2817 +@@ -6499,6 +6499,16 @@ static int ocfs2_reflink_xattr_inline(struct ocfs2_xattr_reflink *args)
2818 + }
2819 +
2820 + new_oi = OCFS2_I(args->new_inode);
2821 ++ /*
2822 ++ * Adjust extent record count to reserve space for extended attribute.
2823 ++ * Inline data count had been adjusted in ocfs2_duplicate_inline_data().
2824 ++ */
2825 ++ if (!(new_oi->ip_dyn_features & OCFS2_INLINE_DATA_FL) &&
2826 ++ !(ocfs2_inode_is_fast_symlink(args->new_inode))) {
2827 ++ struct ocfs2_extent_list *el = &new_di->id2.i_list;
2828 ++ le16_add_cpu(&el->l_count, -(inline_size /
2829 ++ sizeof(struct ocfs2_extent_rec)));
2830 ++ }
2831 + spin_lock(&new_oi->ip_lock);
2832 + new_oi->ip_dyn_features |= OCFS2_HAS_XATTR_FL | OCFS2_INLINE_XATTR_FL;
2833 + new_di->i_dyn_features = cpu_to_le16(new_oi->ip_dyn_features);
2834 +diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
2835 +index f21acf0..879b997 100644
2836 +--- a/fs/ubifs/super.c
2837 ++++ b/fs/ubifs/super.c
2838 +@@ -1412,7 +1412,7 @@ static int mount_ubifs(struct ubifs_info *c)
2839 +
2840 + ubifs_msg("mounted UBI device %d, volume %d, name \"%s\"%s",
2841 + c->vi.ubi_num, c->vi.vol_id, c->vi.name,
2842 +- c->ro_mount ? ", R/O mode" : NULL);
2843 ++ c->ro_mount ? ", R/O mode" : "");
2844 + x = (long long)c->main_lebs * c->leb_size;
2845 + y = (long long)c->log_lebs * c->leb_size + c->max_bud_bytes;
2846 + ubifs_msg("LEB size: %d bytes (%d KiB), min./max. I/O unit sizes: %d bytes/%d bytes",
2847 +diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
2848 +index 8bda129..8852d37 100644
2849 +--- a/include/linux/cgroup.h
2850 ++++ b/include/linux/cgroup.h
2851 +@@ -646,22 +646,60 @@ static inline struct cgroup_subsys_state *cgroup_subsys_state(
2852 + return cgrp->subsys[subsys_id];
2853 + }
2854 +
2855 +-/*
2856 +- * function to get the cgroup_subsys_state which allows for extra
2857 +- * rcu_dereference_check() conditions, such as locks used during the
2858 +- * cgroup_subsys::attach() methods.
2859 ++/**
2860 ++ * task_css_set_check - obtain a task's css_set with extra access conditions
2861 ++ * @task: the task to obtain css_set for
2862 ++ * @__c: extra condition expression to be passed to rcu_dereference_check()
2863 ++ *
2864 ++ * A task's css_set is RCU protected, initialized and exited while holding
2865 ++ * task_lock(), and can only be modified while holding both cgroup_mutex
2866 ++ * and task_lock() while the task is alive. This macro verifies that the
2867 ++ * caller is inside proper critical section and returns @task's css_set.
2868 ++ *
2869 ++ * The caller can also specify additional allowed conditions via @__c, such
2870 ++ * as locks used during the cgroup_subsys::attach() methods.
2871 + */
2872 + #ifdef CONFIG_PROVE_RCU
2873 + extern struct mutex cgroup_mutex;
2874 +-#define task_subsys_state_check(task, subsys_id, __c) \
2875 +- rcu_dereference_check((task)->cgroups->subsys[(subsys_id)], \
2876 +- lockdep_is_held(&(task)->alloc_lock) || \
2877 +- lockdep_is_held(&cgroup_mutex) || (__c))
2878 ++#define task_css_set_check(task, __c) \
2879 ++ rcu_dereference_check((task)->cgroups, \
2880 ++ lockdep_is_held(&(task)->alloc_lock) || \
2881 ++ lockdep_is_held(&cgroup_mutex) || (__c))
2882 + #else
2883 +-#define task_subsys_state_check(task, subsys_id, __c) \
2884 +- rcu_dereference((task)->cgroups->subsys[(subsys_id)])
2885 ++#define task_css_set_check(task, __c) \
2886 ++ rcu_dereference((task)->cgroups)
2887 + #endif
2888 +
2889 ++/**
2890 ++ * task_subsys_state_check - obtain css for (task, subsys) w/ extra access conds
2891 ++ * @task: the target task
2892 ++ * @subsys_id: the target subsystem ID
2893 ++ * @__c: extra condition expression to be passed to rcu_dereference_check()
2894 ++ *
2895 ++ * Return the cgroup_subsys_state for the (@task, @subsys_id) pair. The
2896 ++ * synchronization rules are the same as task_css_set_check().
2897 ++ */
2898 ++#define task_subsys_state_check(task, subsys_id, __c) \
2899 ++ task_css_set_check((task), (__c))->subsys[(subsys_id)]
2900 ++
2901 ++/**
2902 ++ * task_css_set - obtain a task's css_set
2903 ++ * @task: the task to obtain css_set for
2904 ++ *
2905 ++ * See task_css_set_check().
2906 ++ */
2907 ++static inline struct css_set *task_css_set(struct task_struct *task)
2908 ++{
2909 ++ return task_css_set_check(task, false);
2910 ++}
2911 ++
2912 ++/**
2913 ++ * task_subsys_state - obtain css for (task, subsys)
2914 ++ * @task: the target task
2915 ++ * @subsys_id: the target subsystem ID
2916 ++ *
2917 ++ * See task_subsys_state_check().
2918 ++ */
2919 + static inline struct cgroup_subsys_state *
2920 + task_subsys_state(struct task_struct *task, int subsys_id)
2921 + {
2922 +diff --git a/include/linux/nbd.h b/include/linux/nbd.h
2923 +index 4871170..ae4981e 100644
2924 +--- a/include/linux/nbd.h
2925 ++++ b/include/linux/nbd.h
2926 +@@ -41,6 +41,7 @@ struct nbd_device {
2927 + u64 bytesize;
2928 + pid_t pid; /* pid of nbd-client, if attached */
2929 + int xmit_timeout;
2930 ++ int disconnect; /* a disconnect has been requested by user */
2931 + };
2932 +
2933 + #endif
2934 +diff --git a/kernel/cgroup.c b/kernel/cgroup.c
2935 +index a7c9e6d..c6e77ef 100644
2936 +--- a/kernel/cgroup.c
2937 ++++ b/kernel/cgroup.c
2938 +@@ -3727,6 +3727,23 @@ static int cgroup_write_notify_on_release(struct cgroup *cgrp,
2939 + }
2940 +
2941 + /*
2942 ++ * When dput() is called asynchronously, if umount has been done and
2943 ++ * then deactivate_super() in cgroup_free_fn() kills the superblock,
2944 ++ * there's a small window that vfs will see the root dentry with non-zero
2945 ++ * refcnt and trigger BUG().
2946 ++ *
2947 ++ * That's why we hold a reference before dput() and drop it right after.
2948 ++ */
2949 ++static void cgroup_dput(struct cgroup *cgrp)
2950 ++{
2951 ++ struct super_block *sb = cgrp->root->sb;
2952 ++
2953 ++ atomic_inc(&sb->s_active);
2954 ++ dput(cgrp->dentry);
2955 ++ deactivate_super(sb);
2956 ++}
2957 ++
2958 ++/*
2959 + * Unregister event and free resources.
2960 + *
2961 + * Gets called from workqueue.
2962 +@@ -3746,7 +3763,7 @@ static void cgroup_event_remove(struct work_struct *work)
2963 +
2964 + eventfd_ctx_put(event->eventfd);
2965 + kfree(event);
2966 +- dput(cgrp->dentry);
2967 ++ cgroup_dput(cgrp);
2968 + }
2969 +
2970 + /*
2971 +@@ -4031,12 +4048,8 @@ static void css_dput_fn(struct work_struct *work)
2972 + {
2973 + struct cgroup_subsys_state *css =
2974 + container_of(work, struct cgroup_subsys_state, dput_work);
2975 +- struct dentry *dentry = css->cgroup->dentry;
2976 +- struct super_block *sb = dentry->d_sb;
2977 +
2978 +- atomic_inc(&sb->s_active);
2979 +- dput(dentry);
2980 +- deactivate_super(sb);
2981 ++ cgroup_dput(css->cgroup);
2982 + }
2983 +
2984 + static void init_cgroup_css(struct cgroup_subsys_state *css,
2985 +diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
2986 +index fa17855..dc4db32 100644
2987 +--- a/kernel/irq/manage.c
2988 ++++ b/kernel/irq/manage.c
2989 +@@ -555,9 +555,9 @@ int can_request_irq(unsigned int irq, unsigned long irqflags)
2990 + return 0;
2991 +
2992 + if (irq_settings_can_request(desc)) {
2993 +- if (desc->action)
2994 +- if (irqflags & desc->action->flags & IRQF_SHARED)
2995 +- canrequest =1;
2996 ++ if (!desc->action ||
2997 ++ irqflags & desc->action->flags & IRQF_SHARED)
2998 ++ canrequest = 1;
2999 + }
3000 + irq_put_desc_unlock(desc, flags);
3001 + return canrequest;
3002 +diff --git a/kernel/timer.c b/kernel/timer.c
3003 +index 15ffdb3..15bc1b4 100644
3004 +--- a/kernel/timer.c
3005 ++++ b/kernel/timer.c
3006 +@@ -149,9 +149,11 @@ static unsigned long round_jiffies_common(unsigned long j, int cpu,
3007 + /* now that we have rounded, subtract the extra skew again */
3008 + j -= cpu * 3;
3009 +
3010 +- if (j <= jiffies) /* rounding ate our timeout entirely; */
3011 +- return original;
3012 +- return j;
3013 ++ /*
3014 ++ * Make sure j is still in the future. Otherwise return the
3015 ++ * unmodified value.
3016 ++ */
3017 ++ return time_is_after_jiffies(j) ? j : original;
3018 + }
3019 +
3020 + /**
3021 +diff --git a/mm/memcontrol.c b/mm/memcontrol.c
3022 +index fd79df5..15b0409 100644
3023 +--- a/mm/memcontrol.c
3024 ++++ b/mm/memcontrol.c
3025 +@@ -6296,14 +6296,6 @@ mem_cgroup_css_online(struct cgroup *cont)
3026 +
3027 + error = memcg_init_kmem(memcg, &mem_cgroup_subsys);
3028 + mutex_unlock(&memcg_create_mutex);
3029 +- if (error) {
3030 +- /*
3031 +- * We call put now because our (and parent's) refcnts
3032 +- * are already in place. mem_cgroup_put() will internally
3033 +- * call __mem_cgroup_free, so return directly
3034 +- */
3035 +- mem_cgroup_put(memcg);
3036 +- }
3037 + return error;
3038 + }
3039 +
3040 +diff --git a/mm/page_alloc.c b/mm/page_alloc.c
3041 +index c3edb62..2ee0fd3 100644
3042 +--- a/mm/page_alloc.c
3043 ++++ b/mm/page_alloc.c
3044 +@@ -6142,6 +6142,10 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
3045 + list_del(&page->lru);
3046 + rmv_page_order(page);
3047 + zone->free_area[order].nr_free--;
3048 ++#ifdef CONFIG_HIGHMEM
3049 ++ if (PageHighMem(page))
3050 ++ totalhigh_pages -= 1 << order;
3051 ++#endif
3052 + for (i = 0; i < (1 << order); i++)
3053 + SetPageReserved((page+i));
3054 + pfn += (1 << order);
3055 +diff --git a/mm/slab.c b/mm/slab.c
3056 +index 8ccd296..bd88411 100644
3057 +--- a/mm/slab.c
3058 ++++ b/mm/slab.c
3059 +@@ -565,7 +565,7 @@ static void init_node_lock_keys(int q)
3060 + if (slab_state < UP)
3061 + return;
3062 +
3063 +- for (i = 1; i < PAGE_SHIFT + MAX_ORDER; i++) {
3064 ++ for (i = 1; i <= KMALLOC_SHIFT_HIGH; i++) {
3065 + struct kmem_cache_node *n;
3066 + struct kmem_cache *cache = kmalloc_caches[i];
3067 +
3068
3069 Added: genpatches-2.6/trunk/3.10.7/1002_linux-3.10.3.patch
3070 ===================================================================
3071 --- genpatches-2.6/trunk/3.10.7/1002_linux-3.10.3.patch (rev 0)
3072 +++ genpatches-2.6/trunk/3.10.7/1002_linux-3.10.3.patch 2013-08-29 12:09:12 UTC (rev 2497)
3073 @@ -0,0 +1,4037 @@
3074 +diff --git a/Documentation/i2c/busses/i2c-piix4 b/Documentation/i2c/busses/i2c-piix4
3075 +index 1e6634f5..a370b204 100644
3076 +--- a/Documentation/i2c/busses/i2c-piix4
3077 ++++ b/Documentation/i2c/busses/i2c-piix4
3078 +@@ -13,7 +13,7 @@ Supported adapters:
3079 + * AMD SP5100 (SB700 derivative found on some server mainboards)
3080 + Datasheet: Publicly available at the AMD website
3081 + http://support.amd.com/us/Embedded_TechDocs/44413.pdf
3082 +- * AMD Hudson-2
3083 ++ * AMD Hudson-2, CZ
3084 + Datasheet: Not publicly available
3085 + * Standard Microsystems (SMSC) SLC90E66 (Victory66) southbridge
3086 + Datasheet: Publicly available at the SMSC website http://www.smsc.com
3087 +diff --git a/Makefile b/Makefile
3088 +index 43367309..b5485529 100644
3089 +--- a/Makefile
3090 ++++ b/Makefile
3091 +@@ -1,6 +1,6 @@
3092 + VERSION = 3
3093 + PATCHLEVEL = 10
3094 +-SUBLEVEL = 2
3095 ++SUBLEVEL = 3
3096 + EXTRAVERSION =
3097 + NAME = Unicycling Gorilla
3098 +
3099 +diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
3100 +index 1426468b..f51d669c 100644
3101 +--- a/arch/arm64/mm/fault.c
3102 ++++ b/arch/arm64/mm/fault.c
3103 +@@ -152,25 +152,8 @@ void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *regs)
3104 + #define ESR_CM (1 << 8)
3105 + #define ESR_LNX_EXEC (1 << 24)
3106 +
3107 +-/*
3108 +- * Check that the permissions on the VMA allow for the fault which occurred.
3109 +- * If we encountered a write fault, we must have write permission, otherwise
3110 +- * we allow any permission.
3111 +- */
3112 +-static inline bool access_error(unsigned int esr, struct vm_area_struct *vma)
3113 +-{
3114 +- unsigned int mask = VM_READ | VM_WRITE | VM_EXEC;
3115 +-
3116 +- if (esr & ESR_WRITE)
3117 +- mask = VM_WRITE;
3118 +- if (esr & ESR_LNX_EXEC)
3119 +- mask = VM_EXEC;
3120 +-
3121 +- return vma->vm_flags & mask ? false : true;
3122 +-}
3123 +-
3124 + static int __do_page_fault(struct mm_struct *mm, unsigned long addr,
3125 +- unsigned int esr, unsigned int flags,
3126 ++ unsigned int mm_flags, unsigned long vm_flags,
3127 + struct task_struct *tsk)
3128 + {
3129 + struct vm_area_struct *vma;
3130 +@@ -188,12 +171,17 @@ static int __do_page_fault(struct mm_struct *mm, unsigned long addr,
3131 + * it.
3132 + */
3133 + good_area:
3134 +- if (access_error(esr, vma)) {
3135 ++ /*
3136 ++ * Check that the permissions on the VMA allow for the fault which
3137 ++ * occurred. If we encountered a write or exec fault, we must have
3138 ++ * appropriate permissions, otherwise we allow any permission.
3139 ++ */
3140 ++ if (!(vma->vm_flags & vm_flags)) {
3141 + fault = VM_FAULT_BADACCESS;
3142 + goto out;
3143 + }
3144 +
3145 +- return handle_mm_fault(mm, vma, addr & PAGE_MASK, flags);
3146 ++ return handle_mm_fault(mm, vma, addr & PAGE_MASK, mm_flags);
3147 +
3148 + check_stack:
3149 + if (vma->vm_flags & VM_GROWSDOWN && !expand_stack(vma, addr))
3150 +@@ -208,9 +196,15 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
3151 + struct task_struct *tsk;
3152 + struct mm_struct *mm;
3153 + int fault, sig, code;
3154 +- bool write = (esr & ESR_WRITE) && !(esr & ESR_CM);
3155 +- unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE |
3156 +- (write ? FAULT_FLAG_WRITE : 0);
3157 ++ unsigned long vm_flags = VM_READ | VM_WRITE | VM_EXEC;
3158 ++ unsigned int mm_flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
3159 ++
3160 ++ if (esr & ESR_LNX_EXEC) {
3161 ++ vm_flags = VM_EXEC;
3162 ++ } else if ((esr & ESR_WRITE) && !(esr & ESR_CM)) {
3163 ++ vm_flags = VM_WRITE;
3164 ++ mm_flags |= FAULT_FLAG_WRITE;
3165 ++ }
3166 +
3167 + tsk = current;
3168 + mm = tsk->mm;
3169 +@@ -248,7 +242,7 @@ retry:
3170 + #endif
3171 + }
3172 +
3173 +- fault = __do_page_fault(mm, addr, esr, flags, tsk);
3174 ++ fault = __do_page_fault(mm, addr, mm_flags, vm_flags, tsk);
3175 +
3176 + /*
3177 + * If we need to retry but a fatal signal is pending, handle the
3178 +@@ -265,7 +259,7 @@ retry:
3179 + */
3180 +
3181 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
3182 +- if (flags & FAULT_FLAG_ALLOW_RETRY) {
3183 ++ if (mm_flags & FAULT_FLAG_ALLOW_RETRY) {
3184 + if (fault & VM_FAULT_MAJOR) {
3185 + tsk->maj_flt++;
3186 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs,
3187 +@@ -280,7 +274,7 @@ retry:
3188 + * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk of
3189 + * starvation.
3190 + */
3191 +- flags &= ~FAULT_FLAG_ALLOW_RETRY;
3192 ++ mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
3193 + goto retry;
3194 + }
3195 + }
3196 +diff --git a/arch/mips/cavium-octeon/setup.c b/arch/mips/cavium-octeon/setup.c
3197 +index 01b1b3f9..1e1e18c5 100644
3198 +--- a/arch/mips/cavium-octeon/setup.c
3199 ++++ b/arch/mips/cavium-octeon/setup.c
3200 +@@ -996,7 +996,7 @@ void __init plat_mem_setup(void)
3201 + cvmx_bootmem_unlock();
3202 + /* Add the memory region for the kernel. */
3203 + kernel_start = (unsigned long) _text;
3204 +- kernel_size = ALIGN(_end - _text, 0x100000);
3205 ++ kernel_size = _end - _text;
3206 +
3207 + /* Adjust for physical offset. */
3208 + kernel_start &= ~0xffffffff80000000ULL;
3209 +diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
3210 +index 46793b58..07ca627e 100644
3211 +--- a/arch/powerpc/include/asm/exception-64s.h
3212 ++++ b/arch/powerpc/include/asm/exception-64s.h
3213 +@@ -358,12 +358,12 @@ label##_relon_pSeries: \
3214 + /* No guest interrupts come through here */ \
3215 + SET_SCRATCH0(r13); /* save r13 */ \
3216 + EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label##_common, \
3217 +- EXC_STD, KVMTEST_PR, vec)
3218 ++ EXC_STD, NOTEST, vec)
3219 +
3220 + #define STD_RELON_EXCEPTION_PSERIES_OOL(vec, label) \
3221 + .globl label##_relon_pSeries; \
3222 + label##_relon_pSeries: \
3223 +- EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_PR, vec); \
3224 ++ EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, vec); \
3225 + EXCEPTION_RELON_PROLOG_PSERIES_1(label##_common, EXC_STD)
3226 +
3227 + #define STD_RELON_EXCEPTION_HV(loc, vec, label) \
3228 +@@ -374,12 +374,12 @@ label##_relon_hv: \
3229 + /* No guest interrupts come through here */ \
3230 + SET_SCRATCH0(r13); /* save r13 */ \
3231 + EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label##_common, \
3232 +- EXC_HV, KVMTEST, vec)
3233 ++ EXC_HV, NOTEST, vec)
3234 +
3235 + #define STD_RELON_EXCEPTION_HV_OOL(vec, label) \
3236 + .globl label##_relon_hv; \
3237 + label##_relon_hv: \
3238 +- EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST, vec); \
3239 ++ EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, vec); \
3240 + EXCEPTION_RELON_PROLOG_PSERIES_1(label##_common, EXC_HV)
3241 +
3242 + /* This associate vector numbers with bits in paca->irq_happened */
3243 +diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
3244 +index 4a9e4086..362142b6 100644
3245 +--- a/arch/powerpc/include/asm/reg.h
3246 ++++ b/arch/powerpc/include/asm/reg.h
3247 +@@ -626,6 +626,7 @@
3248 + #define MMCR0_TRIGGER 0x00002000UL /* TRIGGER enable */
3249 + #define MMCR0_PMAO 0x00000080UL /* performance monitor alert has occurred, set to 0 after handling exception */
3250 + #define MMCR0_SHRFC 0x00000040UL /* SHRre freeze conditions between threads */
3251 ++#define MMCR0_FC56 0x00000010UL /* freeze counters 5 and 6 */
3252 + #define MMCR0_FCTI 0x00000008UL /* freeze counters in tags inactive mode */
3253 + #define MMCR0_FCTA 0x00000004UL /* freeze counters in tags active mode */
3254 + #define MMCR0_FCWAIT 0x00000002UL /* freeze counter in WAIT state */
3255 +diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
3256 +index 40e4a17c..4e00d223 100644
3257 +--- a/arch/powerpc/kernel/exceptions-64s.S
3258 ++++ b/arch/powerpc/kernel/exceptions-64s.S
3259 +@@ -341,10 +341,17 @@ vsx_unavailable_pSeries_1:
3260 + EXCEPTION_PROLOG_0(PACA_EXGEN)
3261 + b vsx_unavailable_pSeries
3262 +
3263 ++facility_unavailable_trampoline:
3264 + . = 0xf60
3265 + SET_SCRATCH0(r13)
3266 + EXCEPTION_PROLOG_0(PACA_EXGEN)
3267 +- b tm_unavailable_pSeries
3268 ++ b facility_unavailable_pSeries
3269 ++
3270 ++hv_facility_unavailable_trampoline:
3271 ++ . = 0xf80
3272 ++ SET_SCRATCH0(r13)
3273 ++ EXCEPTION_PROLOG_0(PACA_EXGEN)
3274 ++ b facility_unavailable_hv
3275 +
3276 + #ifdef CONFIG_CBE_RAS
3277 + STD_EXCEPTION_HV(0x1200, 0x1202, cbe_system_error)
3278 +@@ -522,8 +529,10 @@ denorm_done:
3279 + KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xf20)
3280 + STD_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
3281 + KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xf40)
3282 +- STD_EXCEPTION_PSERIES_OOL(0xf60, tm_unavailable)
3283 ++ STD_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
3284 + KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xf60)
3285 ++ STD_EXCEPTION_HV_OOL(0xf82, facility_unavailable)
3286 ++ KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xf82)
3287 +
3288 + /*
3289 + * An interrupt came in while soft-disabled. We set paca->irq_happened, then:
3290 +@@ -793,14 +802,10 @@ system_call_relon_pSeries:
3291 + STD_RELON_EXCEPTION_PSERIES(0x4d00, 0xd00, single_step)
3292 +
3293 + . = 0x4e00
3294 +- SET_SCRATCH0(r13)
3295 +- EXCEPTION_PROLOG_0(PACA_EXGEN)
3296 +- b h_data_storage_relon_hv
3297 ++ b . /* Can't happen, see v2.07 Book III-S section 6.5 */
3298 +
3299 + . = 0x4e20
3300 +- SET_SCRATCH0(r13)
3301 +- EXCEPTION_PROLOG_0(PACA_EXGEN)
3302 +- b h_instr_storage_relon_hv
3303 ++ b . /* Can't happen, see v2.07 Book III-S section 6.5 */
3304 +
3305 + . = 0x4e40
3306 + SET_SCRATCH0(r13)
3307 +@@ -808,9 +813,7 @@ system_call_relon_pSeries:
3308 + b emulation_assist_relon_hv
3309 +
3310 + . = 0x4e60
3311 +- SET_SCRATCH0(r13)
3312 +- EXCEPTION_PROLOG_0(PACA_EXGEN)
3313 +- b hmi_exception_relon_hv
3314 ++ b . /* Can't happen, see v2.07 Book III-S section 6.5 */
3315 +
3316 + . = 0x4e80
3317 + SET_SCRATCH0(r13)
3318 +@@ -835,11 +838,17 @@ vsx_unavailable_relon_pSeries_1:
3319 + EXCEPTION_PROLOG_0(PACA_EXGEN)
3320 + b vsx_unavailable_relon_pSeries
3321 +
3322 +-tm_unavailable_relon_pSeries_1:
3323 ++facility_unavailable_relon_trampoline:
3324 + . = 0x4f60
3325 + SET_SCRATCH0(r13)
3326 + EXCEPTION_PROLOG_0(PACA_EXGEN)
3327 +- b tm_unavailable_relon_pSeries
3328 ++ b facility_unavailable_relon_pSeries
3329 ++
3330 ++hv_facility_unavailable_relon_trampoline:
3331 ++ . = 0x4f80
3332 ++ SET_SCRATCH0(r13)
3333 ++ EXCEPTION_PROLOG_0(PACA_EXGEN)
3334 ++ b facility_unavailable_relon_hv
3335 +
3336 + STD_RELON_EXCEPTION_PSERIES(0x5300, 0x1300, instruction_breakpoint)
3337 + #ifdef CONFIG_PPC_DENORMALISATION
3338 +@@ -1165,36 +1174,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
3339 + bl .vsx_unavailable_exception
3340 + b .ret_from_except
3341 +
3342 +- .align 7
3343 +- .globl tm_unavailable_common
3344 +-tm_unavailable_common:
3345 +- EXCEPTION_PROLOG_COMMON(0xf60, PACA_EXGEN)
3346 +- bl .save_nvgprs
3347 +- DISABLE_INTS
3348 +- addi r3,r1,STACK_FRAME_OVERHEAD
3349 +- bl .tm_unavailable_exception
3350 +- b .ret_from_except
3351 ++ STD_EXCEPTION_COMMON(0xf60, facility_unavailable, .facility_unavailable_exception)
3352 +
3353 + .align 7
3354 + .globl __end_handlers
3355 + __end_handlers:
3356 +
3357 + /* Equivalents to the above handlers for relocation-on interrupt vectors */
3358 +- STD_RELON_EXCEPTION_HV_OOL(0xe00, h_data_storage)
3359 +- KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe00)
3360 +- STD_RELON_EXCEPTION_HV_OOL(0xe20, h_instr_storage)
3361 +- KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe20)
3362 + STD_RELON_EXCEPTION_HV_OOL(0xe40, emulation_assist)
3363 +- KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe40)
3364 +- STD_RELON_EXCEPTION_HV_OOL(0xe60, hmi_exception)
3365 +- KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe60)
3366 + MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell)
3367 +- KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe80)
3368 +
3369 + STD_RELON_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
3370 + STD_RELON_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
3371 + STD_RELON_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
3372 +- STD_RELON_EXCEPTION_PSERIES_OOL(0xf60, tm_unavailable)
3373 ++ STD_RELON_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
3374 ++ STD_RELON_EXCEPTION_HV_OOL(0xf80, facility_unavailable)
3375 +
3376 + #if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
3377 + /*
3378 +diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
3379 +index a949bdfc..f0b47d1a 100644
3380 +--- a/arch/powerpc/kernel/hw_breakpoint.c
3381 ++++ b/arch/powerpc/kernel/hw_breakpoint.c
3382 +@@ -176,7 +176,7 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
3383 + length_max = 512 ; /* 64 doublewords */
3384 + /* DAWR region can't cross 512 boundary */
3385 + if ((bp->attr.bp_addr >> 10) !=
3386 +- ((bp->attr.bp_addr + bp->attr.bp_len) >> 10))
3387 ++ ((bp->attr.bp_addr + bp->attr.bp_len - 1) >> 10))
3388 + return -EINVAL;
3389 + }
3390 + if (info->len >
3391 +@@ -250,6 +250,7 @@ int __kprobes hw_breakpoint_handler(struct die_args *args)
3392 + * we still need to single-step the instruction, but we don't
3393 + * generate an event.
3394 + */
3395 ++ info->type &= ~HW_BRK_TYPE_EXTRANEOUS_IRQ;
3396 + if (!((bp->attr.bp_addr <= dar) &&
3397 + (dar - bp->attr.bp_addr < bp->attr.bp_len)))
3398 + info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
3399 +diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
3400 +index 98c2fc19..64f7bd5b 100644
3401 +--- a/arch/powerpc/kernel/ptrace.c
3402 ++++ b/arch/powerpc/kernel/ptrace.c
3403 +@@ -1449,7 +1449,9 @@ static long ppc_set_hwdebug(struct task_struct *child,
3404 + */
3405 + if (bp_info->addr_mode == PPC_BREAKPOINT_MODE_RANGE_INCLUSIVE) {
3406 + len = bp_info->addr2 - bp_info->addr;
3407 +- } else if (bp_info->addr_mode != PPC_BREAKPOINT_MODE_EXACT) {
3408 ++ } else if (bp_info->addr_mode == PPC_BREAKPOINT_MODE_EXACT)
3409 ++ len = 1;
3410 ++ else {
3411 + ptrace_put_breakpoints(child);
3412 + return -EINVAL;
3413 + }
3414 +diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
3415 +index e379d3fd..389fb807 100644
3416 +--- a/arch/powerpc/kernel/setup_64.c
3417 ++++ b/arch/powerpc/kernel/setup_64.c
3418 +@@ -76,7 +76,7 @@
3419 + #endif
3420 +
3421 + int boot_cpuid = 0;
3422 +-int __initdata spinning_secondaries;
3423 ++int spinning_secondaries;
3424 + u64 ppc64_pft_size;
3425 +
3426 + /* Pick defaults since we might want to patch instructions
3427 +diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
3428 +index 201385c3..0f83122e 100644
3429 +--- a/arch/powerpc/kernel/signal_32.c
3430 ++++ b/arch/powerpc/kernel/signal_32.c
3431 +@@ -407,7 +407,8 @@ inline unsigned long copy_transact_fpr_from_user(struct task_struct *task,
3432 + * altivec/spe instructions at some point.
3433 + */
3434 + static int save_user_regs(struct pt_regs *regs, struct mcontext __user *frame,
3435 +- int sigret, int ctx_has_vsx_region)
3436 ++ struct mcontext __user *tm_frame, int sigret,
3437 ++ int ctx_has_vsx_region)
3438 + {
3439 + unsigned long msr = regs->msr;
3440 +
3441 +@@ -475,6 +476,12 @@ static int save_user_regs(struct pt_regs *regs, struct mcontext __user *frame,
3442 +
3443 + if (__put_user(msr, &frame->mc_gregs[PT_MSR]))
3444 + return 1;
3445 ++ /* We need to write 0 to the MSR top 32 bits in the tm frame so that we
3446 ++ * can check it on the restore to see if TM is active
3447 ++ */
3448 ++ if (tm_frame && __put_user(0, &tm_frame->mc_gregs[PT_MSR]))
3449 ++ return 1;
3450 ++
3451 + if (sigret) {
3452 + /* Set up the sigreturn trampoline: li r0,sigret; sc */
3453 + if (__put_user(0x38000000UL + sigret, &frame->tramp[0])
3454 +@@ -747,7 +754,7 @@ static long restore_tm_user_regs(struct pt_regs *regs,
3455 + struct mcontext __user *tm_sr)
3456 + {
3457 + long err;
3458 +- unsigned long msr;
3459 ++ unsigned long msr, msr_hi;
3460 + #ifdef CONFIG_VSX
3461 + int i;
3462 + #endif
3463 +@@ -852,8 +859,11 @@ static long restore_tm_user_regs(struct pt_regs *regs,
3464 + tm_enable();
3465 + /* This loads the checkpointed FP/VEC state, if used */
3466 + tm_recheckpoint(&current->thread, msr);
3467 +- /* The task has moved into TM state S, so ensure MSR reflects this */
3468 +- regs->msr = (regs->msr & ~MSR_TS_MASK) | MSR_TS_S;
3469 ++ /* Get the top half of the MSR */
3470 ++ if (__get_user(msr_hi, &tm_sr->mc_gregs[PT_MSR]))
3471 ++ return 1;
3472 ++ /* Pull in MSR TM from user context */
3473 ++ regs->msr = (regs->msr & ~MSR_TS_MASK) | ((msr_hi<<32) & MSR_TS_MASK);
3474 +
3475 + /* This loads the speculative FP/VEC state, if used */
3476 + if (msr & MSR_FP) {
3477 +@@ -952,6 +962,7 @@ int handle_rt_signal32(unsigned long sig, struct k_sigaction *ka,
3478 + {
3479 + struct rt_sigframe __user *rt_sf;
3480 + struct mcontext __user *frame;
3481 ++ struct mcontext __user *tm_frame = NULL;
3482 + void __user *addr;
3483 + unsigned long newsp = 0;
3484 + int sigret;
3485 +@@ -985,23 +996,24 @@ int handle_rt_signal32(unsigned long sig, struct k_sigaction *ka,
3486 + }
3487 +
3488 + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
3489 ++ tm_frame = &rt_sf->uc_transact.uc_mcontext;
3490 + if (MSR_TM_ACTIVE(regs->msr)) {
3491 +- if (save_tm_user_regs(regs, &rt_sf->uc.uc_mcontext,
3492 +- &rt_sf->uc_transact.uc_mcontext, sigret))
3493 ++ if (save_tm_user_regs(regs, frame, tm_frame, sigret))
3494 + goto badframe;
3495 + }
3496 + else
3497 + #endif
3498 +- if (save_user_regs(regs, frame, sigret, 1))
3499 ++ {
3500 ++ if (save_user_regs(regs, frame, tm_frame, sigret, 1))
3501 + goto badframe;
3502 ++ }
3503 + regs->link = tramp;
3504 +
3505 + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
3506 + if (MSR_TM_ACTIVE(regs->msr)) {
3507 + if (__put_user((unsigned long)&rt_sf->uc_transact,
3508 + &rt_sf->uc.uc_link)
3509 +- || __put_user(to_user_ptr(&rt_sf->uc_transact.uc_mcontext),
3510 +- &rt_sf->uc_transact.uc_regs))
3511 ++ || __put_user((unsigned long)tm_frame, &rt_sf->uc_transact.uc_regs))
3512 + goto badframe;
3513 + }
3514 + else
3515 +@@ -1170,7 +1182,7 @@ long sys_swapcontext(struct ucontext __user *old_ctx,
3516 + mctx = (struct mcontext __user *)
3517 + ((unsigned long) &old_ctx->uc_mcontext & ~0xfUL);
3518 + if (!access_ok(VERIFY_WRITE, old_ctx, ctx_size)
3519 +- || save_user_regs(regs, mctx, 0, ctx_has_vsx_region)
3520 ++ || save_user_regs(regs, mctx, NULL, 0, ctx_has_vsx_region)
3521 + || put_sigset_t(&old_ctx->uc_sigmask, &current->blocked)
3522 + || __put_user(to_user_ptr(mctx), &old_ctx->uc_regs))
3523 + return -EFAULT;
3524 +@@ -1233,7 +1245,7 @@ long sys_rt_sigreturn(int r3, int r4, int r5, int r6, int r7, int r8,
3525 + if (__get_user(msr_hi, &mcp->mc_gregs[PT_MSR]))
3526 + goto bad;
3527 +
3528 +- if (MSR_TM_SUSPENDED(msr_hi<<32)) {
3529 ++ if (MSR_TM_ACTIVE(msr_hi<<32)) {
3530 + /* We only recheckpoint on return if we're
3531 + * transaction.
3532 + */
3533 +@@ -1392,6 +1404,7 @@ int handle_signal32(unsigned long sig, struct k_sigaction *ka,
3534 + {
3535 + struct sigcontext __user *sc;
3536 + struct sigframe __user *frame;
3537 ++ struct mcontext __user *tm_mctx = NULL;
3538 + unsigned long newsp = 0;
3539 + int sigret;
3540 + unsigned long tramp;
3541 +@@ -1425,6 +1438,7 @@ int handle_signal32(unsigned long sig, struct k_sigaction *ka,
3542 + }
3543 +
3544 + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
3545 ++ tm_mctx = &frame->mctx_transact;
3546 + if (MSR_TM_ACTIVE(regs->msr)) {
3547 + if (save_tm_user_regs(regs, &frame->mctx, &frame->mctx_transact,
3548 + sigret))
3549 +@@ -1432,8 +1446,10 @@ int handle_signal32(unsigned long sig, struct k_sigaction *ka,
3550 + }
3551 + else
3552 + #endif
3553 +- if (save_user_regs(regs, &frame->mctx, sigret, 1))
3554 ++ {
3555 ++ if (save_user_regs(regs, &frame->mctx, tm_mctx, sigret, 1))
3556 + goto badframe;
3557 ++ }
3558 +
3559 + regs->link = tramp;
3560 +
3561 +@@ -1481,16 +1497,22 @@ badframe:
3562 + long sys_sigreturn(int r3, int r4, int r5, int r6, int r7, int r8,
3563 + struct pt_regs *regs)
3564 + {
3565 ++ struct sigframe __user *sf;
3566 + struct sigcontext __user *sc;
3567 + struct sigcontext sigctx;
3568 + struct mcontext __user *sr;
3569 + void __user *addr;
3570 + sigset_t set;
3571 ++#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
3572 ++ struct mcontext __user *mcp, *tm_mcp;
3573 ++ unsigned long msr_hi;
3574 ++#endif
3575 +
3576 + /* Always make any pending restarted system calls return -EINTR */
3577 + current_thread_info()->restart_block.fn = do_no_restart_syscall;
3578 +
3579 +- sc = (struct sigcontext __user *)(regs->gpr[1] + __SIGNAL_FRAMESIZE);
3580 ++ sf = (struct sigframe __user *)(regs->gpr[1] + __SIGNAL_FRAMESIZE);
3581 ++ sc = &sf->sctx;
3582 + addr = sc;
3583 + if (copy_from_user(&sigctx, sc, sizeof(sigctx)))
3584 + goto badframe;
3585 +@@ -1507,11 +1529,25 @@ long sys_sigreturn(int r3, int r4, int r5, int r6, int r7, int r8,
3586 + #endif
3587 + set_current_blocked(&set);
3588 +
3589 +- sr = (struct mcontext __user *)from_user_ptr(sigctx.regs);
3590 +- addr = sr;
3591 +- if (!access_ok(VERIFY_READ, sr, sizeof(*sr))
3592 +- || restore_user_regs(regs, sr, 1))
3593 ++#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
3594 ++ mcp = (struct mcontext __user *)&sf->mctx;
3595 ++ tm_mcp = (struct mcontext __user *)&sf->mctx_transact;
3596 ++ if (__get_user(msr_hi, &tm_mcp->mc_gregs[PT_MSR]))
3597 + goto badframe;
3598 ++ if (MSR_TM_ACTIVE(msr_hi<<32)) {
3599 ++ if (!cpu_has_feature(CPU_FTR_TM))
3600 ++ goto badframe;
3601 ++ if (restore_tm_user_regs(regs, mcp, tm_mcp))
3602 ++ goto badframe;
3603 ++ } else
3604 ++#endif
3605 ++ {
3606 ++ sr = (struct mcontext __user *)from_user_ptr(sigctx.regs);
3607 ++ addr = sr;
3608 ++ if (!access_ok(VERIFY_READ, sr, sizeof(*sr))
3609 ++ || restore_user_regs(regs, sr, 1))
3610 ++ goto badframe;
3611 ++ }
3612 +
3613 + set_thread_flag(TIF_RESTOREALL);
3614 + return 0;
3615 +diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
3616 +index 34594736..887e99d8 100644
3617 +--- a/arch/powerpc/kernel/signal_64.c
3618 ++++ b/arch/powerpc/kernel/signal_64.c
3619 +@@ -410,6 +410,10 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
3620 +
3621 + /* get MSR separately, transfer the LE bit if doing signal return */
3622 + err |= __get_user(msr, &sc->gp_regs[PT_MSR]);
3623 ++ /* pull in MSR TM from user context */
3624 ++ regs->msr = (regs->msr & ~MSR_TS_MASK) | (msr & MSR_TS_MASK);
3625 ++
3626 ++ /* pull in MSR LE from user context */
3627 + regs->msr = (regs->msr & ~MSR_LE) | (msr & MSR_LE);
3628 +
3629 + /* The following non-GPR non-FPR non-VR state is also checkpointed: */
3630 +@@ -505,8 +509,6 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
3631 + tm_enable();
3632 + /* This loads the checkpointed FP/VEC state, if used */
3633 + tm_recheckpoint(&current->thread, msr);
3634 +- /* The task has moved into TM state S, so ensure MSR reflects this: */
3635 +- regs->msr = (regs->msr & ~MSR_TS_MASK) | __MASK(33);
3636 +
3637 + /* This loads the speculative FP/VEC state, if used */
3638 + if (msr & MSR_FP) {
3639 +@@ -654,7 +656,7 @@ int sys_rt_sigreturn(unsigned long r3, unsigned long r4, unsigned long r5,
3640 + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
3641 + if (__get_user(msr, &uc->uc_mcontext.gp_regs[PT_MSR]))
3642 + goto badframe;
3643 +- if (MSR_TM_SUSPENDED(msr)) {
3644 ++ if (MSR_TM_ACTIVE(msr)) {
3645 + /* We recheckpoint on return. */
3646 + struct ucontext __user *uc_transact;
3647 + if (__get_user(uc_transact, &uc->uc_link))
3648 +diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
3649 +index c0e5caf8..e4f205a2 100644
3650 +--- a/arch/powerpc/kernel/traps.c
3651 ++++ b/arch/powerpc/kernel/traps.c
3652 +@@ -1282,25 +1282,50 @@ void vsx_unavailable_exception(struct pt_regs *regs)
3653 + die("Unrecoverable VSX Unavailable Exception", regs, SIGABRT);
3654 + }
3655 +
3656 +-void tm_unavailable_exception(struct pt_regs *regs)
3657 ++void facility_unavailable_exception(struct pt_regs *regs)
3658 + {
3659 ++ static char *facility_strings[] = {
3660 ++ "FPU",
3661 ++ "VMX/VSX",
3662 ++ "DSCR",
3663 ++ "PMU SPRs",
3664 ++ "BHRB",
3665 ++ "TM",
3666 ++ "AT",
3667 ++ "EBB",
3668 ++ "TAR",
3669 ++ };
3670 ++ char *facility, *prefix;
3671 ++ u64 value;
3672 ++
3673 ++ if (regs->trap == 0xf60) {
3674 ++ value = mfspr(SPRN_FSCR);
3675 ++ prefix = "";
3676 ++ } else {
3677 ++ value = mfspr(SPRN_HFSCR);
3678 ++ prefix = "Hypervisor ";
3679 ++ }
3680 ++
3681 ++ value = value >> 56;
3682 ++
3683 + /* We restore the interrupt state now */
3684 + if (!arch_irq_disabled_regs(regs))
3685 + local_irq_enable();
3686 +
3687 +- /* Currently we never expect a TMU exception. Catch
3688 +- * this and kill the process!
3689 +- */
3690 +- printk(KERN_EMERG "Unexpected TM unavailable exception at %lx "
3691 +- "(msr %lx)\n",
3692 +- regs->nip, regs->msr);
3693 ++ if (value < ARRAY_SIZE(facility_strings))
3694 ++ facility = facility_strings[value];
3695 ++ else
3696 ++ facility = "unknown";
3697 ++
3698 ++ pr_err("%sFacility '%s' unavailable, exception at 0x%lx, MSR=%lx\n",
3699 ++ prefix, facility, regs->nip, regs->msr);
3700 +
3701 + if (user_mode(regs)) {
3702 + _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
3703 + return;
3704 + }
3705 +
3706 +- die("Unexpected TM unavailable exception", regs, SIGABRT);
3707 ++ die("Unexpected facility unavailable exception", regs, SIGABRT);
3708 + }
3709 +
3710 + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
3711 +diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
3712 +index 88c0425d..2859a1f5 100644
3713 +--- a/arch/powerpc/mm/numa.c
3714 ++++ b/arch/powerpc/mm/numa.c
3715 +@@ -1433,11 +1433,9 @@ static int update_cpu_topology(void *data)
3716 + if (cpu != update->cpu)
3717 + continue;
3718 +
3719 +- unregister_cpu_under_node(update->cpu, update->old_nid);
3720 + unmap_cpu_from_node(update->cpu);
3721 + map_cpu_to_node(update->cpu, update->new_nid);
3722 + vdso_getcpu_init();
3723 +- register_cpu_under_node(update->cpu, update->new_nid);
3724 + }
3725 +
3726 + return 0;
3727 +@@ -1485,6 +1483,9 @@ int arch_update_cpu_topology(void)
3728 + stop_machine(update_cpu_topology, &updates[0], &updated_cpus);
3729 +
3730 + for (ud = &updates[0]; ud; ud = ud->next) {
3731 ++ unregister_cpu_under_node(ud->cpu, ud->old_nid);
3732 ++ register_cpu_under_node(ud->cpu, ud->new_nid);
3733 ++
3734 + dev = get_cpu_device(ud->cpu);
3735 + if (dev)
3736 + kobject_uevent(&dev->kobj, KOBJ_CHANGE);
3737 +diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
3738 +index 29c64828..d3ee2e50 100644
3739 +--- a/arch/powerpc/perf/core-book3s.c
3740 ++++ b/arch/powerpc/perf/core-book3s.c
3741 +@@ -75,6 +75,8 @@ static unsigned int freeze_events_kernel = MMCR0_FCS;
3742 +
3743 + #define MMCR0_FCHV 0
3744 + #define MMCR0_PMCjCE MMCR0_PMCnCE
3745 ++#define MMCR0_FC56 0
3746 ++#define MMCR0_PMAO 0
3747 +
3748 + #define SPRN_MMCRA SPRN_MMCR2
3749 + #define MMCRA_SAMPLE_ENABLE 0
3750 +@@ -852,7 +854,7 @@ static void write_mmcr0(struct cpu_hw_events *cpuhw, unsigned long mmcr0)
3751 + static void power_pmu_disable(struct pmu *pmu)
3752 + {
3753 + struct cpu_hw_events *cpuhw;
3754 +- unsigned long flags;
3755 ++ unsigned long flags, val;
3756 +
3757 + if (!ppmu)
3758 + return;
3759 +@@ -860,9 +862,6 @@ static void power_pmu_disable(struct pmu *pmu)
3760 + cpuhw = &__get_cpu_var(cpu_hw_events);
3761 +
3762 + if (!cpuhw->disabled) {
3763 +- cpuhw->disabled = 1;
3764 +- cpuhw->n_added = 0;
3765 +-
3766 + /*
3767 + * Check if we ever enabled the PMU on this cpu.
3768 + */
3769 +@@ -872,6 +871,21 @@ static void power_pmu_disable(struct pmu *pmu)
3770 + }
3771 +
3772 + /*
3773 ++ * Set the 'freeze counters' bit, clear PMAO/FC56.
3774 ++ */
3775 ++ val = mfspr(SPRN_MMCR0);
3776 ++ val |= MMCR0_FC;
3777 ++ val &= ~(MMCR0_PMAO | MMCR0_FC56);
3778 ++
3779 ++ /*
3780 ++ * The barrier is to make sure the mtspr has been
3781 ++ * executed and the PMU has frozen the events etc.
3782 ++ * before we return.
3783 ++ */
3784 ++ write_mmcr0(cpuhw, val);
3785 ++ mb();
3786 ++
3787 ++ /*
3788 + * Disable instruction sampling if it was enabled
3789 + */
3790 + if (cpuhw->mmcr[2] & MMCRA_SAMPLE_ENABLE) {
3791 +@@ -880,14 +894,8 @@ static void power_pmu_disable(struct pmu *pmu)
3792 + mb();
3793 + }
3794 +
3795 +- /*
3796 +- * Set the 'freeze counters' bit.
3797 +- * The barrier is to make sure the mtspr has been
3798 +- * executed and the PMU has frozen the events
3799 +- * before we return.
3800 +- */
3801 +- write_mmcr0(cpuhw, mfspr(SPRN_MMCR0) | MMCR0_FC);
3802 +- mb();
3803 ++ cpuhw->disabled = 1;
3804 ++ cpuhw->n_added = 0;
3805 + }
3806 + local_irq_restore(flags);
3807 + }
3808 +@@ -911,12 +919,18 @@ static void power_pmu_enable(struct pmu *pmu)
3809 +
3810 + if (!ppmu)
3811 + return;
3812 ++
3813 + local_irq_save(flags);
3814 ++
3815 + cpuhw = &__get_cpu_var(cpu_hw_events);
3816 +- if (!cpuhw->disabled) {
3817 +- local_irq_restore(flags);
3818 +- return;
3819 ++ if (!cpuhw->disabled)
3820 ++ goto out;
3821 ++
3822 ++ if (cpuhw->n_events == 0) {
3823 ++ ppc_set_pmu_inuse(0);
3824 ++ goto out;
3825 + }
3826 ++
3827 + cpuhw->disabled = 0;
3828 +
3829 + /*
3830 +@@ -928,8 +942,6 @@ static void power_pmu_enable(struct pmu *pmu)
3831 + if (!cpuhw->n_added) {
3832 + mtspr(SPRN_MMCRA, cpuhw->mmcr[2] & ~MMCRA_SAMPLE_ENABLE);
3833 + mtspr(SPRN_MMCR1, cpuhw->mmcr[1]);
3834 +- if (cpuhw->n_events == 0)
3835 +- ppc_set_pmu_inuse(0);
3836 + goto out_enable;
3837 + }
3838 +
3839 +diff --git a/arch/powerpc/perf/power8-pmu.c b/arch/powerpc/perf/power8-pmu.c
3840 +index f7d1c4ff..d59f5b2d 100644
3841 +--- a/arch/powerpc/perf/power8-pmu.c
3842 ++++ b/arch/powerpc/perf/power8-pmu.c
3843 +@@ -109,6 +109,16 @@
3844 + #define EVENT_IS_MARKED (EVENT_MARKED_MASK << EVENT_MARKED_SHIFT)
3845 + #define EVENT_PSEL_MASK 0xff /* PMCxSEL value */
3846 +
3847 ++#define EVENT_VALID_MASK \
3848 ++ ((EVENT_THRESH_MASK << EVENT_THRESH_SHIFT) | \
3849 ++ (EVENT_SAMPLE_MASK << EVENT_SAMPLE_SHIFT) | \
3850 ++ (EVENT_CACHE_SEL_MASK << EVENT_CACHE_SEL_SHIFT) | \
3851 ++ (EVENT_PMC_MASK << EVENT_PMC_SHIFT) | \
3852 ++ (EVENT_UNIT_MASK << EVENT_UNIT_SHIFT) | \
3853 ++ (EVENT_COMBINE_MASK << EVENT_COMBINE_SHIFT) | \
3854 ++ (EVENT_MARKED_MASK << EVENT_MARKED_SHIFT) | \
3855 ++ EVENT_PSEL_MASK)
3856 ++
3857 + /* MMCRA IFM bits - POWER8 */
3858 + #define POWER8_MMCRA_IFM1 0x0000000040000000UL
3859 + #define POWER8_MMCRA_IFM2 0x0000000080000000UL
3860 +@@ -212,6 +222,9 @@ static int power8_get_constraint(u64 event, unsigned long *maskp, unsigned long
3861 +
3862 + mask = value = 0;
3863 +
3864 ++ if (event & ~EVENT_VALID_MASK)
3865 ++ return -1;
3866 ++
3867 + pmc = (event >> EVENT_PMC_SHIFT) & EVENT_PMC_MASK;
3868 + unit = (event >> EVENT_UNIT_SHIFT) & EVENT_UNIT_MASK;
3869 + cache = (event >> EVENT_CACHE_SEL_SHIFT) & EVENT_CACHE_SEL_MASK;
3870 +@@ -378,6 +391,10 @@ static int power8_compute_mmcr(u64 event[], int n_ev,
3871 + if (pmc_inuse & 0x7c)
3872 + mmcr[0] |= MMCR0_PMCjCE;
3873 +
3874 ++ /* If we're not using PMC 5 or 6, freeze them */
3875 ++ if (!(pmc_inuse & 0x60))
3876 ++ mmcr[0] |= MMCR0_FC56;
3877 ++
3878 + mmcr[1] = mmcr1;
3879 + mmcr[2] = mmcra;
3880 +
3881 +diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
3882 +index 9c9d15e4..7816beff 100644
3883 +--- a/arch/powerpc/platforms/powernv/pci-ioda.c
3884 ++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
3885 +@@ -441,6 +441,17 @@ static void pnv_pci_ioda_dma_dev_setup(struct pnv_phb *phb, struct pci_dev *pdev
3886 + set_iommu_table_base(&pdev->dev, &pe->tce32_table);
3887 + }
3888 +
3889 ++static void pnv_ioda_setup_bus_dma(struct pnv_ioda_pe *pe, struct pci_bus *bus)
3890 ++{
3891 ++ struct pci_dev *dev;
3892 ++
3893 ++ list_for_each_entry(dev, &bus->devices, bus_list) {
3894 ++ set_iommu_table_base(&dev->dev, &pe->tce32_table);
3895 ++ if (dev->subordinate)
3896 ++ pnv_ioda_setup_bus_dma(pe, dev->subordinate);
3897 ++ }
3898 ++}
3899 ++
3900 + static void pnv_pci_ioda1_tce_invalidate(struct iommu_table *tbl,
3901 + u64 *startp, u64 *endp)
3902 + {
3903 +@@ -596,6 +607,11 @@ static void pnv_pci_ioda_setup_dma_pe(struct pnv_phb *phb,
3904 + }
3905 + iommu_init_table(tbl, phb->hose->node);
3906 +
3907 ++ if (pe->pdev)
3908 ++ set_iommu_table_base(&pe->pdev->dev, tbl);
3909 ++ else
3910 ++ pnv_ioda_setup_bus_dma(pe, pe->pbus);
3911 ++
3912 + return;
3913 + fail:
3914 + /* XXX Failure: Try to fallback to 64-bit only ? */
3915 +@@ -667,6 +683,11 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb,
3916 + }
3917 + iommu_init_table(tbl, phb->hose->node);
3918 +
3919 ++ if (pe->pdev)
3920 ++ set_iommu_table_base(&pe->pdev->dev, tbl);
3921 ++ else
3922 ++ pnv_ioda_setup_bus_dma(pe, pe->pbus);
3923 ++
3924 + return;
3925 + fail:
3926 + if (pe->tce32_seg >= 0)
3927 +diff --git a/arch/xtensa/kernel/head.S b/arch/xtensa/kernel/head.S
3928 +index ef12c0e6..7d740ebb 100644
3929 +--- a/arch/xtensa/kernel/head.S
3930 ++++ b/arch/xtensa/kernel/head.S
3931 +@@ -68,6 +68,15 @@ _SetupMMU:
3932 +
3933 + #ifdef CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX
3934 + initialize_mmu
3935 ++#if defined(CONFIG_MMU) && XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY
3936 ++ rsr a2, excsave1
3937 ++ movi a3, 0x08000000
3938 ++ bgeu a2, a3, 1f
3939 ++ movi a3, 0xd0000000
3940 ++ add a2, a2, a3
3941 ++ wsr a2, excsave1
3942 ++1:
3943 ++#endif
3944 + #endif
3945 + .end no-absolute-literals
3946 +
3947 +diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c
3948 +index 6dd25ecd..14c6c3a6 100644
3949 +--- a/arch/xtensa/kernel/setup.c
3950 ++++ b/arch/xtensa/kernel/setup.c
3951 +@@ -152,8 +152,8 @@ static int __init parse_tag_initrd(const bp_tag_t* tag)
3952 + {
3953 + meminfo_t* mi;
3954 + mi = (meminfo_t*)(tag->data);
3955 +- initrd_start = (void*)(mi->start);
3956 +- initrd_end = (void*)(mi->end);
3957 ++ initrd_start = __va(mi->start);
3958 ++ initrd_end = __va(mi->end);
3959 +
3960 + return 0;
3961 + }
3962 +@@ -164,7 +164,7 @@ __tagtable(BP_TAG_INITRD, parse_tag_initrd);
3963 +
3964 + static int __init parse_tag_fdt(const bp_tag_t *tag)
3965 + {
3966 +- dtb_start = (void *)(tag->data[0]);
3967 ++ dtb_start = __va(tag->data[0]);
3968 + return 0;
3969 + }
3970 +
3971 +diff --git a/drivers/ata/ata_piix.c b/drivers/ata/ata_piix.c
3972 +index 9a8a674e..8eae6590 100644
3973 +--- a/drivers/ata/ata_piix.c
3974 ++++ b/drivers/ata/ata_piix.c
3975 +@@ -338,6 +338,8 @@ static const struct pci_device_id piix_pci_tbl[] = {
3976 + /* SATA Controller IDE (BayTrail) */
3977 + { 0x8086, 0x0F20, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_byt },
3978 + { 0x8086, 0x0F21, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_byt },
3979 ++ /* SATA Controller IDE (Coleto Creek) */
3980 ++ { 0x8086, 0x23a6, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
3981 +
3982 + { } /* terminate list */
3983 + };
3984 +diff --git a/drivers/ata/libata-pmp.c b/drivers/ata/libata-pmp.c
3985 +index 61c59ee4..1c41722b 100644
3986 +--- a/drivers/ata/libata-pmp.c
3987 ++++ b/drivers/ata/libata-pmp.c
3988 +@@ -389,9 +389,13 @@ static void sata_pmp_quirks(struct ata_port *ap)
3989 + /* link reports offline after LPM */
3990 + link->flags |= ATA_LFLAG_NO_LPM;
3991 +
3992 +- /* Class code report is unreliable. */
3993 ++ /*
3994 ++ * Class code report is unreliable and SRST times
3995 ++ * out under certain configurations.
3996 ++ */
3997 + if (link->pmp < 5)
3998 +- link->flags |= ATA_LFLAG_ASSUME_ATA;
3999 ++ link->flags |= ATA_LFLAG_NO_SRST |
4000 ++ ATA_LFLAG_ASSUME_ATA;
4001 +
4002 + /* port 5 is for SEMB device and it doesn't like SRST */
4003 + if (link->pmp == 5)
4004 +@@ -399,20 +403,17 @@ static void sata_pmp_quirks(struct ata_port *ap)
4005 + ATA_LFLAG_ASSUME_SEMB;
4006 + }
4007 + } else if (vendor == 0x1095 && devid == 0x4723) {
4008 +- /* sil4723 quirks */
4009 +- ata_for_each_link(link, ap, EDGE) {
4010 +- /* link reports offline after LPM */
4011 +- link->flags |= ATA_LFLAG_NO_LPM;
4012 +-
4013 +- /* class code report is unreliable */
4014 +- if (link->pmp < 2)
4015 +- link->flags |= ATA_LFLAG_ASSUME_ATA;
4016 +-
4017 +- /* the config device at port 2 locks up on SRST */
4018 +- if (link->pmp == 2)
4019 +- link->flags |= ATA_LFLAG_NO_SRST |
4020 +- ATA_LFLAG_ASSUME_ATA;
4021 +- }
4022 ++ /*
4023 ++ * sil4723 quirks
4024 ++ *
4025 ++ * Link reports offline after LPM. Class code report is
4026 ++ * unreliable. SIMG PMPs never got SRST reliable and the
4027 ++ * config device at port 2 locks up on SRST.
4028 ++ */
4029 ++ ata_for_each_link(link, ap, EDGE)
4030 ++ link->flags |= ATA_LFLAG_NO_LPM |
4031 ++ ATA_LFLAG_NO_SRST |
4032 ++ ATA_LFLAG_ASSUME_ATA;
4033 + } else if (vendor == 0x1095 && devid == 0x4726) {
4034 + /* sil4726 quirks */
4035 + ata_for_each_link(link, ap, EDGE) {
4036 +diff --git a/drivers/ata/libata-zpodd.c b/drivers/ata/libata-zpodd.c
4037 +index 90b159b7..cd8daf47 100644
4038 +--- a/drivers/ata/libata-zpodd.c
4039 ++++ b/drivers/ata/libata-zpodd.c
4040 +@@ -32,13 +32,14 @@ struct zpodd {
4041 +
4042 + static int eject_tray(struct ata_device *dev)
4043 + {
4044 +- struct ata_taskfile tf = {};
4045 ++ struct ata_taskfile tf;
4046 + const char cdb[] = { GPCMD_START_STOP_UNIT,
4047 + 0, 0, 0,
4048 + 0x02, /* LoEj */
4049 + 0, 0, 0, 0, 0, 0, 0,
4050 + };
4051 +
4052 ++ ata_tf_init(dev, &tf);
4053 + tf.flags = ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
4054 + tf.command = ATA_CMD_PACKET;
4055 + tf.protocol = ATAPI_PROT_NODATA;
4056 +@@ -52,8 +53,7 @@ static enum odd_mech_type zpodd_get_mech_type(struct ata_device *dev)
4057 + char buf[16];
4058 + unsigned int ret;
4059 + struct rm_feature_desc *desc = (void *)(buf + 8);
4060 +- struct ata_taskfile tf = {};
4061 +-
4062 ++ struct ata_taskfile tf;
4063 + char cdb[] = { GPCMD_GET_CONFIGURATION,
4064 + 2, /* only 1 feature descriptor requested */
4065 + 0, 3, /* 3, removable medium feature */
4066 +@@ -62,6 +62,7 @@ static enum odd_mech_type zpodd_get_mech_type(struct ata_device *dev)
4067 + 0, 0, 0,
4068 + };
4069 +
4070 ++ ata_tf_init(dev, &tf);
4071 + tf.flags = ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
4072 + tf.command = ATA_CMD_PACKET;
4073 + tf.protocol = ATAPI_PROT_PIO;
4074 +diff --git a/drivers/ata/sata_highbank.c b/drivers/ata/sata_highbank.c
4075 +index b20aa96b..c846fd3c 100644
4076 +--- a/drivers/ata/sata_highbank.c
4077 ++++ b/drivers/ata/sata_highbank.c
4078 +@@ -196,10 +196,26 @@ static int highbank_initialize_phys(struct device *dev, void __iomem *addr)
4079 + return 0;
4080 + }
4081 +
4082 ++/*
4083 ++ * The Calxeda SATA phy intermittently fails to bring up a link with Gen3.
4084 ++ * Retrying the phy hard reset can work around the issue, but the drive
4085 ++ * may fail again. In less than 150 out of 15000 test runs, it took more
4086 ++ * than 10 tries for the link to be established (but never more than 35).
4087 ++ * Triple the maximum observed retry count to provide plenty of margin for
4088 ++ * rare events and to guarantee that the link is established.
4089 ++ *
4090 ++ * Also, the default 2 second time-out on a failed drive is too long in
4091 ++ * this situation. The uboot implementation of the same driver function
4092 ++ * uses a much shorter time-out period and never experiences a time out
4093 ++ * issue. Reducing the time-out to 500ms improves the responsiveness.
4094 ++ * The other timing constants were kept the same as the stock AHCI driver.
4095 ++ * This change was also tested 15000 times on 24 drives and none of them
4096 ++ * experienced a time out.
4097 ++ */
4098 + static int ahci_highbank_hardreset(struct ata_link *link, unsigned int *class,
4099 + unsigned long deadline)
4100 + {
4101 +- const unsigned long *timing = sata_ehc_deb_timing(&link->eh_context);
4102 ++ static const unsigned long timing[] = { 5, 100, 500};
4103 + struct ata_port *ap = link->ap;
4104 + struct ahci_port_priv *pp = ap->private_data;
4105 + u8 *d2h_fis = pp->rx_fis + RX_FIS_D2H_REG;
4106 +@@ -207,7 +223,7 @@ static int ahci_highbank_hardreset(struct ata_link *link, unsigned int *class,
4107 + bool online;
4108 + u32 sstatus;
4109 + int rc;
4110 +- int retry = 10;
4111 ++ int retry = 100;
4112 +
4113 + ahci_stop_engine(ap);
4114 +
4115 +diff --git a/drivers/clocksource/dw_apb_timer_of.c b/drivers/clocksource/dw_apb_timer_of.c
4116 +index ab09ed37..6b02eddc 100644
4117 +--- a/drivers/clocksource/dw_apb_timer_of.c
4118 ++++ b/drivers/clocksource/dw_apb_timer_of.c
4119 +@@ -44,7 +44,7 @@ static void add_clockevent(struct device_node *event_timer)
4120 + u32 irq, rate;
4121 +
4122 + irq = irq_of_parse_and_map(event_timer, 0);
4123 +- if (irq == NO_IRQ)
4124 ++ if (irq == 0)
4125 + panic("No IRQ for clock event timer");
4126 +
4127 + timer_get_base_and_rate(event_timer, &iobase, &rate);
4128 +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
4129 +index 2d53f47d..178fe7a6 100644
4130 +--- a/drivers/cpufreq/cpufreq.c
4131 ++++ b/drivers/cpufreq/cpufreq.c
4132 +@@ -1837,13 +1837,15 @@ static int __cpuinit cpufreq_cpu_callback(struct notifier_block *nfb,
4133 + if (dev) {
4134 + switch (action) {
4135 + case CPU_ONLINE:
4136 ++ case CPU_ONLINE_FROZEN:
4137 + cpufreq_add_dev(dev, NULL);
4138 + break;
4139 + case CPU_DOWN_PREPARE:
4140 +- case CPU_UP_CANCELED_FROZEN:
4141 ++ case CPU_DOWN_PREPARE_FROZEN:
4142 + __cpufreq_remove_dev(dev, NULL);
4143 + break;
4144 + case CPU_DOWN_FAILED:
4145 ++ case CPU_DOWN_FAILED_FROZEN:
4146 + cpufreq_add_dev(dev, NULL);
4147 + break;
4148 + }
4149 +diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
4150 +index dc9b72e2..5af40ad8 100644
4151 +--- a/drivers/cpufreq/cpufreq_governor.c
4152 ++++ b/drivers/cpufreq/cpufreq_governor.c
4153 +@@ -26,7 +26,6 @@
4154 + #include <linux/tick.h>
4155 + #include <linux/types.h>
4156 + #include <linux/workqueue.h>
4157 +-#include <linux/cpu.h>
4158 +
4159 + #include "cpufreq_governor.h"
4160 +
4161 +@@ -181,10 +180,8 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
4162 + if (!all_cpus) {
4163 + __gov_queue_work(smp_processor_id(), dbs_data, delay);
4164 + } else {
4165 +- get_online_cpus();
4166 + for_each_cpu(i, policy->cpus)
4167 + __gov_queue_work(i, dbs_data, delay);
4168 +- put_online_cpus();
4169 + }
4170 + }
4171 + EXPORT_SYMBOL_GPL(gov_queue_work);
4172 +diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
4173 +index 591b6fb6..bfd6273f 100644
4174 +--- a/drivers/cpufreq/cpufreq_stats.c
4175 ++++ b/drivers/cpufreq/cpufreq_stats.c
4176 +@@ -353,13 +353,11 @@ static int __cpuinit cpufreq_stat_cpu_callback(struct notifier_block *nfb,
4177 + cpufreq_update_policy(cpu);
4178 + break;
4179 + case CPU_DOWN_PREPARE:
4180 ++ case CPU_DOWN_PREPARE_FROZEN:
4181 + cpufreq_stats_free_sysfs(cpu);
4182 + break;
4183 + case CPU_DEAD:
4184 +- cpufreq_stats_free_table(cpu);
4185 +- break;
4186 +- case CPU_UP_CANCELED_FROZEN:
4187 +- cpufreq_stats_free_sysfs(cpu);
4188 ++ case CPU_DEAD_FROZEN:
4189 + cpufreq_stats_free_table(cpu);
4190 + break;
4191 + }
4192 +diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
4193 +index cf919e36..239ef30f 100644
4194 +--- a/drivers/gpu/drm/drm_gem.c
4195 ++++ b/drivers/gpu/drm/drm_gem.c
4196 +@@ -453,25 +453,21 @@ drm_gem_flink_ioctl(struct drm_device *dev, void *data,
4197 + spin_lock(&dev->object_name_lock);
4198 + if (!obj->name) {
4199 + ret = idr_alloc(&dev->object_name_idr, obj, 1, 0, GFP_NOWAIT);
4200 +- obj->name = ret;
4201 +- args->name = (uint64_t) obj->name;
4202 +- spin_unlock(&dev->object_name_lock);
4203 +- idr_preload_end();
4204 +-
4205 + if (ret < 0)
4206 + goto err;
4207 +- ret = 0;
4208 ++
4209 ++ obj->name = ret;
4210 +
4211 + /* Allocate a reference for the name table. */
4212 + drm_gem_object_reference(obj);
4213 +- } else {
4214 +- args->name = (uint64_t) obj->name;
4215 +- spin_unlock(&dev->object_name_lock);
4216 +- idr_preload_end();
4217 +- ret = 0;
4218 + }
4219 +
4220 ++ args->name = (uint64_t) obj->name;
4221 ++ ret = 0;
4222 ++
4223 + err:
4224 ++ spin_unlock(&dev->object_name_lock);
4225 ++ idr_preload_end();
4226 + drm_gem_object_unreference_unlocked(obj);
4227 + return ret;
4228 + }
4229 +diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
4230 +index 9e35dafc..34118b0c 100644
4231 +--- a/drivers/gpu/drm/i915/i915_gem.c
4232 ++++ b/drivers/gpu/drm/i915/i915_gem.c
4233 +@@ -1160,7 +1160,8 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
4234 + /* Manually manage the write flush as we may have not yet
4235 + * retired the buffer.
4236 + */
4237 +- if (obj->last_write_seqno &&
4238 ++ if (ret == 0 &&
4239 ++ obj->last_write_seqno &&
4240 + i915_seqno_passed(seqno, obj->last_write_seqno)) {
4241 + obj->last_write_seqno = 0;
4242 + obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
4243 +diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
4244 +index a1e8ecb6..3bc8a58a 100644
4245 +--- a/drivers/gpu/drm/i915/i915_gem_context.c
4246 ++++ b/drivers/gpu/drm/i915/i915_gem_context.c
4247 +@@ -113,7 +113,7 @@ static int get_context_size(struct drm_device *dev)
4248 + case 7:
4249 + reg = I915_READ(GEN7_CXT_SIZE);
4250 + if (IS_HASWELL(dev))
4251 +- ret = HSW_CXT_TOTAL_SIZE(reg) * 64;
4252 ++ ret = HSW_CXT_TOTAL_SIZE;
4253 + else
4254 + ret = GEN7_CXT_TOTAL_SIZE(reg) * 64;
4255 + break;
4256 +diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
4257 +index 0aa2ef0d..e5e32869 100644
4258 +--- a/drivers/gpu/drm/i915/i915_irq.c
4259 ++++ b/drivers/gpu/drm/i915/i915_irq.c
4260 +@@ -70,15 +70,6 @@ static const u32 hpd_status_gen4[] = {
4261 + [HPD_PORT_D] = PORTD_HOTPLUG_INT_STATUS
4262 + };
4263 +
4264 +-static const u32 hpd_status_i965[] = {
4265 +- [HPD_CRT] = CRT_HOTPLUG_INT_STATUS,
4266 +- [HPD_SDVO_B] = SDVOB_HOTPLUG_INT_STATUS_I965,
4267 +- [HPD_SDVO_C] = SDVOC_HOTPLUG_INT_STATUS_I965,
4268 +- [HPD_PORT_B] = PORTB_HOTPLUG_INT_STATUS,
4269 +- [HPD_PORT_C] = PORTC_HOTPLUG_INT_STATUS,
4270 +- [HPD_PORT_D] = PORTD_HOTPLUG_INT_STATUS
4271 +-};
4272 +-
4273 + static const u32 hpd_status_i915[] = { /* i915 and valleyview are the same */
4274 + [HPD_CRT] = CRT_HOTPLUG_INT_STATUS,
4275 + [HPD_SDVO_B] = SDVOB_HOTPLUG_INT_STATUS_I915,
4276 +@@ -2952,13 +2943,13 @@ static irqreturn_t i965_irq_handler(int irq, void *arg)
4277 + u32 hotplug_status = I915_READ(PORT_HOTPLUG_STAT);
4278 + u32 hotplug_trigger = hotplug_status & (IS_G4X(dev) ?
4279 + HOTPLUG_INT_STATUS_G4X :
4280 +- HOTPLUG_INT_STATUS_I965);
4281 ++ HOTPLUG_INT_STATUS_I915);
4282 +
4283 + DRM_DEBUG_DRIVER("hotplug event received, stat 0x%08x\n",
4284 + hotplug_status);
4285 + if (hotplug_trigger) {
4286 + if (hotplug_irq_storm_detect(dev, hotplug_trigger,
4287 +- IS_G4X(dev) ? hpd_status_gen4 : hpd_status_i965))
4288 ++ IS_G4X(dev) ? hpd_status_gen4 : hpd_status_i915))
4289 + i915_hpd_irq_setup(dev);
4290 + queue_work(dev_priv->wq,
4291 + &dev_priv->hotplug_work);
4292 +diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
4293 +index 2d6b62e4..80b0a662 100644
4294 +--- a/drivers/gpu/drm/i915/i915_reg.h
4295 ++++ b/drivers/gpu/drm/i915/i915_reg.h
4296 +@@ -1535,14 +1535,13 @@
4297 + GEN7_CXT_EXTENDED_SIZE(ctx_reg) + \
4298 + GEN7_CXT_GT1_SIZE(ctx_reg) + \
4299 + GEN7_CXT_VFSTATE_SIZE(ctx_reg))
4300 +-#define HSW_CXT_POWER_SIZE(ctx_reg) ((ctx_reg >> 26) & 0x3f)
4301 +-#define HSW_CXT_RING_SIZE(ctx_reg) ((ctx_reg >> 23) & 0x7)
4302 +-#define HSW_CXT_RENDER_SIZE(ctx_reg) ((ctx_reg >> 15) & 0xff)
4303 +-#define HSW_CXT_TOTAL_SIZE(ctx_reg) (HSW_CXT_POWER_SIZE(ctx_reg) + \
4304 +- HSW_CXT_RING_SIZE(ctx_reg) + \
4305 +- HSW_CXT_RENDER_SIZE(ctx_reg) + \
4306 +- GEN7_CXT_VFSTATE_SIZE(ctx_reg))
4307 +-
4308 ++/* Haswell does have the CXT_SIZE register however it does not appear to be
4309 ++ * valid. Now, docs explain in dwords what is in the context object. The full
4310 ++ * size is 70720 bytes, however, the power context and execlist context will
4311 ++ * never be saved (power context is stored elsewhere, and execlists don't work
4312 ++ * on HSW) - so the final size is 66944 bytes, which rounds to 17 pages.
4313 ++ */
4314 ++#define HSW_CXT_TOTAL_SIZE (17 * PAGE_SIZE)
4315 +
4316 + /*
4317 + * Overlay regs
4318 +@@ -1691,6 +1690,12 @@
4319 + /* SDVO is different across gen3/4 */
4320 + #define SDVOC_HOTPLUG_INT_STATUS_G4X (1 << 3)
4321 + #define SDVOB_HOTPLUG_INT_STATUS_G4X (1 << 2)
4322 ++/*
4323 ++ * Bspec seems to be seriously misleading about the SDVO hpd bits on i965g/gm,
4324 ++ * since reality corroborates that they're the same as on gen3. But keep these
4325 ++ * bits here (and the comment!) to help any other lost wanderers back onto the
4326 ++ * right tracks.
4327 ++ */
4328 + #define SDVOC_HOTPLUG_INT_STATUS_I965 (3 << 4)
4329 + #define SDVOB_HOTPLUG_INT_STATUS_I965 (3 << 2)
4330 + #define SDVOC_HOTPLUG_INT_STATUS_I915 (1 << 7)
4331 +@@ -1702,13 +1707,6 @@
4332 + PORTC_HOTPLUG_INT_STATUS | \
4333 + PORTD_HOTPLUG_INT_STATUS)
4334 +
4335 +-#define HOTPLUG_INT_STATUS_I965 (CRT_HOTPLUG_INT_STATUS | \
4336 +- SDVOB_HOTPLUG_INT_STATUS_I965 | \
4337 +- SDVOC_HOTPLUG_INT_STATUS_I965 | \
4338 +- PORTB_HOTPLUG_INT_STATUS | \
4339 +- PORTC_HOTPLUG_INT_STATUS | \
4340 +- PORTD_HOTPLUG_INT_STATUS)
4341 +-
4342 + #define HOTPLUG_INT_STATUS_I915 (CRT_HOTPLUG_INT_STATUS | \
4343 + SDVOB_HOTPLUG_INT_STATUS_I915 | \
4344 + SDVOC_HOTPLUG_INT_STATUS_I915 | \
4345 +diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.h b/drivers/gpu/drm/mgag200/mgag200_drv.h
4346 +index bf29b2f4..988911af 100644
4347 +--- a/drivers/gpu/drm/mgag200/mgag200_drv.h
4348 ++++ b/drivers/gpu/drm/mgag200/mgag200_drv.h
4349 +@@ -198,7 +198,8 @@ struct mga_device {
4350 + struct ttm_bo_device bdev;
4351 + } ttm;
4352 +
4353 +- u32 reg_1e24; /* SE model number */
4354 ++ /* SE model number stored in reg 0x1e24 */
4355 ++ u32 unique_rev_id;
4356 + };
4357 +
4358 +
4359 +diff --git a/drivers/gpu/drm/mgag200/mgag200_main.c b/drivers/gpu/drm/mgag200/mgag200_main.c
4360 +index 99059237..dafe049f 100644
4361 +--- a/drivers/gpu/drm/mgag200/mgag200_main.c
4362 ++++ b/drivers/gpu/drm/mgag200/mgag200_main.c
4363 +@@ -176,7 +176,7 @@ static int mgag200_device_init(struct drm_device *dev,
4364 +
4365 + /* stash G200 SE model number for later use */
4366 + if (IS_G200_SE(mdev))
4367 +- mdev->reg_1e24 = RREG32(0x1e24);
4368 ++ mdev->unique_rev_id = RREG32(0x1e24);
4369 +
4370 + ret = mga_vram_init(mdev);
4371 + if (ret)
4372 +diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
4373 +index ee66badc..99e07b68 100644
4374 +--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
4375 ++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
4376 +@@ -1008,7 +1008,7 @@ static int mga_crtc_mode_set(struct drm_crtc *crtc,
4377 +
4378 +
4379 + if (IS_G200_SE(mdev)) {
4380 +- if (mdev->reg_1e24 >= 0x02) {
4381 ++ if (mdev->unique_rev_id >= 0x02) {
4382 + u8 hi_pri_lvl;
4383 + u32 bpp;
4384 + u32 mb;
4385 +@@ -1038,7 +1038,7 @@ static int mga_crtc_mode_set(struct drm_crtc *crtc,
4386 + WREG8(MGAREG_CRTCEXT_DATA, hi_pri_lvl);
4387 + } else {
4388 + WREG8(MGAREG_CRTCEXT_INDEX, 0x06);
4389 +- if (mdev->reg_1e24 >= 0x01)
4390 ++ if (mdev->unique_rev_id >= 0x01)
4391 + WREG8(MGAREG_CRTCEXT_DATA, 0x03);
4392 + else
4393 + WREG8(MGAREG_CRTCEXT_DATA, 0x04);
4394 +@@ -1410,6 +1410,32 @@ static int mga_vga_get_modes(struct drm_connector *connector)
4395 + return ret;
4396 + }
4397 +
4398 ++static uint32_t mga_vga_calculate_mode_bandwidth(struct drm_display_mode *mode,
4399 ++ int bits_per_pixel)
4400 ++{
4401 ++ uint32_t total_area, divisor;
4402 ++ int64_t active_area, pixels_per_second, bandwidth;
4403 ++ uint64_t bytes_per_pixel = (bits_per_pixel + 7) / 8;
4404 ++
4405 ++ divisor = 1024;
4406 ++
4407 ++ if (!mode->htotal || !mode->vtotal || !mode->clock)
4408 ++ return 0;
4409 ++
4410 ++ active_area = mode->hdisplay * mode->vdisplay;
4411 ++ total_area = mode->htotal * mode->vtotal;
4412 ++
4413 ++ pixels_per_second = active_area * mode->clock * 1000;
4414 ++ do_div(pixels_per_second, total_area);
4415 ++
4416 ++ bandwidth = pixels_per_second * bytes_per_pixel * 100;
4417 ++ do_div(bandwidth, divisor);
4418 ++
4419 ++ return (uint32_t)(bandwidth);
4420 ++}
4421 ++
4422 ++#define MODE_BANDWIDTH MODE_BAD
4423 ++
4424 + static int mga_vga_mode_valid(struct drm_connector *connector,
4425 + struct drm_display_mode *mode)
4426 + {
4427 +@@ -1421,7 +1447,45 @@ static int mga_vga_mode_valid(struct drm_connector *connector,
4428 + int bpp = 32;
4429 + int i = 0;
4430 +
4431 +- /* FIXME: Add bandwidth and g200se limitations */
4432 ++ if (IS_G200_SE(mdev)) {
4433 ++ if (mdev->unique_rev_id == 0x01) {
4434 ++ if (mode->hdisplay > 1600)
4435 ++ return MODE_VIRTUAL_X;
4436 ++ if (mode->vdisplay > 1200)
4437 ++ return MODE_VIRTUAL_Y;
4438 ++ if (mga_vga_calculate_mode_bandwidth(mode, bpp)
4439 ++ > (24400 * 1024))
4440 ++ return MODE_BANDWIDTH;
4441 ++ } else if (mdev->unique_rev_id >= 0x02) {
4442 ++ if (mode->hdisplay > 1920)
4443 ++ return MODE_VIRTUAL_X;
4444 ++ if (mode->vdisplay > 1200)
4445 ++ return MODE_VIRTUAL_Y;
4446 ++ if (mga_vga_calculate_mode_bandwidth(mode, bpp)
4447 ++ > (30100 * 1024))
4448 ++ return MODE_BANDWIDTH;
4449 ++ }
4450 ++ } else if (mdev->type == G200_WB) {
4451 ++ if (mode->hdisplay > 1280)
4452 ++ return MODE_VIRTUAL_X;
4453 ++ if (mode->vdisplay > 1024)
4454 ++ return MODE_VIRTUAL_Y;
4455 ++		if (mga_vga_calculate_mode_bandwidth(mode, bpp)
4456 ++		    > (31877 * 1024))
4457 ++ return MODE_BANDWIDTH;
4458 ++ } else if (mdev->type == G200_EV &&
4459 ++ (mga_vga_calculate_mode_bandwidth(mode, bpp)
4460 ++ > (32700 * 1024))) {
4461 ++ return MODE_BANDWIDTH;
4462 +	} else if (mdev->type == G200_EH &&
4463 ++ (mga_vga_calculate_mode_bandwidth(mode, bpp)
4464 ++ > (37500 * 1024))) {
4465 ++ return MODE_BANDWIDTH;
4466 +	} else if (mdev->type == G200_ER &&
4467 ++ (mga_vga_calculate_mode_bandwidth(mode,
4468 ++ bpp) > (55000 * 1024))) {
4469 ++ return MODE_BANDWIDTH;
4470 ++ }
4471 +
4472 + if (mode->crtc_hdisplay > 2048 || mode->crtc_hsync_start > 4096 ||
4473 + mode->crtc_hsync_end > 4096 || mode->crtc_htotal > 4096 ||
4474 +diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/hdminva3.c b/drivers/gpu/drm/nouveau/core/engine/disp/hdminva3.c
4475 +index f065fc24..db8c6fd4 100644
4476 +--- a/drivers/gpu/drm/nouveau/core/engine/disp/hdminva3.c
4477 ++++ b/drivers/gpu/drm/nouveau/core/engine/disp/hdminva3.c
4478 +@@ -55,6 +55,10 @@ nva3_hdmi_ctrl(struct nv50_disp_priv *priv, int head, int or, u32 data)
4479 + nv_wr32(priv, 0x61c510 + soff, 0x00000000);
4480 + nv_mask(priv, 0x61c500 + soff, 0x00000001, 0x00000001);
4481 +
4482 ++ nv_mask(priv, 0x61c5d0 + soff, 0x00070001, 0x00010001); /* SPARE, HW_CTS */
4483 ++ nv_mask(priv, 0x61c568 + soff, 0x00010101, 0x00000000); /* ACR_CTRL, ?? */
4484 ++ nv_mask(priv, 0x61c578 + soff, 0x80000000, 0x80000000); /* ACR_0441_ENABLE */
4485 ++
4486 + /* ??? */
4487 + nv_mask(priv, 0x61733c, 0x00100000, 0x00100000); /* RESETF */
4488 + nv_mask(priv, 0x61733c, 0x10000000, 0x10000000); /* LOOKUP_EN */
4489 +diff --git a/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c b/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
4490 +index 6a38402f..5680d3eb 100644
4491 +--- a/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
4492 ++++ b/drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
4493 +@@ -1107,6 +1107,7 @@ nv50_disp_intr_unk20_2(struct nv50_disp_priv *priv, int head)
4494 + u32 pclk = nv_rd32(priv, 0x610ad0 + (head * 0x540)) & 0x3fffff;
4495 + u32 hval, hreg = 0x614200 + (head * 0x800);
4496 + u32 oval, oreg;
4497 ++ u32 mask;
4498 + u32 conf = exec_clkcmp(priv, head, 0xff, pclk, &outp);
4499 + if (conf != ~0) {
4500 + if (outp.location == 0 && outp.type == DCB_OUTPUT_DP) {
4501 +@@ -1133,6 +1134,7 @@ nv50_disp_intr_unk20_2(struct nv50_disp_priv *priv, int head)
4502 + oreg = 0x614280 + (ffs(outp.or) - 1) * 0x800;
4503 + oval = 0x00000000;
4504 + hval = 0x00000000;
4505 ++ mask = 0xffffffff;
4506 + } else
4507 + if (!outp.location) {
4508 + if (outp.type == DCB_OUTPUT_DP)
4509 +@@ -1140,14 +1142,16 @@ nv50_disp_intr_unk20_2(struct nv50_disp_priv *priv, int head)
4510 + oreg = 0x614300 + (ffs(outp.or) - 1) * 0x800;
4511 + oval = (conf & 0x0100) ? 0x00000101 : 0x00000000;
4512 + hval = 0x00000000;
4513 ++ mask = 0x00000707;
4514 + } else {
4515 + oreg = 0x614380 + (ffs(outp.or) - 1) * 0x800;
4516 + oval = 0x00000001;
4517 + hval = 0x00000001;
4518 ++ mask = 0x00000707;
4519 + }
4520 +
4521 + nv_mask(priv, hreg, 0x0000000f, hval);
4522 +- nv_mask(priv, oreg, 0x00000707, oval);
4523 ++ nv_mask(priv, oreg, mask, oval);
4524 + }
4525 + }
4526 +
4527 +diff --git a/drivers/gpu/drm/nouveau/core/subdev/vm/base.c b/drivers/gpu/drm/nouveau/core/subdev/vm/base.c
4528 +index 77c67fc9..e66fb771 100644
4529 +--- a/drivers/gpu/drm/nouveau/core/subdev/vm/base.c
4530 ++++ b/drivers/gpu/drm/nouveau/core/subdev/vm/base.c
4531 +@@ -362,7 +362,7 @@ nouveau_vm_create(struct nouveau_vmmgr *vmm, u64 offset, u64 length,
4532 + vm->fpde = offset >> (vmm->pgt_bits + 12);
4533 + vm->lpde = (offset + length - 1) >> (vmm->pgt_bits + 12);
4534 +
4535 +- vm->pgt = kcalloc(vm->lpde - vm->fpde + 1, sizeof(*vm->pgt), GFP_KERNEL);
4536 ++ vm->pgt = vzalloc((vm->lpde - vm->fpde + 1) * sizeof(*vm->pgt));
4537 + if (!vm->pgt) {
4538 + kfree(vm);
4539 + return -ENOMEM;
4540 +@@ -371,7 +371,7 @@ nouveau_vm_create(struct nouveau_vmmgr *vmm, u64 offset, u64 length,
4541 + ret = nouveau_mm_init(&vm->mm, mm_offset >> 12, mm_length >> 12,
4542 + block >> 12);
4543 + if (ret) {
4544 +- kfree(vm->pgt);
4545 ++ vfree(vm->pgt);
4546 + kfree(vm);
4547 + return ret;
4548 + }
4549 +@@ -446,7 +446,7 @@ nouveau_vm_del(struct nouveau_vm *vm)
4550 + }
4551 +
4552 + nouveau_mm_fini(&vm->mm);
4553 +- kfree(vm->pgt);
4554 ++ vfree(vm->pgt);
4555 + kfree(vm);
4556 + }
4557 +
4558 +diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
4559 +index 8406c825..4120d355 100644
4560 +--- a/drivers/gpu/drm/radeon/atombios_encoders.c
4561 ++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
4562 +@@ -186,6 +186,13 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder,
4563 + u8 backlight_level;
4564 + char bl_name[16];
4565 +
4566 ++ /* Mac laptops with multiple GPUs use the gmux driver for backlight
4567 ++ * so don't register a backlight device
4568 ++ */
4569 ++ if ((rdev->pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE) &&
4570 ++ (rdev->pdev->device == 0x6741))
4571 ++ return;
4572 ++
4573 + if (!radeon_encoder->enc_priv)
4574 + return;
4575 +
4576 +diff --git a/drivers/gpu/drm/radeon/evergreen_hdmi.c b/drivers/gpu/drm/radeon/evergreen_hdmi.c
4577 +index ed7c8a76..b9c6f767 100644
4578 +--- a/drivers/gpu/drm/radeon/evergreen_hdmi.c
4579 ++++ b/drivers/gpu/drm/radeon/evergreen_hdmi.c
4580 +@@ -128,14 +128,7 @@ static void evergreen_hdmi_update_avi_infoframe(struct drm_encoder *encoder,
4581 + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
4582 + uint32_t offset = dig->afmt->offset;
4583 + uint8_t *frame = buffer + 3;
4584 +-
4585 +- /* Our header values (type, version, length) should be alright, Intel
4586 +- * is using the same. Checksum function also seems to be OK, it works
4587 +- * fine for audio infoframe. However calculated value is always lower
4588 +- * by 2 in comparison to fglrx. It breaks displaying anything in case
4589 +- * of TVs that strictly check the checksum. Hack it manually here to
4590 +- * workaround this issue. */
4591 +- frame[0x0] += 2;
4592 ++ uint8_t *header = buffer;
4593 +
4594 + WREG32(AFMT_AVI_INFO0 + offset,
4595 + frame[0x0] | (frame[0x1] << 8) | (frame[0x2] << 16) | (frame[0x3] << 24));
4596 +@@ -144,7 +137,7 @@ static void evergreen_hdmi_update_avi_infoframe(struct drm_encoder *encoder,
4597 + WREG32(AFMT_AVI_INFO2 + offset,
4598 + frame[0x8] | (frame[0x9] << 8) | (frame[0xA] << 16) | (frame[0xB] << 24));
4599 + WREG32(AFMT_AVI_INFO3 + offset,
4600 +- frame[0xC] | (frame[0xD] << 8));
4601 ++ frame[0xC] | (frame[0xD] << 8) | (header[1] << 24));
4602 + }
4603 +
4604 + static void evergreen_audio_set_dto(struct drm_encoder *encoder, u32 clock)
4605 +diff --git a/drivers/gpu/drm/radeon/r600_hdmi.c b/drivers/gpu/drm/radeon/r600_hdmi.c
4606 +index 456750a0..e73b2a73 100644
4607 +--- a/drivers/gpu/drm/radeon/r600_hdmi.c
4608 ++++ b/drivers/gpu/drm/radeon/r600_hdmi.c
4609 +@@ -133,14 +133,7 @@ static void r600_hdmi_update_avi_infoframe(struct drm_encoder *encoder,
4610 + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
4611 + uint32_t offset = dig->afmt->offset;
4612 + uint8_t *frame = buffer + 3;
4613 +-
4614 +- /* Our header values (type, version, length) should be alright, Intel
4615 +- * is using the same. Checksum function also seems to be OK, it works
4616 +- * fine for audio infoframe. However calculated value is always lower
4617 +- * by 2 in comparison to fglrx. It breaks displaying anything in case
4618 +- * of TVs that strictly check the checksum. Hack it manually here to
4619 +- * workaround this issue. */
4620 +- frame[0x0] += 2;
4621 ++ uint8_t *header = buffer;
4622 +
4623 + WREG32(HDMI0_AVI_INFO0 + offset,
4624 + frame[0x0] | (frame[0x1] << 8) | (frame[0x2] << 16) | (frame[0x3] << 24));
4625 +@@ -149,7 +142,7 @@ static void r600_hdmi_update_avi_infoframe(struct drm_encoder *encoder,
4626 + WREG32(HDMI0_AVI_INFO2 + offset,
4627 + frame[0x8] | (frame[0x9] << 8) | (frame[0xA] << 16) | (frame[0xB] << 24));
4628 + WREG32(HDMI0_AVI_INFO3 + offset,
4629 +- frame[0xC] | (frame[0xD] << 8));
4630 ++ frame[0xC] | (frame[0xD] << 8) | (header[1] << 24));
4631 + }
4632 +
4633 + /*
4634 +diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
4635 +index 04638aee..99cec182 100644
4636 +--- a/drivers/hwmon/nct6775.c
4637 ++++ b/drivers/hwmon/nct6775.c
4638 +@@ -199,7 +199,7 @@ static const s8 NCT6775_ALARM_BITS[] = {
4639 + 0, 1, 2, 3, 8, 21, 20, 16, /* in0.. in7 */
4640 + 17, -1, -1, -1, -1, -1, -1, /* in8..in14 */
4641 + -1, /* unused */
4642 +- 6, 7, 11, 10, 23, /* fan1..fan5 */
4643 ++ 6, 7, 11, -1, -1, /* fan1..fan5 */
4644 + -1, -1, -1, /* unused */
4645 + 4, 5, 13, -1, -1, -1, /* temp1..temp6 */
4646 + 12, -1 }; /* intrusion0, intrusion1 */
4647 +@@ -625,6 +625,7 @@ struct nct6775_data {
4648 + u8 has_fan_min; /* some fans don't have min register */
4649 + bool has_fan_div;
4650 +
4651 ++ u8 num_temp_alarms; /* 2 or 3 */
4652 + u8 temp_fixed_num; /* 3 or 6 */
4653 + u8 temp_type[NUM_TEMP_FIXED];
4654 + s8 temp_offset[NUM_TEMP_FIXED];
4655 +@@ -1193,6 +1194,42 @@ show_alarm(struct device *dev, struct device_attribute *attr, char *buf)
4656 + (unsigned int)((data->alarms >> nr) & 0x01));
4657 + }
4658 +
4659 ++static int find_temp_source(struct nct6775_data *data, int index, int count)
4660 ++{
4661 ++ int source = data->temp_src[index];
4662 ++ int nr;
4663 ++
4664 ++ for (nr = 0; nr < count; nr++) {
4665 ++ int src;
4666 ++
4667 ++ src = nct6775_read_value(data,
4668 ++ data->REG_TEMP_SOURCE[nr]) & 0x1f;
4669 ++ if (src == source)
4670 ++ return nr;
4671 ++ }
4672 ++ return -1;
4673 ++}
4674 ++
4675 ++static ssize_t
4676 ++show_temp_alarm(struct device *dev, struct device_attribute *attr, char *buf)
4677 ++{
4678 ++ struct sensor_device_attribute *sattr = to_sensor_dev_attr(attr);
4679 ++ struct nct6775_data *data = nct6775_update_device(dev);
4680 ++ unsigned int alarm = 0;
4681 ++ int nr;
4682 ++
4683 ++ /*
4684 ++ * For temperatures, there is no fixed mapping from registers to alarm
4685 ++ * bits. Alarm bits are determined by the temperature source mapping.
4686 ++ */
4687 ++ nr = find_temp_source(data, sattr->index, data->num_temp_alarms);
4688 ++ if (nr >= 0) {
4689 ++ int bit = data->ALARM_BITS[nr + TEMP_ALARM_BASE];
4690 ++ alarm = (data->alarms >> bit) & 0x01;
4691 ++ }
4692 ++ return sprintf(buf, "%u\n", alarm);
4693 ++}
4694 ++
4695 + static SENSOR_DEVICE_ATTR_2(in0_input, S_IRUGO, show_in_reg, NULL, 0, 0);
4696 + static SENSOR_DEVICE_ATTR_2(in1_input, S_IRUGO, show_in_reg, NULL, 1, 0);
4697 + static SENSOR_DEVICE_ATTR_2(in2_input, S_IRUGO, show_in_reg, NULL, 2, 0);
4698 +@@ -1874,22 +1911,18 @@ static struct sensor_device_attribute sda_temp_type[] = {
4699 + };
4700 +
4701 + static struct sensor_device_attribute sda_temp_alarm[] = {
4702 +- SENSOR_ATTR(temp1_alarm, S_IRUGO, show_alarm, NULL,
4703 +- TEMP_ALARM_BASE),
4704 +- SENSOR_ATTR(temp2_alarm, S_IRUGO, show_alarm, NULL,
4705 +- TEMP_ALARM_BASE + 1),
4706 +- SENSOR_ATTR(temp3_alarm, S_IRUGO, show_alarm, NULL,
4707 +- TEMP_ALARM_BASE + 2),
4708 +- SENSOR_ATTR(temp4_alarm, S_IRUGO, show_alarm, NULL,
4709 +- TEMP_ALARM_BASE + 3),
4710 +- SENSOR_ATTR(temp5_alarm, S_IRUGO, show_alarm, NULL,
4711 +- TEMP_ALARM_BASE + 4),
4712 +- SENSOR_ATTR(temp6_alarm, S_IRUGO, show_alarm, NULL,
4713 +- TEMP_ALARM_BASE + 5),
4714 ++ SENSOR_ATTR(temp1_alarm, S_IRUGO, show_temp_alarm, NULL, 0),
4715 ++ SENSOR_ATTR(temp2_alarm, S_IRUGO, show_temp_alarm, NULL, 1),
4716 ++ SENSOR_ATTR(temp3_alarm, S_IRUGO, show_temp_alarm, NULL, 2),
4717 ++ SENSOR_ATTR(temp4_alarm, S_IRUGO, show_temp_alarm, NULL, 3),
4718 ++ SENSOR_ATTR(temp5_alarm, S_IRUGO, show_temp_alarm, NULL, 4),
4719 ++ SENSOR_ATTR(temp6_alarm, S_IRUGO, show_temp_alarm, NULL, 5),
4720 ++ SENSOR_ATTR(temp7_alarm, S_IRUGO, show_temp_alarm, NULL, 6),
4721 ++ SENSOR_ATTR(temp8_alarm, S_IRUGO, show_temp_alarm, NULL, 7),
4722 ++ SENSOR_ATTR(temp9_alarm, S_IRUGO, show_temp_alarm, NULL, 8),
4723 ++ SENSOR_ATTR(temp10_alarm, S_IRUGO, show_temp_alarm, NULL, 9),
4724 + };
4725 +
4726 +-#define NUM_TEMP_ALARM ARRAY_SIZE(sda_temp_alarm)
4727 +-
4728 + static ssize_t
4729 + show_pwm_mode(struct device *dev, struct device_attribute *attr, char *buf)
4730 + {
4731 +@@ -3215,13 +3248,11 @@ static void nct6775_device_remove_files(struct device *dev)
4732 + device_remove_file(dev, &sda_temp_max[i].dev_attr);
4733 + device_remove_file(dev, &sda_temp_max_hyst[i].dev_attr);
4734 + device_remove_file(dev, &sda_temp_crit[i].dev_attr);
4735 ++ device_remove_file(dev, &sda_temp_alarm[i].dev_attr);
4736 + if (!(data->have_temp_fixed & (1 << i)))
4737 + continue;
4738 + device_remove_file(dev, &sda_temp_type[i].dev_attr);
4739 + device_remove_file(dev, &sda_temp_offset[i].dev_attr);
4740 +- if (i >= NUM_TEMP_ALARM)
4741 +- continue;
4742 +- device_remove_file(dev, &sda_temp_alarm[i].dev_attr);
4743 + }
4744 +
4745 + device_remove_file(dev, &sda_caseopen[0].dev_attr);
4746 +@@ -3419,6 +3450,7 @@ static int nct6775_probe(struct platform_device *pdev)
4747 + data->auto_pwm_num = 6;
4748 + data->has_fan_div = true;
4749 + data->temp_fixed_num = 3;
4750 ++ data->num_temp_alarms = 3;
4751 +
4752 + data->ALARM_BITS = NCT6775_ALARM_BITS;
4753 +
4754 +@@ -3483,6 +3515,7 @@ static int nct6775_probe(struct platform_device *pdev)
4755 + data->auto_pwm_num = 4;
4756 + data->has_fan_div = false;
4757 + data->temp_fixed_num = 3;
4758 ++ data->num_temp_alarms = 3;
4759 +
4760 + data->ALARM_BITS = NCT6776_ALARM_BITS;
4761 +
4762 +@@ -3547,6 +3580,7 @@ static int nct6775_probe(struct platform_device *pdev)
4763 + data->auto_pwm_num = 4;
4764 + data->has_fan_div = false;
4765 + data->temp_fixed_num = 6;
4766 ++ data->num_temp_alarms = 2;
4767 +
4768 + data->ALARM_BITS = NCT6779_ALARM_BITS;
4769 +
4770 +@@ -3843,10 +3877,12 @@ static int nct6775_probe(struct platform_device *pdev)
4771 + &sda_fan_input[i].dev_attr);
4772 + if (err)
4773 + goto exit_remove;
4774 +- err = device_create_file(dev,
4775 +- &sda_fan_alarm[i].dev_attr);
4776 +- if (err)
4777 +- goto exit_remove;
4778 ++ if (data->ALARM_BITS[FAN_ALARM_BASE + i] >= 0) {
4779 ++ err = device_create_file(dev,
4780 ++ &sda_fan_alarm[i].dev_attr);
4781 ++ if (err)
4782 ++ goto exit_remove;
4783 ++ }
4784 + if (data->kind != nct6776 &&
4785 + data->kind != nct6779) {
4786 + err = device_create_file(dev,
4787 +@@ -3897,6 +3933,12 @@ static int nct6775_probe(struct platform_device *pdev)
4788 + if (err)
4789 + goto exit_remove;
4790 + }
4791 ++ if (find_temp_source(data, i, data->num_temp_alarms) >= 0) {
4792 ++ err = device_create_file(dev,
4793 ++ &sda_temp_alarm[i].dev_attr);
4794 ++ if (err)
4795 ++ goto exit_remove;
4796 ++ }
4797 + if (!(data->have_temp_fixed & (1 << i)))
4798 + continue;
4799 + err = device_create_file(dev, &sda_temp_type[i].dev_attr);
4800 +@@ -3905,12 +3947,6 @@ static int nct6775_probe(struct platform_device *pdev)
4801 + err = device_create_file(dev, &sda_temp_offset[i].dev_attr);
4802 + if (err)
4803 + goto exit_remove;
4804 +- if (i >= NUM_TEMP_ALARM ||
4805 +- data->ALARM_BITS[TEMP_ALARM_BASE + i] < 0)
4806 +- continue;
4807 +- err = device_create_file(dev, &sda_temp_alarm[i].dev_attr);
4808 +- if (err)
4809 +- goto exit_remove;
4810 + }
4811 +
4812 + for (i = 0; i < ARRAY_SIZE(sda_caseopen); i++) {
4813 +diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
4814 +index 631736e2..4faf02b3 100644
4815 +--- a/drivers/i2c/busses/Kconfig
4816 ++++ b/drivers/i2c/busses/Kconfig
4817 +@@ -150,6 +150,7 @@ config I2C_PIIX4
4818 + ATI SB700/SP5100
4819 + ATI SB800
4820 + AMD Hudson-2
4821 ++ AMD CZ
4822 + Serverworks OSB4
4823 + Serverworks CSB5
4824 + Serverworks CSB6
4825 +diff --git a/drivers/i2c/busses/i2c-piix4.c b/drivers/i2c/busses/i2c-piix4.c
4826 +index 39ab78c1..d05ad590 100644
4827 +--- a/drivers/i2c/busses/i2c-piix4.c
4828 ++++ b/drivers/i2c/busses/i2c-piix4.c
4829 +@@ -22,7 +22,7 @@
4830 + Intel PIIX4, 440MX
4831 + Serverworks OSB4, CSB5, CSB6, HT-1000, HT-1100
4832 + ATI IXP200, IXP300, IXP400, SB600, SB700/SP5100, SB800
4833 +- AMD Hudson-2
4834 ++ AMD Hudson-2, CZ
4835 + SMSC Victory66
4836 +
4837 + Note: we assume there can only be one device, with one or more
4838 +@@ -522,6 +522,7 @@ static DEFINE_PCI_DEVICE_TABLE(piix4_ids) = {
4839 + { PCI_DEVICE(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_IXP400_SMBUS) },
4840 + { PCI_DEVICE(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_SBX00_SMBUS) },
4841 + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_HUDSON2_SMBUS) },
4842 ++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, 0x790b) },
4843 + { PCI_DEVICE(PCI_VENDOR_ID_SERVERWORKS,
4844 + PCI_DEVICE_ID_SERVERWORKS_OSB4) },
4845 + { PCI_DEVICE(PCI_VENDOR_ID_SERVERWORKS,
4846 +diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
4847 +index 98ddc323..0cf5f8e0 100644
4848 +--- a/drivers/iio/inkern.c
4849 ++++ b/drivers/iio/inkern.c
4850 +@@ -451,7 +451,7 @@ static int iio_convert_raw_to_processed_unlocked(struct iio_channel *chan,
4851 + int ret;
4852 +
4853 + ret = iio_channel_read(chan, &offset, NULL, IIO_CHAN_INFO_OFFSET);
4854 +- if (ret == 0)
4855 ++ if (ret >= 0)
4856 + raw64 += offset;
4857 +
4858 + scale_type = iio_channel_read(chan, &scale_val, &scale_val2,
4859 +diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
4860 +index 21d02b0d..a3c33894 100644
4861 +--- a/drivers/iommu/amd_iommu.c
4862 ++++ b/drivers/iommu/amd_iommu.c
4863 +@@ -1484,6 +1484,10 @@ static unsigned long iommu_unmap_page(struct protection_domain *dom,
4864 +
4865 + /* Large PTE found which maps this address */
4866 + unmap_size = PTE_PAGE_SIZE(*pte);
4867 ++
4868 ++ /* Only unmap from the first pte in the page */
4869 ++ if ((unmap_size - 1) & bus_addr)
4870 ++ break;
4871 + count = PAGE_SIZE_PTE_COUNT(unmap_size);
4872 + for (i = 0; i < count; i++)
4873 + pte[i] = 0ULL;
4874 +@@ -1493,7 +1497,7 @@ static unsigned long iommu_unmap_page(struct protection_domain *dom,
4875 + unmapped += unmap_size;
4876 + }
4877 +
4878 +- BUG_ON(!is_power_of_2(unmapped));
4879 ++ BUG_ON(unmapped && !is_power_of_2(unmapped));
4880 +
4881 + return unmapped;
4882 + }
4883 +diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
4884 +index 6ddae250..d61eb7ea 100644
4885 +--- a/drivers/md/raid10.c
4886 ++++ b/drivers/md/raid10.c
4887 +@@ -2075,11 +2075,17 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
4888 + * both 'first' and 'i', so we just compare them.
4889 + * All vec entries are PAGE_SIZE;
4890 + */
4891 +- for (j = 0; j < vcnt; j++)
4892 ++ int sectors = r10_bio->sectors;
4893 ++ for (j = 0; j < vcnt; j++) {
4894 ++ int len = PAGE_SIZE;
4895 ++ if (sectors < (len / 512))
4896 ++ len = sectors * 512;
4897 + if (memcmp(page_address(fbio->bi_io_vec[j].bv_page),
4898 + page_address(tbio->bi_io_vec[j].bv_page),
4899 +- fbio->bi_io_vec[j].bv_len))
4900 ++ len))
4901 + break;
4902 ++ sectors -= len/512;
4903 ++ }
4904 + if (j == vcnt)
4905 + continue;
4906 + atomic64_add(r10_bio->sectors, &mddev->resync_mismatches);
4907 +@@ -2909,14 +2915,13 @@ static sector_t sync_request(struct mddev *mddev, sector_t sector_nr,
4908 + */
4909 + if (mddev->bitmap == NULL &&
4910 + mddev->recovery_cp == MaxSector &&
4911 ++ mddev->reshape_position == MaxSector &&
4912 ++ !test_bit(MD_RECOVERY_SYNC, &mddev->recovery) &&
4913 + !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) &&
4914 ++ !test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
4915 + conf->fullsync == 0) {
4916 + *skipped = 1;
4917 +- max_sector = mddev->dev_sectors;
4918 +- if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
4919 +- test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
4920 +- max_sector = mddev->resync_max_sectors;
4921 +- return max_sector - sector_nr;
4922 ++ return mddev->dev_sectors - sector_nr;
4923 + }
4924 +
4925 + skipped:
4926 +@@ -3386,6 +3391,7 @@ static sector_t sync_request(struct mddev *mddev, sector_t sector_nr,
4927 +
4928 + if (bio->bi_end_io == end_sync_read) {
4929 + md_sync_acct(bio->bi_bdev, nr_sectors);
4930 ++ set_bit(BIO_UPTODATE, &bio->bi_flags);
4931 + generic_make_request(bio);
4932 + }
4933 + }
4934 +@@ -3532,7 +3538,7 @@ static struct r10conf *setup_conf(struct mddev *mddev)
4935 +
4936 + /* FIXME calc properly */
4937 + conf->mirrors = kzalloc(sizeof(struct raid10_info)*(mddev->raid_disks +
4938 +- max(0,mddev->delta_disks)),
4939 ++ max(0,-mddev->delta_disks)),
4940 + GFP_KERNEL);
4941 + if (!conf->mirrors)
4942 + goto out;
4943 +@@ -3691,7 +3697,7 @@ static int run(struct mddev *mddev)
4944 + conf->geo.far_offset == 0)
4945 + goto out_free_conf;
4946 + if (conf->prev.far_copies != 1 &&
4947 +- conf->geo.far_offset == 0)
4948 ++ conf->prev.far_offset == 0)
4949 + goto out_free_conf;
4950 + }
4951 +
4952 +diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
4953 +index e6b92ff2..25b8bbbe 100644
4954 +--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
4955 ++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
4956 +@@ -3563,14 +3563,18 @@ static void ar9003_hw_ant_ctrl_apply(struct ath_hw *ah, bool is2ghz)
4957 + {
4958 + struct ath9k_hw_capabilities *pCap = &ah->caps;
4959 + int chain;
4960 +- u32 regval;
4961 ++ u32 regval, value;
4962 + static const u32 switch_chain_reg[AR9300_MAX_CHAINS] = {
4963 + AR_PHY_SWITCH_CHAIN_0,
4964 + AR_PHY_SWITCH_CHAIN_1,
4965 + AR_PHY_SWITCH_CHAIN_2,
4966 + };
4967 +
4968 +- u32 value = ar9003_hw_ant_ctrl_common_get(ah, is2ghz);
4969 ++ if (AR_SREV_9485(ah) && (ar9003_hw_get_rx_gain_idx(ah) == 0))
4970 ++ ath9k_hw_cfg_output(ah, AR9300_EXT_LNA_CTL_GPIO_AR9485,
4971 ++ AR_GPIO_OUTPUT_MUX_AS_PCIE_ATTENTION_LED);
4972 ++
4973 ++ value = ar9003_hw_ant_ctrl_common_get(ah, is2ghz);
4974 +
4975 + if (AR_SREV_9462(ah) || AR_SREV_9565(ah)) {
4976 + REG_RMW_FIELD(ah, AR_PHY_SWITCH_COM,
4977 +diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.h b/drivers/net/wireless/ath/ath9k/ar9003_phy.h
4978 +index e7177419..5013c731 100644
4979 +--- a/drivers/net/wireless/ath/ath9k/ar9003_phy.h
4980 ++++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.h
4981 +@@ -351,6 +351,8 @@
4982 +
4983 + #define AR_PHY_CCA_NOM_VAL_9330_2GHZ -118
4984 +
4985 ++#define AR9300_EXT_LNA_CTL_GPIO_AR9485 9
4986 ++
4987 + /*
4988 + * AGC Field Definitions
4989 + */
4990 +diff --git a/drivers/net/wireless/ath/ath9k/calib.c b/drivers/net/wireless/ath/ath9k/calib.c
4991 +index 7304e758..5e8219a9 100644
4992 +--- a/drivers/net/wireless/ath/ath9k/calib.c
4993 ++++ b/drivers/net/wireless/ath/ath9k/calib.c
4994 +@@ -387,7 +387,6 @@ bool ath9k_hw_getnf(struct ath_hw *ah, struct ath9k_channel *chan)
4995 +
4996 + if (!caldata) {
4997 + chan->noisefloor = nf;
4998 +- ah->noise = ath9k_hw_getchan_noise(ah, chan);
4999 + return false;
5000 + }
5001 +
5002 +diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
5003 +index 15dfefcf..b1d5037b 100644
5004 +--- a/drivers/net/wireless/ath/ath9k/hw.c
5005 ++++ b/drivers/net/wireless/ath/ath9k/hw.c
5006 +@@ -1872,7 +1872,8 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan,
5007 +
5008 + ah->caldata = caldata;
5009 + if (caldata && (chan->channel != caldata->channel ||
5010 +- chan->channelFlags != caldata->channelFlags)) {
5011 ++ chan->channelFlags != caldata->channelFlags ||
5012 ++ chan->chanmode != caldata->chanmode)) {
5013 + /* Operating channel changed, reset channel calibration data */
5014 + memset(caldata, 0, sizeof(*caldata));
5015 + ath9k_init_nfcal_hist_buffer(ah, chan);
5016 +diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
5017 +index 5092ecae..35ced100 100644
5018 +--- a/drivers/net/wireless/ath/ath9k/main.c
5019 ++++ b/drivers/net/wireless/ath/ath9k/main.c
5020 +@@ -1211,13 +1211,6 @@ static int ath9k_config(struct ieee80211_hw *hw, u32 changed)
5021 + ath_update_survey_stats(sc);
5022 + spin_unlock_irqrestore(&common->cc_lock, flags);
5023 +
5024 +- /*
5025 +- * Preserve the current channel values, before updating
5026 +- * the same channel
5027 +- */
5028 +- if (ah->curchan && (old_pos == pos))
5029 +- ath9k_hw_getnf(ah, ah->curchan);
5030 +-
5031 + ath9k_cmn_update_ichannel(&sc->sc_ah->channels[pos],
5032 + curchan, channel_type);
5033 +
5034 +diff --git a/drivers/net/wireless/b43/Kconfig b/drivers/net/wireless/b43/Kconfig
5035 +index 078e6f34..13f91ac9 100644
5036 +--- a/drivers/net/wireless/b43/Kconfig
5037 ++++ b/drivers/net/wireless/b43/Kconfig
5038 +@@ -28,7 +28,7 @@ config B43
5039 +
5040 + config B43_BCMA
5041 + bool "Support for BCMA bus"
5042 +- depends on B43 && BCMA
5043 ++ depends on B43 && (BCMA = y || BCMA = B43)
5044 + default y
5045 +
5046 + config B43_BCMA_EXTRA
5047 +@@ -39,7 +39,7 @@ config B43_BCMA_EXTRA
5048 +
5049 + config B43_SSB
5050 + bool
5051 +- depends on B43 && SSB
5052 ++ depends on B43 && (SSB = y || SSB = B43)
5053 + default y
5054 +
5055 + # Auto-select SSB PCI-HOST support, if possible
5056 +diff --git a/drivers/net/wireless/rt2x00/rt2800lib.c b/drivers/net/wireless/rt2x00/rt2800lib.c
5057 +index 72f32e5c..705aa338 100644
5058 +--- a/drivers/net/wireless/rt2x00/rt2800lib.c
5059 ++++ b/drivers/net/wireless/rt2x00/rt2800lib.c
5060 +@@ -2392,7 +2392,7 @@ static void rt2800_config_channel_rf55xx(struct rt2x00_dev *rt2x00dev,
5061 + rt2800_rfcsr_write(rt2x00dev, 49, rfcsr);
5062 +
5063 + rt2800_rfcsr_read(rt2x00dev, 50, &rfcsr);
5064 +- if (info->default_power1 > power_bound)
5065 ++ if (info->default_power2 > power_bound)
5066 + rt2x00_set_field8(&rfcsr, RFCSR50_TX, power_bound);
5067 + else
5068 + rt2x00_set_field8(&rfcsr, RFCSR50_TX, info->default_power2);
5069 +@@ -6056,8 +6056,8 @@ static int rt2800_probe_hw_mode(struct rt2x00_dev *rt2x00dev)
5070 + default_power2 = rt2x00_eeprom_addr(rt2x00dev, EEPROM_TXPOWER_A2);
5071 +
5072 + for (i = 14; i < spec->num_channels; i++) {
5073 +- info[i].default_power1 = default_power1[i];
5074 +- info[i].default_power2 = default_power2[i];
5075 ++ info[i].default_power1 = default_power1[i - 14];
5076 ++ info[i].default_power2 = default_power2[i - 14];
5077 + }
5078 + }
5079 +
5080 +diff --git a/drivers/net/wireless/rt2x00/rt61pci.c b/drivers/net/wireless/rt2x00/rt61pci.c
5081 +index 0dc8180e..883a54c8 100644
5082 +--- a/drivers/net/wireless/rt2x00/rt61pci.c
5083 ++++ b/drivers/net/wireless/rt2x00/rt61pci.c
5084 +@@ -2825,7 +2825,8 @@ static int rt61pci_probe_hw_mode(struct rt2x00_dev *rt2x00dev)
5085 + tx_power = rt2x00_eeprom_addr(rt2x00dev, EEPROM_TXPOWER_A_START);
5086 + for (i = 14; i < spec->num_channels; i++) {
5087 + info[i].max_power = MAX_TXPOWER;
5088 +- info[i].default_power1 = TXPOWER_FROM_DEV(tx_power[i]);
5089 ++ info[i].default_power1 =
5090 ++ TXPOWER_FROM_DEV(tx_power[i - 14]);
5091 + }
5092 + }
5093 +
5094 +diff --git a/drivers/net/wireless/rt2x00/rt73usb.c b/drivers/net/wireless/rt2x00/rt73usb.c
5095 +index 377e09bb..2bbca183 100644
5096 +--- a/drivers/net/wireless/rt2x00/rt73usb.c
5097 ++++ b/drivers/net/wireless/rt2x00/rt73usb.c
5098 +@@ -2167,7 +2167,8 @@ static int rt73usb_probe_hw_mode(struct rt2x00_dev *rt2x00dev)
5099 + tx_power = rt2x00_eeprom_addr(rt2x00dev, EEPROM_TXPOWER_A_START);
5100 + for (i = 14; i < spec->num_channels; i++) {
5101 + info[i].max_power = MAX_TXPOWER;
5102 +- info[i].default_power1 = TXPOWER_FROM_DEV(tx_power[i]);
5103 ++ info[i].default_power1 =
5104 ++ TXPOWER_FROM_DEV(tx_power[i - 14]);
5105 + }
5106 + }
5107 +
5108 +diff --git a/drivers/of/address.c b/drivers/of/address.c
5109 +index 04da786c..7c8221d3 100644
5110 +--- a/drivers/of/address.c
5111 ++++ b/drivers/of/address.c
5112 +@@ -106,8 +106,12 @@ static unsigned int of_bus_default_get_flags(const __be32 *addr)
5113 +
5114 + static int of_bus_pci_match(struct device_node *np)
5115 + {
5116 +- /* "vci" is for the /chaos bridge on 1st-gen PCI powermacs */
5117 +- return !strcmp(np->type, "pci") || !strcmp(np->type, "vci");
5118 ++ /*
5119 ++ * "vci" is for the /chaos bridge on 1st-gen PCI powermacs
5120 ++ * "ht" is hypertransport
5121 ++ */
5122 ++ return !strcmp(np->type, "pci") || !strcmp(np->type, "vci") ||
5123 ++ !strcmp(np->type, "ht");
5124 + }
5125 +
5126 + static void of_bus_pci_count_cells(struct device_node *np,
5127 +diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c
5128 +index f6adde44..3743ac93 100644
5129 +--- a/drivers/s390/scsi/zfcp_aux.c
5130 ++++ b/drivers/s390/scsi/zfcp_aux.c
5131 +@@ -3,7 +3,7 @@
5132 + *
5133 + * Module interface and handling of zfcp data structures.
5134 + *
5135 +- * Copyright IBM Corp. 2002, 2010
5136 ++ * Copyright IBM Corp. 2002, 2013
5137 + */
5138 +
5139 + /*
5140 +@@ -23,6 +23,7 @@
5141 + * Christof Schmitt
5142 + * Martin Petermann
5143 + * Sven Schuetz
5144 ++ * Steffen Maier
5145 + */
5146 +
5147 + #define KMSG_COMPONENT "zfcp"
5148 +@@ -415,6 +416,8 @@ struct zfcp_adapter *zfcp_adapter_enqueue(struct ccw_device *ccw_device)
5149 + adapter->dma_parms.max_segment_size = ZFCP_QDIO_SBALE_LEN;
5150 + adapter->ccw_device->dev.dma_parms = &adapter->dma_parms;
5151 +
5152 ++ adapter->stat_read_buf_num = FSF_STATUS_READS_RECOM;
5153 ++
5154 + if (!zfcp_scsi_adapter_register(adapter))
5155 + return adapter;
5156 +
5157 +diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
5158 +index c7e148f3..9152999a 100644
5159 +--- a/drivers/s390/scsi/zfcp_fsf.c
5160 ++++ b/drivers/s390/scsi/zfcp_fsf.c
5161 +@@ -3,7 +3,7 @@
5162 + *
5163 + * Implementation of FSF commands.
5164 + *
5165 +- * Copyright IBM Corp. 2002, 2010
5166 ++ * Copyright IBM Corp. 2002, 2013
5167 + */
5168 +
5169 + #define KMSG_COMPONENT "zfcp"
5170 +@@ -483,12 +483,8 @@ static int zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *req)
5171 +
5172 + fc_host_port_name(shost) = nsp->fl_wwpn;
5173 + fc_host_node_name(shost) = nsp->fl_wwnn;
5174 +- fc_host_port_id(shost) = ntoh24(bottom->s_id);
5175 +- fc_host_speed(shost) =
5176 +- zfcp_fsf_convert_portspeed(bottom->fc_link_speed);
5177 + fc_host_supported_classes(shost) = FC_COS_CLASS2 | FC_COS_CLASS3;
5178 +
5179 +- adapter->hydra_version = bottom->adapter_type;
5180 + adapter->timer_ticks = bottom->timer_interval & ZFCP_FSF_TIMER_INT_MASK;
5181 + adapter->stat_read_buf_num = max(bottom->status_read_buf_num,
5182 + (u16)FSF_STATUS_READS_RECOM);
5183 +@@ -496,6 +492,19 @@ static int zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *req)
5184 + if (fc_host_permanent_port_name(shost) == -1)
5185 + fc_host_permanent_port_name(shost) = fc_host_port_name(shost);
5186 +
5187 ++ zfcp_scsi_set_prot(adapter);
5188 ++
5189 ++ /* no error return above here, otherwise must fix call chains */
5190 ++ /* do not evaluate invalid fields */
5191 ++ if (req->qtcb->header.fsf_status == FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE)
5192 ++ return 0;
5193 ++
5194 ++ fc_host_port_id(shost) = ntoh24(bottom->s_id);
5195 ++ fc_host_speed(shost) =
5196 ++ zfcp_fsf_convert_portspeed(bottom->fc_link_speed);
5197 ++
5198 ++ adapter->hydra_version = bottom->adapter_type;
5199 ++
5200 + switch (bottom->fc_topology) {
5201 + case FSF_TOPO_P2P:
5202 + adapter->peer_d_id = ntoh24(bottom->peer_d_id);
5203 +@@ -517,8 +526,6 @@ static int zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *req)
5204 + return -EIO;
5205 + }
5206 +
5207 +- zfcp_scsi_set_prot(adapter);
5208 +-
5209 + return 0;
5210 + }
5211 +
5212 +@@ -563,8 +570,14 @@ static void zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *req)
5213 + fc_host_port_type(shost) = FC_PORTTYPE_UNKNOWN;
5214 + adapter->hydra_version = 0;
5215 +
5216 ++ /* avoids adapter shutdown to be able to recognize
5217 ++ * events such as LINK UP */
5218 ++ atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK,
5219 ++ &adapter->status);
5220 + zfcp_fsf_link_down_info_eval(req,
5221 + &qtcb->header.fsf_status_qual.link_down_info);
5222 ++ if (zfcp_fsf_exchange_config_evaluate(req))
5223 ++ return;
5224 + break;
5225 + default:
5226 + zfcp_erp_adapter_shutdown(adapter, 0, "fsecdh3");
5227 +diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
5228 +index 7b31e3f4..7b353647 100644
5229 +--- a/drivers/s390/scsi/zfcp_scsi.c
5230 ++++ b/drivers/s390/scsi/zfcp_scsi.c
5231 +@@ -3,7 +3,7 @@
5232 + *
5233 + * Interface to Linux SCSI midlayer.
5234 + *
5235 +- * Copyright IBM Corp. 2002, 2010
5236 ++ * Copyright IBM Corp. 2002, 2013
5237 + */
5238 +
5239 + #define KMSG_COMPONENT "zfcp"
5240 +@@ -311,8 +311,12 @@ static struct scsi_host_template zfcp_scsi_host_template = {
5241 + .proc_name = "zfcp",
5242 + .can_queue = 4096,
5243 + .this_id = -1,
5244 +- .sg_tablesize = 1, /* adjusted later */
5245 +- .max_sectors = 8, /* adjusted later */
5246 ++ .sg_tablesize = (((QDIO_MAX_ELEMENTS_PER_BUFFER - 1)
5247 ++ * ZFCP_QDIO_MAX_SBALS_PER_REQ) - 2),
5248 ++ /* GCD, adjusted later */
5249 ++ .max_sectors = (((QDIO_MAX_ELEMENTS_PER_BUFFER - 1)
5250 ++ * ZFCP_QDIO_MAX_SBALS_PER_REQ) - 2) * 8,
5251 ++ /* GCD, adjusted later */
5252 + .dma_boundary = ZFCP_QDIO_SBALE_LEN - 1,
5253 + .cmd_per_lun = 1,
5254 + .use_clustering = 1,
5255 +diff --git a/drivers/scsi/aacraid/src.c b/drivers/scsi/aacraid/src.c
5256 +index 0f56d8d7..7e171076 100644
5257 +--- a/drivers/scsi/aacraid/src.c
5258 ++++ b/drivers/scsi/aacraid/src.c
5259 +@@ -93,6 +93,9 @@ static irqreturn_t aac_src_intr_message(int irq, void *dev_id)
5260 + int send_it = 0;
5261 + extern int aac_sync_mode;
5262 +
5263 ++ src_writel(dev, MUnit.ODR_C, bellbits);
5264 ++ src_readl(dev, MUnit.ODR_C);
5265 ++
5266 + if (!aac_sync_mode) {
5267 + src_writel(dev, MUnit.ODR_C, bellbits);
5268 + src_readl(dev, MUnit.ODR_C);
5269 +diff --git a/drivers/scsi/mpt2sas/mpt2sas_base.c b/drivers/scsi/mpt2sas/mpt2sas_base.c
5270 +index bcb23d28..c76b18bb 100644
5271 +--- a/drivers/scsi/mpt2sas/mpt2sas_base.c
5272 ++++ b/drivers/scsi/mpt2sas/mpt2sas_base.c
5273 +@@ -80,10 +80,6 @@ static int msix_disable = -1;
5274 + module_param(msix_disable, int, 0);
5275 + MODULE_PARM_DESC(msix_disable, " disable msix routed interrupts (default=0)");
5276 +
5277 +-static int missing_delay[2] = {-1, -1};
5278 +-module_param_array(missing_delay, int, NULL, 0);
5279 +-MODULE_PARM_DESC(missing_delay, " device missing delay , io missing delay");
5280 +-
5281 + static int mpt2sas_fwfault_debug;
5282 + MODULE_PARM_DESC(mpt2sas_fwfault_debug, " enable detection of firmware fault "
5283 + "and halt firmware - (default=0)");
5284 +@@ -2199,7 +2195,7 @@ _base_display_ioc_capabilities(struct MPT2SAS_ADAPTER *ioc)
5285 + }
5286 +
5287 + /**
5288 +- * _base_update_missing_delay - change the missing delay timers
5289 ++ * mpt2sas_base_update_missing_delay - change the missing delay timers
5290 + * @ioc: per adapter object
5291 + * @device_missing_delay: amount of time till device is reported missing
5292 + * @io_missing_delay: interval IO is returned when there is a missing device
5293 +@@ -2210,8 +2206,8 @@ _base_display_ioc_capabilities(struct MPT2SAS_ADAPTER *ioc)
5294 + * delay, as well as the io missing delay. This should be called at driver
5295 + * load time.
5296 + */
5297 +-static void
5298 +-_base_update_missing_delay(struct MPT2SAS_ADAPTER *ioc,
5299 ++void
5300 ++mpt2sas_base_update_missing_delay(struct MPT2SAS_ADAPTER *ioc,
5301 + u16 device_missing_delay, u8 io_missing_delay)
5302 + {
5303 + u16 dmd, dmd_new, dmd_orignal;
5304 +@@ -4407,9 +4403,6 @@ mpt2sas_base_attach(struct MPT2SAS_ADAPTER *ioc)
5305 + if (r)
5306 + goto out_free_resources;
5307 +
5308 +- if (missing_delay[0] != -1 && missing_delay[1] != -1)
5309 +- _base_update_missing_delay(ioc, missing_delay[0],
5310 +- missing_delay[1]);
5311 + ioc->non_operational_loop = 0;
5312 +
5313 + return 0;
5314 +diff --git a/drivers/scsi/mpt2sas/mpt2sas_base.h b/drivers/scsi/mpt2sas/mpt2sas_base.h
5315 +index 4caaac13..11301974 100644
5316 +--- a/drivers/scsi/mpt2sas/mpt2sas_base.h
5317 ++++ b/drivers/scsi/mpt2sas/mpt2sas_base.h
5318 +@@ -1055,6 +1055,9 @@ void mpt2sas_base_validate_event_type(struct MPT2SAS_ADAPTER *ioc, u32 *event_ty
5319 +
5320 + void mpt2sas_halt_firmware(struct MPT2SAS_ADAPTER *ioc);
5321 +
5322 ++void mpt2sas_base_update_missing_delay(struct MPT2SAS_ADAPTER *ioc,
5323 ++ u16 device_missing_delay, u8 io_missing_delay);
5324 ++
5325 + int mpt2sas_port_enable(struct MPT2SAS_ADAPTER *ioc);
5326 +
5327 + /* scsih shared API */
5328 +diff --git a/drivers/scsi/mpt2sas/mpt2sas_scsih.c b/drivers/scsi/mpt2sas/mpt2sas_scsih.c
5329 +index c6bdc926..8dbe500c 100644
5330 +--- a/drivers/scsi/mpt2sas/mpt2sas_scsih.c
5331 ++++ b/drivers/scsi/mpt2sas/mpt2sas_scsih.c
5332 +@@ -101,6 +101,10 @@ static ushort max_sectors = 0xFFFF;
5333 + module_param(max_sectors, ushort, 0);
5334 + MODULE_PARM_DESC(max_sectors, "max sectors, range 64 to 32767 default=32767");
5335 +
5336 ++static int missing_delay[2] = {-1, -1};
5337 ++module_param_array(missing_delay, int, NULL, 0);
5338 ++MODULE_PARM_DESC(missing_delay, " device missing delay , io missing delay");
5339 ++
5340 + /* scsi-mid layer global parmeter is max_report_luns, which is 511 */
5341 + #define MPT2SAS_MAX_LUN (16895)
5342 + static int max_lun = MPT2SAS_MAX_LUN;
5343 +@@ -3994,11 +3998,7 @@ _scsih_qcmd_lck(struct scsi_cmnd *scmd, void (*done)(struct scsi_cmnd *))
5344 + else
5345 + mpi_control |= MPI2_SCSIIO_CONTROL_SIMPLEQ;
5346 + } else
5347 +-/* MPI Revision I (UNIT = 0xA) - removed MPI2_SCSIIO_CONTROL_UNTAGGED */
5348 +-/* mpi_control |= MPI2_SCSIIO_CONTROL_UNTAGGED;
5349 +- */
5350 +- mpi_control |= (0x500);
5351 +-
5352 ++ mpi_control |= MPI2_SCSIIO_CONTROL_SIMPLEQ;
5353 + } else
5354 + mpi_control |= MPI2_SCSIIO_CONTROL_SIMPLEQ;
5355 + /* Make sure Device is not raid volume.
5356 +@@ -7303,7 +7303,9 @@ _firmware_event_work(struct work_struct *work)
5357 + case MPT2SAS_PORT_ENABLE_COMPLETE:
5358 + ioc->start_scan = 0;
5359 +
5360 +-
5361 ++ if (missing_delay[0] != -1 && missing_delay[1] != -1)
5362 ++ mpt2sas_base_update_missing_delay(ioc, missing_delay[0],
5363 ++ missing_delay[1]);
5364 +
5365 + dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "port enable: complete "
5366 + "from worker thread\n", ioc->name));
5367 +diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
5368 +index 2c0d0ec8..3b1ea34e 100644
5369 +--- a/drivers/scsi/scsi.c
5370 ++++ b/drivers/scsi/scsi.c
5371 +@@ -1070,8 +1070,8 @@ EXPORT_SYMBOL_GPL(scsi_get_vpd_page);
5372 + * @opcode: opcode for command to look up
5373 + *
5374 + * Uses the REPORT SUPPORTED OPERATION CODES to look up the given
5375 +- * opcode. Returns 0 if RSOC fails or if the command opcode is
5376 +- * unsupported. Returns 1 if the device claims to support the command.
5377 ++ * opcode. Returns -EINVAL if RSOC fails, 0 if the command opcode is
5378 ++ * unsupported and 1 if the device claims to support the command.
5379 + */
5380 + int scsi_report_opcode(struct scsi_device *sdev, unsigned char *buffer,
5381 + unsigned int len, unsigned char opcode)
5382 +@@ -1081,7 +1081,7 @@ int scsi_report_opcode(struct scsi_device *sdev, unsigned char *buffer,
5383 + int result;
5384 +
5385 + if (sdev->no_report_opcodes || sdev->scsi_level < SCSI_SPC_3)
5386 +- return 0;
5387 ++ return -EINVAL;
5388 +
5389 + memset(cmd, 0, 16);
5390 + cmd[0] = MAINTENANCE_IN;
5391 +@@ -1097,7 +1097,7 @@ int scsi_report_opcode(struct scsi_device *sdev, unsigned char *buffer,
5392 + if (result && scsi_sense_valid(&sshdr) &&
5393 + sshdr.sense_key == ILLEGAL_REQUEST &&
5394 + (sshdr.asc == 0x20 || sshdr.asc == 0x24) && sshdr.ascq == 0x00)
5395 +- return 0;
5396 ++ return -EINVAL;
5397 +
5398 + if ((buffer[1] & 3) == 3) /* Command supported */
5399 + return 1;
5400 +diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
5401 +index 6f6a1b48..1b1125e6 100644
5402 +--- a/drivers/scsi/sd.c
5403 ++++ b/drivers/scsi/sd.c
5404 +@@ -442,8 +442,10 @@ sd_store_write_same_blocks(struct device *dev, struct device_attribute *attr,
5405 +
5406 + if (max == 0)
5407 + sdp->no_write_same = 1;
5408 +- else if (max <= SD_MAX_WS16_BLOCKS)
5409 ++ else if (max <= SD_MAX_WS16_BLOCKS) {
5410 ++ sdp->no_write_same = 0;
5411 + sdkp->max_ws_blocks = max;
5412 ++ }
5413 +
5414 + sd_config_write_same(sdkp);
5415 +
5416 +@@ -740,7 +742,6 @@ static void sd_config_write_same(struct scsi_disk *sdkp)
5417 + {
5418 + struct request_queue *q = sdkp->disk->queue;
5419 + unsigned int logical_block_size = sdkp->device->sector_size;
5420 +- unsigned int blocks = 0;
5421 +
5422 + if (sdkp->device->no_write_same) {
5423 + sdkp->max_ws_blocks = 0;
5424 +@@ -752,18 +753,20 @@ static void sd_config_write_same(struct scsi_disk *sdkp)
5425 + * blocks per I/O unless the device explicitly advertises a
5426 + * bigger limit.
5427 + */
5428 +- if (sdkp->max_ws_blocks == 0)
5429 +- sdkp->max_ws_blocks = SD_MAX_WS10_BLOCKS;
5430 +-
5431 +- if (sdkp->ws16 || sdkp->max_ws_blocks > SD_MAX_WS10_BLOCKS)
5432 +- blocks = min_not_zero(sdkp->max_ws_blocks,
5433 +- (u32)SD_MAX_WS16_BLOCKS);
5434 +- else
5435 +- blocks = min_not_zero(sdkp->max_ws_blocks,
5436 +- (u32)SD_MAX_WS10_BLOCKS);
5437 ++ if (sdkp->max_ws_blocks > SD_MAX_WS10_BLOCKS)
5438 ++ sdkp->max_ws_blocks = min_not_zero(sdkp->max_ws_blocks,
5439 ++ (u32)SD_MAX_WS16_BLOCKS);
5440 ++ else if (sdkp->ws16 || sdkp->ws10 || sdkp->device->no_report_opcodes)
5441 ++ sdkp->max_ws_blocks = min_not_zero(sdkp->max_ws_blocks,
5442 ++ (u32)SD_MAX_WS10_BLOCKS);
5443 ++ else {
5444 ++ sdkp->device->no_write_same = 1;
5445 ++ sdkp->max_ws_blocks = 0;
5446 ++ }
5447 +
5448 + out:
5449 +- blk_queue_max_write_same_sectors(q, blocks * (logical_block_size >> 9));
5450 ++ blk_queue_max_write_same_sectors(q, sdkp->max_ws_blocks *
5451 ++ (logical_block_size >> 9));
5452 + }
5453 +
5454 + /**
5455 +@@ -2635,9 +2638,24 @@ static void sd_read_block_provisioning(struct scsi_disk *sdkp)
5456 +
5457 + static void sd_read_write_same(struct scsi_disk *sdkp, unsigned char *buffer)
5458 + {
5459 +- if (scsi_report_opcode(sdkp->device, buffer, SD_BUF_SIZE,
5460 +- WRITE_SAME_16))
5461 ++ struct scsi_device *sdev = sdkp->device;
5462 ++
5463 ++ if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, INQUIRY) < 0) {
5464 ++ sdev->no_report_opcodes = 1;
5465 ++
5466 ++ /* Disable WRITE SAME if REPORT SUPPORTED OPERATION
5467 ++ * CODES is unsupported and the device has an ATA
5468 ++ * Information VPD page (SAT).
5469 ++ */
5470 ++ if (!scsi_get_vpd_page(sdev, 0x89, buffer, SD_BUF_SIZE))
5471 ++ sdev->no_write_same = 1;
5472 ++ }
5473 ++
5474 ++ if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, WRITE_SAME_16) == 1)
5475 + sdkp->ws16 = 1;
5476 ++
5477 ++ if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, WRITE_SAME) == 1)
5478 ++ sdkp->ws10 = 1;
5479 + }
5480 +
5481 + static int sd_try_extended_inquiry(struct scsi_device *sdp)
5482 +diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h
5483 +index 2386aeb4..7a049de2 100644
5484 +--- a/drivers/scsi/sd.h
5485 ++++ b/drivers/scsi/sd.h
5486 +@@ -84,6 +84,7 @@ struct scsi_disk {
5487 + unsigned lbpws : 1;
5488 + unsigned lbpws10 : 1;
5489 + unsigned lbpvpd : 1;
5490 ++ unsigned ws10 : 1;
5491 + unsigned ws16 : 1;
5492 + };
5493 + #define to_scsi_disk(obj) container_of(obj,struct scsi_disk,dev)
5494 +diff --git a/drivers/staging/line6/pcm.c b/drivers/staging/line6/pcm.c
5495 +index 02f77d74..a7856bad 100644
5496 +--- a/drivers/staging/line6/pcm.c
5497 ++++ b/drivers/staging/line6/pcm.c
5498 +@@ -385,8 +385,11 @@ static int snd_line6_pcm_free(struct snd_device *device)
5499 + */
5500 + static void pcm_disconnect_substream(struct snd_pcm_substream *substream)
5501 + {
5502 +- if (substream->runtime && snd_pcm_running(substream))
5503 ++ if (substream->runtime && snd_pcm_running(substream)) {
5504 ++ snd_pcm_stream_lock_irq(substream);
5505 + snd_pcm_stop(substream, SNDRV_PCM_STATE_DISCONNECTED);
5506 ++ snd_pcm_stream_unlock_irq(substream);
5507 ++ }
5508 + }
5509 +
5510 + /*
5511 +diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
5512 +index bd3ae324..71af7b5a 100644
5513 +--- a/drivers/virtio/virtio_balloon.c
5514 ++++ b/drivers/virtio/virtio_balloon.c
5515 +@@ -191,7 +191,8 @@ static void leak_balloon(struct virtio_balloon *vb, size_t num)
5516 + * virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST);
5517 + * is true, we *have* to do it in this order
5518 + */
5519 +- tell_host(vb, vb->deflate_vq);
5520 ++ if (vb->num_pfns != 0)
5521 ++ tell_host(vb, vb->deflate_vq);
5522 + mutex_unlock(&vb->balloon_lock);
5523 + release_pages_by_pfn(vb->pfns, vb->num_pfns);
5524 + }
5525 +diff --git a/include/linux/cpu_cooling.h b/include/linux/cpu_cooling.h
5526 +index 282e2702..a5d52eea 100644
5527 +--- a/include/linux/cpu_cooling.h
5528 ++++ b/include/linux/cpu_cooling.h
5529 +@@ -41,7 +41,7 @@ cpufreq_cooling_register(const struct cpumask *clip_cpus);
5530 + */
5531 + void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev);
5532 +
5533 +-unsigned long cpufreq_cooling_get_level(unsigned int, unsigned int);
5534 ++unsigned long cpufreq_cooling_get_level(unsigned int cpu, unsigned int freq);
5535 + #else /* !CONFIG_CPU_THERMAL */
5536 + static inline struct thermal_cooling_device *
5537 + cpufreq_cooling_register(const struct cpumask *clip_cpus)
5538 +@@ -54,7 +54,7 @@ void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
5539 + return;
5540 + }
5541 + static inline
5542 +-unsigned long cpufreq_cooling_get_level(unsigned int, unsigned int)
5543 ++unsigned long cpufreq_cooling_get_level(unsigned int cpu, unsigned int freq)
5544 + {
5545 + return THERMAL_CSTATE_INVALID;
5546 + }
5547 +diff --git a/include/linux/iio/iio.h b/include/linux/iio/iio.h
5548 +index 8d171f42..3d35b702 100644
5549 +--- a/include/linux/iio/iio.h
5550 ++++ b/include/linux/iio/iio.h
5551 +@@ -211,8 +211,8 @@ struct iio_chan_spec {
5552 + static inline bool iio_channel_has_info(const struct iio_chan_spec *chan,
5553 + enum iio_chan_info_enum type)
5554 + {
5555 +- return (chan->info_mask_separate & type) |
5556 +- (chan->info_mask_shared_by_type & type);
5557 ++ return (chan->info_mask_separate & BIT(type)) |
5558 ++ (chan->info_mask_shared_by_type & BIT(type));
5559 + }
5560 +
5561 + #define IIO_ST(si, rb, sb, sh) \
5562 +diff --git a/kernel/events/core.c b/kernel/events/core.c
5563 +index b391907d..e76e4959 100644
5564 +--- a/kernel/events/core.c
5565 ++++ b/kernel/events/core.c
5566 +@@ -761,8 +761,18 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
5567 + {
5568 + struct perf_event_context *ctx;
5569 +
5570 +- rcu_read_lock();
5571 + retry:
5572 ++ /*
5573 ++ * One of the few rules of preemptible RCU is that one cannot do
5574 ++ * rcu_read_unlock() while holding a scheduler (or nested) lock when
5575 ++ * part of the read side critical section was preemptible -- see
5576 ++ * rcu_read_unlock_special().
5577 ++ *
5578 ++ * Since ctx->lock nests under rq->lock we must ensure the entire read
5579 ++ * side critical section is non-preemptible.
5580 ++ */
5581 ++ preempt_disable();
5582 ++ rcu_read_lock();
5583 + ctx = rcu_dereference(task->perf_event_ctxp[ctxn]);
5584 + if (ctx) {
5585 + /*
5586 +@@ -778,6 +788,8 @@ retry:
5587 + raw_spin_lock_irqsave(&ctx->lock, *flags);
5588 + if (ctx != rcu_dereference(task->perf_event_ctxp[ctxn])) {
5589 + raw_spin_unlock_irqrestore(&ctx->lock, *flags);
5590 ++ rcu_read_unlock();
5591 ++ preempt_enable();
5592 + goto retry;
5593 + }
5594 +
5595 +@@ -787,6 +799,7 @@ retry:
5596 + }
5597 + }
5598 + rcu_read_unlock();
5599 ++ preempt_enable();
5600 + return ctx;
5601 + }
5602 +
5603 +@@ -1761,7 +1774,16 @@ static int __perf_event_enable(void *info)
5604 + struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
5605 + int err;
5606 +
5607 +- if (WARN_ON_ONCE(!ctx->is_active))
5608 ++ /*
5609 ++ * There's a time window between 'ctx->is_active' check
5610 ++ * in perf_event_enable function and this place having:
5611 ++ * - IRQs on
5612 ++ * - ctx->lock unlocked
5613 ++ *
5614 ++ * where the task could be killed and 'ctx' deactivated
5615 ++ * by perf_event_exit_task.
5616 ++ */
5617 ++ if (!ctx->is_active)
5618 + return -EINVAL;
5619 +
5620 + raw_spin_lock(&ctx->lock);
5621 +@@ -7228,7 +7250,7 @@ inherit_task_group(struct perf_event *event, struct task_struct *parent,
5622 + * child.
5623 + */
5624 +
5625 +- child_ctx = alloc_perf_context(event->pmu, child);
5626 ++ child_ctx = alloc_perf_context(parent_ctx->pmu, child);
5627 + if (!child_ctx)
5628 + return -ENOMEM;
5629 +
5630 +diff --git a/kernel/printk.c b/kernel/printk.c
5631 +index 8212c1ae..d37d45c9 100644
5632 +--- a/kernel/printk.c
5633 ++++ b/kernel/printk.c
5634 +@@ -1369,9 +1369,9 @@ static int console_trylock_for_printk(unsigned int cpu)
5635 + }
5636 + }
5637 + logbuf_cpu = UINT_MAX;
5638 ++ raw_spin_unlock(&logbuf_lock);
5639 + if (wake)
5640 + up(&console_sem);
5641 +- raw_spin_unlock(&logbuf_lock);
5642 + return retval;
5643 + }
5644 +
5645 +diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
5646 +index 20d6fba7..297b90b5 100644
5647 +--- a/kernel/time/tick-broadcast.c
5648 ++++ b/kernel/time/tick-broadcast.c
5649 +@@ -29,6 +29,7 @@
5650 +
5651 + static struct tick_device tick_broadcast_device;
5652 + static cpumask_var_t tick_broadcast_mask;
5653 ++static cpumask_var_t tick_broadcast_on;
5654 + static cpumask_var_t tmpmask;
5655 + static DEFINE_RAW_SPINLOCK(tick_broadcast_lock);
5656 + static int tick_broadcast_force;
5657 +@@ -123,8 +124,9 @@ static void tick_device_setup_broadcast_func(struct clock_event_device *dev)
5658 + */
5659 + int tick_device_uses_broadcast(struct clock_event_device *dev, int cpu)
5660 + {
5661 ++ struct clock_event_device *bc = tick_broadcast_device.evtdev;
5662 + unsigned long flags;
5663 +- int ret = 0;
5664 ++ int ret;
5665 +
5666 + raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
5667 +
5668 +@@ -138,20 +140,59 @@ int tick_device_uses_broadcast(struct clock_event_device *dev, int cpu)
5669 + dev->event_handler = tick_handle_periodic;
5670 + tick_device_setup_broadcast_func(dev);
5671 + cpumask_set_cpu(cpu, tick_broadcast_mask);
5672 +- tick_broadcast_start_periodic(tick_broadcast_device.evtdev);
5673 ++ tick_broadcast_start_periodic(bc);
5674 + ret = 1;
5675 + } else {
5676 + /*
5677 +- * When the new device is not affected by the stop
5678 +- * feature and the cpu is marked in the broadcast mask
5679 +- * then clear the broadcast bit.
5680 ++ * Clear the broadcast bit for this cpu if the
5681 ++ * device is not power state affected.
5682 + */
5683 +- if (!(dev->features & CLOCK_EVT_FEAT_C3STOP)) {
5684 +- int cpu = smp_processor_id();
5685 ++ if (!(dev->features & CLOCK_EVT_FEAT_C3STOP))
5686 + cpumask_clear_cpu(cpu, tick_broadcast_mask);
5687 +- tick_broadcast_clear_oneshot(cpu);
5688 +- } else {
5689 ++ else
5690 + tick_device_setup_broadcast_func(dev);
5691 ++
5692 ++ /*
5693 ++ * Clear the broadcast bit if the CPU is not in
5694 ++ * periodic broadcast on state.
5695 ++ */
5696 ++ if (!cpumask_test_cpu(cpu, tick_broadcast_on))
5697 ++ cpumask_clear_cpu(cpu, tick_broadcast_mask);
5698 ++
5699 ++ switch (tick_broadcast_device.mode) {
5700 ++ case TICKDEV_MODE_ONESHOT:
5701 ++ /*
5702 ++ * If the system is in oneshot mode we can
5703 ++ * unconditionally clear the oneshot mask bit,
5704 ++ * because the CPU is running and therefore
5705 ++ * not in an idle state which causes the power
5706 ++ * state affected device to stop. Let the
5707 ++ * caller initialize the device.
5708 ++ */
5709 ++ tick_broadcast_clear_oneshot(cpu);
5710 ++ ret = 0;
5711 ++ break;
5712 ++
5713 ++ case TICKDEV_MODE_PERIODIC:
5714 ++ /*
5715 ++ * If the system is in periodic mode, check
5716 ++ * whether the broadcast device can be
5717 ++ * switched off now.
5718 ++ */
5719 ++ if (cpumask_empty(tick_broadcast_mask) && bc)
5720 ++ clockevents_shutdown(bc);
5721 ++ /*
5722 ++ * If we kept the cpu in the broadcast mask,
5723 ++ * tell the caller to leave the per cpu device
5724 ++ * in shutdown state. The periodic interrupt
5725 ++ * is delivered by the broadcast device.
5726 ++ */
5727 ++ ret = cpumask_test_cpu(cpu, tick_broadcast_mask);
5728 ++ break;
5729 ++ default:
5730 ++ /* Nothing to do */
5731 ++ ret = 0;
5732 ++ break;
5733 + }
5734 + }
5735 + raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
5736 +@@ -281,6 +322,7 @@ static void tick_do_broadcast_on_off(unsigned long *reason)
5737 + switch (*reason) {
5738 + case CLOCK_EVT_NOTIFY_BROADCAST_ON:
5739 + case CLOCK_EVT_NOTIFY_BROADCAST_FORCE:
5740 ++ cpumask_set_cpu(cpu, tick_broadcast_on);
5741 + if (!cpumask_test_and_set_cpu(cpu, tick_broadcast_mask)) {
5742 + if (tick_broadcast_device.mode ==
5743 + TICKDEV_MODE_PERIODIC)
5744 +@@ -290,8 +332,12 @@ static void tick_do_broadcast_on_off(unsigned long *reason)
5745 + tick_broadcast_force = 1;
5746 + break;
5747 + case CLOCK_EVT_NOTIFY_BROADCAST_OFF:
5748 +- if (!tick_broadcast_force &&
5749 +- cpumask_test_and_clear_cpu(cpu, tick_broadcast_mask)) {
5750 ++ if (tick_broadcast_force)
5751 ++ break;
5752 ++ cpumask_clear_cpu(cpu, tick_broadcast_on);
5753 ++ if (!tick_device_is_functional(dev))
5754 ++ break;
5755 ++ if (cpumask_test_and_clear_cpu(cpu, tick_broadcast_mask)) {
5756 + if (tick_broadcast_device.mode ==
5757 + TICKDEV_MODE_PERIODIC)
5758 + tick_setup_periodic(dev, 0);
5759 +@@ -349,6 +395,7 @@ void tick_shutdown_broadcast(unsigned int *cpup)
5760 +
5761 + bc = tick_broadcast_device.evtdev;
5762 + cpumask_clear_cpu(cpu, tick_broadcast_mask);
5763 ++ cpumask_clear_cpu(cpu, tick_broadcast_on);
5764 +
5765 + if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC) {
5766 + if (bc && cpumask_empty(tick_broadcast_mask))
5767 +@@ -475,7 +522,15 @@ void tick_check_oneshot_broadcast(int cpu)
5768 + if (cpumask_test_cpu(cpu, tick_broadcast_oneshot_mask)) {
5769 + struct tick_device *td = &per_cpu(tick_cpu_device, cpu);
5770 +
5771 +- clockevents_set_mode(td->evtdev, CLOCK_EVT_MODE_ONESHOT);
5772 ++ /*
5773 ++ * We might be in the middle of switching over from
5774 ++ * periodic to oneshot. If the CPU has not yet
5775 ++ * switched over, leave the device alone.
5776 ++ */
5777 ++ if (td->mode == TICKDEV_MODE_ONESHOT) {
5778 ++ clockevents_set_mode(td->evtdev,
5779 ++ CLOCK_EVT_MODE_ONESHOT);
5780 ++ }
5781 + }
5782 + }
5783 +
5784 +@@ -792,6 +847,7 @@ bool tick_broadcast_oneshot_available(void)
5785 + void __init tick_broadcast_init(void)
5786 + {
5787 + zalloc_cpumask_var(&tick_broadcast_mask, GFP_NOWAIT);
5788 ++ zalloc_cpumask_var(&tick_broadcast_on, GFP_NOWAIT);
5789 + zalloc_cpumask_var(&tmpmask, GFP_NOWAIT);
5790 + #ifdef CONFIG_TICK_ONESHOT
5791 + zalloc_cpumask_var(&tick_broadcast_oneshot_mask, GFP_NOWAIT);
5792 +diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
5793 +index 5d3fb100..7ce5e5a4 100644
5794 +--- a/kernel/time/tick-common.c
5795 ++++ b/kernel/time/tick-common.c
5796 +@@ -194,7 +194,8 @@ static void tick_setup_device(struct tick_device *td,
5797 + * When global broadcasting is active, check if the current
5798 + * device is registered as a placeholder for broadcast mode.
5799 + * This allows us to handle this x86 misfeature in a generic
5800 +- * way.
5801 ++ * way. This function also returns !=0 when we keep the
5802 ++ * current active broadcast state for this CPU.
5803 + */
5804 + if (tick_device_uses_broadcast(newdev, cpu))
5805 + return;
5806 +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
5807 +index e71a8be4..0b936d80 100644
5808 +--- a/kernel/trace/trace.c
5809 ++++ b/kernel/trace/trace.c
5810 +@@ -193,6 +193,37 @@ static struct trace_array global_trace;
5811 +
5812 + LIST_HEAD(ftrace_trace_arrays);
5813 +
5814 ++int trace_array_get(struct trace_array *this_tr)
5815 ++{
5816 ++ struct trace_array *tr;
5817 ++ int ret = -ENODEV;
5818 ++
5819 ++ mutex_lock(&trace_types_lock);
5820 ++ list_for_each_entry(tr, &ftrace_trace_arrays, list) {
5821 ++ if (tr == this_tr) {
5822 ++ tr->ref++;
5823 ++ ret = 0;
5824 ++ break;
5825 ++ }
5826 ++ }
5827 ++ mutex_unlock(&trace_types_lock);
5828 ++
5829 ++ return ret;
5830 ++}
5831 ++
5832 ++static void __trace_array_put(struct trace_array *this_tr)
5833 ++{
5834 ++ WARN_ON(!this_tr->ref);
5835 ++ this_tr->ref--;
5836 ++}
5837 ++
5838 ++void trace_array_put(struct trace_array *this_tr)
5839 ++{
5840 ++ mutex_lock(&trace_types_lock);
5841 ++ __trace_array_put(this_tr);
5842 ++ mutex_unlock(&trace_types_lock);
5843 ++}
5844 ++
5845 + int filter_current_check_discard(struct ring_buffer *buffer,
5846 + struct ftrace_event_call *call, void *rec,
5847 + struct ring_buffer_event *event)
5848 +@@ -240,7 +271,7 @@ static struct tracer *trace_types __read_mostly;
5849 + /*
5850 + * trace_types_lock is used to protect the trace_types list.
5851 + */
5852 +-static DEFINE_MUTEX(trace_types_lock);
5853 ++DEFINE_MUTEX(trace_types_lock);
5854 +
5855 + /*
5856 + * serialize the access of the ring buffer
5857 +@@ -2768,10 +2799,9 @@ static const struct seq_operations tracer_seq_ops = {
5858 + };
5859 +
5860 + static struct trace_iterator *
5861 +-__tracing_open(struct inode *inode, struct file *file, bool snapshot)
5862 ++__tracing_open(struct trace_array *tr, struct trace_cpu *tc,
5863 ++ struct inode *inode, struct file *file, bool snapshot)
5864 + {
5865 +- struct trace_cpu *tc = inode->i_private;
5866 +- struct trace_array *tr = tc->tr;
5867 + struct trace_iterator *iter;
5868 + int cpu;
5869 +
5870 +@@ -2850,8 +2880,6 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
5871 + tracing_iter_reset(iter, cpu);
5872 + }
5873 +
5874 +- tr->ref++;
5875 +-
5876 + mutex_unlock(&trace_types_lock);
5877 +
5878 + return iter;
5879 +@@ -2874,6 +2902,43 @@ int tracing_open_generic(struct inode *inode, struct file *filp)
5880 + return 0;
5881 + }
5882 +
5883 ++/*
5884 ++ * Open and update trace_array ref count.
5885 ++ * Must have the current trace_array passed to it.
5886 ++ */
5887 ++int tracing_open_generic_tr(struct inode *inode, struct file *filp)
5888 ++{
5889 ++ struct trace_array *tr = inode->i_private;
5890 ++
5891 ++ if (tracing_disabled)
5892 ++ return -ENODEV;
5893 ++
5894 ++ if (trace_array_get(tr) < 0)
5895 ++ return -ENODEV;
5896 ++
5897 ++ filp->private_data = inode->i_private;
5898 ++
5899 ++ return 0;
5900 ++
5901 ++}
5902 ++
5903 ++int tracing_open_generic_tc(struct inode *inode, struct file *filp)
5904 ++{
5905 ++ struct trace_cpu *tc = inode->i_private;
5906 ++ struct trace_array *tr = tc->tr;
5907 ++
5908 ++ if (tracing_disabled)
5909 ++ return -ENODEV;
5910 ++
5911 ++ if (trace_array_get(tr) < 0)
5912 ++ return -ENODEV;
5913 ++
5914 ++ filp->private_data = inode->i_private;
5915 ++
5916 ++ return 0;
5917 ++
5918 ++}
5919 ++
5920 + static int tracing_release(struct inode *inode, struct file *file)
5921 + {
5922 + struct seq_file *m = file->private_data;
5923 +@@ -2881,17 +2946,20 @@ static int tracing_release(struct inode *inode, struct file *file)
5924 + struct trace_array *tr;
5925 + int cpu;
5926 +
5927 +- if (!(file->f_mode & FMODE_READ))
5928 ++ /* Writes do not use seq_file, need to grab tr from inode */
5929 ++ if (!(file->f_mode & FMODE_READ)) {
5930 ++ struct trace_cpu *tc = inode->i_private;
5931 ++
5932 ++ trace_array_put(tc->tr);
5933 + return 0;
5934 ++ }
5935 +
5936 + iter = m->private;
5937 + tr = iter->tr;
5938 ++ trace_array_put(tr);
5939 +
5940 + mutex_lock(&trace_types_lock);
5941 +
5942 +- WARN_ON(!tr->ref);
5943 +- tr->ref--;
5944 +-
5945 + for_each_tracing_cpu(cpu) {
5946 + if (iter->buffer_iter[cpu])
5947 + ring_buffer_read_finish(iter->buffer_iter[cpu]);
5948 +@@ -2910,20 +2978,49 @@ static int tracing_release(struct inode *inode, struct file *file)
5949 + kfree(iter->trace);
5950 + kfree(iter->buffer_iter);
5951 + seq_release_private(inode, file);
5952 ++
5953 ++ return 0;
5954 ++}
5955 ++
5956 ++static int tracing_release_generic_tr(struct inode *inode, struct file *file)
5957 ++{
5958 ++ struct trace_array *tr = inode->i_private;
5959 ++
5960 ++ trace_array_put(tr);
5961 + return 0;
5962 + }
5963 +
5964 ++static int tracing_release_generic_tc(struct inode *inode, struct file *file)
5965 ++{
5966 ++ struct trace_cpu *tc = inode->i_private;
5967 ++ struct trace_array *tr = tc->tr;
5968 ++
5969 ++ trace_array_put(tr);
5970 ++ return 0;
5971 ++}
5972 ++
5973 ++static int tracing_single_release_tr(struct inode *inode, struct file *file)
5974 ++{
5975 ++ struct trace_array *tr = inode->i_private;
5976 ++
5977 ++ trace_array_put(tr);
5978 ++
5979 ++ return single_release(inode, file);
5980 ++}
5981 ++
5982 + static int tracing_open(struct inode *inode, struct file *file)
5983 + {
5984 ++ struct trace_cpu *tc = inode->i_private;
5985 ++ struct trace_array *tr = tc->tr;
5986 + struct trace_iterator *iter;
5987 + int ret = 0;
5988 +
5989 ++ if (trace_array_get(tr) < 0)
5990 ++ return -ENODEV;
5991 ++
5992 + /* If this file was open for write, then erase contents */
5993 + if ((file->f_mode & FMODE_WRITE) &&
5994 + (file->f_flags & O_TRUNC)) {
5995 +- struct trace_cpu *tc = inode->i_private;
5996 +- struct trace_array *tr = tc->tr;
5997 +-
5998 + if (tc->cpu == RING_BUFFER_ALL_CPUS)
5999 + tracing_reset_online_cpus(&tr->trace_buffer);
6000 + else
6001 +@@ -2931,12 +3028,16 @@ static int tracing_open(struct inode *inode, struct file *file)
6002 + }
6003 +
6004 + if (file->f_mode & FMODE_READ) {
6005 +- iter = __tracing_open(inode, file, false);
6006 ++ iter = __tracing_open(tr, tc, inode, file, false);
6007 + if (IS_ERR(iter))
6008 + ret = PTR_ERR(iter);
6009 + else if (trace_flags & TRACE_ITER_LATENCY_FMT)
6010 + iter->iter_flags |= TRACE_FILE_LAT_FMT;
6011 + }
6012 ++
6013 ++ if (ret < 0)
6014 ++ trace_array_put(tr);
6015 ++
6016 + return ret;
6017 + }
6018 +
6019 +@@ -3293,9 +3394,14 @@ tracing_trace_options_write(struct file *filp, const char __user *ubuf,
6020 +
6021 + static int tracing_trace_options_open(struct inode *inode, struct file *file)
6022 + {
6023 ++ struct trace_array *tr = inode->i_private;
6024 ++
6025 + if (tracing_disabled)
6026 + return -ENODEV;
6027 +
6028 ++ if (trace_array_get(tr) < 0)
6029 ++ return -ENODEV;
6030 ++
6031 + return single_open(file, tracing_trace_options_show, inode->i_private);
6032 + }
6033 +
6034 +@@ -3303,7 +3409,7 @@ static const struct file_operations tracing_iter_fops = {
6035 + .open = tracing_trace_options_open,
6036 + .read = seq_read,
6037 + .llseek = seq_lseek,
6038 +- .release = single_release,
6039 ++ .release = tracing_single_release_tr,
6040 + .write = tracing_trace_options_write,
6041 + };
6042 +
6043 +@@ -3791,6 +3897,9 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp)
6044 + if (tracing_disabled)
6045 + return -ENODEV;
6046 +
6047 ++ if (trace_array_get(tr) < 0)
6048 ++ return -ENODEV;
6049 ++
6050 + mutex_lock(&trace_types_lock);
6051 +
6052 + /* create a buffer to store the information to pass to userspace */
6053 +@@ -3843,6 +3952,7 @@ out:
6054 + fail:
6055 + kfree(iter->trace);
6056 + kfree(iter);
6057 ++ __trace_array_put(tr);
6058 + mutex_unlock(&trace_types_lock);
6059 + return ret;
6060 + }
6061 +@@ -3850,6 +3960,8 @@ fail:
6062 + static int tracing_release_pipe(struct inode *inode, struct file *file)
6063 + {
6064 + struct trace_iterator *iter = file->private_data;
6065 ++ struct trace_cpu *tc = inode->i_private;
6066 ++ struct trace_array *tr = tc->tr;
6067 +
6068 + mutex_lock(&trace_types_lock);
6069 +
6070 +@@ -3863,6 +3975,8 @@ static int tracing_release_pipe(struct inode *inode, struct file *file)
6071 + kfree(iter->trace);
6072 + kfree(iter);
6073 +
6074 ++ trace_array_put(tr);
6075 ++
6076 + return 0;
6077 + }
6078 +
6079 +@@ -4320,6 +4434,8 @@ tracing_free_buffer_release(struct inode *inode, struct file *filp)
6080 + /* resize the ring buffer to 0 */
6081 + tracing_resize_ring_buffer(tr, 0, RING_BUFFER_ALL_CPUS);
6082 +
6083 ++ trace_array_put(tr);
6084 ++
6085 + return 0;
6086 + }
6087 +
6088 +@@ -4328,6 +4444,7 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
6089 + size_t cnt, loff_t *fpos)
6090 + {
6091 + unsigned long addr = (unsigned long)ubuf;
6092 ++ struct trace_array *tr = filp->private_data;
6093 + struct ring_buffer_event *event;
6094 + struct ring_buffer *buffer;
6095 + struct print_entry *entry;
6096 +@@ -4387,7 +4504,7 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
6097 +
6098 + local_save_flags(irq_flags);
6099 + size = sizeof(*entry) + cnt + 2; /* possible \n added */
6100 +- buffer = global_trace.trace_buffer.buffer;
6101 ++ buffer = tr->trace_buffer.buffer;
6102 + event = trace_buffer_lock_reserve(buffer, TRACE_PRINT, size,
6103 + irq_flags, preempt_count());
6104 + if (!event) {
6105 +@@ -4495,10 +4612,20 @@ static ssize_t tracing_clock_write(struct file *filp, const char __user *ubuf,
6106 +
6107 + static int tracing_clock_open(struct inode *inode, struct file *file)
6108 + {
6109 ++ struct trace_array *tr = inode->i_private;
6110 ++ int ret;
6111 ++
6112 + if (tracing_disabled)
6113 + return -ENODEV;
6114 +
6115 +- return single_open(file, tracing_clock_show, inode->i_private);
6116 ++ if (trace_array_get(tr))
6117 ++ return -ENODEV;
6118 ++
6119 ++ ret = single_open(file, tracing_clock_show, inode->i_private);
6120 ++ if (ret < 0)
6121 ++ trace_array_put(tr);
6122 ++
6123 ++ return ret;
6124 + }
6125 +
6126 + struct ftrace_buffer_info {
6127 +@@ -4511,12 +4638,16 @@ struct ftrace_buffer_info {
6128 + static int tracing_snapshot_open(struct inode *inode, struct file *file)
6129 + {
6130 + struct trace_cpu *tc = inode->i_private;
6131 ++ struct trace_array *tr = tc->tr;
6132 + struct trace_iterator *iter;
6133 + struct seq_file *m;
6134 + int ret = 0;
6135 +
6136 ++ if (trace_array_get(tr) < 0)
6137 ++ return -ENODEV;
6138 ++
6139 + if (file->f_mode & FMODE_READ) {
6140 +- iter = __tracing_open(inode, file, true);
6141 ++ iter = __tracing_open(tr, tc, inode, file, true);
6142 + if (IS_ERR(iter))
6143 + ret = PTR_ERR(iter);
6144 + } else {
6145 +@@ -4529,13 +4660,16 @@ static int tracing_snapshot_open(struct inode *inode, struct file *file)
6146 + kfree(m);
6147 + return -ENOMEM;
6148 + }
6149 +- iter->tr = tc->tr;
6150 ++ iter->tr = tr;
6151 + iter->trace_buffer = &tc->tr->max_buffer;
6152 + iter->cpu_file = tc->cpu;
6153 + m->private = iter;
6154 + file->private_data = m;
6155 + }
6156 +
6157 ++ if (ret < 0)
6158 ++ trace_array_put(tr);
6159 ++
6160 + return ret;
6161 + }
6162 +
6163 +@@ -4616,9 +4750,12 @@ out:
6164 + static int tracing_snapshot_release(struct inode *inode, struct file *file)
6165 + {
6166 + struct seq_file *m = file->private_data;
6167 ++ int ret;
6168 ++
6169 ++ ret = tracing_release(inode, file);
6170 +
6171 + if (file->f_mode & FMODE_READ)
6172 +- return tracing_release(inode, file);
6173 ++ return ret;
6174 +
6175 + /* If write only, the seq_file is just a stub */
6176 + if (m)
6177 +@@ -4684,34 +4821,38 @@ static const struct file_operations tracing_pipe_fops = {
6178 + };
6179 +
6180 + static const struct file_operations tracing_entries_fops = {
6181 +- .open = tracing_open_generic,
6182 ++ .open = tracing_open_generic_tc,
6183 + .read = tracing_entries_read,
6184 + .write = tracing_entries_write,
6185 + .llseek = generic_file_llseek,
6186 ++ .release = tracing_release_generic_tc,
6187 + };
6188 +
6189 + static const struct file_operations tracing_total_entries_fops = {
6190 +- .open = tracing_open_generic,
6191 ++ .open = tracing_open_generic_tr,
6192 + .read = tracing_total_entries_read,
6193 + .llseek = generic_file_llseek,
6194 ++ .release = tracing_release_generic_tr,
6195 + };
6196 +
6197 + static const struct file_operations tracing_free_buffer_fops = {
6198 ++ .open = tracing_open_generic_tr,
6199 + .write = tracing_free_buffer_write,
6200 + .release = tracing_free_buffer_release,
6201 + };
6202 +
6203 + static const struct file_operations tracing_mark_fops = {
6204 +- .open = tracing_open_generic,
6205 ++ .open = tracing_open_generic_tr,
6206 + .write = tracing_mark_write,
6207 + .llseek = generic_file_llseek,
6208 ++ .release = tracing_release_generic_tr,
6209 + };
6210 +
6211 + static const struct file_operations trace_clock_fops = {
6212 + .open = tracing_clock_open,
6213 + .read = seq_read,
6214 + .llseek = seq_lseek,
6215 +- .release = single_release,
6216 ++ .release = tracing_single_release_tr,
6217 + .write = tracing_clock_write,
6218 + };
6219 +
6220 +@@ -4739,13 +4880,19 @@ static int tracing_buffers_open(struct inode *inode, struct file *filp)
6221 + struct trace_cpu *tc = inode->i_private;
6222 + struct trace_array *tr = tc->tr;
6223 + struct ftrace_buffer_info *info;
6224 ++ int ret;
6225 +
6226 + if (tracing_disabled)
6227 + return -ENODEV;
6228 +
6229 ++ if (trace_array_get(tr) < 0)
6230 ++ return -ENODEV;
6231 ++
6232 + info = kzalloc(sizeof(*info), GFP_KERNEL);
6233 +- if (!info)
6234 ++ if (!info) {
6235 ++ trace_array_put(tr);
6236 + return -ENOMEM;
6237 ++ }
6238 +
6239 + mutex_lock(&trace_types_lock);
6240 +
6241 +@@ -4763,7 +4910,11 @@ static int tracing_buffers_open(struct inode *inode, struct file *filp)
6242 +
6243 + mutex_unlock(&trace_types_lock);
6244 +
6245 +- return nonseekable_open(inode, filp);
6246 ++ ret = nonseekable_open(inode, filp);
6247 ++ if (ret < 0)
6248 ++ trace_array_put(tr);
6249 ++
6250 ++ return ret;
6251 + }
6252 +
6253 + static unsigned int
6254 +@@ -4863,8 +5014,7 @@ static int tracing_buffers_release(struct inode *inode, struct file *file)
6255 +
6256 + mutex_lock(&trace_types_lock);
6257 +
6258 +- WARN_ON(!iter->tr->ref);
6259 +- iter->tr->ref--;
6260 ++ __trace_array_put(iter->tr);
6261 +
6262 + if (info->spare)
6263 + ring_buffer_free_read_page(iter->trace_buffer->buffer, info->spare);
6264 +@@ -5659,9 +5809,10 @@ rb_simple_write(struct file *filp, const char __user *ubuf,
6265 + }
6266 +
6267 + static const struct file_operations rb_simple_fops = {
6268 +- .open = tracing_open_generic,
6269 ++ .open = tracing_open_generic_tr,
6270 + .read = rb_simple_read,
6271 + .write = rb_simple_write,
6272 ++ .release = tracing_release_generic_tr,
6273 + .llseek = default_llseek,
6274 + };
6275 +
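The trace.c changes above replace the bare tr->ref++ / tr->ref-- pair with trace_array_get()/trace_array_put(), taken in every ->open and dropped in the matching ->release, so a trace instance cannot disappear while one of its debugfs files is still open. The look-up-and-pin idiom can be sketched in stand-alone user-space C with a pthread mutex and a plain list; registry_get/registry_put below are invented names for illustration, not kernel functions.

#include <pthread.h>
#include <stdio.h>

struct instance {
        struct instance *next;
        int ref;
        const char *name;
};

static pthread_mutex_t registry_lock = PTHREAD_MUTEX_INITIALIZER;
static struct instance *registry;       /* all live instances */

/* Pin an instance, but only if it is still registered. */
static int registry_get(struct instance *want)
{
        struct instance *it;
        int ret = -1;                   /* analogue of -ENODEV */

        pthread_mutex_lock(&registry_lock);
        for (it = registry; it; it = it->next) {
                if (it == want) {
                        it->ref++;
                        ret = 0;
                        break;
                }
        }
        pthread_mutex_unlock(&registry_lock);
        return ret;
}

static void registry_put(struct instance *inst)
{
        pthread_mutex_lock(&registry_lock);
        inst->ref--;                    /* caller must hold a reference */
        pthread_mutex_unlock(&registry_lock);
}

int main(void)
{
        static struct instance global = { .name = "global" };

        registry = &global;             /* register one instance */
        if (registry_get(&global) == 0) {
                printf("%s pinned, ref=%d\n", global.name, global.ref);
                registry_put(&global);
        }
        return 0;
}

The important property, mirrored from the patch, is that get() walks the registry under the lock and fails cleanly if the object is already gone, instead of blindly incrementing a counter on stale memory.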
6276 +diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
6277 +index 20572ed8..51b44483 100644
6278 +--- a/kernel/trace/trace.h
6279 ++++ b/kernel/trace/trace.h
6280 +@@ -224,6 +224,11 @@ enum {
6281 +
6282 + extern struct list_head ftrace_trace_arrays;
6283 +
6284 ++extern struct mutex trace_types_lock;
6285 ++
6286 ++extern int trace_array_get(struct trace_array *tr);
6287 ++extern void trace_array_put(struct trace_array *tr);
6288 ++
6289 + /*
6290 + * The global tracer (top) should be the first trace array added,
6291 + * but we check the flag anyway.
6292 +diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
6293 +index 27963e2b..6dfd48b5 100644
6294 +--- a/kernel/trace/trace_events.c
6295 ++++ b/kernel/trace/trace_events.c
6296 +@@ -41,6 +41,23 @@ static LIST_HEAD(ftrace_common_fields);
6297 + static struct kmem_cache *field_cachep;
6298 + static struct kmem_cache *file_cachep;
6299 +
6300 ++#define SYSTEM_FL_FREE_NAME (1 << 31)
6301 ++
6302 ++static inline int system_refcount(struct event_subsystem *system)
6303 ++{
6304 ++ return system->ref_count & ~SYSTEM_FL_FREE_NAME;
6305 ++}
6306 ++
6307 ++static int system_refcount_inc(struct event_subsystem *system)
6308 ++{
6309 ++ return (system->ref_count++) & ~SYSTEM_FL_FREE_NAME;
6310 ++}
6311 ++
6312 ++static int system_refcount_dec(struct event_subsystem *system)
6313 ++{
6314 ++ return (--system->ref_count) & ~SYSTEM_FL_FREE_NAME;
6315 ++}
6316 ++
6317 + /* Double loops, do not use break, only goto's work */
6318 + #define do_for_each_event_file(tr, file) \
6319 + list_for_each_entry(tr, &ftrace_trace_arrays, list) { \
6320 +@@ -349,8 +366,8 @@ static void __put_system(struct event_subsystem *system)
6321 + {
6322 + struct event_filter *filter = system->filter;
6323 +
6324 +- WARN_ON_ONCE(system->ref_count == 0);
6325 +- if (--system->ref_count)
6326 ++ WARN_ON_ONCE(system_refcount(system) == 0);
6327 ++ if (system_refcount_dec(system))
6328 + return;
6329 +
6330 + list_del(&system->list);
6331 +@@ -359,13 +376,15 @@ static void __put_system(struct event_subsystem *system)
6332 + kfree(filter->filter_string);
6333 + kfree(filter);
6334 + }
6335 ++ if (system->ref_count & SYSTEM_FL_FREE_NAME)
6336 ++ kfree(system->name);
6337 + kfree(system);
6338 + }
6339 +
6340 + static void __get_system(struct event_subsystem *system)
6341 + {
6342 +- WARN_ON_ONCE(system->ref_count == 0);
6343 +- system->ref_count++;
6344 ++ WARN_ON_ONCE(system_refcount(system) == 0);
6345 ++ system_refcount_inc(system);
6346 + }
6347 +
6348 + static void __get_system_dir(struct ftrace_subsystem_dir *dir)
6349 +@@ -379,7 +398,7 @@ static void __put_system_dir(struct ftrace_subsystem_dir *dir)
6350 + {
6351 + WARN_ON_ONCE(dir->ref_count == 0);
6352 + /* If the subsystem is about to be freed, the dir must be too */
6353 +- WARN_ON_ONCE(dir->subsystem->ref_count == 1 && dir->ref_count != 1);
6354 ++ WARN_ON_ONCE(system_refcount(dir->subsystem) == 1 && dir->ref_count != 1);
6355 +
6356 + __put_system(dir->subsystem);
6357 + if (!--dir->ref_count)
6358 +@@ -394,16 +413,45 @@ static void put_system(struct ftrace_subsystem_dir *dir)
6359 + }
6360 +
6361 + /*
6362 ++ * Open and update trace_array ref count.
6363 ++ * Must have the current trace_array passed to it.
6364 ++ */
6365 ++static int tracing_open_generic_file(struct inode *inode, struct file *filp)
6366 ++{
6367 ++ struct ftrace_event_file *file = inode->i_private;
6368 ++ struct trace_array *tr = file->tr;
6369 ++ int ret;
6370 ++
6371 ++ if (trace_array_get(tr) < 0)
6372 ++ return -ENODEV;
6373 ++
6374 ++ ret = tracing_open_generic(inode, filp);
6375 ++ if (ret < 0)
6376 ++ trace_array_put(tr);
6377 ++ return ret;
6378 ++}
6379 ++
6380 ++static int tracing_release_generic_file(struct inode *inode, struct file *filp)
6381 ++{
6382 ++ struct ftrace_event_file *file = inode->i_private;
6383 ++ struct trace_array *tr = file->tr;
6384 ++
6385 ++ trace_array_put(tr);
6386 ++
6387 ++ return 0;
6388 ++}
6389 ++
6390 ++/*
6391 + * __ftrace_set_clr_event(NULL, NULL, NULL, set) will set/unset all events.
6392 + */
6393 +-static int __ftrace_set_clr_event(struct trace_array *tr, const char *match,
6394 +- const char *sub, const char *event, int set)
6395 ++static int
6396 ++__ftrace_set_clr_event_nolock(struct trace_array *tr, const char *match,
6397 ++ const char *sub, const char *event, int set)
6398 + {
6399 + struct ftrace_event_file *file;
6400 + struct ftrace_event_call *call;
6401 + int ret = -EINVAL;
6402 +
6403 +- mutex_lock(&event_mutex);
6404 + list_for_each_entry(file, &tr->events, list) {
6405 +
6406 + call = file->event_call;
6407 +@@ -429,6 +477,17 @@ static int __ftrace_set_clr_event(struct trace_array *tr, const char *match,
6408 +
6409 + ret = 0;
6410 + }
6411 ++
6412 ++ return ret;
6413 ++}
6414 ++
6415 ++static int __ftrace_set_clr_event(struct trace_array *tr, const char *match,
6416 ++ const char *sub, const char *event, int set)
6417 ++{
6418 ++ int ret;
6419 ++
6420 ++ mutex_lock(&event_mutex);
6421 ++ ret = __ftrace_set_clr_event_nolock(tr, match, sub, event, set);
6422 + mutex_unlock(&event_mutex);
6423 +
6424 + return ret;
6425 +@@ -992,6 +1051,7 @@ static int subsystem_open(struct inode *inode, struct file *filp)
6426 + int ret;
6427 +
6428 + /* Make sure the system still exists */
6429 ++ mutex_lock(&trace_types_lock);
6430 + mutex_lock(&event_mutex);
6431 + list_for_each_entry(tr, &ftrace_trace_arrays, list) {
6432 + list_for_each_entry(dir, &tr->systems, list) {
6433 +@@ -1007,6 +1067,7 @@ static int subsystem_open(struct inode *inode, struct file *filp)
6434 + }
6435 + exit_loop:
6436 + mutex_unlock(&event_mutex);
6437 ++ mutex_unlock(&trace_types_lock);
6438 +
6439 + if (!system)
6440 + return -ENODEV;
6441 +@@ -1014,9 +1075,17 @@ static int subsystem_open(struct inode *inode, struct file *filp)
6442 + /* Some versions of gcc think dir can be uninitialized here */
6443 + WARN_ON(!dir);
6444 +
6445 ++ /* Still need to increment the ref count of the system */
6446 ++ if (trace_array_get(tr) < 0) {
6447 ++ put_system(dir);
6448 ++ return -ENODEV;
6449 ++ }
6450 ++
6451 + ret = tracing_open_generic(inode, filp);
6452 +- if (ret < 0)
6453 ++ if (ret < 0) {
6454 ++ trace_array_put(tr);
6455 + put_system(dir);
6456 ++ }
6457 +
6458 + return ret;
6459 + }
6460 +@@ -1027,16 +1096,23 @@ static int system_tr_open(struct inode *inode, struct file *filp)
6461 + struct trace_array *tr = inode->i_private;
6462 + int ret;
6463 +
6464 ++ if (trace_array_get(tr) < 0)
6465 ++ return -ENODEV;
6466 ++
6467 + /* Make a temporary dir that has no system but points to tr */
6468 + dir = kzalloc(sizeof(*dir), GFP_KERNEL);
6469 +- if (!dir)
6470 ++ if (!dir) {
6471 ++ trace_array_put(tr);
6472 + return -ENOMEM;
6473 ++ }
6474 +
6475 + dir->tr = tr;
6476 +
6477 + ret = tracing_open_generic(inode, filp);
6478 +- if (ret < 0)
6479 ++ if (ret < 0) {
6480 ++ trace_array_put(tr);
6481 + kfree(dir);
6482 ++ }
6483 +
6484 + filp->private_data = dir;
6485 +
6486 +@@ -1047,6 +1123,8 @@ static int subsystem_release(struct inode *inode, struct file *file)
6487 + {
6488 + struct ftrace_subsystem_dir *dir = file->private_data;
6489 +
6490 ++ trace_array_put(dir->tr);
6491 ++
6492 + /*
6493 + * If dir->subsystem is NULL, then this is a temporary
6494 + * descriptor that was made for a trace_array to enable
6495 +@@ -1174,9 +1252,10 @@ static const struct file_operations ftrace_set_event_fops = {
6496 + };
6497 +
6498 + static const struct file_operations ftrace_enable_fops = {
6499 +- .open = tracing_open_generic,
6500 ++ .open = tracing_open_generic_file,
6501 + .read = event_enable_read,
6502 + .write = event_enable_write,
6503 ++ .release = tracing_release_generic_file,
6504 + .llseek = default_llseek,
6505 + };
6506 +
6507 +@@ -1279,7 +1358,15 @@ create_new_subsystem(const char *name)
6508 + return NULL;
6509 +
6510 + system->ref_count = 1;
6511 +- system->name = name;
6512 ++
6513 ++ /* Only allocate if dynamic (kprobes and modules) */
6514 ++ if (!core_kernel_data((unsigned long)name)) {
6515 ++ system->ref_count |= SYSTEM_FL_FREE_NAME;
6516 ++ system->name = kstrdup(name, GFP_KERNEL);
6517 ++ if (!system->name)
6518 ++ goto out_free;
6519 ++ } else
6520 ++ system->name = name;
6521 +
6522 + system->filter = NULL;
6523 +
6524 +@@ -1292,6 +1379,8 @@ create_new_subsystem(const char *name)
6525 + return system;
6526 +
6527 + out_free:
6528 ++ if (system->ref_count & SYSTEM_FL_FREE_NAME)
6529 ++ kfree(system->name);
6530 + kfree(system);
6531 + return NULL;
6532 + }
6533 +@@ -1591,6 +1680,7 @@ static void __add_event_to_tracers(struct ftrace_event_call *call,
6534 + int trace_add_event_call(struct ftrace_event_call *call)
6535 + {
6536 + int ret;
6537 ++ mutex_lock(&trace_types_lock);
6538 + mutex_lock(&event_mutex);
6539 +
6540 + ret = __register_event(call, NULL);
6541 +@@ -1598,11 +1688,13 @@ int trace_add_event_call(struct ftrace_event_call *call)
6542 + __add_event_to_tracers(call, NULL);
6543 +
6544 + mutex_unlock(&event_mutex);
6545 ++ mutex_unlock(&trace_types_lock);
6546 + return ret;
6547 + }
6548 +
6549 + /*
6550 +- * Must be called under locking both of event_mutex and trace_event_sem.
6551 ++ * Must be called under locking of trace_types_lock, event_mutex and
6552 ++ * trace_event_sem.
6553 + */
6554 + static void __trace_remove_event_call(struct ftrace_event_call *call)
6555 + {
6556 +@@ -1614,11 +1706,13 @@ static void __trace_remove_event_call(struct ftrace_event_call *call)
6557 + /* Remove an event_call */
6558 + void trace_remove_event_call(struct ftrace_event_call *call)
6559 + {
6560 ++ mutex_lock(&trace_types_lock);
6561 + mutex_lock(&event_mutex);
6562 + down_write(&trace_event_sem);
6563 + __trace_remove_event_call(call);
6564 + up_write(&trace_event_sem);
6565 + mutex_unlock(&event_mutex);
6566 ++ mutex_unlock(&trace_types_lock);
6567 + }
6568 +
6569 + #define for_each_event(event, start, end) \
6570 +@@ -1762,6 +1856,7 @@ static int trace_module_notify(struct notifier_block *self,
6571 + {
6572 + struct module *mod = data;
6573 +
6574 ++ mutex_lock(&trace_types_lock);
6575 + mutex_lock(&event_mutex);
6576 + switch (val) {
6577 + case MODULE_STATE_COMING:
6578 +@@ -1772,6 +1867,7 @@ static int trace_module_notify(struct notifier_block *self,
6579 + break;
6580 + }
6581 + mutex_unlock(&event_mutex);
6582 ++ mutex_unlock(&trace_types_lock);
6583 +
6584 + return 0;
6585 + }
6586 +@@ -2329,11 +2425,11 @@ early_event_add_tracer(struct dentry *parent, struct trace_array *tr)
6587 +
6588 + int event_trace_del_tracer(struct trace_array *tr)
6589 + {
6590 +- /* Disable any running events */
6591 +- __ftrace_set_clr_event(tr, NULL, NULL, NULL, 0);
6592 +-
6593 + mutex_lock(&event_mutex);
6594 +
6595 ++ /* Disable any running events */
6596 ++ __ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0);
6597 ++
6598 + down_write(&trace_event_sem);
6599 + __trace_remove_event_dirs(tr);
6600 + debugfs_remove_recursive(tr->event_dir);
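In the trace_events.c hunks above, create_new_subsystem() starts stashing an "I own the name" flag (SYSTEM_FL_FREE_NAME) in the top bit of ref_count and masking it off in the helper functions, so dynamically created subsystems (kprobes, modules) can kstrdup() their name and free it on the last put. A small self-contained sketch of packing a flag into a reference counter follows; the struct and helper names are illustrative only.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define OWNS_NAME (1u << 31)            /* flag stored in the count word */

struct subsys {
        unsigned int ref_count;         /* low 31 bits: count, top bit: flag */
        char *name;
};

static unsigned int refcount(struct subsys *s)
{
        return s->ref_count & ~OWNS_NAME;
}

static struct subsys *subsys_create(const char *name, int dynamic)
{
        struct subsys *s = calloc(1, sizeof(*s));

        if (!s)
                return NULL;
        s->ref_count = 1;
        if (dynamic) {
                s->ref_count |= OWNS_NAME;
                s->name = strdup(name);
        } else {
                s->name = (char *)name; /* static string, never freed */
        }
        return s;
}

static void subsys_put(struct subsys *s)
{
        if (--s->ref_count & ~OWNS_NAME)
                return;                 /* still referenced */
        if (s->ref_count & OWNS_NAME)
                free(s->name);
        free(s);
}

int main(void)
{
        struct subsys *s = subsys_create("kprobes-demo", 1);

        if (!s)
                return 1;
        printf("refs=%u name=%s\n", refcount(s), s->name);
        subsys_put(s);                  /* frees both name and struct */
        return 0;
}

As in the patch, every read of the counter masks the flag off, and the decrement helper evaluates (--count) & ~flag so the flag bit never makes the count look non-zero.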
6601 +diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
6602 +index 8f2ac73c..322e1646 100644
6603 +--- a/kernel/trace/trace_syscalls.c
6604 ++++ b/kernel/trace/trace_syscalls.c
6605 +@@ -306,6 +306,8 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
6606 + struct syscall_metadata *sys_data;
6607 + struct ring_buffer_event *event;
6608 + struct ring_buffer *buffer;
6609 ++ unsigned long irq_flags;
6610 ++ int pc;
6611 + int syscall_nr;
6612 + int size;
6613 +
6614 +@@ -321,9 +323,12 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
6615 +
6616 + size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args;
6617 +
6618 ++ local_save_flags(irq_flags);
6619 ++ pc = preempt_count();
6620 ++
6621 + buffer = tr->trace_buffer.buffer;
6622 + event = trace_buffer_lock_reserve(buffer,
6623 +- sys_data->enter_event->event.type, size, 0, 0);
6624 ++ sys_data->enter_event->event.type, size, irq_flags, pc);
6625 + if (!event)
6626 + return;
6627 +
6628 +@@ -333,7 +338,8 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
6629 +
6630 + if (!filter_current_check_discard(buffer, sys_data->enter_event,
6631 + entry, event))
6632 +- trace_current_buffer_unlock_commit(buffer, event, 0, 0);
6633 ++ trace_current_buffer_unlock_commit(buffer, event,
6634 ++ irq_flags, pc);
6635 + }
6636 +
6637 + static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
6638 +@@ -343,6 +349,8 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
6639 + struct syscall_metadata *sys_data;
6640 + struct ring_buffer_event *event;
6641 + struct ring_buffer *buffer;
6642 ++ unsigned long irq_flags;
6643 ++ int pc;
6644 + int syscall_nr;
6645 +
6646 + syscall_nr = trace_get_syscall_nr(current, regs);
6647 +@@ -355,9 +363,13 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
6648 + if (!sys_data)
6649 + return;
6650 +
6651 ++ local_save_flags(irq_flags);
6652 ++ pc = preempt_count();
6653 ++
6654 + buffer = tr->trace_buffer.buffer;
6655 + event = trace_buffer_lock_reserve(buffer,
6656 +- sys_data->exit_event->event.type, sizeof(*entry), 0, 0);
6657 ++ sys_data->exit_event->event.type, sizeof(*entry),
6658 ++ irq_flags, pc);
6659 + if (!event)
6660 + return;
6661 +
6662 +@@ -367,7 +379,8 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
6663 +
6664 + if (!filter_current_check_discard(buffer, sys_data->exit_event,
6665 + entry, event))
6666 +- trace_current_buffer_unlock_commit(buffer, event, 0, 0);
6667 ++ trace_current_buffer_unlock_commit(buffer, event,
6668 ++ irq_flags, pc);
6669 + }
6670 +
6671 + static int reg_event_syscall_enter(struct ftrace_event_file *file,
6672 +diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
6673 +index 32494fb0..d5d0cd36 100644
6674 +--- a/kernel/trace/trace_uprobe.c
6675 ++++ b/kernel/trace/trace_uprobe.c
6676 +@@ -283,8 +283,10 @@ static int create_trace_uprobe(int argc, char **argv)
6677 + return -EINVAL;
6678 + }
6679 + arg = strchr(argv[1], ':');
6680 +- if (!arg)
6681 ++ if (!arg) {
6682 ++ ret = -EINVAL;
6683 + goto fail_address_parse;
6684 ++ }
6685 +
6686 + *arg++ = '\0';
6687 + filename = argv[1];
6688 +diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
6689 +index 98d20c0f..514e90f4 100644
6690 +--- a/net/mac80211/iface.c
6691 ++++ b/net/mac80211/iface.c
6692 +@@ -1726,6 +1726,15 @@ void ieee80211_remove_interfaces(struct ieee80211_local *local)
6693 + if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
6694 + dev_close(sdata->dev);
6695 +
6696 ++ /*
6697 ++ * Close all AP_VLAN interfaces first, as otherwise they
6698 ++ * might be closed while the AP interface they belong to
6699 ++ * is closed, causing unregister_netdevice_many() to crash.
6700 ++ */
6701 ++ list_for_each_entry(sdata, &local->interfaces, list)
6702 ++ if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
6703 ++ dev_close(sdata->dev);
6704 ++
6705 + mutex_lock(&local->iflist_mtx);
6706 + list_for_each_entry_safe(sdata, tmp, &local->interfaces, list) {
6707 + list_del(&sdata->list);
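The mac80211 hunk above closes all AP_VLAN interfaces in a first pass before anything else is torn down, because a VLAN must not outlive the AP interface it hangs off. The same "children first, then parents" teardown order can be shown in plain C; the iface struct and function names below are made up for illustration.

#include <stdio.h>

struct iface {
        const char *name;
        struct iface *parent;           /* NULL for a top-level interface */
        int closed;
};

static void iface_close(struct iface *i)
{
        if (!i->closed) {
                i->closed = 1;
                printf("closing %s\n", i->name);
        }
}

static void close_all(struct iface **list, int n)
{
        int i;

        /* Pass 1: close every child so no parent is pulled out from under it. */
        for (i = 0; i < n; i++)
                if (list[i]->parent)
                        iface_close(list[i]);

        /* Pass 2: close the remaining (parent / top-level) interfaces. */
        for (i = 0; i < n; i++)
                iface_close(list[i]);
}

int main(void)
{
        struct iface ap     = { .name = "wlan0" };
        struct iface vlan   = { .name = "wlan0.1", .parent = &ap };
        struct iface *list[] = { &ap, &vlan };      /* arbitrary order */

        close_all(list, 2);
        return 0;
}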
6708 +diff --git a/net/sunrpc/svcauth_unix.c b/net/sunrpc/svcauth_unix.c
6709 +index 06bdf5a1..1583c8a4 100644
6710 +--- a/net/sunrpc/svcauth_unix.c
6711 ++++ b/net/sunrpc/svcauth_unix.c
6712 +@@ -493,8 +493,6 @@ static int unix_gid_parse(struct cache_detail *cd,
6713 + if (rv)
6714 + return -EINVAL;
6715 + uid = make_kuid(&init_user_ns, id);
6716 +- if (!uid_valid(uid))
6717 +- return -EINVAL;
6718 + ug.uid = uid;
6719 +
6720 + expiry = get_expiry(&mesg);
6721 +diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
6722 +index 0f679df7..305374d4 100644
6723 +--- a/net/sunrpc/svcsock.c
6724 ++++ b/net/sunrpc/svcsock.c
6725 +@@ -917,7 +917,10 @@ static void svc_tcp_clear_pages(struct svc_sock *svsk)
6726 + len = svsk->sk_datalen;
6727 + npages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
6728 + for (i = 0; i < npages; i++) {
6729 +- BUG_ON(svsk->sk_pages[i] == NULL);
6730 ++ if (svsk->sk_pages[i] == NULL) {
6731 ++ WARN_ON_ONCE(1);
6732 ++ continue;
6733 ++ }
6734 + put_page(svsk->sk_pages[i]);
6735 + svsk->sk_pages[i] = NULL;
6736 + }
6737 +@@ -1092,8 +1095,10 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
6738 + goto err_noclose;
6739 + }
6740 +
6741 +- if (svc_sock_reclen(svsk) < 8)
6742 ++ if (svsk->sk_datalen < 8) {
6743 ++ svsk->sk_datalen = 0;
6744 + goto err_delete; /* client is nuts. */
6745 ++ }
6746 +
6747 + rqstp->rq_arg.len = svsk->sk_datalen;
6748 + rqstp->rq_arg.page_base = 0;
6749 +diff --git a/sound/arm/pxa2xx-pcm-lib.c b/sound/arm/pxa2xx-pcm-lib.c
6750 +index 76e0d569..823359ed 100644
6751 +--- a/sound/arm/pxa2xx-pcm-lib.c
6752 ++++ b/sound/arm/pxa2xx-pcm-lib.c
6753 +@@ -166,7 +166,9 @@ void pxa2xx_pcm_dma_irq(int dma_ch, void *dev_id)
6754 + } else {
6755 + printk(KERN_ERR "%s: DMA error on channel %d (DCSR=%#x)\n",
6756 + rtd->params->name, dma_ch, dcsr);
6757 ++ snd_pcm_stream_lock(substream);
6758 + snd_pcm_stop(substream, SNDRV_PCM_STATE_XRUN);
6759 ++ snd_pcm_stream_unlock(substream);
6760 + }
6761 + }
6762 + EXPORT_SYMBOL(pxa2xx_pcm_dma_irq);
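This hunk and the later ALSA hunks in this patch all wrap snd_pcm_stop() in snd_pcm_stream_lock()/unlock() (or the _irqsave variants when called from interrupt context), because snd_pcm_stop() expects the stream lock to be held and these error/abort paths, unlike the normal trigger callback, were calling it without the lock. A reduced stand-alone pthread sketch of the pattern; stream_stop_locked/stream_xrun are illustrative names.

#include <pthread.h>
#include <stdio.h>

struct stream {
        pthread_mutex_t lock;
        int state;                      /* 0 = running, 1 = stopped (XRUN) */
};

/* Must be called with s->lock held, like snd_pcm_stop(). */
static void stream_stop_locked(struct stream *s)
{
        s->state = 1;
}

/* Error path (e.g. a DMA error handler) that is not the trigger
 * callback, so it has to take the stream lock itself. */
static void stream_xrun(struct stream *s)
{
        pthread_mutex_lock(&s->lock);
        stream_stop_locked(s);
        pthread_mutex_unlock(&s->lock);
}

int main(void)
{
        struct stream s = { .lock = PTHREAD_MUTEX_INITIALIZER, .state = 0 };

        stream_xrun(&s);
        printf("state=%d\n", s.state);
        return 0;
}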
6763 +diff --git a/sound/core/seq/oss/seq_oss_init.c b/sound/core/seq/oss/seq_oss_init.c
6764 +index e3cb46fe..b3f39b5e 100644
6765 +--- a/sound/core/seq/oss/seq_oss_init.c
6766 ++++ b/sound/core/seq/oss/seq_oss_init.c
6767 +@@ -31,6 +31,7 @@
6768 + #include <linux/export.h>
6769 + #include <linux/moduleparam.h>
6770 + #include <linux/slab.h>
6771 ++#include <linux/workqueue.h>
6772 +
6773 + /*
6774 + * common variables
6775 +@@ -60,6 +61,14 @@ static void free_devinfo(void *private);
6776 + #define call_ctl(type,rec) snd_seq_kernel_client_ctl(system_client, type, rec)
6777 +
6778 +
6779 ++/* call snd_seq_oss_midi_lookup_ports() asynchronously */
6780 ++static void async_call_lookup_ports(struct work_struct *work)
6781 ++{
6782 ++ snd_seq_oss_midi_lookup_ports(system_client);
6783 ++}
6784 ++
6785 ++static DECLARE_WORK(async_lookup_work, async_call_lookup_ports);
6786 ++
6787 + /*
6788 + * create sequencer client for OSS sequencer
6789 + */
6790 +@@ -85,9 +94,6 @@ snd_seq_oss_create_client(void)
6791 + system_client = rc;
6792 + debug_printk(("new client = %d\n", rc));
6793 +
6794 +- /* look up midi devices */
6795 +- snd_seq_oss_midi_lookup_ports(system_client);
6796 +-
6797 + /* create annoucement receiver port */
6798 + memset(port, 0, sizeof(*port));
6799 + strcpy(port->name, "Receiver");
6800 +@@ -115,6 +121,9 @@ snd_seq_oss_create_client(void)
6801 + }
6802 + rc = 0;
6803 +
6804 ++ /* look up midi devices */
6805 ++ schedule_work(&async_lookup_work);
6806 ++
6807 + __error:
6808 + kfree(port);
6809 + return rc;
6810 +@@ -160,6 +169,7 @@ receive_announce(struct snd_seq_event *ev, int direct, void *private, int atomic
6811 + int
6812 + snd_seq_oss_delete_client(void)
6813 + {
6814 ++ cancel_work_sync(&async_lookup_work);
6815 + if (system_client >= 0)
6816 + snd_seq_delete_kernel_client(system_client);
6817 +
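In the seq_oss_init.c hunk above, snd_seq_oss_create_client() no longer scans for MIDI ports inline; the scan is pushed onto a workqueue item and cancelled with cancel_work_sync() when the client is deleted, so client creation does not block on a potentially slow enumeration. A user-space analogue with a worker thread that is joined on teardown; all names below are illustrative.

#include <pthread.h>
#include <stdio.h>

static pthread_t lookup_thread;
static int lookup_started;

/* Slow enumeration that used to run inline during client creation. */
static void *lookup_ports(void *arg)
{
        (void)arg;
        printf("scanning MIDI ports in the background\n");
        return NULL;
}

static int create_client(void)
{
        /* ... fast setup work here ... */
        if (pthread_create(&lookup_thread, NULL, lookup_ports, NULL) == 0)
                lookup_started = 1;
        return 0;
}

static void delete_client(void)
{
        /* Analogue of cancel_work_sync(): never tear the client down
         * while the deferred scan may still be running. */
        if (lookup_started)
                pthread_join(lookup_thread, NULL);
        /* ... release client resources here ... */
}

int main(void)
{
        create_client();
        delete_client();
        return 0;
}

The follow-up hunk in seq_oss_midi.c drops __init from snd_seq_oss_midi_lookup_ports() for the same reason: once the call is deferred, it can run after init sections have been discarded.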
6818 +diff --git a/sound/core/seq/oss/seq_oss_midi.c b/sound/core/seq/oss/seq_oss_midi.c
6819 +index 677dc845..862d8489 100644
6820 +--- a/sound/core/seq/oss/seq_oss_midi.c
6821 ++++ b/sound/core/seq/oss/seq_oss_midi.c
6822 +@@ -72,7 +72,7 @@ static int send_midi_event(struct seq_oss_devinfo *dp, struct snd_seq_event *ev,
6823 + * look up the existing ports
6824 + * this looks a very exhausting job.
6825 + */
6826 +-int __init
6827 ++int
6828 + snd_seq_oss_midi_lookup_ports(int client)
6829 + {
6830 + struct snd_seq_client_info *clinfo;
6831 +diff --git a/sound/pci/asihpi/asihpi.c b/sound/pci/asihpi/asihpi.c
6832 +index fbc17203..a471d821 100644
6833 +--- a/sound/pci/asihpi/asihpi.c
6834 ++++ b/sound/pci/asihpi/asihpi.c
6835 +@@ -769,7 +769,10 @@ static void snd_card_asihpi_timer_function(unsigned long data)
6836 + s->number);
6837 + ds->drained_count++;
6838 + if (ds->drained_count > 20) {
6839 ++ unsigned long flags;
6840 ++ snd_pcm_stream_lock_irqsave(s, flags);
6841 + snd_pcm_stop(s, SNDRV_PCM_STATE_XRUN);
6842 ++ snd_pcm_stream_unlock_irqrestore(s, flags);
6843 + continue;
6844 + }
6845 + } else {
6846 +diff --git a/sound/pci/atiixp.c b/sound/pci/atiixp.c
6847 +index 6e78c678..819430ac 100644
6848 +--- a/sound/pci/atiixp.c
6849 ++++ b/sound/pci/atiixp.c
6850 +@@ -689,7 +689,9 @@ static void snd_atiixp_xrun_dma(struct atiixp *chip, struct atiixp_dma *dma)
6851 + if (! dma->substream || ! dma->running)
6852 + return;
6853 + snd_printdd("atiixp: XRUN detected (DMA %d)\n", dma->ops->type);
6854 ++ snd_pcm_stream_lock(dma->substream);
6855 + snd_pcm_stop(dma->substream, SNDRV_PCM_STATE_XRUN);
6856 ++ snd_pcm_stream_unlock(dma->substream);
6857 + }
6858 +
6859 + /*
6860 +diff --git a/sound/pci/atiixp_modem.c b/sound/pci/atiixp_modem.c
6861 +index d0bec7ba..57f41820 100644
6862 +--- a/sound/pci/atiixp_modem.c
6863 ++++ b/sound/pci/atiixp_modem.c
6864 +@@ -638,7 +638,9 @@ static void snd_atiixp_xrun_dma(struct atiixp_modem *chip,
6865 + if (! dma->substream || ! dma->running)
6866 + return;
6867 + snd_printdd("atiixp-modem: XRUN detected (DMA %d)\n", dma->ops->type);
6868 ++ snd_pcm_stream_lock(dma->substream);
6869 + snd_pcm_stop(dma->substream, SNDRV_PCM_STATE_XRUN);
6870 ++ snd_pcm_stream_unlock(dma->substream);
6871 + }
6872 +
6873 + /*
6874 +diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
6875 +index 4b1524a8..24400cff 100644
6876 +--- a/sound/pci/hda/hda_generic.c
6877 ++++ b/sound/pci/hda/hda_generic.c
6878 +@@ -840,7 +840,7 @@ static int add_control_with_pfx(struct hda_gen_spec *spec, int type,
6879 + const char *pfx, const char *dir,
6880 + const char *sfx, int cidx, unsigned long val)
6881 + {
6882 +- char name[32];
6883 ++ char name[44];
6884 + snprintf(name, sizeof(name), "%s %s %s", pfx, dir, sfx);
6885 + if (!add_control(spec, type, name, cidx, val))
6886 + return -ENOMEM;
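The one-line hda_generic.c change above grows the control-name buffer from 32 to 44 bytes because the "%s %s %s" built from prefix, direction and suffix can exceed 32 characters and would be silently truncated. snprintf() returns the length it would have needed, which is an easy way to detect (or size for) such truncation; a minimal sketch, with a made-up control name for illustration:

#include <stdio.h>

int main(void)
{
        char name[44];
        int n = snprintf(name, sizeof(name), "%s %s %s",
                         "Headphone Surround Speaker", "Playback", "Volume");

        if (n < 0 || (size_t)n >= sizeof(name))
                fprintf(stderr, "control name truncated (needed %d bytes)\n", n);
        else
                printf("control: %s (%d bytes)\n", name, n);
        return 0;
}

Here the formatted string is 42 bytes plus the terminator, so it fits in 44 but would have been cut short in the old 32-byte buffer.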
6887 +diff --git a/sound/pci/hda/hda_local.h b/sound/pci/hda/hda_local.h
6888 +index e0bf7534..2e7493ef 100644
6889 +--- a/sound/pci/hda/hda_local.h
6890 ++++ b/sound/pci/hda/hda_local.h
6891 +@@ -562,6 +562,14 @@ static inline unsigned int get_wcaps_channels(u32 wcaps)
6892 + return chans;
6893 + }
6894 +
6895 ++static inline void snd_hda_override_wcaps(struct hda_codec *codec,
6896 ++ hda_nid_t nid, u32 val)
6897 ++{
6898 ++ if (nid >= codec->start_nid &&
6899 ++ nid < codec->start_nid + codec->num_nodes)
6900 ++ codec->wcaps[nid - codec->start_nid] = val;
6901 ++}
6902 ++
6903 + u32 query_amp_caps(struct hda_codec *codec, hda_nid_t nid, int direction);
6904 + int snd_hda_override_amp_caps(struct hda_codec *codec, hda_nid_t nid, int dir,
6905 + unsigned int caps);
6906 +@@ -667,7 +675,7 @@ snd_hda_check_power_state(struct hda_codec *codec, hda_nid_t nid,
6907 + if (state & AC_PWRST_ERROR)
6908 + return true;
6909 + state = (state >> 4) & 0x0f;
6910 +- return (state != target_state);
6911 ++ return (state == target_state);
6912 + }
6913 +
6914 + unsigned int snd_hda_codec_eapd_power_filter(struct hda_codec *codec,
6915 +diff --git a/sound/pci/hda/patch_analog.c b/sound/pci/hda/patch_analog.c
6916 +index 977b0d87..d97f0d61 100644
6917 +--- a/sound/pci/hda/patch_analog.c
6918 ++++ b/sound/pci/hda/patch_analog.c
6919 +@@ -2112,6 +2112,9 @@ static void ad_vmaster_eapd_hook(void *private_data, int enabled)
6920 + {
6921 + struct hda_codec *codec = private_data;
6922 + struct ad198x_spec *spec = codec->spec;
6923 ++
6924 ++ if (!spec->eapd_nid)
6925 ++ return;
6926 + snd_hda_codec_update_cache(codec, spec->eapd_nid, 0,
6927 + AC_VERB_SET_EAPD_BTLENABLE,
6928 + enabled ? 0x02 : 0x00);
6929 +@@ -3601,13 +3604,16 @@ static void ad1884_fixup_hp_eapd(struct hda_codec *codec,
6930 + {
6931 + struct ad198x_spec *spec = codec->spec;
6932 +
6933 +- if (action == HDA_FIXUP_ACT_PRE_PROBE) {
6934 ++ switch (action) {
6935 ++ case HDA_FIXUP_ACT_PRE_PROBE:
6936 ++ spec->gen.vmaster_mute.hook = ad_vmaster_eapd_hook;
6937 ++ break;
6938 ++ case HDA_FIXUP_ACT_PROBE:
6939 + if (spec->gen.autocfg.line_out_type == AUTO_PIN_SPEAKER_OUT)
6940 + spec->eapd_nid = spec->gen.autocfg.line_out_pins[0];
6941 + else
6942 + spec->eapd_nid = spec->gen.autocfg.speaker_pins[0];
6943 +- if (spec->eapd_nid)
6944 +- spec->gen.vmaster_mute.hook = ad_vmaster_eapd_hook;
6945 ++ break;
6946 + }
6947 + }
6948 +
6949 +diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
6950 +index e12f7a03..496d7f21 100644
6951 +--- a/sound/pci/hda/patch_hdmi.c
6952 ++++ b/sound/pci/hda/patch_hdmi.c
6953 +@@ -1146,7 +1146,7 @@ static int hdmi_pcm_open(struct hda_pcm_stream *hinfo,
6954 + per_cvt->assigned = 1;
6955 + hinfo->nid = per_cvt->cvt_nid;
6956 +
6957 +- snd_hda_codec_write(codec, per_pin->pin_nid, 0,
6958 ++ snd_hda_codec_write_cache(codec, per_pin->pin_nid, 0,
6959 + AC_VERB_SET_CONNECT_SEL,
6960 + mux_idx);
6961 + snd_hda_spdif_ctls_assign(codec, pin_idx, per_cvt->cvt_nid);
6962 +@@ -2536,6 +2536,7 @@ static const struct hda_codec_preset snd_hda_preset_hdmi[] = {
6963 + { .id = 0x10de0043, .name = "GPU 43 HDMI/DP", .patch = patch_generic_hdmi },
6964 + { .id = 0x10de0044, .name = "GPU 44 HDMI/DP", .patch = patch_generic_hdmi },
6965 + { .id = 0x10de0051, .name = "GPU 51 HDMI/DP", .patch = patch_generic_hdmi },
6966 ++{ .id = 0x10de0060, .name = "GPU 60 HDMI/DP", .patch = patch_generic_hdmi },
6967 + { .id = 0x10de0067, .name = "MCP67 HDMI", .patch = patch_nvhdmi_2ch },
6968 + { .id = 0x10de8001, .name = "MCP73 HDMI", .patch = patch_nvhdmi_2ch },
6969 + { .id = 0x11069f80, .name = "VX900 HDMI/DP", .patch = patch_via_hdmi },
6970 +@@ -2588,6 +2589,7 @@ MODULE_ALIAS("snd-hda-codec-id:10de0042");
6971 + MODULE_ALIAS("snd-hda-codec-id:10de0043");
6972 + MODULE_ALIAS("snd-hda-codec-id:10de0044");
6973 + MODULE_ALIAS("snd-hda-codec-id:10de0051");
6974 ++MODULE_ALIAS("snd-hda-codec-id:10de0060");
6975 + MODULE_ALIAS("snd-hda-codec-id:10de0067");
6976 + MODULE_ALIAS("snd-hda-codec-id:10de8001");
6977 + MODULE_ALIAS("snd-hda-codec-id:11069f80");
6978 +diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
6979 +index e5245544..aed19c3f 100644
6980 +--- a/sound/pci/hda/patch_via.c
6981 ++++ b/sound/pci/hda/patch_via.c
6982 +@@ -910,6 +910,8 @@ static const struct hda_verb vt1708S_init_verbs[] = {
6983 + static void override_mic_boost(struct hda_codec *codec, hda_nid_t pin,
6984 + int offset, int num_steps, int step_size)
6985 + {
6986 ++ snd_hda_override_wcaps(codec, pin,
6987 ++ get_wcaps(codec, pin) | AC_WCAP_IN_AMP);
6988 + snd_hda_override_amp_caps(codec, pin, HDA_INPUT,
6989 + (offset << AC_AMPCAP_OFFSET_SHIFT) |
6990 + (num_steps << AC_AMPCAP_NUM_STEPS_SHIFT) |
6991 +diff --git a/sound/soc/atmel/atmel-pcm-dma.c b/sound/soc/atmel/atmel-pcm-dma.c
6992 +index 1d38fd0b..d1282652 100644
6993 +--- a/sound/soc/atmel/atmel-pcm-dma.c
6994 ++++ b/sound/soc/atmel/atmel-pcm-dma.c
6995 +@@ -81,7 +81,9 @@ static void atmel_pcm_dma_irq(u32 ssc_sr,
6996 +
6997 + /* stop RX and capture: will be enabled again at restart */
6998 + ssc_writex(prtd->ssc->regs, SSC_CR, prtd->mask->ssc_disable);
6999 ++ snd_pcm_stream_lock(substream);
7000 + snd_pcm_stop(substream, SNDRV_PCM_STATE_XRUN);
7001 ++ snd_pcm_stream_unlock(substream);
7002 +
7003 + /* now drain RHR and read status to remove xrun condition */
7004 + ssc_readx(prtd->ssc->regs, SSC_RHR);
7005 +diff --git a/sound/soc/codecs/sgtl5000.h b/sound/soc/codecs/sgtl5000.h
7006 +index 8a9f4353..d3a68bbf 100644
7007 +--- a/sound/soc/codecs/sgtl5000.h
7008 ++++ b/sound/soc/codecs/sgtl5000.h
7009 +@@ -347,7 +347,7 @@
7010 + #define SGTL5000_PLL_INT_DIV_MASK 0xf800
7011 + #define SGTL5000_PLL_INT_DIV_SHIFT 11
7012 + #define SGTL5000_PLL_INT_DIV_WIDTH 5
7013 +-#define SGTL5000_PLL_FRAC_DIV_MASK 0x0700
7014 ++#define SGTL5000_PLL_FRAC_DIV_MASK 0x07ff
7015 + #define SGTL5000_PLL_FRAC_DIV_SHIFT 0
7016 + #define SGTL5000_PLL_FRAC_DIV_WIDTH 11
7017 +
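The sgtl5000.h fix above widens SGTL5000_PLL_FRAC_DIV_MASK from 0x0700 to 0x07ff so it actually covers the 11-bit fractional-divider field declared right next to it (shift 0, width 11). Deriving a mask from its shift and width avoids that class of typo; the FIELD_MASK macro below is illustrative and is not something the driver itself defines.

#include <stdio.h>

/* Build a register-field mask from its shift and bit width. */
#define FIELD_MASK(shift, width) (((1u << (width)) - 1u) << (shift))

int main(void)
{
        /* 11-bit fractional divider at bit 0, 5-bit integer divider at bit 11 */
        printf("FRAC_DIV mask = %#06x\n", FIELD_MASK(0, 11));   /* 0x07ff */
        printf("INT_DIV  mask = %#06x\n", FIELD_MASK(11, 5));   /* 0xf800 */
        return 0;
}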
7018 +diff --git a/sound/soc/s6000/s6000-pcm.c b/sound/soc/s6000/s6000-pcm.c
7019 +index 1358c7de..d0740a76 100644
7020 +--- a/sound/soc/s6000/s6000-pcm.c
7021 ++++ b/sound/soc/s6000/s6000-pcm.c
7022 +@@ -128,7 +128,9 @@ static irqreturn_t s6000_pcm_irq(int irq, void *data)
7023 + substream->runtime &&
7024 + snd_pcm_running(substream)) {
7025 + dev_dbg(pcm->dev, "xrun\n");
7026 ++ snd_pcm_stream_lock(substream);
7027 + snd_pcm_stop(substream, SNDRV_PCM_STATE_XRUN);
7028 ++ snd_pcm_stream_unlock(substream);
7029 + ret = IRQ_HANDLED;
7030 + }
7031 +
7032 +diff --git a/sound/usb/6fire/pcm.c b/sound/usb/6fire/pcm.c
7033 +index 40dd50a8..8221ff2f 100644
7034 +--- a/sound/usb/6fire/pcm.c
7035 ++++ b/sound/usb/6fire/pcm.c
7036 +@@ -641,17 +641,25 @@ int usb6fire_pcm_init(struct sfire_chip *chip)
7037 + void usb6fire_pcm_abort(struct sfire_chip *chip)
7038 + {
7039 + struct pcm_runtime *rt = chip->pcm;
7040 ++ unsigned long flags;
7041 + int i;
7042 +
7043 + if (rt) {
7044 + rt->panic = true;
7045 +
7046 +- if (rt->playback.instance)
7047 ++ if (rt->playback.instance) {
7048 ++ snd_pcm_stream_lock_irqsave(rt->playback.instance, flags);
7049 + snd_pcm_stop(rt->playback.instance,
7050 + SNDRV_PCM_STATE_XRUN);
7051 +- if (rt->capture.instance)
7052 ++ snd_pcm_stream_unlock_irqrestore(rt->playback.instance, flags);
7053 ++ }
7054 ++
7055 ++ if (rt->capture.instance) {
7056 ++ snd_pcm_stream_lock_irqsave(rt->capture.instance, flags);
7057 + snd_pcm_stop(rt->capture.instance,
7058 + SNDRV_PCM_STATE_XRUN);
7059 ++ snd_pcm_stream_unlock_irqrestore(rt->capture.instance, flags);
7060 ++ }
7061 +
7062 + for (i = 0; i < PCM_N_URBS; i++) {
7063 + usb_poison_urb(&rt->in_urbs[i].instance);
7064 +diff --git a/sound/usb/misc/ua101.c b/sound/usb/misc/ua101.c
7065 +index 6ad617b9..76d83290 100644
7066 +--- a/sound/usb/misc/ua101.c
7067 ++++ b/sound/usb/misc/ua101.c
7068 +@@ -613,14 +613,24 @@ static int start_usb_playback(struct ua101 *ua)
7069 +
7070 + static void abort_alsa_capture(struct ua101 *ua)
7071 + {
7072 +- if (test_bit(ALSA_CAPTURE_RUNNING, &ua->states))
7073 ++ unsigned long flags;
7074 ++
7075 ++ if (test_bit(ALSA_CAPTURE_RUNNING, &ua->states)) {
7076 ++ snd_pcm_stream_lock_irqsave(ua->capture.substream, flags);
7077 + snd_pcm_stop(ua->capture.substream, SNDRV_PCM_STATE_XRUN);
7078 ++ snd_pcm_stream_unlock_irqrestore(ua->capture.substream, flags);
7079 ++ }
7080 + }
7081 +
7082 + static void abort_alsa_playback(struct ua101 *ua)
7083 + {
7084 +- if (test_bit(ALSA_PLAYBACK_RUNNING, &ua->states))
7085 ++ unsigned long flags;
7086 ++
7087 ++ if (test_bit(ALSA_PLAYBACK_RUNNING, &ua->states)) {
7088 ++ snd_pcm_stream_lock_irqsave(ua->playback.substream, flags);
7089 + snd_pcm_stop(ua->playback.substream, SNDRV_PCM_STATE_XRUN);
7090 ++ snd_pcm_stream_unlock_irqrestore(ua->playback.substream, flags);
7091 ++ }
7092 + }
7093 +
7094 + static int set_stream_hw(struct ua101 *ua, struct snd_pcm_substream *substream,
7095 +diff --git a/sound/usb/usx2y/usbusx2yaudio.c b/sound/usb/usx2y/usbusx2yaudio.c
7096 +index b3765324..0ce90337 100644
7097 +--- a/sound/usb/usx2y/usbusx2yaudio.c
7098 ++++ b/sound/usb/usx2y/usbusx2yaudio.c
7099 +@@ -273,7 +273,11 @@ static void usX2Y_clients_stop(struct usX2Ydev *usX2Y)
7100 + struct snd_usX2Y_substream *subs = usX2Y->subs[s];
7101 + if (subs) {
7102 + if (atomic_read(&subs->state) >= state_PRERUNNING) {
7103 ++ unsigned long flags;
7104 ++
7105 ++ snd_pcm_stream_lock_irqsave(subs->pcm_substream, flags);
7106 + snd_pcm_stop(subs->pcm_substream, SNDRV_PCM_STATE_XRUN);
7107 ++ snd_pcm_stream_unlock_irqrestore(subs->pcm_substream, flags);
7108 + }
7109 + for (u = 0; u < NRURBS; u++) {
7110 + struct urb *urb = subs->urb[u];
7111
7112 Added: genpatches-2.6/trunk/3.10.7/1003_linux-3.10.4.patch
7113 ===================================================================
7114 --- genpatches-2.6/trunk/3.10.7/1003_linux-3.10.4.patch (rev 0)
7115 +++ genpatches-2.6/trunk/3.10.7/1003_linux-3.10.4.patch 2013-08-29 12:09:12 UTC (rev 2497)
7116 @@ -0,0 +1,3086 @@
7117 +diff --git a/Makefile b/Makefile
7118 +index b548552..b4df9b2 100644
7119 +--- a/Makefile
7120 ++++ b/Makefile
7121 +@@ -1,6 +1,6 @@
7122 + VERSION = 3
7123 + PATCHLEVEL = 10
7124 +-SUBLEVEL = 3
7125 ++SUBLEVEL = 4
7126 + EXTRAVERSION =
7127 + NAME = Unicycling Gorilla
7128 +
7129 +diff --git a/arch/arm/mach-footbridge/dc21285.c b/arch/arm/mach-footbridge/dc21285.c
7130 +index a7cd2cf..3490a24 100644
7131 +--- a/arch/arm/mach-footbridge/dc21285.c
7132 ++++ b/arch/arm/mach-footbridge/dc21285.c
7133 +@@ -276,8 +276,6 @@ int __init dc21285_setup(int nr, struct pci_sys_data *sys)
7134 +
7135 + sys->mem_offset = DC21285_PCI_MEM;
7136 +
7137 +- pci_ioremap_io(0, DC21285_PCI_IO);
7138 +-
7139 + pci_add_resource_offset(&sys->resources, &res[0], sys->mem_offset);
7140 + pci_add_resource_offset(&sys->resources, &res[1], sys->mem_offset);
7141 +
7142 +diff --git a/arch/arm/mach-s3c24xx/clock-s3c2410.c b/arch/arm/mach-s3c24xx/clock-s3c2410.c
7143 +index 34fffdf..5645536 100644
7144 +--- a/arch/arm/mach-s3c24xx/clock-s3c2410.c
7145 ++++ b/arch/arm/mach-s3c24xx/clock-s3c2410.c
7146 +@@ -119,66 +119,101 @@ static struct clk init_clocks_off[] = {
7147 + }
7148 + };
7149 +
7150 +-static struct clk init_clocks[] = {
7151 +- {
7152 +- .name = "lcd",
7153 +- .parent = &clk_h,
7154 +- .enable = s3c2410_clkcon_enable,
7155 +- .ctrlbit = S3C2410_CLKCON_LCDC,
7156 +- }, {
7157 +- .name = "gpio",
7158 +- .parent = &clk_p,
7159 +- .enable = s3c2410_clkcon_enable,
7160 +- .ctrlbit = S3C2410_CLKCON_GPIO,
7161 +- }, {
7162 +- .name = "usb-host",
7163 +- .parent = &clk_h,
7164 +- .enable = s3c2410_clkcon_enable,
7165 +- .ctrlbit = S3C2410_CLKCON_USBH,
7166 +- }, {
7167 +- .name = "usb-device",
7168 +- .parent = &clk_h,
7169 +- .enable = s3c2410_clkcon_enable,
7170 +- .ctrlbit = S3C2410_CLKCON_USBD,
7171 +- }, {
7172 +- .name = "timers",
7173 +- .parent = &clk_p,
7174 +- .enable = s3c2410_clkcon_enable,
7175 +- .ctrlbit = S3C2410_CLKCON_PWMT,
7176 +- }, {
7177 +- .name = "uart",
7178 +- .devname = "s3c2410-uart.0",
7179 +- .parent = &clk_p,
7180 +- .enable = s3c2410_clkcon_enable,
7181 +- .ctrlbit = S3C2410_CLKCON_UART0,
7182 +- }, {
7183 +- .name = "uart",
7184 +- .devname = "s3c2410-uart.1",
7185 +- .parent = &clk_p,
7186 +- .enable = s3c2410_clkcon_enable,
7187 +- .ctrlbit = S3C2410_CLKCON_UART1,
7188 +- }, {
7189 +- .name = "uart",
7190 +- .devname = "s3c2410-uart.2",
7191 +- .parent = &clk_p,
7192 +- .enable = s3c2410_clkcon_enable,
7193 +- .ctrlbit = S3C2410_CLKCON_UART2,
7194 +- }, {
7195 +- .name = "rtc",
7196 +- .parent = &clk_p,
7197 +- .enable = s3c2410_clkcon_enable,
7198 +- .ctrlbit = S3C2410_CLKCON_RTC,
7199 +- }, {
7200 +- .name = "watchdog",
7201 +- .parent = &clk_p,
7202 +- .ctrlbit = 0,
7203 +- }, {
7204 +- .name = "usb-bus-host",
7205 +- .parent = &clk_usb_bus,
7206 +- }, {
7207 +- .name = "usb-bus-gadget",
7208 +- .parent = &clk_usb_bus,
7209 +- },
7210 ++static struct clk clk_lcd = {
7211 ++ .name = "lcd",
7212 ++ .parent = &clk_h,
7213 ++ .enable = s3c2410_clkcon_enable,
7214 ++ .ctrlbit = S3C2410_CLKCON_LCDC,
7215 ++};
7216 ++
7217 ++static struct clk clk_gpio = {
7218 ++ .name = "gpio",
7219 ++ .parent = &clk_p,
7220 ++ .enable = s3c2410_clkcon_enable,
7221 ++ .ctrlbit = S3C2410_CLKCON_GPIO,
7222 ++};
7223 ++
7224 ++static struct clk clk_usb_host = {
7225 ++ .name = "usb-host",
7226 ++ .parent = &clk_h,
7227 ++ .enable = s3c2410_clkcon_enable,
7228 ++ .ctrlbit = S3C2410_CLKCON_USBH,
7229 ++};
7230 ++
7231 ++static struct clk clk_usb_device = {
7232 ++ .name = "usb-device",
7233 ++ .parent = &clk_h,
7234 ++ .enable = s3c2410_clkcon_enable,
7235 ++ .ctrlbit = S3C2410_CLKCON_USBD,
7236 ++};
7237 ++
7238 ++static struct clk clk_timers = {
7239 ++ .name = "timers",
7240 ++ .parent = &clk_p,
7241 ++ .enable = s3c2410_clkcon_enable,
7242 ++ .ctrlbit = S3C2410_CLKCON_PWMT,
7243 ++};
7244 ++
7245 ++struct clk s3c24xx_clk_uart0 = {
7246 ++ .name = "uart",
7247 ++ .devname = "s3c2410-uart.0",
7248 ++ .parent = &clk_p,
7249 ++ .enable = s3c2410_clkcon_enable,
7250 ++ .ctrlbit = S3C2410_CLKCON_UART0,
7251 ++};
7252 ++
7253 ++struct clk s3c24xx_clk_uart1 = {
7254 ++ .name = "uart",
7255 ++ .devname = "s3c2410-uart.1",
7256 ++ .parent = &clk_p,
7257 ++ .enable = s3c2410_clkcon_enable,
7258 ++ .ctrlbit = S3C2410_CLKCON_UART1,
7259 ++};
7260 ++
7261 ++struct clk s3c24xx_clk_uart2 = {
7262 ++ .name = "uart",
7263 ++ .devname = "s3c2410-uart.2",
7264 ++ .parent = &clk_p,
7265 ++ .enable = s3c2410_clkcon_enable,
7266 ++ .ctrlbit = S3C2410_CLKCON_UART2,
7267 ++};
7268 ++
7269 ++static struct clk clk_rtc = {
7270 ++ .name = "rtc",
7271 ++ .parent = &clk_p,
7272 ++ .enable = s3c2410_clkcon_enable,
7273 ++ .ctrlbit = S3C2410_CLKCON_RTC,
7274 ++};
7275 ++
7276 ++static struct clk clk_watchdog = {
7277 ++ .name = "watchdog",
7278 ++ .parent = &clk_p,
7279 ++ .ctrlbit = 0,
7280 ++};
7281 ++
7282 ++static struct clk clk_usb_bus_host = {
7283 ++ .name = "usb-bus-host",
7284 ++ .parent = &clk_usb_bus,
7285 ++};
7286 ++
7287 ++static struct clk clk_usb_bus_gadget = {
7288 ++ .name = "usb-bus-gadget",
7289 ++ .parent = &clk_usb_bus,
7290 ++};
7291 ++
7292 ++static struct clk *init_clocks[] = {
7293 ++ &clk_lcd,
7294 ++ &clk_gpio,
7295 ++ &clk_usb_host,
7296 ++ &clk_usb_device,
7297 ++ &clk_timers,
7298 ++ &s3c24xx_clk_uart0,
7299 ++ &s3c24xx_clk_uart1,
7300 ++ &s3c24xx_clk_uart2,
7301 ++ &clk_rtc,
7302 ++ &clk_watchdog,
7303 ++ &clk_usb_bus_host,
7304 ++ &clk_usb_bus_gadget,
7305 + };
7306 +
7307 + /* s3c2410_baseclk_add()
7308 +@@ -195,7 +230,6 @@ int __init s3c2410_baseclk_add(void)
7309 + {
7310 + unsigned long clkslow = __raw_readl(S3C2410_CLKSLOW);
7311 + unsigned long clkcon = __raw_readl(S3C2410_CLKCON);
7312 +- struct clk *clkp;
7313 + struct clk *xtal;
7314 + int ret;
7315 + int ptr;
7316 +@@ -207,8 +241,9 @@ int __init s3c2410_baseclk_add(void)
7317 +
7318 + /* register clocks from clock array */
7319 +
7320 +- clkp = init_clocks;
7321 +- for (ptr = 0; ptr < ARRAY_SIZE(init_clocks); ptr++, clkp++) {
7322 ++ for (ptr = 0; ptr < ARRAY_SIZE(init_clocks); ptr++) {
7323 ++ struct clk *clkp = init_clocks[ptr];
7324 ++
7325 + /* ensure that we note the clock state */
7326 +
7327 + clkp->usage = clkcon & clkp->ctrlbit ? 1 : 0;
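The clock-s3c2410.c rework above turns one anonymous struct array into individually named struct clk objects plus an array of pointers, so the three UART clocks gain external linkage (s3c24xx_clk_uart0/1/2) and can be referenced from the s3c2440 clkdev table that follows. The refactoring pattern itself, reduced to plain C with placeholder names:

#include <stdio.h>

struct clk {
        const char *name;
        int enabled;
};

/* Named objects: each one can be exported and referenced elsewhere. */
static struct clk clk_lcd  = { .name = "lcd"  };
static struct clk clk_gpio = { .name = "gpio" };
struct clk clk_uart0       = { .name = "uart0" };   /* non-static: exported */

/* Registration still walks a single table, now a table of pointers. */
static struct clk *init_clocks[] = { &clk_lcd, &clk_gpio, &clk_uart0 };

int main(void)
{
        size_t i;

        for (i = 0; i < sizeof(init_clocks) / sizeof(init_clocks[0]); i++) {
                init_clocks[i]->enabled = 1;
                printf("registered %s\n", init_clocks[i]->name);
        }
        /* Other translation units can now take the address of clk_uart0. */
        return 0;
}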
7328 +diff --git a/arch/arm/mach-s3c24xx/clock-s3c2440.c b/arch/arm/mach-s3c24xx/clock-s3c2440.c
7329 +index 1069b56..aaf006d 100644
7330 +--- a/arch/arm/mach-s3c24xx/clock-s3c2440.c
7331 ++++ b/arch/arm/mach-s3c24xx/clock-s3c2440.c
7332 +@@ -166,6 +166,9 @@ static struct clk_lookup s3c2440_clk_lookup[] = {
7333 + CLKDEV_INIT(NULL, "clk_uart_baud1", &s3c24xx_uclk),
7334 + CLKDEV_INIT(NULL, "clk_uart_baud2", &clk_p),
7335 + CLKDEV_INIT(NULL, "clk_uart_baud3", &s3c2440_clk_fclk_n),
7336 ++ CLKDEV_INIT("s3c2440-uart.0", "uart", &s3c24xx_clk_uart0),
7337 ++ CLKDEV_INIT("s3c2440-uart.1", "uart", &s3c24xx_clk_uart1),
7338 ++ CLKDEV_INIT("s3c2440-uart.2", "uart", &s3c24xx_clk_uart2),
7339 + CLKDEV_INIT("s3c2440-camif", "camera", &s3c2440_clk_cam_upll),
7340 + };
7341 +
7342 +diff --git a/arch/arm/plat-samsung/include/plat/clock.h b/arch/arm/plat-samsung/include/plat/clock.h
7343 +index a62753d..df45d6e 100644
7344 +--- a/arch/arm/plat-samsung/include/plat/clock.h
7345 ++++ b/arch/arm/plat-samsung/include/plat/clock.h
7346 +@@ -83,6 +83,11 @@ extern struct clk clk_ext;
7347 + extern struct clksrc_clk clk_epllref;
7348 + extern struct clksrc_clk clk_esysclk;
7349 +
7350 ++/* S3C24XX UART clocks */
7351 ++extern struct clk s3c24xx_clk_uart0;
7352 ++extern struct clk s3c24xx_clk_uart1;
7353 ++extern struct clk s3c24xx_clk_uart2;
7354 ++
7355 + /* S3C64XX specific clocks */
7356 + extern struct clk clk_h2;
7357 + extern struct clk clk_27m;
7358 +diff --git a/arch/mips/cavium-octeon/setup.c b/arch/mips/cavium-octeon/setup.c
7359 +index 1e1e18c..2a75ff2 100644
7360 +--- a/arch/mips/cavium-octeon/setup.c
7361 ++++ b/arch/mips/cavium-octeon/setup.c
7362 +@@ -7,6 +7,7 @@
7363 + * Copyright (C) 2008, 2009 Wind River Systems
7364 + * written by Ralf Baechle <ralf@××××××××××.org>
7365 + */
7366 ++#include <linux/compiler.h>
7367 + #include <linux/init.h>
7368 + #include <linux/kernel.h>
7369 + #include <linux/console.h>
7370 +@@ -712,7 +713,7 @@ void __init prom_init(void)
7371 + if (cvmx_read_csr(CVMX_L2D_FUS3) & (3ull << 34)) {
7372 + pr_info("Skipping L2 locking due to reduced L2 cache size\n");
7373 + } else {
7374 +- uint32_t ebase = read_c0_ebase() & 0x3ffff000;
7375 ++ uint32_t __maybe_unused ebase = read_c0_ebase() & 0x3ffff000;
7376 + #ifdef CONFIG_CAVIUM_OCTEON_LOCK_L2_TLB
7377 + /* TLB refill */
7378 + cvmx_l2c_lock_mem_region(ebase, 0x100);
7379 +diff --git a/arch/sparc/kernel/asm-offsets.c b/arch/sparc/kernel/asm-offsets.c
7380 +index 961b87f..f76389a 100644
7381 +--- a/arch/sparc/kernel/asm-offsets.c
7382 ++++ b/arch/sparc/kernel/asm-offsets.c
7383 +@@ -49,6 +49,8 @@ int foo(void)
7384 + DEFINE(AOFF_task_thread, offsetof(struct task_struct, thread));
7385 + BLANK();
7386 + DEFINE(AOFF_mm_context, offsetof(struct mm_struct, context));
7387 ++ BLANK();
7388 ++ DEFINE(VMA_VM_MM, offsetof(struct vm_area_struct, vm_mm));
7389 +
7390 + /* DEFINE(NUM_USER_SEGMENTS, TASK_SIZE>>28); */
7391 + return 0;
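The sparc hunks that follow replace hard-coded "ld [%o0 + 0x0]" loads with a generated VMA_VM_MM constant; asm-offsets.c emits DEFINE(VMA_VM_MM, offsetof(struct vm_area_struct, vm_mm)) at build time so the assembly never depends on the structure layout staying put. The offsetof() mechanism behind that can be shown in ordinary C; the struct below is a simplified stand-in, not the real vm_area_struct.

#include <stddef.h>
#include <stdio.h>

struct mm_struct { int context; };

/* Stand-in with the interesting member deliberately not first. */
struct vm_area_struct {
        unsigned long vm_start;
        unsigned long vm_end;
        struct mm_struct *vm_mm;
};

#define VMA_VM_MM offsetof(struct vm_area_struct, vm_mm)

int main(void)
{
        /* asm-offsets.c prints lines like this into a generated header
         * that the .S files then include. */
        printf("#define VMA_VM_MM %zu\n", (size_t)VMA_VM_MM);
        return 0;
}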
7392 +diff --git a/arch/sparc/mm/hypersparc.S b/arch/sparc/mm/hypersparc.S
7393 +index 44aad32..969f964 100644
7394 +--- a/arch/sparc/mm/hypersparc.S
7395 ++++ b/arch/sparc/mm/hypersparc.S
7396 +@@ -74,7 +74,7 @@ hypersparc_flush_cache_mm_out:
7397 +
7398 + /* The things we do for performance... */
7399 + hypersparc_flush_cache_range:
7400 +- ld [%o0 + 0x0], %o0 /* XXX vma->vm_mm, GROSS XXX */
7401 ++ ld [%o0 + VMA_VM_MM], %o0
7402 + #ifndef CONFIG_SMP
7403 + ld [%o0 + AOFF_mm_context], %g1
7404 + cmp %g1, -1
7405 +@@ -163,7 +163,7 @@ hypersparc_flush_cache_range_out:
7406 + */
7407 + /* Verified, my ass... */
7408 + hypersparc_flush_cache_page:
7409 +- ld [%o0 + 0x0], %o0 /* XXX vma->vm_mm, GROSS XXX */
7410 ++ ld [%o0 + VMA_VM_MM], %o0
7411 + ld [%o0 + AOFF_mm_context], %g2
7412 + #ifndef CONFIG_SMP
7413 + cmp %g2, -1
7414 +@@ -284,7 +284,7 @@ hypersparc_flush_tlb_mm_out:
7415 + sta %g5, [%g1] ASI_M_MMUREGS
7416 +
7417 + hypersparc_flush_tlb_range:
7418 +- ld [%o0 + 0x00], %o0 /* XXX vma->vm_mm GROSS XXX */
7419 ++ ld [%o0 + VMA_VM_MM], %o0
7420 + mov SRMMU_CTX_REG, %g1
7421 + ld [%o0 + AOFF_mm_context], %o3
7422 + lda [%g1] ASI_M_MMUREGS, %g5
7423 +@@ -307,7 +307,7 @@ hypersparc_flush_tlb_range_out:
7424 + sta %g5, [%g1] ASI_M_MMUREGS
7425 +
7426 + hypersparc_flush_tlb_page:
7427 +- ld [%o0 + 0x00], %o0 /* XXX vma->vm_mm GROSS XXX */
7428 ++ ld [%o0 + VMA_VM_MM], %o0
7429 + mov SRMMU_CTX_REG, %g1
7430 + ld [%o0 + AOFF_mm_context], %o3
7431 + andn %o1, (PAGE_SIZE - 1), %o1
7432 +diff --git a/arch/sparc/mm/swift.S b/arch/sparc/mm/swift.S
7433 +index c801c39..5d2b88d 100644
7434 +--- a/arch/sparc/mm/swift.S
7435 ++++ b/arch/sparc/mm/swift.S
7436 +@@ -105,7 +105,7 @@ swift_flush_cache_mm_out:
7437 +
7438 + .globl swift_flush_cache_range
7439 + swift_flush_cache_range:
7440 +- ld [%o0 + 0x0], %o0 /* XXX vma->vm_mm, GROSS XXX */
7441 ++ ld [%o0 + VMA_VM_MM], %o0
7442 + sub %o2, %o1, %o2
7443 + sethi %hi(4096), %o3
7444 + cmp %o2, %o3
7445 +@@ -116,7 +116,7 @@ swift_flush_cache_range:
7446 +
7447 + .globl swift_flush_cache_page
7448 + swift_flush_cache_page:
7449 +- ld [%o0 + 0x0], %o0 /* XXX vma->vm_mm, GROSS XXX */
7450 ++ ld [%o0 + VMA_VM_MM], %o0
7451 + 70:
7452 + ld [%o0 + AOFF_mm_context], %g2
7453 + cmp %g2, -1
7454 +@@ -219,7 +219,7 @@ swift_flush_sig_insns:
7455 + .globl swift_flush_tlb_range
7456 + .globl swift_flush_tlb_all
7457 + swift_flush_tlb_range:
7458 +- ld [%o0 + 0x00], %o0 /* XXX vma->vm_mm GROSS XXX */
7459 ++ ld [%o0 + VMA_VM_MM], %o0
7460 + swift_flush_tlb_mm:
7461 + ld [%o0 + AOFF_mm_context], %g2
7462 + cmp %g2, -1
7463 +@@ -233,7 +233,7 @@ swift_flush_tlb_all_out:
7464 +
7465 + .globl swift_flush_tlb_page
7466 + swift_flush_tlb_page:
7467 +- ld [%o0 + 0x00], %o0 /* XXX vma->vm_mm GROSS XXX */
7468 ++ ld [%o0 + VMA_VM_MM], %o0
7469 + mov SRMMU_CTX_REG, %g1
7470 + ld [%o0 + AOFF_mm_context], %o3
7471 + andn %o1, (PAGE_SIZE - 1), %o1
7472 +diff --git a/arch/sparc/mm/tsunami.S b/arch/sparc/mm/tsunami.S
7473 +index 4e55e8f..bf10a34 100644
7474 +--- a/arch/sparc/mm/tsunami.S
7475 ++++ b/arch/sparc/mm/tsunami.S
7476 +@@ -24,7 +24,7 @@
7477 + /* Sliiick... */
7478 + tsunami_flush_cache_page:
7479 + tsunami_flush_cache_range:
7480 +- ld [%o0 + 0x0], %o0 /* XXX vma->vm_mm, GROSS XXX */
7481 ++ ld [%o0 + VMA_VM_MM], %o0
7482 + tsunami_flush_cache_mm:
7483 + ld [%o0 + AOFF_mm_context], %g2
7484 + cmp %g2, -1
7485 +@@ -46,7 +46,7 @@ tsunami_flush_sig_insns:
7486 +
7487 + /* More slick stuff... */
7488 + tsunami_flush_tlb_range:
7489 +- ld [%o0 + 0x00], %o0 /* XXX vma->vm_mm GROSS XXX */
7490 ++ ld [%o0 + VMA_VM_MM], %o0
7491 + tsunami_flush_tlb_mm:
7492 + ld [%o0 + AOFF_mm_context], %g2
7493 + cmp %g2, -1
7494 +@@ -65,7 +65,7 @@ tsunami_flush_tlb_out:
7495 +
7496 + /* This one can be done in a fine grained manner... */
7497 + tsunami_flush_tlb_page:
7498 +- ld [%o0 + 0x00], %o0 /* XXX vma->vm_mm GROSS XXX */
7499 ++ ld [%o0 + VMA_VM_MM], %o0
7500 + mov SRMMU_CTX_REG, %g1
7501 + ld [%o0 + AOFF_mm_context], %o3
7502 + andn %o1, (PAGE_SIZE - 1), %o1
7503 +diff --git a/arch/sparc/mm/viking.S b/arch/sparc/mm/viking.S
7504 +index bf8ee06..852257f 100644
7505 +--- a/arch/sparc/mm/viking.S
7506 ++++ b/arch/sparc/mm/viking.S
7507 +@@ -108,7 +108,7 @@ viking_mxcc_flush_page:
7508 + viking_flush_cache_page:
7509 + viking_flush_cache_range:
7510 + #ifndef CONFIG_SMP
7511 +- ld [%o0 + 0x0], %o0 /* XXX vma->vm_mm, GROSS XXX */
7512 ++ ld [%o0 + VMA_VM_MM], %o0
7513 + #endif
7514 + viking_flush_cache_mm:
7515 + #ifndef CONFIG_SMP
7516 +@@ -148,7 +148,7 @@ viking_flush_tlb_mm:
7517 + #endif
7518 +
7519 + viking_flush_tlb_range:
7520 +- ld [%o0 + 0x00], %o0 /* XXX vma->vm_mm GROSS XXX */
7521 ++ ld [%o0 + VMA_VM_MM], %o0
7522 + mov SRMMU_CTX_REG, %g1
7523 + ld [%o0 + AOFF_mm_context], %o3
7524 + lda [%g1] ASI_M_MMUREGS, %g5
7525 +@@ -173,7 +173,7 @@ viking_flush_tlb_range:
7526 + #endif
7527 +
7528 + viking_flush_tlb_page:
7529 +- ld [%o0 + 0x00], %o0 /* XXX vma->vm_mm GROSS XXX */
7530 ++ ld [%o0 + VMA_VM_MM], %o0
7531 + mov SRMMU_CTX_REG, %g1
7532 + ld [%o0 + AOFF_mm_context], %o3
7533 + lda [%g1] ASI_M_MMUREGS, %g5
7534 +@@ -239,7 +239,7 @@ sun4dsmp_flush_tlb_range:
7535 + tst %g5
7536 + bne 3f
7537 + mov SRMMU_CTX_REG, %g1
7538 +- ld [%o0 + 0x00], %o0 /* XXX vma->vm_mm GROSS XXX */
7539 ++ ld [%o0 + VMA_VM_MM], %o0
7540 + ld [%o0 + AOFF_mm_context], %o3
7541 + lda [%g1] ASI_M_MMUREGS, %g5
7542 + sethi %hi(~((1 << SRMMU_PGDIR_SHIFT) - 1)), %o4
7543 +@@ -265,7 +265,7 @@ sun4dsmp_flush_tlb_page:
7544 + tst %g5
7545 + bne 2f
7546 + mov SRMMU_CTX_REG, %g1
7547 +- ld [%o0 + 0x00], %o0 /* XXX vma->vm_mm GROSS XXX */
7548 ++ ld [%o0 + VMA_VM_MM], %o0
7549 + ld [%o0 + AOFF_mm_context], %o3
7550 + lda [%g1] ASI_M_MMUREGS, %g5
7551 + and %o1, PAGE_MASK, %o1
7552 +diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
7553 +index 27e86d9..89e1090 100644
7554 +--- a/drivers/edac/edac_mc.c
7555 ++++ b/drivers/edac/edac_mc.c
7556 +@@ -48,6 +48,8 @@ static LIST_HEAD(mc_devices);
7557 + */
7558 + static void const *edac_mc_owner;
7559 +
7560 ++static struct bus_type mc_bus[EDAC_MAX_MCS];
7561 ++
7562 + unsigned edac_dimm_info_location(struct dimm_info *dimm, char *buf,
7563 + unsigned len)
7564 + {
7565 +@@ -723,6 +725,11 @@ int edac_mc_add_mc(struct mem_ctl_info *mci)
7566 + int ret = -EINVAL;
7567 + edac_dbg(0, "\n");
7568 +
7569 ++ if (mci->mc_idx >= EDAC_MAX_MCS) {
7570 ++ pr_warn_once("Too many memory controllers: %d\n", mci->mc_idx);
7571 ++ return -ENODEV;
7572 ++ }
7573 ++
7574 + #ifdef CONFIG_EDAC_DEBUG
7575 + if (edac_debug_level >= 3)
7576 + edac_mc_dump_mci(mci);
7577 +@@ -762,6 +769,8 @@ int edac_mc_add_mc(struct mem_ctl_info *mci)
7578 + /* set load time so that error rate can be tracked */
7579 + mci->start_time = jiffies;
7580 +
7581 ++ mci->bus = &mc_bus[mci->mc_idx];
7582 ++
7583 + if (edac_create_sysfs_mci_device(mci)) {
7584 + edac_mc_printk(mci, KERN_WARNING,
7585 + "failed to create sysfs device\n");
7586 +diff --git a/drivers/edac/edac_mc_sysfs.c b/drivers/edac/edac_mc_sysfs.c
7587 +index 67610a6..c4d700a 100644
7588 +--- a/drivers/edac/edac_mc_sysfs.c
7589 ++++ b/drivers/edac/edac_mc_sysfs.c
7590 +@@ -370,7 +370,7 @@ static int edac_create_csrow_object(struct mem_ctl_info *mci,
7591 + return -ENODEV;
7592 +
7593 + csrow->dev.type = &csrow_attr_type;
7594 +- csrow->dev.bus = &mci->bus;
7595 ++ csrow->dev.bus = mci->bus;
7596 + device_initialize(&csrow->dev);
7597 + csrow->dev.parent = &mci->dev;
7598 + csrow->mci = mci;
7599 +@@ -605,7 +605,7 @@ static int edac_create_dimm_object(struct mem_ctl_info *mci,
7600 + dimm->mci = mci;
7601 +
7602 + dimm->dev.type = &dimm_attr_type;
7603 +- dimm->dev.bus = &mci->bus;
7604 ++ dimm->dev.bus = mci->bus;
7605 + device_initialize(&dimm->dev);
7606 +
7607 + dimm->dev.parent = &mci->dev;
7608 +@@ -975,11 +975,13 @@ int edac_create_sysfs_mci_device(struct mem_ctl_info *mci)
7609 + * The memory controller needs its own bus, in order to avoid
7610 + * namespace conflicts at /sys/bus/edac.
7611 + */
7612 +- mci->bus.name = kasprintf(GFP_KERNEL, "mc%d", mci->mc_idx);
7613 +- if (!mci->bus.name)
7614 ++ mci->bus->name = kasprintf(GFP_KERNEL, "mc%d", mci->mc_idx);
7615 ++ if (!mci->bus->name)
7616 + return -ENOMEM;
7617 +- edac_dbg(0, "creating bus %s\n", mci->bus.name);
7618 +- err = bus_register(&mci->bus);
7619 ++
7620 ++ edac_dbg(0, "creating bus %s\n", mci->bus->name);
7621 ++
7622 ++ err = bus_register(mci->bus);
7623 + if (err < 0)
7624 + return err;
7625 +
7626 +@@ -988,7 +990,7 @@ int edac_create_sysfs_mci_device(struct mem_ctl_info *mci)
7627 + device_initialize(&mci->dev);
7628 +
7629 + mci->dev.parent = mci_pdev;
7630 +- mci->dev.bus = &mci->bus;
7631 ++ mci->dev.bus = mci->bus;
7632 + dev_set_name(&mci->dev, "mc%d", mci->mc_idx);
7633 + dev_set_drvdata(&mci->dev, mci);
7634 + pm_runtime_forbid(&mci->dev);
7635 +@@ -997,8 +999,8 @@ int edac_create_sysfs_mci_device(struct mem_ctl_info *mci)
7636 + err = device_add(&mci->dev);
7637 + if (err < 0) {
7638 + edac_dbg(1, "failure: create device %s\n", dev_name(&mci->dev));
7639 +- bus_unregister(&mci->bus);
7640 +- kfree(mci->bus.name);
7641 ++ bus_unregister(mci->bus);
7642 ++ kfree(mci->bus->name);
7643 + return err;
7644 + }
7645 +
7646 +@@ -1064,8 +1066,8 @@ fail:
7647 + }
7648 + fail2:
7649 + device_unregister(&mci->dev);
7650 +- bus_unregister(&mci->bus);
7651 +- kfree(mci->bus.name);
7652 ++ bus_unregister(mci->bus);
7653 ++ kfree(mci->bus->name);
7654 + return err;
7655 + }
7656 +
7657 +@@ -1098,8 +1100,8 @@ void edac_unregister_sysfs(struct mem_ctl_info *mci)
7658 + {
7659 + edac_dbg(1, "Unregistering device %s\n", dev_name(&mci->dev));
7660 + device_unregister(&mci->dev);
7661 +- bus_unregister(&mci->bus);
7662 +- kfree(mci->bus.name);
7663 ++ bus_unregister(mci->bus);
7664 ++ kfree(mci->bus->name);
7665 + }
7666 +
7667 + static void mc_attr_release(struct device *dev)
7668 +diff --git a/drivers/edac/i5100_edac.c b/drivers/edac/i5100_edac.c
7669 +index 1b63517..157b934 100644
7670 +--- a/drivers/edac/i5100_edac.c
7671 ++++ b/drivers/edac/i5100_edac.c
7672 +@@ -974,7 +974,7 @@ static int i5100_setup_debugfs(struct mem_ctl_info *mci)
7673 + if (!i5100_debugfs)
7674 + return -ENODEV;
7675 +
7676 +- priv->debugfs = debugfs_create_dir(mci->bus.name, i5100_debugfs);
7677 ++ priv->debugfs = debugfs_create_dir(mci->bus->name, i5100_debugfs);
7678 +
7679 + if (!priv->debugfs)
7680 + return -ENOMEM;
7681 +diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
7682 +index d3e15b4..c42b14b 100644
7683 +--- a/drivers/md/bcache/bcache.h
7684 ++++ b/drivers/md/bcache/bcache.h
7685 +@@ -437,6 +437,7 @@ struct bcache_device {
7686 +
7687 + /* If nonzero, we're detaching/unregistering from cache set */
7688 + atomic_t detaching;
7689 ++ int flush_done;
7690 +
7691 + atomic_long_t sectors_dirty;
7692 + unsigned long sectors_dirty_gc;
7693 +diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
7694 +index 7a5658f..7b687a6 100644
7695 +--- a/drivers/md/bcache/btree.c
7696 ++++ b/drivers/md/bcache/btree.c
7697 +@@ -1419,8 +1419,10 @@ static void btree_gc_start(struct cache_set *c)
7698 + for_each_cache(ca, c, i)
7699 + for_each_bucket(b, ca) {
7700 + b->gc_gen = b->gen;
7701 +- if (!atomic_read(&b->pin))
7702 ++ if (!atomic_read(&b->pin)) {
7703 + SET_GC_MARK(b, GC_MARK_RECLAIMABLE);
7704 ++ SET_GC_SECTORS_USED(b, 0);
7705 ++ }
7706 + }
7707 +
7708 + for (d = c->devices;
7709 +diff --git a/drivers/md/bcache/closure.c b/drivers/md/bcache/closure.c
7710 +index bd05a9a..9aba201 100644
7711 +--- a/drivers/md/bcache/closure.c
7712 ++++ b/drivers/md/bcache/closure.c
7713 +@@ -66,16 +66,18 @@ static inline void closure_put_after_sub(struct closure *cl, int flags)
7714 + } else {
7715 + struct closure *parent = cl->parent;
7716 + struct closure_waitlist *wait = closure_waitlist(cl);
7717 ++ closure_fn *destructor = cl->fn;
7718 +
7719 + closure_debug_destroy(cl);
7720 +
7721 ++ smp_mb();
7722 + atomic_set(&cl->remaining, -1);
7723 +
7724 + if (wait)
7725 + closure_wake_up(wait);
7726 +
7727 +- if (cl->fn)
7728 +- cl->fn(cl);
7729 ++ if (destructor)
7730 ++ destructor(cl);
7731 +
7732 + if (parent)
7733 + closure_put(parent);
7734 +diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
7735 +index 8c8dfdc..8a54d3b 100644
7736 +--- a/drivers/md/bcache/journal.c
7737 ++++ b/drivers/md/bcache/journal.c
7738 +@@ -182,9 +182,14 @@ bsearch:
7739 + pr_debug("starting binary search, l %u r %u", l, r);
7740 +
7741 + while (l + 1 < r) {
7742 ++ seq = list_entry(list->prev, struct journal_replay,
7743 ++ list)->j.seq;
7744 ++
7745 + m = (l + r) >> 1;
7746 ++ read_bucket(m);
7747 +
7748 +- if (read_bucket(m))
7749 ++ if (seq != list_entry(list->prev, struct journal_replay,
7750 ++ list)->j.seq)
7751 + l = m;
7752 + else
7753 + r = m;
7754 +diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
7755 +index e5ff12e..2f36743 100644
7756 +--- a/drivers/md/bcache/request.c
7757 ++++ b/drivers/md/bcache/request.c
7758 +@@ -489,6 +489,12 @@ static void bch_insert_data_loop(struct closure *cl)
7759 + bch_queue_gc(op->c);
7760 + }
7761 +
7762 ++ /*
7763 ++ * Journal writes are marked REQ_FLUSH; if the original write was a
7764 ++ * flush, it'll wait on the journal write.
7765 ++ */
7766 ++ bio->bi_rw &= ~(REQ_FLUSH|REQ_FUA);
7767 ++
7768 + do {
7769 + unsigned i;
7770 + struct bkey *k;
7771 +@@ -716,7 +722,7 @@ static struct search *search_alloc(struct bio *bio, struct bcache_device *d)
7772 + s->task = current;
7773 + s->orig_bio = bio;
7774 + s->write = (bio->bi_rw & REQ_WRITE) != 0;
7775 +- s->op.flush_journal = (bio->bi_rw & REQ_FLUSH) != 0;
7776 ++ s->op.flush_journal = (bio->bi_rw & (REQ_FLUSH|REQ_FUA)) != 0;
7777 + s->op.skip = (bio->bi_rw & REQ_DISCARD) != 0;
7778 + s->recoverable = 1;
7779 + s->start_time = jiffies;
7780 +diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
7781 +index f88e2b6..b4713ce 100644
7782 +--- a/drivers/md/bcache/super.c
7783 ++++ b/drivers/md/bcache/super.c
7784 +@@ -704,7 +704,8 @@ static void bcache_device_detach(struct bcache_device *d)
7785 + atomic_set(&d->detaching, 0);
7786 + }
7787 +
7788 +- bcache_device_unlink(d);
7789 ++ if (!d->flush_done)
7790 ++ bcache_device_unlink(d);
7791 +
7792 + d->c->devices[d->id] = NULL;
7793 + closure_put(&d->c->caching);
7794 +@@ -781,6 +782,8 @@ static int bcache_device_init(struct bcache_device *d, unsigned block_size)
7795 + set_bit(QUEUE_FLAG_NONROT, &d->disk->queue->queue_flags);
7796 + set_bit(QUEUE_FLAG_DISCARD, &d->disk->queue->queue_flags);
7797 +
7798 ++ blk_queue_flush(q, REQ_FLUSH|REQ_FUA);
7799 ++
7800 + return 0;
7801 + }
7802 +
7803 +@@ -1014,6 +1017,14 @@ static void cached_dev_flush(struct closure *cl)
7804 + struct cached_dev *dc = container_of(cl, struct cached_dev, disk.cl);
7805 + struct bcache_device *d = &dc->disk;
7806 +
7807 ++ mutex_lock(&bch_register_lock);
7808 ++ d->flush_done = 1;
7809 ++
7810 ++ if (d->c)
7811 ++ bcache_device_unlink(d);
7812 ++
7813 ++ mutex_unlock(&bch_register_lock);
7814 ++
7815 + bch_cache_accounting_destroy(&dc->accounting);
7816 + kobject_del(&d->kobj);
7817 +
7818 +@@ -1303,18 +1314,22 @@ static void cache_set_flush(struct closure *cl)
7819 + static void __cache_set_unregister(struct closure *cl)
7820 + {
7821 + struct cache_set *c = container_of(cl, struct cache_set, caching);
7822 +- struct cached_dev *dc, *t;
7823 ++ struct cached_dev *dc;
7824 + size_t i;
7825 +
7826 + mutex_lock(&bch_register_lock);
7827 +
7828 +- if (test_bit(CACHE_SET_UNREGISTERING, &c->flags))
7829 +- list_for_each_entry_safe(dc, t, &c->cached_devs, list)
7830 +- bch_cached_dev_detach(dc);
7831 +-
7832 + for (i = 0; i < c->nr_uuids; i++)
7833 +- if (c->devices[i] && UUID_FLASH_ONLY(&c->uuids[i]))
7834 +- bcache_device_stop(c->devices[i]);
7835 ++ if (c->devices[i]) {
7836 ++ if (!UUID_FLASH_ONLY(&c->uuids[i]) &&
7837 ++ test_bit(CACHE_SET_UNREGISTERING, &c->flags)) {
7838 ++ dc = container_of(c->devices[i],
7839 ++ struct cached_dev, disk);
7840 ++ bch_cached_dev_detach(dc);
7841 ++ } else {
7842 ++ bcache_device_stop(c->devices[i]);
7843 ++ }
7844 ++ }
7845 +
7846 + mutex_unlock(&bch_register_lock);
7847 +
7848 +diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
7849 +index a1a3a51..0b4616b 100644
7850 +--- a/drivers/media/dvb-core/dmxdev.c
7851 ++++ b/drivers/media/dvb-core/dmxdev.c
7852 +@@ -377,10 +377,8 @@ static int dvb_dmxdev_section_callback(const u8 *buffer1, size_t buffer1_len,
7853 + ret = dvb_dmxdev_buffer_write(&dmxdevfilter->buffer, buffer2,
7854 + buffer2_len);
7855 + }
7856 +- if (ret < 0) {
7857 +- dvb_ringbuffer_flush(&dmxdevfilter->buffer);
7858 ++ if (ret < 0)
7859 + dmxdevfilter->buffer.error = ret;
7860 +- }
7861 + if (dmxdevfilter->params.sec.flags & DMX_ONESHOT)
7862 + dmxdevfilter->state = DMXDEV_STATE_DONE;
7863 + spin_unlock(&dmxdevfilter->dev->lock);
7864 +@@ -416,10 +414,8 @@ static int dvb_dmxdev_ts_callback(const u8 *buffer1, size_t buffer1_len,
7865 + ret = dvb_dmxdev_buffer_write(buffer, buffer1, buffer1_len);
7866 + if (ret == buffer1_len)
7867 + ret = dvb_dmxdev_buffer_write(buffer, buffer2, buffer2_len);
7868 +- if (ret < 0) {
7869 +- dvb_ringbuffer_flush(buffer);
7870 ++ if (ret < 0)
7871 + buffer->error = ret;
7872 +- }
7873 + spin_unlock(&dmxdevfilter->dev->lock);
7874 + wake_up(&buffer->queue);
7875 + return 0;
7876 +diff --git a/drivers/media/pci/saa7134/saa7134-alsa.c b/drivers/media/pci/saa7134/saa7134-alsa.c
7877 +index 10460fd..dbcdfbf 100644
7878 +--- a/drivers/media/pci/saa7134/saa7134-alsa.c
7879 ++++ b/drivers/media/pci/saa7134/saa7134-alsa.c
7880 +@@ -172,7 +172,9 @@ static void saa7134_irq_alsa_done(struct saa7134_dev *dev,
7881 + dprintk("irq: overrun [full=%d/%d] - Blocks in %d\n",dev->dmasound.read_count,
7882 + dev->dmasound.bufsize, dev->dmasound.blocks);
7883 + spin_unlock(&dev->slock);
7884 ++ snd_pcm_stream_lock(dev->dmasound.substream);
7885 + snd_pcm_stop(dev->dmasound.substream,SNDRV_PCM_STATE_XRUN);
7886 ++ snd_pcm_stream_unlock(dev->dmasound.substream);
7887 + return;
7888 + }
7889 +
7890 +diff --git a/drivers/net/dummy.c b/drivers/net/dummy.c
7891 +index 42aa54a..b710c6b 100644
7892 +--- a/drivers/net/dummy.c
7893 ++++ b/drivers/net/dummy.c
7894 +@@ -185,6 +185,8 @@ static int __init dummy_init_module(void)
7895 +
7896 + rtnl_lock();
7897 + err = __rtnl_link_register(&dummy_link_ops);
7898 ++ if (err < 0)
7899 ++ goto out;
7900 +
7901 + for (i = 0; i < numdummies && !err; i++) {
7902 + err = dummy_init_one();
7903 +@@ -192,6 +194,8 @@ static int __init dummy_init_module(void)
7904 + }
7905 + if (err < 0)
7906 + __rtnl_link_unregister(&dummy_link_ops);
7907 ++
7908 ++out:
7909 + rtnl_unlock();
7910 +
7911 + return err;
7912 +diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
7913 +index 418de8b..d30085c 100644
7914 +--- a/drivers/net/ethernet/atheros/alx/main.c
7915 ++++ b/drivers/net/ethernet/atheros/alx/main.c
7916 +@@ -1303,6 +1303,8 @@ static int alx_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
7917 +
7918 + SET_NETDEV_DEV(netdev, &pdev->dev);
7919 + alx = netdev_priv(netdev);
7920 ++ spin_lock_init(&alx->hw.mdio_lock);
7921 ++ spin_lock_init(&alx->irq_lock);
7922 + alx->dev = netdev;
7923 + alx->hw.pdev = pdev;
7924 + alx->msg_enable = NETIF_MSG_LINK | NETIF_MSG_HW | NETIF_MSG_IFUP |
7925 +@@ -1385,9 +1387,6 @@ static int alx_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
7926 +
7927 + INIT_WORK(&alx->link_check_wk, alx_link_check);
7928 + INIT_WORK(&alx->reset_wk, alx_reset);
7929 +- spin_lock_init(&alx->hw.mdio_lock);
7930 +- spin_lock_init(&alx->irq_lock);
7931 +-
7932 + netif_carrier_off(netdev);
7933 +
7934 + err = register_netdev(netdev);
7935 +diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
7936 +index 0688bb8..c23bb02 100644
7937 +--- a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
7938 ++++ b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
7939 +@@ -1665,8 +1665,8 @@ check_sum:
7940 + return 0;
7941 + }
7942 +
7943 +-static void atl1e_tx_map(struct atl1e_adapter *adapter,
7944 +- struct sk_buff *skb, struct atl1e_tpd_desc *tpd)
7945 ++static int atl1e_tx_map(struct atl1e_adapter *adapter,
7946 ++ struct sk_buff *skb, struct atl1e_tpd_desc *tpd)
7947 + {
7948 + struct atl1e_tpd_desc *use_tpd = NULL;
7949 + struct atl1e_tx_buffer *tx_buffer = NULL;
7950 +@@ -1677,6 +1677,8 @@ static void atl1e_tx_map(struct atl1e_adapter *adapter,
7951 + u16 nr_frags;
7952 + u16 f;
7953 + int segment;
7954 ++ int ring_start = adapter->tx_ring.next_to_use;
7955 ++ int ring_end;
7956 +
7957 + nr_frags = skb_shinfo(skb)->nr_frags;
7958 + segment = (tpd->word3 >> TPD_SEGMENT_EN_SHIFT) & TPD_SEGMENT_EN_MASK;
7959 +@@ -1689,6 +1691,9 @@ static void atl1e_tx_map(struct atl1e_adapter *adapter,
7960 + tx_buffer->length = map_len;
7961 + tx_buffer->dma = pci_map_single(adapter->pdev,
7962 + skb->data, hdr_len, PCI_DMA_TODEVICE);
7963 ++ if (dma_mapping_error(&adapter->pdev->dev, tx_buffer->dma))
7964 ++ return -ENOSPC;
7965 ++
7966 + ATL1E_SET_PCIMAP_TYPE(tx_buffer, ATL1E_TX_PCIMAP_SINGLE);
7967 + mapped_len += map_len;
7968 + use_tpd->buffer_addr = cpu_to_le64(tx_buffer->dma);
7969 +@@ -1715,6 +1720,22 @@ static void atl1e_tx_map(struct atl1e_adapter *adapter,
7970 + tx_buffer->dma =
7971 + pci_map_single(adapter->pdev, skb->data + mapped_len,
7972 + map_len, PCI_DMA_TODEVICE);
7973 ++
7974 ++ if (dma_mapping_error(&adapter->pdev->dev, tx_buffer->dma)) {
7975 ++ /* We need to unwind the mappings we've done */
7976 ++ ring_end = adapter->tx_ring.next_to_use;
7977 ++ adapter->tx_ring.next_to_use = ring_start;
7978 ++ while (adapter->tx_ring.next_to_use != ring_end) {
7979 ++ tpd = atl1e_get_tpd(adapter);
7980 ++ tx_buffer = atl1e_get_tx_buffer(adapter, tpd);
7981 ++ pci_unmap_single(adapter->pdev, tx_buffer->dma,
7982 ++ tx_buffer->length, PCI_DMA_TODEVICE);
7983 ++ }
7984 ++ /* Reset the tx rings next pointer */
7985 ++ adapter->tx_ring.next_to_use = ring_start;
7986 ++ return -ENOSPC;
7987 ++ }
7988 ++
7989 + ATL1E_SET_PCIMAP_TYPE(tx_buffer, ATL1E_TX_PCIMAP_SINGLE);
7990 + mapped_len += map_len;
7991 + use_tpd->buffer_addr = cpu_to_le64(tx_buffer->dma);
7992 +@@ -1750,6 +1771,23 @@ static void atl1e_tx_map(struct atl1e_adapter *adapter,
7993 + (i * MAX_TX_BUF_LEN),
7994 + tx_buffer->length,
7995 + DMA_TO_DEVICE);
7996 ++
7997 ++ if (dma_mapping_error(&adapter->pdev->dev, tx_buffer->dma)) {
7998 ++ /* We need to unwind the mappings we've done */
7999 ++ ring_end = adapter->tx_ring.next_to_use;
8000 ++ adapter->tx_ring.next_to_use = ring_start;
8001 ++ while (adapter->tx_ring.next_to_use != ring_end) {
8002 ++ tpd = atl1e_get_tpd(adapter);
8003 ++ tx_buffer = atl1e_get_tx_buffer(adapter, tpd);
8004 ++ dma_unmap_page(&adapter->pdev->dev, tx_buffer->dma,
8005 ++ tx_buffer->length, DMA_TO_DEVICE);
8006 ++ }
8007 ++
8008 ++ /* Reset the ring next to use pointer */
8009 ++ adapter->tx_ring.next_to_use = ring_start;
8010 ++ return -ENOSPC;
8011 ++ }
8012 ++
8013 + ATL1E_SET_PCIMAP_TYPE(tx_buffer, ATL1E_TX_PCIMAP_PAGE);
8014 + use_tpd->buffer_addr = cpu_to_le64(tx_buffer->dma);
8015 + use_tpd->word2 = (use_tpd->word2 & (~TPD_BUFLEN_MASK)) |
8016 +@@ -1767,6 +1805,7 @@ static void atl1e_tx_map(struct atl1e_adapter *adapter,
8017 + /* The last buffer info contain the skb address,
8018 + so it will be free after unmap */
8019 + tx_buffer->skb = skb;
8020 ++ return 0;
8021 + }
8022 +
8023 + static void atl1e_tx_queue(struct atl1e_adapter *adapter, u16 count,
8024 +@@ -1834,10 +1873,15 @@ static netdev_tx_t atl1e_xmit_frame(struct sk_buff *skb,
8025 + return NETDEV_TX_OK;
8026 + }
8027 +
8028 +- atl1e_tx_map(adapter, skb, tpd);
8029 ++ if (atl1e_tx_map(adapter, skb, tpd)) {
8030 ++ dev_kfree_skb_any(skb);
8031 ++ goto out;
8032 ++ }
8033 ++
8034 + atl1e_tx_queue(adapter, tpd_req, tpd);
8035 +
8036 + netdev->trans_start = jiffies; /* NETIF_F_LLTX driver :( */
8037 ++out:
8038 + spin_unlock_irqrestore(&adapter->tx_lock, flags);
8039 + return NETDEV_TX_OK;
8040 + }
8041 +diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c
8042 +index c89aa41..b4e0dc8 100644
8043 +--- a/drivers/net/ethernet/cadence/macb.c
8044 ++++ b/drivers/net/ethernet/cadence/macb.c
8045 +@@ -1070,7 +1070,7 @@ static void macb_configure_dma(struct macb *bp)
8046 + static void macb_configure_caps(struct macb *bp)
8047 + {
8048 + if (macb_is_gem(bp)) {
8049 +- if (GEM_BF(IRQCOR, gem_readl(bp, DCFG1)) == 0)
8050 ++ if (GEM_BFEXT(IRQCOR, gem_readl(bp, DCFG1)) == 0)
8051 + bp->caps |= MACB_CAPS_ISR_CLEAR_ON_WRITE;
8052 + }
8053 + }
8054 +diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
8055 +index a0b4be5..6e43426 100644
8056 +--- a/drivers/net/ethernet/emulex/benet/be_main.c
8057 ++++ b/drivers/net/ethernet/emulex/benet/be_main.c
8058 +@@ -782,16 +782,22 @@ static struct sk_buff *be_insert_vlan_in_pkt(struct be_adapter *adapter,
8059 +
8060 + if (vlan_tx_tag_present(skb))
8061 + vlan_tag = be_get_tx_vlan_tag(adapter, skb);
8062 +- else if (qnq_async_evt_rcvd(adapter) && adapter->pvid)
8063 +- vlan_tag = adapter->pvid;
8064 ++
8065 ++ if (qnq_async_evt_rcvd(adapter) && adapter->pvid) {
8066 ++ if (!vlan_tag)
8067 ++ vlan_tag = adapter->pvid;
8068 ++ /* f/w workaround to set skip_hw_vlan = 1, informs the F/W to
8069 ++ * skip VLAN insertion
8070 ++ */
8071 ++ if (skip_hw_vlan)
8072 ++ *skip_hw_vlan = true;
8073 ++ }
8074 +
8075 + if (vlan_tag) {
8076 + skb = __vlan_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
8077 + if (unlikely(!skb))
8078 + return skb;
8079 + skb->vlan_tci = 0;
8080 +- if (skip_hw_vlan)
8081 +- *skip_hw_vlan = true;
8082 + }
8083 +
8084 + /* Insert the outer VLAN, if any */
8085 +diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c
8086 +index a7dfe36..5173eaa 100644
8087 +--- a/drivers/net/ethernet/sfc/rx.c
8088 ++++ b/drivers/net/ethernet/sfc/rx.c
8089 +@@ -282,9 +282,9 @@ static void efx_fini_rx_buffer(struct efx_rx_queue *rx_queue,
8090 + }
8091 +
8092 + /* Recycle the pages that are used by buffers that have just been received. */
8093 +-static void efx_recycle_rx_buffers(struct efx_channel *channel,
8094 +- struct efx_rx_buffer *rx_buf,
8095 +- unsigned int n_frags)
8096 ++static void efx_recycle_rx_pages(struct efx_channel *channel,
8097 ++ struct efx_rx_buffer *rx_buf,
8098 ++ unsigned int n_frags)
8099 + {
8100 + struct efx_rx_queue *rx_queue = efx_channel_get_rx_queue(channel);
8101 +
8102 +@@ -294,6 +294,20 @@ static void efx_recycle_rx_buffers(struct efx_channel *channel,
8103 + } while (--n_frags);
8104 + }
8105 +
8106 ++static void efx_discard_rx_packet(struct efx_channel *channel,
8107 ++ struct efx_rx_buffer *rx_buf,
8108 ++ unsigned int n_frags)
8109 ++{
8110 ++ struct efx_rx_queue *rx_queue = efx_channel_get_rx_queue(channel);
8111 ++
8112 ++ efx_recycle_rx_pages(channel, rx_buf, n_frags);
8113 ++
8114 ++ do {
8115 ++ efx_free_rx_buffer(rx_buf);
8116 ++ rx_buf = efx_rx_buf_next(rx_queue, rx_buf);
8117 ++ } while (--n_frags);
8118 ++}
8119 ++
8120 + /**
8121 + * efx_fast_push_rx_descriptors - push new RX descriptors quickly
8122 + * @rx_queue: RX descriptor queue
8123 +@@ -533,8 +547,7 @@ void efx_rx_packet(struct efx_rx_queue *rx_queue, unsigned int index,
8124 + */
8125 + if (unlikely(rx_buf->flags & EFX_RX_PKT_DISCARD)) {
8126 + efx_rx_flush_packet(channel);
8127 +- put_page(rx_buf->page);
8128 +- efx_recycle_rx_buffers(channel, rx_buf, n_frags);
8129 ++ efx_discard_rx_packet(channel, rx_buf, n_frags);
8130 + return;
8131 + }
8132 +
8133 +@@ -570,9 +583,9 @@ void efx_rx_packet(struct efx_rx_queue *rx_queue, unsigned int index,
8134 + efx_sync_rx_buffer(efx, rx_buf, rx_buf->len);
8135 + }
8136 +
8137 +- /* All fragments have been DMA-synced, so recycle buffers and pages. */
8138 ++ /* All fragments have been DMA-synced, so recycle pages. */
8139 + rx_buf = efx_rx_buffer(rx_queue, index);
8140 +- efx_recycle_rx_buffers(channel, rx_buf, n_frags);
8141 ++ efx_recycle_rx_pages(channel, rx_buf, n_frags);
8142 +
8143 + /* Pipeline receives so that we give time for packet headers to be
8144 + * prefetched into cache.
8145 +diff --git a/drivers/net/ethernet/sun/sunvnet.c b/drivers/net/ethernet/sun/sunvnet.c
8146 +index 1df0ff3..3df5684 100644
8147 +--- a/drivers/net/ethernet/sun/sunvnet.c
8148 ++++ b/drivers/net/ethernet/sun/sunvnet.c
8149 +@@ -1239,6 +1239,8 @@ static int vnet_port_remove(struct vio_dev *vdev)
8150 + dev_set_drvdata(&vdev->dev, NULL);
8151 +
8152 + kfree(port);
8153 ++
8154 ++ unregister_netdev(vp->dev);
8155 + }
8156 + return 0;
8157 + }
8158 +diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
8159 +index 4dccead..23a0fff 100644
8160 +--- a/drivers/net/hyperv/netvsc_drv.c
8161 ++++ b/drivers/net/hyperv/netvsc_drv.c
8162 +@@ -431,8 +431,8 @@ static int netvsc_probe(struct hv_device *dev,
8163 + net->netdev_ops = &device_ops;
8164 +
8165 + /* TODO: Add GSO and Checksum offload */
8166 +- net->hw_features = NETIF_F_SG;
8167 +- net->features = NETIF_F_SG | NETIF_F_HW_VLAN_CTAG_TX;
8168 ++ net->hw_features = 0;
8169 ++ net->features = NETIF_F_HW_VLAN_CTAG_TX;
8170 +
8171 + SET_ETHTOOL_OPS(net, &ethtool_ops);
8172 + SET_NETDEV_DEV(net, &dev->device);
8173 +diff --git a/drivers/net/ifb.c b/drivers/net/ifb.c
8174 +index dc9f6a4..a3bed28 100644
8175 +--- a/drivers/net/ifb.c
8176 ++++ b/drivers/net/ifb.c
8177 +@@ -291,11 +291,17 @@ static int __init ifb_init_module(void)
8178 +
8179 + rtnl_lock();
8180 + err = __rtnl_link_register(&ifb_link_ops);
8181 ++ if (err < 0)
8182 ++ goto out;
8183 +
8184 +- for (i = 0; i < numifbs && !err; i++)
8185 ++ for (i = 0; i < numifbs && !err; i++) {
8186 + err = ifb_init_one(i);
8187 ++ cond_resched();
8188 ++ }
8189 + if (err)
8190 + __rtnl_link_unregister(&ifb_link_ops);
8191 ++
8192 ++out:
8193 + rtnl_unlock();
8194 +
8195 + return err;
8196 +diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c
8197 +index b6dd6a7..523d6b2 100644
8198 +--- a/drivers/net/macvtap.c
8199 ++++ b/drivers/net/macvtap.c
8200 +@@ -633,6 +633,28 @@ static int macvtap_skb_to_vnet_hdr(const struct sk_buff *skb,
8201 + return 0;
8202 + }
8203 +
8204 ++static unsigned long iov_pages(const struct iovec *iv, int offset,
8205 ++ unsigned long nr_segs)
8206 ++{
8207 ++ unsigned long seg, base;
8208 ++ int pages = 0, len, size;
8209 ++
8210 ++ while (nr_segs && (offset >= iv->iov_len)) {
8211 ++ offset -= iv->iov_len;
8212 ++ ++iv;
8213 ++ --nr_segs;
8214 ++ }
8215 ++
8216 ++ for (seg = 0; seg < nr_segs; seg++) {
8217 ++ base = (unsigned long)iv[seg].iov_base + offset;
8218 ++ len = iv[seg].iov_len - offset;
8219 ++ size = ((base & ~PAGE_MASK) + len + ~PAGE_MASK) >> PAGE_SHIFT;
8220 ++ pages += size;
8221 ++ offset = 0;
8222 ++ }
8223 ++
8224 ++ return pages;
8225 ++}
8226 +
8227 + /* Get packet from user space buffer */
8228 + static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m,
8229 +@@ -647,6 +669,7 @@ static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m,
8230 + int vnet_hdr_len = 0;
8231 + int copylen = 0;
8232 + bool zerocopy = false;
8233 ++ size_t linear;
8234 +
8235 + if (q->flags & IFF_VNET_HDR) {
8236 + vnet_hdr_len = q->vnet_hdr_sz;
8237 +@@ -678,42 +701,35 @@ static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m,
8238 + if (unlikely(count > UIO_MAXIOV))
8239 + goto err;
8240 +
8241 +- if (m && m->msg_control && sock_flag(&q->sk, SOCK_ZEROCOPY))
8242 +- zerocopy = true;
8243 ++ if (m && m->msg_control && sock_flag(&q->sk, SOCK_ZEROCOPY)) {
8244 ++ copylen = vnet_hdr.hdr_len ? vnet_hdr.hdr_len : GOODCOPY_LEN;
8245 ++ linear = copylen;
8246 ++ if (iov_pages(iv, vnet_hdr_len + copylen, count)
8247 ++ <= MAX_SKB_FRAGS)
8248 ++ zerocopy = true;
8249 ++ }
8250 +
8251 +- if (zerocopy) {
8252 +- /* Userspace may produce vectors with count greater than
8253 +- * MAX_SKB_FRAGS, so we need to linearize parts of the skb
8254 +- * to let the rest of data to be fit in the frags.
8255 +- */
8256 +- if (count > MAX_SKB_FRAGS) {
8257 +- copylen = iov_length(iv, count - MAX_SKB_FRAGS);
8258 +- if (copylen < vnet_hdr_len)
8259 +- copylen = 0;
8260 +- else
8261 +- copylen -= vnet_hdr_len;
8262 +- }
8263 +- /* There are 256 bytes to be copied in skb, so there is enough
8264 +- * room for skb expand head in case it is used.
8265 +- * The rest buffer is mapped from userspace.
8266 +- */
8267 +- if (copylen < vnet_hdr.hdr_len)
8268 +- copylen = vnet_hdr.hdr_len;
8269 +- if (!copylen)
8270 +- copylen = GOODCOPY_LEN;
8271 +- } else
8272 ++ if (!zerocopy) {
8273 + copylen = len;
8274 ++ linear = vnet_hdr.hdr_len;
8275 ++ }
8276 +
8277 + skb = macvtap_alloc_skb(&q->sk, NET_IP_ALIGN, copylen,
8278 +- vnet_hdr.hdr_len, noblock, &err);
8279 ++ linear, noblock, &err);
8280 + if (!skb)
8281 + goto err;
8282 +
8283 + if (zerocopy)
8284 + err = zerocopy_sg_from_iovec(skb, iv, vnet_hdr_len, count);
8285 +- else
8286 ++ else {
8287 + err = skb_copy_datagram_from_iovec(skb, 0, iv, vnet_hdr_len,
8288 + len);
8289 ++ if (!err && m && m->msg_control) {
8290 ++ struct ubuf_info *uarg = m->msg_control;
8291 ++ uarg->callback(uarg, false);
8292 ++ }
8293 ++ }
8294 ++
8295 + if (err)
8296 + goto err_kfree;
8297 +
8298 +diff --git a/drivers/net/tun.c b/drivers/net/tun.c
8299 +index 9c61f87..2491eb2 100644
8300 +--- a/drivers/net/tun.c
8301 ++++ b/drivers/net/tun.c
8302 +@@ -1037,6 +1037,29 @@ static int zerocopy_sg_from_iovec(struct sk_buff *skb, const struct iovec *from,
8303 + return 0;
8304 + }
8305 +
8306 ++static unsigned long iov_pages(const struct iovec *iv, int offset,
8307 ++ unsigned long nr_segs)
8308 ++{
8309 ++ unsigned long seg, base;
8310 ++ int pages = 0, len, size;
8311 ++
8312 ++ while (nr_segs && (offset >= iv->iov_len)) {
8313 ++ offset -= iv->iov_len;
8314 ++ ++iv;
8315 ++ --nr_segs;
8316 ++ }
8317 ++
8318 ++ for (seg = 0; seg < nr_segs; seg++) {
8319 ++ base = (unsigned long)iv[seg].iov_base + offset;
8320 ++ len = iv[seg].iov_len - offset;
8321 ++ size = ((base & ~PAGE_MASK) + len + ~PAGE_MASK) >> PAGE_SHIFT;
8322 ++ pages += size;
8323 ++ offset = 0;
8324 ++ }
8325 ++
8326 ++ return pages;
8327 ++}
8328 ++
8329 + /* Get packet from user space buffer */
8330 + static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
8331 + void *msg_control, const struct iovec *iv,
8332 +@@ -1044,7 +1067,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
8333 + {
8334 + struct tun_pi pi = { 0, cpu_to_be16(ETH_P_IP) };
8335 + struct sk_buff *skb;
8336 +- size_t len = total_len, align = NET_SKB_PAD;
8337 ++ size_t len = total_len, align = NET_SKB_PAD, linear;
8338 + struct virtio_net_hdr gso = { 0 };
8339 + int offset = 0;
8340 + int copylen;
8341 +@@ -1084,34 +1107,23 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
8342 + return -EINVAL;
8343 + }
8344 +
8345 +- if (msg_control)
8346 +- zerocopy = true;
8347 +-
8348 +- if (zerocopy) {
8349 +- /* Userspace may produce vectors with count greater than
8350 +- * MAX_SKB_FRAGS, so we need to linearize parts of the skb
8351 +- * to let the rest of data to be fit in the frags.
8352 +- */
8353 +- if (count > MAX_SKB_FRAGS) {
8354 +- copylen = iov_length(iv, count - MAX_SKB_FRAGS);
8355 +- if (copylen < offset)
8356 +- copylen = 0;
8357 +- else
8358 +- copylen -= offset;
8359 +- } else
8360 +- copylen = 0;
8361 +- /* There are 256 bytes to be copied in skb, so there is enough
8362 +- * room for skb expand head in case it is used.
8363 ++ if (msg_control) {
8364 ++ /* There are 256 bytes to be copied in skb, so there is
8365 ++ * enough room for skb expand head in case it is used.
8366 + * The rest of the buffer is mapped from userspace.
8367 + */
8368 +- if (copylen < gso.hdr_len)
8369 +- copylen = gso.hdr_len;
8370 +- if (!copylen)
8371 +- copylen = GOODCOPY_LEN;
8372 +- } else
8373 ++ copylen = gso.hdr_len ? gso.hdr_len : GOODCOPY_LEN;
8374 ++ linear = copylen;
8375 ++ if (iov_pages(iv, offset + copylen, count) <= MAX_SKB_FRAGS)
8376 ++ zerocopy = true;
8377 ++ }
8378 ++
8379 ++ if (!zerocopy) {
8380 + copylen = len;
8381 ++ linear = gso.hdr_len;
8382 ++ }
8383 +
8384 +- skb = tun_alloc_skb(tfile, align, copylen, gso.hdr_len, noblock);
8385 ++ skb = tun_alloc_skb(tfile, align, copylen, linear, noblock);
8386 + if (IS_ERR(skb)) {
8387 + if (PTR_ERR(skb) != -EAGAIN)
8388 + tun->dev->stats.rx_dropped++;
8389 +@@ -1120,8 +1132,13 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
8390 +
8391 + if (zerocopy)
8392 + err = zerocopy_sg_from_iovec(skb, iv, offset, count);
8393 +- else
8394 ++ else {
8395 + err = skb_copy_datagram_from_iovec(skb, 0, iv, offset, len);
8396 ++ if (!err && msg_control) {
8397 ++ struct ubuf_info *uarg = msg_control;
8398 ++ uarg->callback(uarg, false);
8399 ++ }
8400 ++ }
8401 +
8402 + if (err) {
8403 + tun->dev->stats.rx_dropped++;
8404 +diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
8405 +index c9e0038..42d670a 100644
8406 +--- a/drivers/net/virtio_net.c
8407 ++++ b/drivers/net/virtio_net.c
8408 +@@ -602,7 +602,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
8409 + container_of(napi, struct receive_queue, napi);
8410 + struct virtnet_info *vi = rq->vq->vdev->priv;
8411 + void *buf;
8412 +- unsigned int len, received = 0;
8413 ++ unsigned int r, len, received = 0;
8414 +
8415 + again:
8416 + while (received < budget &&
8417 +@@ -619,8 +619,9 @@ again:
8418 +
8419 + /* Out of packets? */
8420 + if (received < budget) {
8421 ++ r = virtqueue_enable_cb_prepare(rq->vq);
8422 + napi_complete(napi);
8423 +- if (unlikely(!virtqueue_enable_cb(rq->vq)) &&
8424 ++ if (unlikely(virtqueue_poll(rq->vq, r)) &&
8425 + napi_schedule_prep(napi)) {
8426 + virtqueue_disable_cb(rq->vq);
8427 + __napi_schedule(napi);
8428 +diff --git a/drivers/rapidio/switches/idt_gen2.c b/drivers/rapidio/switches/idt_gen2.c
8429 +index 809b7a3..5d3b0f0 100644
8430 +--- a/drivers/rapidio/switches/idt_gen2.c
8431 ++++ b/drivers/rapidio/switches/idt_gen2.c
8432 +@@ -15,6 +15,8 @@
8433 + #include <linux/rio_drv.h>
8434 + #include <linux/rio_ids.h>
8435 + #include <linux/delay.h>
8436 ++
8437 ++#include <asm/page.h>
8438 + #include "../rio.h"
8439 +
8440 + #define LOCAL_RTE_CONF_DESTID_SEL 0x010070
8441 +diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
8442 +index 3a9ddae..89178b8 100644
8443 +--- a/drivers/scsi/megaraid/megaraid_sas_base.c
8444 ++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
8445 +@@ -4852,10 +4852,12 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
8446 + sense, sense_handle);
8447 + }
8448 +
8449 +- for (i = 0; i < ioc->sge_count && kbuff_arr[i]; i++) {
8450 +- dma_free_coherent(&instance->pdev->dev,
8451 +- kern_sge32[i].length,
8452 +- kbuff_arr[i], kern_sge32[i].phys_addr);
8453 ++ for (i = 0; i < ioc->sge_count; i++) {
8454 ++ if (kbuff_arr[i])
8455 ++ dma_free_coherent(&instance->pdev->dev,
8456 ++ kern_sge32[i].length,
8457 ++ kbuff_arr[i],
8458 ++ kern_sge32[i].phys_addr);
8459 + }
8460 +
8461 + megasas_return_cmd(instance, cmd);
8462 +diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
8463 +index dcbf7c8..f8c4b85 100644
8464 +--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
8465 ++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
8466 +@@ -1273,6 +1273,7 @@ _scsih_slave_alloc(struct scsi_device *sdev)
8467 + struct MPT3SAS_DEVICE *sas_device_priv_data;
8468 + struct scsi_target *starget;
8469 + struct _raid_device *raid_device;
8470 ++ struct _sas_device *sas_device;
8471 + unsigned long flags;
8472 +
8473 + sas_device_priv_data = kzalloc(sizeof(struct scsi_device), GFP_KERNEL);
8474 +@@ -1301,6 +1302,19 @@ _scsih_slave_alloc(struct scsi_device *sdev)
8475 + spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
8476 + }
8477 +
8478 ++ if (!(sas_target_priv_data->flags & MPT_TARGET_FLAGS_VOLUME)) {
8479 ++ spin_lock_irqsave(&ioc->sas_device_lock, flags);
8480 ++ sas_device = mpt3sas_scsih_sas_device_find_by_sas_address(ioc,
8481 ++ sas_target_priv_data->sas_address);
8482 ++ if (sas_device && (sas_device->starget == NULL)) {
8483 ++ sdev_printk(KERN_INFO, sdev,
8484 ++ "%s : sas_device->starget set to starget @ %d\n",
8485 ++ __func__, __LINE__);
8486 ++ sas_device->starget = starget;
8487 ++ }
8488 ++ spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
8489 ++ }
8490 ++
8491 + return 0;
8492 + }
8493 +
8494 +@@ -6392,7 +6406,7 @@ _scsih_search_responding_sas_devices(struct MPT3SAS_ADAPTER *ioc)
8495 + handle))) {
8496 + ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
8497 + MPI2_IOCSTATUS_MASK;
8498 +- if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
8499 ++ if (ioc_status != MPI2_IOCSTATUS_SUCCESS)
8500 + break;
8501 + handle = le16_to_cpu(sas_device_pg0.DevHandle);
8502 + device_info = le32_to_cpu(sas_device_pg0.DeviceInfo);
8503 +@@ -6494,7 +6508,7 @@ _scsih_search_responding_raid_devices(struct MPT3SAS_ADAPTER *ioc)
8504 + &volume_pg1, MPI2_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE, handle))) {
8505 + ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
8506 + MPI2_IOCSTATUS_MASK;
8507 +- if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
8508 ++ if (ioc_status != MPI2_IOCSTATUS_SUCCESS)
8509 + break;
8510 + handle = le16_to_cpu(volume_pg1.DevHandle);
8511 +
8512 +@@ -6518,7 +6532,7 @@ _scsih_search_responding_raid_devices(struct MPT3SAS_ADAPTER *ioc)
8513 + phys_disk_num))) {
8514 + ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
8515 + MPI2_IOCSTATUS_MASK;
8516 +- if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
8517 ++ if (ioc_status != MPI2_IOCSTATUS_SUCCESS)
8518 + break;
8519 + phys_disk_num = pd_pg0.PhysDiskNum;
8520 + handle = le16_to_cpu(pd_pg0.DevHandle);
8521 +@@ -6597,7 +6611,7 @@ _scsih_search_responding_expanders(struct MPT3SAS_ADAPTER *ioc)
8522 +
8523 + ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
8524 + MPI2_IOCSTATUS_MASK;
8525 +- if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
8526 ++ if (ioc_status != MPI2_IOCSTATUS_SUCCESS)
8527 + break;
8528 +
8529 + handle = le16_to_cpu(expander_pg0.DevHandle);
8530 +@@ -6742,8 +6756,6 @@ _scsih_scan_for_devices_after_reset(struct MPT3SAS_ADAPTER *ioc)
8531 + MPI2_SAS_EXPAND_PGAD_FORM_GET_NEXT_HNDL, handle))) {
8532 + ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
8533 + MPI2_IOCSTATUS_MASK;
8534 +- if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
8535 +- break;
8536 + if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
8537 + pr_info(MPT3SAS_FMT "\tbreak from expander scan: " \
8538 + "ioc_status(0x%04x), loginfo(0x%08x)\n",
8539 +@@ -6787,8 +6799,6 @@ _scsih_scan_for_devices_after_reset(struct MPT3SAS_ADAPTER *ioc)
8540 + phys_disk_num))) {
8541 + ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
8542 + MPI2_IOCSTATUS_MASK;
8543 +- if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
8544 +- break;
8545 + if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
8546 + pr_info(MPT3SAS_FMT "\tbreak from phys disk scan: "\
8547 + "ioc_status(0x%04x), loginfo(0x%08x)\n",
8548 +@@ -6854,8 +6864,6 @@ _scsih_scan_for_devices_after_reset(struct MPT3SAS_ADAPTER *ioc)
8549 + &volume_pg1, MPI2_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE, handle))) {
8550 + ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
8551 + MPI2_IOCSTATUS_MASK;
8552 +- if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
8553 +- break;
8554 + if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
8555 + pr_info(MPT3SAS_FMT "\tbreak from volume scan: " \
8556 + "ioc_status(0x%04x), loginfo(0x%08x)\n",
8557 +@@ -6914,8 +6922,6 @@ _scsih_scan_for_devices_after_reset(struct MPT3SAS_ADAPTER *ioc)
8558 + handle))) {
8559 + ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
8560 + MPI2_IOCSTATUS_MASK;
8561 +- if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
8562 +- break;
8563 + if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
8564 + pr_info(MPT3SAS_FMT "\tbreak from end device scan:"\
8565 + " ioc_status(0x%04x), loginfo(0x%08x)\n",
8566 +diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
8567 +index 2c65955..c90d960 100644
8568 +--- a/drivers/usb/serial/cp210x.c
8569 ++++ b/drivers/usb/serial/cp210x.c
8570 +@@ -53,6 +53,7 @@ static const struct usb_device_id id_table[] = {
8571 + { USB_DEVICE(0x0489, 0xE000) }, /* Pirelli Broadband S.p.A, DP-L10 SIP/GSM Mobile */
8572 + { USB_DEVICE(0x0489, 0xE003) }, /* Pirelli Broadband S.p.A, DP-L10 SIP/GSM Mobile */
8573 + { USB_DEVICE(0x0745, 0x1000) }, /* CipherLab USB CCD Barcode Scanner 1000 */
8574 ++ { USB_DEVICE(0x0846, 0x1100) }, /* NetGear Managed Switch M4100 series, M5300 series, M7100 series */
8575 + { USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
8576 + { USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */
8577 + { USB_DEVICE(0x0BED, 0x1100) }, /* MEI (TM) Cashflow-SC Bill/Voucher Acceptor */
8578 +@@ -118,6 +119,8 @@ static const struct usb_device_id id_table[] = {
8579 + { USB_DEVICE(0x10C4, 0x85F8) }, /* Virtenio Preon32 */
8580 + { USB_DEVICE(0x10C4, 0x8664) }, /* AC-Services CAN-IF */
8581 + { USB_DEVICE(0x10C4, 0x8665) }, /* AC-Services OBD-IF */
8582 ++ { USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */
8583 ++ { USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */
8584 + { USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
8585 + { USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */
8586 + { USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */
8587 +@@ -148,6 +151,7 @@ static const struct usb_device_id id_table[] = {
8588 + { USB_DEVICE(0x17F4, 0xAAAA) }, /* Wavesense Jazz blood glucose meter */
8589 + { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
8590 + { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
8591 ++ { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
8592 + { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */
8593 + { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */
8594 + { USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */
8595 +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
8596 +index 5dd857d..1cf6f12 100644
8597 +--- a/drivers/usb/serial/option.c
8598 ++++ b/drivers/usb/serial/option.c
8599 +@@ -341,17 +341,12 @@ static void option_instat_callback(struct urb *urb);
8600 + #define OLIVETTI_VENDOR_ID 0x0b3c
8601 + #define OLIVETTI_PRODUCT_OLICARD100 0xc000
8602 + #define OLIVETTI_PRODUCT_OLICARD145 0xc003
8603 ++#define OLIVETTI_PRODUCT_OLICARD200 0xc005
8604 +
8605 + /* Celot products */
8606 + #define CELOT_VENDOR_ID 0x211f
8607 + #define CELOT_PRODUCT_CT680M 0x6801
8608 +
8609 +-/* ONDA Communication vendor id */
8610 +-#define ONDA_VENDOR_ID 0x1ee8
8611 +-
8612 +-/* ONDA MT825UP HSDPA 14.2 modem */
8613 +-#define ONDA_MT825UP 0x000b
8614 +-
8615 + /* Samsung products */
8616 + #define SAMSUNG_VENDOR_ID 0x04e8
8617 + #define SAMSUNG_PRODUCT_GT_B3730 0x6889
8618 +@@ -444,7 +439,8 @@ static void option_instat_callback(struct urb *urb);
8619 +
8620 + /* Hyundai Petatel Inc. products */
8621 + #define PETATEL_VENDOR_ID 0x1ff4
8622 +-#define PETATEL_PRODUCT_NP10T 0x600e
8623 ++#define PETATEL_PRODUCT_NP10T_600A 0x600a
8624 ++#define PETATEL_PRODUCT_NP10T_600E 0x600e
8625 +
8626 + /* TP-LINK Incorporated products */
8627 + #define TPLINK_VENDOR_ID 0x2357
8628 +@@ -782,6 +778,7 @@ static const struct usb_device_id option_ids[] = {
8629 + { USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC650) },
8630 + { USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC680) },
8631 + { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */
8632 ++ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)}, /* ONYX 3G device */
8633 + { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */
8634 + { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6280) }, /* BP3-USB & BP3-EXT HSDPA */
8635 + { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6008) },
8636 +@@ -817,7 +814,8 @@ static const struct usb_device_id option_ids[] = {
8637 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0017, 0xff, 0xff, 0xff),
8638 + .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
8639 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0018, 0xff, 0xff, 0xff) },
8640 +- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0019, 0xff, 0xff, 0xff) },
8641 ++ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0019, 0xff, 0xff, 0xff),
8642 ++ .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
8643 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0020, 0xff, 0xff, 0xff) },
8644 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0021, 0xff, 0xff, 0xff),
8645 + .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
8646 +@@ -1256,8 +1254,8 @@ static const struct usb_device_id option_ids[] = {
8647 +
8648 + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) },
8649 + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD145) },
8650 ++ { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200) },
8651 + { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */
8652 +- { USB_DEVICE(ONDA_VENDOR_ID, ONDA_MT825UP) }, /* ONDA MT825UP modem */
8653 + { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730 LTE USB modem.*/
8654 + { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM600) },
8655 + { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM610) },
8656 +@@ -1329,9 +1327,12 @@ static const struct usb_device_id option_ids[] = {
8657 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM2, 0xff, 0x02, 0x01) },
8658 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM2, 0xff, 0x00, 0x00) },
8659 + { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) },
8660 +- { USB_DEVICE(PETATEL_VENDOR_ID, PETATEL_PRODUCT_NP10T) },
8661 ++ { USB_DEVICE(PETATEL_VENDOR_ID, PETATEL_PRODUCT_NP10T_600A) },
8662 ++ { USB_DEVICE(PETATEL_VENDOR_ID, PETATEL_PRODUCT_NP10T_600E) },
8663 + { USB_DEVICE(TPLINK_VENDOR_ID, TPLINK_PRODUCT_MA180),
8664 + .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
8665 ++ { USB_DEVICE(TPLINK_VENDOR_ID, 0x9000), /* TP-Link MA260 */
8666 ++ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
8667 + { USB_DEVICE(CHANGHONG_VENDOR_ID, CHANGHONG_PRODUCT_CH690) },
8668 + { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d01, 0xff, 0x02, 0x01) }, /* D-Link DWM-156 (variant) */
8669 + { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d01, 0xff, 0x00, 0x00) }, /* D-Link DWM-156 (variant) */
8670 +@@ -1339,6 +1340,8 @@ static const struct usb_device_id option_ids[] = {
8671 + { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d02, 0xff, 0x00, 0x00) },
8672 + { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x02, 0x01) },
8673 + { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x00, 0x00) },
8674 ++ { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
8675 ++ { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
8676 + { } /* Terminating entry */
8677 + };
8678 + MODULE_DEVICE_TABLE(usb, option_ids);
8679 +diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
8680 +index f80d3dd..8ca5ac7 100644
8681 +--- a/drivers/vhost/net.c
8682 ++++ b/drivers/vhost/net.c
8683 +@@ -150,6 +150,11 @@ static void vhost_net_ubuf_put_and_wait(struct vhost_net_ubuf_ref *ubufs)
8684 + {
8685 + kref_put(&ubufs->kref, vhost_net_zerocopy_done_signal);
8686 + wait_event(ubufs->wait, !atomic_read(&ubufs->kref.refcount));
8687 ++}
8688 ++
8689 ++static void vhost_net_ubuf_put_wait_and_free(struct vhost_net_ubuf_ref *ubufs)
8690 ++{
8691 ++ vhost_net_ubuf_put_and_wait(ubufs);
8692 + kfree(ubufs);
8693 + }
8694 +
8695 +@@ -948,7 +953,7 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
8696 + mutex_unlock(&vq->mutex);
8697 +
8698 + if (oldubufs) {
8699 +- vhost_net_ubuf_put_and_wait(oldubufs);
8700 ++ vhost_net_ubuf_put_wait_and_free(oldubufs);
8701 + mutex_lock(&vq->mutex);
8702 + vhost_zerocopy_signal_used(n, vq);
8703 + mutex_unlock(&vq->mutex);
8704 +@@ -966,7 +971,7 @@ err_used:
8705 + rcu_assign_pointer(vq->private_data, oldsock);
8706 + vhost_net_enable_vq(n, vq);
8707 + if (ubufs)
8708 +- vhost_net_ubuf_put_and_wait(ubufs);
8709 ++ vhost_net_ubuf_put_wait_and_free(ubufs);
8710 + err_ubufs:
8711 + fput(sock->file);
8712 + err_vq:
8713 +diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
8714 +index 5217baf..37d58f8 100644
8715 +--- a/drivers/virtio/virtio_ring.c
8716 ++++ b/drivers/virtio/virtio_ring.c
8717 +@@ -607,19 +607,21 @@ void virtqueue_disable_cb(struct virtqueue *_vq)
8718 + EXPORT_SYMBOL_GPL(virtqueue_disable_cb);
8719 +
8720 + /**
8721 +- * virtqueue_enable_cb - restart callbacks after disable_cb.
8722 ++ * virtqueue_enable_cb_prepare - restart callbacks after disable_cb
8723 + * @vq: the struct virtqueue we're talking about.
8724 + *
8725 +- * This re-enables callbacks; it returns "false" if there are pending
8726 +- * buffers in the queue, to detect a possible race between the driver
8727 +- * checking for more work, and enabling callbacks.
8728 ++ * This re-enables callbacks; it returns current queue state
8729 ++ * in an opaque unsigned value. This value should be later tested by
8730 ++ * virtqueue_poll, to detect a possible race between the driver checking for
8731 ++ * more work, and enabling callbacks.
8732 + *
8733 + * Caller must ensure we don't call this with other virtqueue
8734 + * operations at the same time (except where noted).
8735 + */
8736 +-bool virtqueue_enable_cb(struct virtqueue *_vq)
8737 ++unsigned virtqueue_enable_cb_prepare(struct virtqueue *_vq)
8738 + {
8739 + struct vring_virtqueue *vq = to_vvq(_vq);
8740 ++ u16 last_used_idx;
8741 +
8742 + START_USE(vq);
8743 +
8744 +@@ -629,15 +631,45 @@ bool virtqueue_enable_cb(struct virtqueue *_vq)
8745 + * either clear the flags bit or point the event index at the next
8746 + * entry. Always do both to keep code simple. */
8747 + vq->vring.avail->flags &= ~VRING_AVAIL_F_NO_INTERRUPT;
8748 +- vring_used_event(&vq->vring) = vq->last_used_idx;
8749 ++ vring_used_event(&vq->vring) = last_used_idx = vq->last_used_idx;
8750 ++ END_USE(vq);
8751 ++ return last_used_idx;
8752 ++}
8753 ++EXPORT_SYMBOL_GPL(virtqueue_enable_cb_prepare);
8754 ++
8755 ++/**
8756 ++ * virtqueue_poll - query pending used buffers
8757 ++ * @vq: the struct virtqueue we're talking about.
8758 ++ * @last_used_idx: virtqueue state (from call to virtqueue_enable_cb_prepare).
8759 ++ *
8760 ++ * Returns "true" if there are pending used buffers in the queue.
8761 ++ *
8762 ++ * This does not need to be serialized.
8763 ++ */
8764 ++bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
8765 ++{
8766 ++ struct vring_virtqueue *vq = to_vvq(_vq);
8767 ++
8768 + virtio_mb(vq->weak_barriers);
8769 +- if (unlikely(more_used(vq))) {
8770 +- END_USE(vq);
8771 +- return false;
8772 +- }
8773 ++ return (u16)last_used_idx != vq->vring.used->idx;
8774 ++}
8775 ++EXPORT_SYMBOL_GPL(virtqueue_poll);
8776 +
8777 +- END_USE(vq);
8778 +- return true;
8779 ++/**
8780 ++ * virtqueue_enable_cb - restart callbacks after disable_cb.
8781 ++ * @vq: the struct virtqueue we're talking about.
8782 ++ *
8783 ++ * This re-enables callbacks; it returns "false" if there are pending
8784 ++ * buffers in the queue, to detect a possible race between the driver
8785 ++ * checking for more work, and enabling callbacks.
8786 ++ *
8787 ++ * Caller must ensure we don't call this with other virtqueue
8788 ++ * operations at the same time (except where noted).
8789 ++ */
8790 ++bool virtqueue_enable_cb(struct virtqueue *_vq)
8791 ++{
8792 ++ unsigned last_used_idx = virtqueue_enable_cb_prepare(_vq);
8793 ++ return !virtqueue_poll(_vq, last_used_idx);
8794 + }
8795 + EXPORT_SYMBOL_GPL(virtqueue_enable_cb);
8796 +
8797 +diff --git a/fs/block_dev.c b/fs/block_dev.c
8798 +index 2091db8..85f5c85 100644
8799 +--- a/fs/block_dev.c
8800 ++++ b/fs/block_dev.c
8801 +@@ -58,17 +58,24 @@ static void bdev_inode_switch_bdi(struct inode *inode,
8802 + struct backing_dev_info *dst)
8803 + {
8804 + struct backing_dev_info *old = inode->i_data.backing_dev_info;
8805 ++ bool wakeup_bdi = false;
8806 +
8807 + if (unlikely(dst == old)) /* deadlock avoidance */
8808 + return;
8809 + bdi_lock_two(&old->wb, &dst->wb);
8810 + spin_lock(&inode->i_lock);
8811 + inode->i_data.backing_dev_info = dst;
8812 +- if (inode->i_state & I_DIRTY)
8813 ++ if (inode->i_state & I_DIRTY) {
8814 ++ if (bdi_cap_writeback_dirty(dst) && !wb_has_dirty_io(&dst->wb))
8815 ++ wakeup_bdi = true;
8816 + list_move(&inode->i_wb_list, &dst->wb.b_dirty);
8817 ++ }
8818 + spin_unlock(&inode->i_lock);
8819 + spin_unlock(&old->wb.list_lock);
8820 + spin_unlock(&dst->wb.list_lock);
8821 ++
8822 ++ if (wakeup_bdi)
8823 ++ bdi_wakeup_thread_delayed(dst);
8824 + }
8825 +
8826 + /* Kill _all_ buffers and pagecache , dirty or not.. */
8827 +diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
8828 +index e49da58..fddf3d9 100644
8829 +--- a/fs/ext4/extents.c
8830 ++++ b/fs/ext4/extents.c
8831 +@@ -4386,9 +4386,20 @@ void ext4_ext_truncate(handle_t *handle, struct inode *inode)
8832 +
8833 + last_block = (inode->i_size + sb->s_blocksize - 1)
8834 + >> EXT4_BLOCK_SIZE_BITS(sb);
8835 ++retry:
8836 + err = ext4_es_remove_extent(inode, last_block,
8837 + EXT_MAX_BLOCKS - last_block);
8838 ++ if (err == ENOMEM) {
8839 ++ cond_resched();
8840 ++ congestion_wait(BLK_RW_ASYNC, HZ/50);
8841 ++ goto retry;
8842 ++ }
8843 ++ if (err) {
8844 ++ ext4_std_error(inode->i_sb, err);
8845 ++ return;
8846 ++ }
8847 + err = ext4_ext_remove_space(inode, last_block, EXT_MAX_BLOCKS - 1);
8848 ++ ext4_std_error(inode->i_sb, err);
8849 + }
8850 +
8851 + static void ext4_falloc_update_inode(struct inode *inode,
8852 +diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
8853 +index f3f783d..5b12746 100644
8854 +--- a/fs/fuse/dir.c
8855 ++++ b/fs/fuse/dir.c
8856 +@@ -1225,13 +1225,29 @@ static int fuse_direntplus_link(struct file *file,
8857 + if (name.name[1] == '.' && name.len == 2)
8858 + return 0;
8859 + }
8860 ++
8861 ++ if (invalid_nodeid(o->nodeid))
8862 ++ return -EIO;
8863 ++ if (!fuse_valid_type(o->attr.mode))
8864 ++ return -EIO;
8865 ++
8866 + fc = get_fuse_conn(dir);
8867 +
8868 + name.hash = full_name_hash(name.name, name.len);
8869 + dentry = d_lookup(parent, &name);
8870 +- if (dentry && dentry->d_inode) {
8871 ++ if (dentry) {
8872 + inode = dentry->d_inode;
8873 +- if (get_node_id(inode) == o->nodeid) {
8874 ++ if (!inode) {
8875 ++ d_drop(dentry);
8876 ++ } else if (get_node_id(inode) != o->nodeid ||
8877 ++ ((o->attr.mode ^ inode->i_mode) & S_IFMT)) {
8878 ++ err = d_invalidate(dentry);
8879 ++ if (err)
8880 ++ goto out;
8881 ++ } else if (is_bad_inode(inode)) {
8882 ++ err = -EIO;
8883 ++ goto out;
8884 ++ } else {
8885 + struct fuse_inode *fi;
8886 + fi = get_fuse_inode(inode);
8887 + spin_lock(&fc->lock);
8888 +@@ -1244,9 +1260,6 @@ static int fuse_direntplus_link(struct file *file,
8889 + */
8890 + goto found;
8891 + }
8892 +- err = d_invalidate(dentry);
8893 +- if (err)
8894 +- goto out;
8895 + dput(dentry);
8896 + dentry = NULL;
8897 + }
8898 +@@ -1261,10 +1274,19 @@ static int fuse_direntplus_link(struct file *file,
8899 + if (!inode)
8900 + goto out;
8901 +
8902 +- alias = d_materialise_unique(dentry, inode);
8903 +- err = PTR_ERR(alias);
8904 +- if (IS_ERR(alias))
8905 +- goto out;
8906 ++ if (S_ISDIR(inode->i_mode)) {
8907 ++ mutex_lock(&fc->inst_mutex);
8908 ++ alias = fuse_d_add_directory(dentry, inode);
8909 ++ mutex_unlock(&fc->inst_mutex);
8910 ++ err = PTR_ERR(alias);
8911 ++ if (IS_ERR(alias)) {
8912 ++ iput(inode);
8913 ++ goto out;
8914 ++ }
8915 ++ } else {
8916 ++ alias = d_splice_alias(inode, dentry);
8917 ++ }
8918 ++
8919 + if (alias) {
8920 + dput(dentry);
8921 + dentry = alias;
8922 +diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
8923 +index e703318..8ebd3f5 100644
8924 +--- a/fs/lockd/svclock.c
8925 ++++ b/fs/lockd/svclock.c
8926 +@@ -939,6 +939,7 @@ nlmsvc_retry_blocked(void)
8927 + unsigned long timeout = MAX_SCHEDULE_TIMEOUT;
8928 + struct nlm_block *block;
8929 +
8930 ++ spin_lock(&nlm_blocked_lock);
8931 + while (!list_empty(&nlm_blocked) && !kthread_should_stop()) {
8932 + block = list_entry(nlm_blocked.next, struct nlm_block, b_list);
8933 +
8934 +@@ -948,6 +949,7 @@ nlmsvc_retry_blocked(void)
8935 + timeout = block->b_when - jiffies;
8936 + break;
8937 + }
8938 ++ spin_unlock(&nlm_blocked_lock);
8939 +
8940 + dprintk("nlmsvc_retry_blocked(%p, when=%ld)\n",
8941 + block, block->b_when);
8942 +@@ -957,7 +959,9 @@ nlmsvc_retry_blocked(void)
8943 + retry_deferred_block(block);
8944 + } else
8945 + nlmsvc_grant_blocked(block);
8946 ++ spin_lock(&nlm_blocked_lock);
8947 + }
8948 ++ spin_unlock(&nlm_blocked_lock);
8949 +
8950 + return timeout;
8951 + }
8952 +diff --git a/include/linux/edac.h b/include/linux/edac.h
8953 +index 0b76327..5c6d7fb 100644
8954 +--- a/include/linux/edac.h
8955 ++++ b/include/linux/edac.h
8956 +@@ -622,7 +622,7 @@ struct edac_raw_error_desc {
8957 + */
8958 + struct mem_ctl_info {
8959 + struct device dev;
8960 +- struct bus_type bus;
8961 ++ struct bus_type *bus;
8962 +
8963 + struct list_head link; /* for global list of mem_ctl_info structs */
8964 +
8965 +@@ -742,4 +742,9 @@ struct mem_ctl_info {
8966 + #endif
8967 + };
8968 +
8969 ++/*
8970 ++ * Maximum number of memory controllers in the coherent fabric.
8971 ++ */
8972 ++#define EDAC_MAX_MCS 16
8973 ++
8974 + #endif
8975 +diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
8976 +index 637fa71d..0b34988 100644
8977 +--- a/include/linux/if_vlan.h
8978 ++++ b/include/linux/if_vlan.h
8979 +@@ -79,9 +79,8 @@ static inline int is_vlan_dev(struct net_device *dev)
8980 + }
8981 +
8982 + #define vlan_tx_tag_present(__skb) ((__skb)->vlan_tci & VLAN_TAG_PRESENT)
8983 +-#define vlan_tx_nonzero_tag_present(__skb) \
8984 +- (vlan_tx_tag_present(__skb) && ((__skb)->vlan_tci & VLAN_VID_MASK))
8985 + #define vlan_tx_tag_get(__skb) ((__skb)->vlan_tci & ~VLAN_TAG_PRESENT)
8986 ++#define vlan_tx_tag_get_id(__skb) ((__skb)->vlan_tci & VLAN_VID_MASK)
8987 +
8988 + #if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
8989 +
8990 +diff --git a/include/linux/virtio.h b/include/linux/virtio.h
8991 +index 9ff8645..72398ee 100644
8992 +--- a/include/linux/virtio.h
8993 ++++ b/include/linux/virtio.h
8994 +@@ -70,6 +70,10 @@ void virtqueue_disable_cb(struct virtqueue *vq);
8995 +
8996 + bool virtqueue_enable_cb(struct virtqueue *vq);
8997 +
8998 ++unsigned virtqueue_enable_cb_prepare(struct virtqueue *vq);
8999 ++
9000 ++bool virtqueue_poll(struct virtqueue *vq, unsigned);
9001 ++
9002 + bool virtqueue_enable_cb_delayed(struct virtqueue *vq);
9003 +
9004 + void *virtqueue_detach_unused_buf(struct virtqueue *vq);
9005 +diff --git a/include/net/addrconf.h b/include/net/addrconf.h
9006 +index 21f70270..01b1a1a 100644
9007 +--- a/include/net/addrconf.h
9008 ++++ b/include/net/addrconf.h
9009 +@@ -86,6 +86,9 @@ extern int ipv6_dev_get_saddr(struct net *net,
9010 + const struct in6_addr *daddr,
9011 + unsigned int srcprefs,
9012 + struct in6_addr *saddr);
9013 ++extern int __ipv6_get_lladdr(struct inet6_dev *idev,
9014 ++ struct in6_addr *addr,
9015 ++ unsigned char banned_flags);
9016 + extern int ipv6_get_lladdr(struct net_device *dev,
9017 + struct in6_addr *addr,
9018 + unsigned char banned_flags);
9019 +diff --git a/include/net/udp.h b/include/net/udp.h
9020 +index 065f379..ad99eed 100644
9021 +--- a/include/net/udp.h
9022 ++++ b/include/net/udp.h
9023 +@@ -181,6 +181,7 @@ extern int udp_get_port(struct sock *sk, unsigned short snum,
9024 + extern void udp_err(struct sk_buff *, u32);
9025 + extern int udp_sendmsg(struct kiocb *iocb, struct sock *sk,
9026 + struct msghdr *msg, size_t len);
9027 ++extern int udp_push_pending_frames(struct sock *sk);
9028 + extern void udp_flush_pending_frames(struct sock *sk);
9029 + extern int udp_rcv(struct sk_buff *skb);
9030 + extern int udp_ioctl(struct sock *sk, int cmd, unsigned long arg);
9031 +diff --git a/include/uapi/linux/if_pppox.h b/include/uapi/linux/if_pppox.h
9032 +index 0b46fd5..e36a4ae 100644
9033 +--- a/include/uapi/linux/if_pppox.h
9034 ++++ b/include/uapi/linux/if_pppox.h
9035 +@@ -135,11 +135,11 @@ struct pppoe_tag {
9036 +
9037 + struct pppoe_hdr {
9038 + #if defined(__LITTLE_ENDIAN_BITFIELD)
9039 +- __u8 ver : 4;
9040 + __u8 type : 4;
9041 ++ __u8 ver : 4;
9042 + #elif defined(__BIG_ENDIAN_BITFIELD)
9043 +- __u8 type : 4;
9044 + __u8 ver : 4;
9045 ++ __u8 type : 4;
9046 + #else
9047 + #error "Please fix <asm/byteorder.h>"
9048 + #endif
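Illustrative note (not part of the patch): the swap above only reorders the two bitfields so the struct matches the on-wire layout; PPPoE fixes both VER and TYPE to 1 (RFC 2516), so the first header byte is 0x11 under either ordering, which is why the old declaration went unnoticed on the wire. A tiny host-side check of that observation:

    #include <stdio.h>

    int main(void)
    {
            unsigned char ver = 1, type = 1;    /* RFC 2516 values */

            printf("ver in high nibble:  0x%02x\n", (ver << 4) | type);   /* 0x11 */
            printf("type in high nibble: 0x%02x\n", (type << 4) | ver);   /* 0x11 */
            return 0;
    }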
9049 +diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
9050 +index fd4b13b..2288fbd 100644
9051 +--- a/kernel/hrtimer.c
9052 ++++ b/kernel/hrtimer.c
9053 +@@ -721,17 +721,20 @@ static int hrtimer_switch_to_hres(void)
9054 + return 1;
9055 + }
9056 +
9057 ++static void clock_was_set_work(struct work_struct *work)
9058 ++{
9059 ++ clock_was_set();
9060 ++}
9061 ++
9062 ++static DECLARE_WORK(hrtimer_work, clock_was_set_work);
9063 ++
9064 + /*
9065 +- * Called from timekeeping code to reprogramm the hrtimer interrupt
9066 +- * device. If called from the timer interrupt context we defer it to
9067 +- * softirq context.
9068 ++ * Called from timekeeping and resume code to reprogramm the hrtimer
9069 ++ * interrupt device on all cpus.
9070 + */
9071 + void clock_was_set_delayed(void)
9072 + {
9073 +- struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
9074 +-
9075 +- cpu_base->clock_was_set = 1;
9076 +- __raise_softirq_irqoff(HRTIMER_SOFTIRQ);
9077 ++ schedule_work(&hrtimer_work);
9078 + }
9079 +
9080 + #else
9081 +@@ -780,8 +783,10 @@ void hrtimers_resume(void)
9082 + WARN_ONCE(!irqs_disabled(),
9083 + KERN_INFO "hrtimers_resume() called with IRQs enabled!");
9084 +
9085 ++ /* Retrigger on the local CPU */
9086 + retrigger_next_event(NULL);
9087 +- timerfd_clock_was_set();
9088 ++ /* And schedule a retrigger for all others */
9089 ++ clock_was_set_delayed();
9090 + }
9091 +
9092 + static inline void timer_stats_hrtimer_set_start_info(struct hrtimer *timer)
9093 +@@ -1432,13 +1437,6 @@ void hrtimer_peek_ahead_timers(void)
9094 +
9095 + static void run_hrtimer_softirq(struct softirq_action *h)
9096 + {
9097 +- struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
9098 +-
9099 +- if (cpu_base->clock_was_set) {
9100 +- cpu_base->clock_was_set = 0;
9101 +- clock_was_set();
9102 +- }
9103 +-
9104 + hrtimer_peek_ahead_timers();
9105 + }
9106 +
9107 +diff --git a/kernel/power/autosleep.c b/kernel/power/autosleep.c
9108 +index c6422ff..9012ecf 100644
9109 +--- a/kernel/power/autosleep.c
9110 ++++ b/kernel/power/autosleep.c
9111 +@@ -32,7 +32,8 @@ static void try_to_suspend(struct work_struct *work)
9112 +
9113 + mutex_lock(&autosleep_lock);
9114 +
9115 +- if (!pm_save_wakeup_count(initial_count)) {
9116 ++ if (!pm_save_wakeup_count(initial_count) ||
9117 ++ system_state != SYSTEM_RUNNING) {
9118 + mutex_unlock(&autosleep_lock);
9119 + goto out;
9120 + }
9121 +diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
9122 +index 566cf2b..74fdc5c 100644
9123 +--- a/lib/Kconfig.debug
9124 ++++ b/lib/Kconfig.debug
9125 +@@ -1272,7 +1272,7 @@ config FAULT_INJECTION_STACKTRACE_FILTER
9126 + depends on FAULT_INJECTION_DEBUG_FS && STACKTRACE_SUPPORT
9127 + depends on !X86_64
9128 + select STACKTRACE
9129 +- select FRAME_POINTER if !PPC && !S390 && !MICROBLAZE && !ARM_UNWIND
9130 ++ select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM_UNWIND
9131 + help
9132 + Provide stacktrace filter for fault-injection capabilities
9133 +
9134 +diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c
9135 +index 8a15eaa..4a78c4d 100644
9136 +--- a/net/8021q/vlan_core.c
9137 ++++ b/net/8021q/vlan_core.c
9138 +@@ -9,7 +9,7 @@ bool vlan_do_receive(struct sk_buff **skbp)
9139 + {
9140 + struct sk_buff *skb = *skbp;
9141 + __be16 vlan_proto = skb->vlan_proto;
9142 +- u16 vlan_id = skb->vlan_tci & VLAN_VID_MASK;
9143 ++ u16 vlan_id = vlan_tx_tag_get_id(skb);
9144 + struct net_device *vlan_dev;
9145 + struct vlan_pcpu_stats *rx_stats;
9146 +
9147 +diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
9148 +index 3a8c8fd..1cd3d2a 100644
9149 +--- a/net/8021q/vlan_dev.c
9150 ++++ b/net/8021q/vlan_dev.c
9151 +@@ -73,6 +73,8 @@ vlan_dev_get_egress_qos_mask(struct net_device *dev, struct sk_buff *skb)
9152 + {
9153 + struct vlan_priority_tci_mapping *mp;
9154 +
9155 ++ smp_rmb(); /* coupled with smp_wmb() in vlan_dev_set_egress_priority() */
9156 ++
9157 + mp = vlan_dev_priv(dev)->egress_priority_map[(skb->priority & 0xF)];
9158 + while (mp) {
9159 + if (mp->priority == skb->priority) {
9160 +@@ -249,6 +251,11 @@ int vlan_dev_set_egress_priority(const struct net_device *dev,
9161 + np->next = mp;
9162 + np->priority = skb_prio;
9163 + np->vlan_qos = vlan_qos;
9164 ++ /* Before inserting this element in hash table, make sure all its fields
9165 ++ * are committed to memory.
9166 ++ * coupled with smp_rmb() in vlan_dev_get_egress_qos_mask()
9167 ++ */
9168 ++ smp_wmb();
9169 + vlan->egress_priority_map[skb_prio & 0xF] = np;
9170 + if (vlan_qos)
9171 + vlan->nr_egress_mappings++;
9172 +diff --git a/net/9p/trans_common.c b/net/9p/trans_common.c
9173 +index de8df95..2ee3879 100644
9174 +--- a/net/9p/trans_common.c
9175 ++++ b/net/9p/trans_common.c
9176 +@@ -24,11 +24,11 @@
9177 + */
9178 + void p9_release_pages(struct page **pages, int nr_pages)
9179 + {
9180 +- int i = 0;
9181 +- while (pages[i] && nr_pages--) {
9182 +- put_page(pages[i]);
9183 +- i++;
9184 +- }
9185 ++ int i;
9186 ++
9187 ++ for (i = 0; i < nr_pages; i++)
9188 ++ if (pages[i])
9189 ++ put_page(pages[i]);
9190 + }
9191 + EXPORT_SYMBOL(p9_release_pages);
9192 +
9193 +diff --git a/net/core/dev.c b/net/core/dev.c
9194 +index faebb39..7ddbb31 100644
9195 +--- a/net/core/dev.c
9196 ++++ b/net/core/dev.c
9197 +@@ -3513,8 +3513,15 @@ ncls:
9198 + }
9199 + }
9200 +
9201 +- if (vlan_tx_nonzero_tag_present(skb))
9202 +- skb->pkt_type = PACKET_OTHERHOST;
9203 ++ if (unlikely(vlan_tx_tag_present(skb))) {
9204 ++ if (vlan_tx_tag_get_id(skb))
9205 ++ skb->pkt_type = PACKET_OTHERHOST;
9206 ++ /* Note: we might in the future use prio bits
9207 ++ * and set skb->priority like in vlan_do_receive()
9208 ++ * For the time being, just ignore Priority Code Point
9209 ++ */
9210 ++ skb->vlan_tci = 0;
9211 ++ }
9212 +
9213 + /* deliver only exact match when indicated */
9214 + null_or_dev = deliver_exact ? skb->dev : NULL;
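Illustrative sketch (not part of the patch): together with vlan_tx_tag_get_id() from the if_vlan.h hunk earlier, the hunk above stops marking priority-tagged frames (VLAN ID 0, only the PCP bits set) as PACKET_OTHERHOST and clears the tag once it has been consumed. A standalone view of the vlan_tci layout these macros rely on; the constants follow the usual 802.1Q split and the sample value is made up:

    #include <stdio.h>
    #include <stdint.h>

    #define VLAN_PRIO_MASK   0xe000   /* PCP (priority code point) bits */
    #define VLAN_TAG_PRESENT 0x1000   /* CFI bit, reused in-kernel as "tag present" */
    #define VLAN_VID_MASK    0x0fff   /* VLAN ID */

    int main(void)
    {
            uint16_t vlan_tci = VLAN_TAG_PRESENT | (5u << 13);  /* PCP 5, VID 0 */

            printf("tag present: %d\n", !!(vlan_tci & VLAN_TAG_PRESENT));  /* 1 */
            printf("vlan id:     %u\n", vlan_tci & VLAN_VID_MASK);         /* 0: deliver locally */
            return 0;
    }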
9215 +diff --git a/net/core/neighbour.c b/net/core/neighbour.c
9216 +index 5c56b21..ce90b02 100644
9217 +--- a/net/core/neighbour.c
9218 ++++ b/net/core/neighbour.c
9219 +@@ -231,7 +231,7 @@ static void neigh_flush_dev(struct neigh_table *tbl, struct net_device *dev)
9220 + we must kill timers etc. and move
9221 + it to safe state.
9222 + */
9223 +- skb_queue_purge(&n->arp_queue);
9224 ++ __skb_queue_purge(&n->arp_queue);
9225 + n->arp_queue_len_bytes = 0;
9226 + n->output = neigh_blackhole;
9227 + if (n->nud_state & NUD_VALID)
9228 +@@ -286,7 +286,7 @@ static struct neighbour *neigh_alloc(struct neigh_table *tbl, struct net_device
9229 + if (!n)
9230 + goto out_entries;
9231 +
9232 +- skb_queue_head_init(&n->arp_queue);
9233 ++ __skb_queue_head_init(&n->arp_queue);
9234 + rwlock_init(&n->lock);
9235 + seqlock_init(&n->ha_lock);
9236 + n->updated = n->used = now;
9237 +@@ -708,7 +708,9 @@ void neigh_destroy(struct neighbour *neigh)
9238 + if (neigh_del_timer(neigh))
9239 + pr_warn("Impossible event\n");
9240 +
9241 +- skb_queue_purge(&neigh->arp_queue);
9242 ++ write_lock_bh(&neigh->lock);
9243 ++ __skb_queue_purge(&neigh->arp_queue);
9244 ++ write_unlock_bh(&neigh->lock);
9245 + neigh->arp_queue_len_bytes = 0;
9246 +
9247 + if (dev->netdev_ops->ndo_neigh_destroy)
9248 +@@ -858,7 +860,7 @@ static void neigh_invalidate(struct neighbour *neigh)
9249 + neigh->ops->error_report(neigh, skb);
9250 + write_lock(&neigh->lock);
9251 + }
9252 +- skb_queue_purge(&neigh->arp_queue);
9253 ++ __skb_queue_purge(&neigh->arp_queue);
9254 + neigh->arp_queue_len_bytes = 0;
9255 + }
9256 +
9257 +@@ -1210,7 +1212,7 @@ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new,
9258 +
9259 + write_lock_bh(&neigh->lock);
9260 + }
9261 +- skb_queue_purge(&neigh->arp_queue);
9262 ++ __skb_queue_purge(&neigh->arp_queue);
9263 + neigh->arp_queue_len_bytes = 0;
9264 + }
9265 + out:
9266 +diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
9267 +index 2a83591..855004f 100644
9268 +--- a/net/ipv4/ip_gre.c
9269 ++++ b/net/ipv4/ip_gre.c
9270 +@@ -503,10 +503,11 @@ static int ipgre_tunnel_ioctl(struct net_device *dev,
9271 +
9272 + if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, sizeof(p)))
9273 + return -EFAULT;
9274 +- if (p.iph.version != 4 || p.iph.protocol != IPPROTO_GRE ||
9275 +- p.iph.ihl != 5 || (p.iph.frag_off&htons(~IP_DF)) ||
9276 +- ((p.i_flags|p.o_flags)&(GRE_VERSION|GRE_ROUTING))) {
9277 +- return -EINVAL;
9278 ++ if (cmd == SIOCADDTUNNEL || cmd == SIOCCHGTUNNEL) {
9279 ++ if (p.iph.version != 4 || p.iph.protocol != IPPROTO_GRE ||
9280 ++ p.iph.ihl != 5 || (p.iph.frag_off&htons(~IP_DF)) ||
9281 ++ ((p.i_flags|p.o_flags)&(GRE_VERSION|GRE_ROUTING)))
9282 ++ return -EINVAL;
9283 + }
9284 + p.i_flags = gre_flags_to_tnl_flags(p.i_flags);
9285 + p.o_flags = gre_flags_to_tnl_flags(p.o_flags);
9286 +diff --git a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c
9287 +index 3da817b..15e3e68 100644
9288 +--- a/net/ipv4/ip_input.c
9289 ++++ b/net/ipv4/ip_input.c
9290 +@@ -190,10 +190,7 @@ static int ip_local_deliver_finish(struct sk_buff *skb)
9291 + {
9292 + struct net *net = dev_net(skb->dev);
9293 +
9294 +- __skb_pull(skb, ip_hdrlen(skb));
9295 +-
9296 +- /* Point into the IP datagram, just past the header. */
9297 +- skb_reset_transport_header(skb);
9298 ++ __skb_pull(skb, skb_network_header_len(skb));
9299 +
9300 + rcu_read_lock();
9301 + {
9302 +@@ -437,6 +434,8 @@ int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
9303 + goto drop;
9304 + }
9305 +
9306 ++ skb->transport_header = skb->network_header + iph->ihl*4;
9307 ++
9308 + /* Remove any debris in the socket control block */
9309 + memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
9310 +
9311 +diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
9312 +index 7fa8f08..cbfc37f 100644
9313 +--- a/net/ipv4/ip_tunnel.c
9314 ++++ b/net/ipv4/ip_tunnel.c
9315 +@@ -486,6 +486,53 @@ drop:
9316 + }
9317 + EXPORT_SYMBOL_GPL(ip_tunnel_rcv);
9318 +
9319 ++static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
9320 ++ struct rtable *rt, __be16 df)
9321 ++{
9322 ++ struct ip_tunnel *tunnel = netdev_priv(dev);
9323 ++ int pkt_size = skb->len - tunnel->hlen - dev->hard_header_len;
9324 ++ int mtu;
9325 ++
9326 ++ if (df)
9327 ++ mtu = dst_mtu(&rt->dst) - dev->hard_header_len
9328 ++ - sizeof(struct iphdr) - tunnel->hlen;
9329 ++ else
9330 ++ mtu = skb_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
9331 ++
9332 ++ if (skb_dst(skb))
9333 ++ skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu);
9334 ++
9335 ++ if (skb->protocol == htons(ETH_P_IP)) {
9336 ++ if (!skb_is_gso(skb) &&
9337 ++ (df & htons(IP_DF)) && mtu < pkt_size) {
9338 ++ icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
9339 ++ return -E2BIG;
9340 ++ }
9341 ++ }
9342 ++#if IS_ENABLED(CONFIG_IPV6)
9343 ++ else if (skb->protocol == htons(ETH_P_IPV6)) {
9344 ++ struct rt6_info *rt6 = (struct rt6_info *)skb_dst(skb);
9345 ++
9346 ++ if (rt6 && mtu < dst_mtu(skb_dst(skb)) &&
9347 ++ mtu >= IPV6_MIN_MTU) {
9348 ++ if ((tunnel->parms.iph.daddr &&
9349 ++ !ipv4_is_multicast(tunnel->parms.iph.daddr)) ||
9350 ++ rt6->rt6i_dst.plen == 128) {
9351 ++ rt6->rt6i_flags |= RTF_MODIFIED;
9352 ++ dst_metric_set(skb_dst(skb), RTAX_MTU, mtu);
9353 ++ }
9354 ++ }
9355 ++
9356 ++ if (!skb_is_gso(skb) && mtu >= IPV6_MIN_MTU &&
9357 ++ mtu < pkt_size) {
9358 ++ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
9359 ++ return -E2BIG;
9360 ++ }
9361 ++ }
9362 ++#endif
9363 ++ return 0;
9364 ++}
9365 ++
9366 + void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
9367 + const struct iphdr *tnl_params)
9368 + {
9369 +@@ -499,7 +546,6 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
9370 + struct net_device *tdev; /* Device to other host */
9371 + unsigned int max_headroom; /* The extra header space needed */
9372 + __be32 dst;
9373 +- int mtu;
9374 +
9375 + inner_iph = (const struct iphdr *)skb_inner_network_header(skb);
9376 +
9377 +@@ -579,50 +625,11 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
9378 + goto tx_error;
9379 + }
9380 +
9381 +- df = tnl_params->frag_off;
9382 +
9383 +- if (df)
9384 +- mtu = dst_mtu(&rt->dst) - dev->hard_header_len
9385 +- - sizeof(struct iphdr);
9386 +- else
9387 +- mtu = skb_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
9388 +-
9389 +- if (skb_dst(skb))
9390 +- skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu);
9391 +-
9392 +- if (skb->protocol == htons(ETH_P_IP)) {
9393 +- df |= (inner_iph->frag_off&htons(IP_DF));
9394 +-
9395 +- if (!skb_is_gso(skb) &&
9396 +- (inner_iph->frag_off&htons(IP_DF)) &&
9397 +- mtu < ntohs(inner_iph->tot_len)) {
9398 +- icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
9399 +- ip_rt_put(rt);
9400 +- goto tx_error;
9401 +- }
9402 +- }
9403 +-#if IS_ENABLED(CONFIG_IPV6)
9404 +- else if (skb->protocol == htons(ETH_P_IPV6)) {
9405 +- struct rt6_info *rt6 = (struct rt6_info *)skb_dst(skb);
9406 +-
9407 +- if (rt6 && mtu < dst_mtu(skb_dst(skb)) &&
9408 +- mtu >= IPV6_MIN_MTU) {
9409 +- if ((tunnel->parms.iph.daddr &&
9410 +- !ipv4_is_multicast(tunnel->parms.iph.daddr)) ||
9411 +- rt6->rt6i_dst.plen == 128) {
9412 +- rt6->rt6i_flags |= RTF_MODIFIED;
9413 +- dst_metric_set(skb_dst(skb), RTAX_MTU, mtu);
9414 +- }
9415 +- }
9416 +-
9417 +- if (!skb_is_gso(skb) && mtu >= IPV6_MIN_MTU &&
9418 +- mtu < skb->len) {
9419 +- icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
9420 +- ip_rt_put(rt);
9421 +- goto tx_error;
9422 +- }
9423 ++ if (tnl_update_pmtu(dev, skb, rt, tnl_params->frag_off)) {
9424 ++ ip_rt_put(rt);
9425 ++ goto tx_error;
9426 + }
9427 +-#endif
9428 +
9429 + if (tunnel->err_count > 0) {
9430 + if (time_before(jiffies,
9431 +@@ -646,6 +653,10 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
9432 + ttl = ip4_dst_hoplimit(&rt->dst);
9433 + }
9434 +
9435 ++ df = tnl_params->frag_off;
9436 ++ if (skb->protocol == htons(ETH_P_IP))
9437 ++ df |= (inner_iph->frag_off&htons(IP_DF));
9438 ++
9439 + max_headroom = LL_RESERVED_SPACE(tdev) + sizeof(struct iphdr)
9440 + + rt->dst.header_len;
9441 + if (max_headroom > dev->needed_headroom) {
9442 +diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
9443 +index c118f6b..17cc0ff 100644
9444 +--- a/net/ipv4/ip_vti.c
9445 ++++ b/net/ipv4/ip_vti.c
9446 +@@ -606,17 +606,10 @@ static int __net_init vti_fb_tunnel_init(struct net_device *dev)
9447 + struct iphdr *iph = &tunnel->parms.iph;
9448 + struct vti_net *ipn = net_generic(dev_net(dev), vti_net_id);
9449 +
9450 +- tunnel->dev = dev;
9451 +- strcpy(tunnel->parms.name, dev->name);
9452 +-
9453 + iph->version = 4;
9454 + iph->protocol = IPPROTO_IPIP;
9455 + iph->ihl = 5;
9456 +
9457 +- dev->tstats = alloc_percpu(struct pcpu_tstats);
9458 +- if (!dev->tstats)
9459 +- return -ENOMEM;
9460 +-
9461 + dev_hold(dev);
9462 + rcu_assign_pointer(ipn->tunnels_wc[0], tunnel);
9463 + return 0;
9464 +diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c
9465 +index 77bfcce..7cfc456 100644
9466 +--- a/net/ipv4/ipip.c
9467 ++++ b/net/ipv4/ipip.c
9468 +@@ -240,11 +240,13 @@ ipip_tunnel_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
9469 + if (copy_from_user(&p, ifr->ifr_ifru.ifru_data, sizeof(p)))
9470 + return -EFAULT;
9471 +
9472 +- if (p.iph.version != 4 || p.iph.protocol != IPPROTO_IPIP ||
9473 +- p.iph.ihl != 5 || (p.iph.frag_off&htons(~IP_DF)))
9474 +- return -EINVAL;
9475 +- if (p.i_key || p.o_key || p.i_flags || p.o_flags)
9476 +- return -EINVAL;
9477 ++ if (cmd == SIOCADDTUNNEL || cmd == SIOCCHGTUNNEL) {
9478 ++ if (p.iph.version != 4 || p.iph.protocol != IPPROTO_IPIP ||
9479 ++ p.iph.ihl != 5 || (p.iph.frag_off&htons(~IP_DF)))
9480 ++ return -EINVAL;
9481 ++ }
9482 ++
9483 ++ p.i_key = p.o_key = p.i_flags = p.o_flags = 0;
9484 + if (p.iph.ttl)
9485 + p.iph.frag_off |= htons(IP_DF);
9486 +
9487 +diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
9488 +index 0bf5d39..93b731d 100644
9489 +--- a/net/ipv4/udp.c
9490 ++++ b/net/ipv4/udp.c
9491 +@@ -799,7 +799,7 @@ send:
9492 + /*
9493 + * Push out all pending data as one UDP datagram. Socket is locked.
9494 + */
9495 +-static int udp_push_pending_frames(struct sock *sk)
9496 ++int udp_push_pending_frames(struct sock *sk)
9497 + {
9498 + struct udp_sock *up = udp_sk(sk);
9499 + struct inet_sock *inet = inet_sk(sk);
9500 +@@ -818,6 +818,7 @@ out:
9501 + up->pending = 0;
9502 + return err;
9503 + }
9504 ++EXPORT_SYMBOL(udp_push_pending_frames);
9505 +
9506 + int udp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
9507 + size_t len)
9508 +diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
9509 +index 4ab4c38..fb8c94c 100644
9510 +--- a/net/ipv6/addrconf.c
9511 ++++ b/net/ipv6/addrconf.c
9512 +@@ -1448,6 +1448,23 @@ try_nextdev:
9513 + }
9514 + EXPORT_SYMBOL(ipv6_dev_get_saddr);
9515 +
9516 ++int __ipv6_get_lladdr(struct inet6_dev *idev, struct in6_addr *addr,
9517 ++ unsigned char banned_flags)
9518 ++{
9519 ++ struct inet6_ifaddr *ifp;
9520 ++ int err = -EADDRNOTAVAIL;
9521 ++
9522 ++ list_for_each_entry(ifp, &idev->addr_list, if_list) {
9523 ++ if (ifp->scope == IFA_LINK &&
9524 ++ !(ifp->flags & banned_flags)) {
9525 ++ *addr = ifp->addr;
9526 ++ err = 0;
9527 ++ break;
9528 ++ }
9529 ++ }
9530 ++ return err;
9531 ++}
9532 ++
9533 + int ipv6_get_lladdr(struct net_device *dev, struct in6_addr *addr,
9534 + unsigned char banned_flags)
9535 + {
9536 +@@ -1457,17 +1474,8 @@ int ipv6_get_lladdr(struct net_device *dev, struct in6_addr *addr,
9537 + rcu_read_lock();
9538 + idev = __in6_dev_get(dev);
9539 + if (idev) {
9540 +- struct inet6_ifaddr *ifp;
9541 +-
9542 + read_lock_bh(&idev->lock);
9543 +- list_for_each_entry(ifp, &idev->addr_list, if_list) {
9544 +- if (ifp->scope == IFA_LINK &&
9545 +- !(ifp->flags & banned_flags)) {
9546 +- *addr = ifp->addr;
9547 +- err = 0;
9548 +- break;
9549 +- }
9550 +- }
9551 ++ err = __ipv6_get_lladdr(idev, addr, banned_flags);
9552 + read_unlock_bh(&idev->lock);
9553 + }
9554 + rcu_read_unlock();
9555 +diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
9556 +index 192dd1a..5fc9c7a 100644
9557 +--- a/net/ipv6/ip6_fib.c
9558 ++++ b/net/ipv6/ip6_fib.c
9559 +@@ -632,6 +632,12 @@ insert_above:
9560 + return ln;
9561 + }
9562 +
9563 ++static inline bool rt6_qualify_for_ecmp(struct rt6_info *rt)
9564 ++{
9565 ++ return (rt->rt6i_flags & (RTF_GATEWAY|RTF_ADDRCONF|RTF_DYNAMIC)) ==
9566 ++ RTF_GATEWAY;
9567 ++}
9568 ++
9569 + /*
9570 + * Insert routing information in a node.
9571 + */
9572 +@@ -646,6 +652,7 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct rt6_info *rt,
9573 + int add = (!info->nlh ||
9574 + (info->nlh->nlmsg_flags & NLM_F_CREATE));
9575 + int found = 0;
9576 ++ bool rt_can_ecmp = rt6_qualify_for_ecmp(rt);
9577 +
9578 + ins = &fn->leaf;
9579 +
9580 +@@ -691,9 +698,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct rt6_info *rt,
9581 + * To avoid long list, we only had siblings if the
9582 + * route have a gateway.
9583 + */
9584 +- if (rt->rt6i_flags & RTF_GATEWAY &&
9585 +- !(rt->rt6i_flags & RTF_EXPIRES) &&
9586 +- !(iter->rt6i_flags & RTF_EXPIRES))
9587 ++ if (rt_can_ecmp &&
9588 ++ rt6_qualify_for_ecmp(iter))
9589 + rt->rt6i_nsiblings++;
9590 + }
9591 +
9592 +@@ -715,7 +721,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct rt6_info *rt,
9593 + /* Find the first route that have the same metric */
9594 + sibling = fn->leaf;
9595 + while (sibling) {
9596 +- if (sibling->rt6i_metric == rt->rt6i_metric) {
9597 ++ if (sibling->rt6i_metric == rt->rt6i_metric &&
9598 ++ rt6_qualify_for_ecmp(sibling)) {
9599 + list_add_tail(&rt->rt6i_siblings,
9600 + &sibling->rt6i_siblings);
9601 + break;
9602 +diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
9603 +index d5d20cd..6e3ddf8 100644
9604 +--- a/net/ipv6/ip6_output.c
9605 ++++ b/net/ipv6/ip6_output.c
9606 +@@ -1098,11 +1098,12 @@ static inline struct ipv6_rt_hdr *ip6_rthdr_dup(struct ipv6_rt_hdr *src,
9607 + return src ? kmemdup(src, (src->hdrlen + 1) * 8, gfp) : NULL;
9608 + }
9609 +
9610 +-static void ip6_append_data_mtu(int *mtu,
9611 ++static void ip6_append_data_mtu(unsigned int *mtu,
9612 + int *maxfraglen,
9613 + unsigned int fragheaderlen,
9614 + struct sk_buff *skb,
9615 +- struct rt6_info *rt)
9616 ++ struct rt6_info *rt,
9617 ++ bool pmtuprobe)
9618 + {
9619 + if (!(rt->dst.flags & DST_XFRM_TUNNEL)) {
9620 + if (skb == NULL) {
9621 +@@ -1114,7 +1115,9 @@ static void ip6_append_data_mtu(int *mtu,
9622 + * this fragment is not first, the headers
9623 + * space is regarded as data space.
9624 + */
9625 +- *mtu = dst_mtu(rt->dst.path);
9626 ++ *mtu = min(*mtu, pmtuprobe ?
9627 ++ rt->dst.dev->mtu :
9628 ++ dst_mtu(rt->dst.path));
9629 + }
9630 + *maxfraglen = ((*mtu - fragheaderlen) & ~7)
9631 + + fragheaderlen - sizeof(struct frag_hdr);
9632 +@@ -1131,11 +1134,10 @@ int ip6_append_data(struct sock *sk, int getfrag(void *from, char *to,
9633 + struct ipv6_pinfo *np = inet6_sk(sk);
9634 + struct inet_cork *cork;
9635 + struct sk_buff *skb, *skb_prev = NULL;
9636 +- unsigned int maxfraglen, fragheaderlen;
9637 ++ unsigned int maxfraglen, fragheaderlen, mtu;
9638 + int exthdrlen;
9639 + int dst_exthdrlen;
9640 + int hh_len;
9641 +- int mtu;
9642 + int copy;
9643 + int err;
9644 + int offset = 0;
9645 +@@ -1292,7 +1294,9 @@ alloc_new_skb:
9646 + /* update mtu and maxfraglen if necessary */
9647 + if (skb == NULL || skb_prev == NULL)
9648 + ip6_append_data_mtu(&mtu, &maxfraglen,
9649 +- fragheaderlen, skb, rt);
9650 ++ fragheaderlen, skb, rt,
9651 ++ np->pmtudisc ==
9652 ++ IPV6_PMTUDISC_PROBE);
9653 +
9654 + skb_prev = skb;
9655 +
9656 +diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
9657 +index bfa6cc3..c3998c2 100644
9658 +--- a/net/ipv6/mcast.c
9659 ++++ b/net/ipv6/mcast.c
9660 +@@ -1343,8 +1343,9 @@ static void ip6_mc_hdr(struct sock *sk, struct sk_buff *skb,
9661 + hdr->daddr = *daddr;
9662 + }
9663 +
9664 +-static struct sk_buff *mld_newpack(struct net_device *dev, int size)
9665 ++static struct sk_buff *mld_newpack(struct inet6_dev *idev, int size)
9666 + {
9667 ++ struct net_device *dev = idev->dev;
9668 + struct net *net = dev_net(dev);
9669 + struct sock *sk = net->ipv6.igmp_sk;
9670 + struct sk_buff *skb;
9671 +@@ -1369,7 +1370,7 @@ static struct sk_buff *mld_newpack(struct net_device *dev, int size)
9672 +
9673 + skb_reserve(skb, hlen);
9674 +
9675 +- if (ipv6_get_lladdr(dev, &addr_buf, IFA_F_TENTATIVE)) {
9676 ++ if (__ipv6_get_lladdr(idev, &addr_buf, IFA_F_TENTATIVE)) {
9677 + /* <draft-ietf-magma-mld-source-05.txt>:
9678 + * use unspecified address as the source address
9679 + * when a valid link-local address is not available.
9680 +@@ -1465,7 +1466,7 @@ static struct sk_buff *add_grhead(struct sk_buff *skb, struct ifmcaddr6 *pmc,
9681 + struct mld2_grec *pgr;
9682 +
9683 + if (!skb)
9684 +- skb = mld_newpack(dev, dev->mtu);
9685 ++ skb = mld_newpack(pmc->idev, dev->mtu);
9686 + if (!skb)
9687 + return NULL;
9688 + pgr = (struct mld2_grec *)skb_put(skb, sizeof(struct mld2_grec));
9689 +@@ -1485,7 +1486,8 @@ static struct sk_buff *add_grhead(struct sk_buff *skb, struct ifmcaddr6 *pmc,
9690 + static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc,
9691 + int type, int gdeleted, int sdeleted)
9692 + {
9693 +- struct net_device *dev = pmc->idev->dev;
9694 ++ struct inet6_dev *idev = pmc->idev;
9695 ++ struct net_device *dev = idev->dev;
9696 + struct mld2_report *pmr;
9697 + struct mld2_grec *pgr = NULL;
9698 + struct ip6_sf_list *psf, *psf_next, *psf_prev, **psf_list;
9699 +@@ -1514,7 +1516,7 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc,
9700 + AVAILABLE(skb) < grec_size(pmc, type, gdeleted, sdeleted)) {
9701 + if (skb)
9702 + mld_sendpack(skb);
9703 +- skb = mld_newpack(dev, dev->mtu);
9704 ++ skb = mld_newpack(idev, dev->mtu);
9705 + }
9706 + }
9707 + first = 1;
9708 +@@ -1541,7 +1543,7 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc,
9709 + pgr->grec_nsrcs = htons(scount);
9710 + if (skb)
9711 + mld_sendpack(skb);
9712 +- skb = mld_newpack(dev, dev->mtu);
9713 ++ skb = mld_newpack(idev, dev->mtu);
9714 + first = 1;
9715 + scount = 0;
9716 + }
9717 +@@ -1596,8 +1598,8 @@ static void mld_send_report(struct inet6_dev *idev, struct ifmcaddr6 *pmc)
9718 + struct sk_buff *skb = NULL;
9719 + int type;
9720 +
9721 ++ read_lock_bh(&idev->lock);
9722 + if (!pmc) {
9723 +- read_lock_bh(&idev->lock);
9724 + for (pmc=idev->mc_list; pmc; pmc=pmc->next) {
9725 + if (pmc->mca_flags & MAF_NOREPORT)
9726 + continue;
9727 +@@ -1609,7 +1611,6 @@ static void mld_send_report(struct inet6_dev *idev, struct ifmcaddr6 *pmc)
9728 + skb = add_grec(skb, pmc, type, 0, 0);
9729 + spin_unlock_bh(&pmc->mca_lock);
9730 + }
9731 +- read_unlock_bh(&idev->lock);
9732 + } else {
9733 + spin_lock_bh(&pmc->mca_lock);
9734 + if (pmc->mca_sfcount[MCAST_EXCLUDE])
9735 +@@ -1619,6 +1620,7 @@ static void mld_send_report(struct inet6_dev *idev, struct ifmcaddr6 *pmc)
9736 + skb = add_grec(skb, pmc, type, 0, 0);
9737 + spin_unlock_bh(&pmc->mca_lock);
9738 + }
9739 ++ read_unlock_bh(&idev->lock);
9740 + if (skb)
9741 + mld_sendpack(skb);
9742 + }
9743 +diff --git a/net/ipv6/route.c b/net/ipv6/route.c
9744 +index ad0aa6b..bacce6c 100644
9745 +--- a/net/ipv6/route.c
9746 ++++ b/net/ipv6/route.c
9747 +@@ -65,6 +65,12 @@
9748 + #include <linux/sysctl.h>
9749 + #endif
9750 +
9751 ++enum rt6_nud_state {
9752 ++ RT6_NUD_FAIL_HARD = -2,
9753 ++ RT6_NUD_FAIL_SOFT = -1,
9754 ++ RT6_NUD_SUCCEED = 1
9755 ++};
9756 ++
9757 + static struct rt6_info *ip6_rt_copy(struct rt6_info *ort,
9758 + const struct in6_addr *dest);
9759 + static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie);
9760 +@@ -527,26 +533,29 @@ static inline int rt6_check_dev(struct rt6_info *rt, int oif)
9761 + return 0;
9762 + }
9763 +
9764 +-static inline bool rt6_check_neigh(struct rt6_info *rt)
9765 ++static inline enum rt6_nud_state rt6_check_neigh(struct rt6_info *rt)
9766 + {
9767 + struct neighbour *neigh;
9768 +- bool ret = false;
9769 ++ enum rt6_nud_state ret = RT6_NUD_FAIL_HARD;
9770 +
9771 + if (rt->rt6i_flags & RTF_NONEXTHOP ||
9772 + !(rt->rt6i_flags & RTF_GATEWAY))
9773 +- return true;
9774 ++ return RT6_NUD_SUCCEED;
9775 +
9776 + rcu_read_lock_bh();
9777 + neigh = __ipv6_neigh_lookup_noref(rt->dst.dev, &rt->rt6i_gateway);
9778 + if (neigh) {
9779 + read_lock(&neigh->lock);
9780 + if (neigh->nud_state & NUD_VALID)
9781 +- ret = true;
9782 ++ ret = RT6_NUD_SUCCEED;
9783 + #ifdef CONFIG_IPV6_ROUTER_PREF
9784 + else if (!(neigh->nud_state & NUD_FAILED))
9785 +- ret = true;
9786 ++ ret = RT6_NUD_SUCCEED;
9787 + #endif
9788 + read_unlock(&neigh->lock);
9789 ++ } else {
9790 ++ ret = IS_ENABLED(CONFIG_IPV6_ROUTER_PREF) ?
9791 ++ RT6_NUD_SUCCEED : RT6_NUD_FAIL_SOFT;
9792 + }
9793 + rcu_read_unlock_bh();
9794 +
9795 +@@ -560,43 +569,52 @@ static int rt6_score_route(struct rt6_info *rt, int oif,
9796 +
9797 + m = rt6_check_dev(rt, oif);
9798 + if (!m && (strict & RT6_LOOKUP_F_IFACE))
9799 +- return -1;
9800 ++ return RT6_NUD_FAIL_HARD;
9801 + #ifdef CONFIG_IPV6_ROUTER_PREF
9802 + m |= IPV6_DECODE_PREF(IPV6_EXTRACT_PREF(rt->rt6i_flags)) << 2;
9803 + #endif
9804 +- if (!rt6_check_neigh(rt) && (strict & RT6_LOOKUP_F_REACHABLE))
9805 +- return -1;
9806 ++ if (strict & RT6_LOOKUP_F_REACHABLE) {
9807 ++ int n = rt6_check_neigh(rt);
9808 ++ if (n < 0)
9809 ++ return n;
9810 ++ }
9811 + return m;
9812 + }
9813 +
9814 + static struct rt6_info *find_match(struct rt6_info *rt, int oif, int strict,
9815 +- int *mpri, struct rt6_info *match)
9816 ++ int *mpri, struct rt6_info *match,
9817 ++ bool *do_rr)
9818 + {
9819 + int m;
9820 ++ bool match_do_rr = false;
9821 +
9822 + if (rt6_check_expired(rt))
9823 + goto out;
9824 +
9825 + m = rt6_score_route(rt, oif, strict);
9826 +- if (m < 0)
9827 ++ if (m == RT6_NUD_FAIL_SOFT && !IS_ENABLED(CONFIG_IPV6_ROUTER_PREF)) {
9828 ++ match_do_rr = true;
9829 ++ m = 0; /* lowest valid score */
9830 ++ } else if (m < 0) {
9831 + goto out;
9832 ++ }
9833 ++
9834 ++ if (strict & RT6_LOOKUP_F_REACHABLE)
9835 ++ rt6_probe(rt);
9836 +
9837 + if (m > *mpri) {
9838 +- if (strict & RT6_LOOKUP_F_REACHABLE)
9839 +- rt6_probe(match);
9840 ++ *do_rr = match_do_rr;
9841 + *mpri = m;
9842 + match = rt;
9843 +- } else if (strict & RT6_LOOKUP_F_REACHABLE) {
9844 +- rt6_probe(rt);
9845 + }
9846 +-
9847 + out:
9848 + return match;
9849 + }
9850 +
9851 + static struct rt6_info *find_rr_leaf(struct fib6_node *fn,
9852 + struct rt6_info *rr_head,
9853 +- u32 metric, int oif, int strict)
9854 ++ u32 metric, int oif, int strict,
9855 ++ bool *do_rr)
9856 + {
9857 + struct rt6_info *rt, *match;
9858 + int mpri = -1;
9859 +@@ -604,10 +622,10 @@ static struct rt6_info *find_rr_leaf(struct fib6_node *fn,
9860 + match = NULL;
9861 + for (rt = rr_head; rt && rt->rt6i_metric == metric;
9862 + rt = rt->dst.rt6_next)
9863 +- match = find_match(rt, oif, strict, &mpri, match);
9864 ++ match = find_match(rt, oif, strict, &mpri, match, do_rr);
9865 + for (rt = fn->leaf; rt && rt != rr_head && rt->rt6i_metric == metric;
9866 + rt = rt->dst.rt6_next)
9867 +- match = find_match(rt, oif, strict, &mpri, match);
9868 ++ match = find_match(rt, oif, strict, &mpri, match, do_rr);
9869 +
9870 + return match;
9871 + }
9872 +@@ -616,15 +634,16 @@ static struct rt6_info *rt6_select(struct fib6_node *fn, int oif, int strict)
9873 + {
9874 + struct rt6_info *match, *rt0;
9875 + struct net *net;
9876 ++ bool do_rr = false;
9877 +
9878 + rt0 = fn->rr_ptr;
9879 + if (!rt0)
9880 + fn->rr_ptr = rt0 = fn->leaf;
9881 +
9882 +- match = find_rr_leaf(fn, rt0, rt0->rt6i_metric, oif, strict);
9883 ++ match = find_rr_leaf(fn, rt0, rt0->rt6i_metric, oif, strict,
9884 ++ &do_rr);
9885 +
9886 +- if (!match &&
9887 +- (strict & RT6_LOOKUP_F_REACHABLE)) {
9888 ++ if (do_rr) {
9889 + struct rt6_info *next = rt0->dst.rt6_next;
9890 +
9891 + /* no entries matched; do round-robin */
9892 +@@ -1074,10 +1093,13 @@ static void ip6_link_failure(struct sk_buff *skb)
9893 +
9894 + rt = (struct rt6_info *) skb_dst(skb);
9895 + if (rt) {
9896 +- if (rt->rt6i_flags & RTF_CACHE)
9897 +- rt6_update_expires(rt, 0);
9898 +- else if (rt->rt6i_node && (rt->rt6i_flags & RTF_DEFAULT))
9899 ++ if (rt->rt6i_flags & RTF_CACHE) {
9900 ++ dst_hold(&rt->dst);
9901 ++ if (ip6_del_rt(rt))
9902 ++ dst_free(&rt->dst);
9903 ++ } else if (rt->rt6i_node && (rt->rt6i_flags & RTF_DEFAULT)) {
9904 + rt->rt6i_node->fn_sernum = -1;
9905 ++ }
9906 + }
9907 + }
9908 +
9909 +diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
9910 +index 3353634..60df36d 100644
9911 +--- a/net/ipv6/sit.c
9912 ++++ b/net/ipv6/sit.c
9913 +@@ -589,7 +589,7 @@ static int ipip6_rcv(struct sk_buff *skb)
9914 + tunnel->dev->stats.rx_errors++;
9915 + goto out;
9916 + }
9917 +- } else {
9918 ++ } else if (!(tunnel->dev->flags&IFF_POINTOPOINT)) {
9919 + if (is_spoofed_6rd(tunnel, iph->saddr,
9920 + &ipv6_hdr(skb)->saddr) ||
9921 + is_spoofed_6rd(tunnel, iph->daddr,
9922 +diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
9923 +index 42923b1..e7b28f9 100644
9924 +--- a/net/ipv6/udp.c
9925 ++++ b/net/ipv6/udp.c
9926 +@@ -955,11 +955,16 @@ static int udp_v6_push_pending_frames(struct sock *sk)
9927 + struct udphdr *uh;
9928 + struct udp_sock *up = udp_sk(sk);
9929 + struct inet_sock *inet = inet_sk(sk);
9930 +- struct flowi6 *fl6 = &inet->cork.fl.u.ip6;
9931 ++ struct flowi6 *fl6;
9932 + int err = 0;
9933 + int is_udplite = IS_UDPLITE(sk);
9934 + __wsum csum = 0;
9935 +
9936 ++ if (up->pending == AF_INET)
9937 ++ return udp_push_pending_frames(sk);
9938 ++
9939 ++ fl6 = &inet->cork.fl.u.ip6;
9940 ++
9941 + /* Grab the skbuff where UDP header space exists. */
9942 + if ((skb = skb_peek(&sk->sk_write_queue)) == NULL)
9943 + goto out;
9944 +diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
9945 +index 8dec687..5ebee2d 100644
9946 +--- a/net/l2tp/l2tp_ppp.c
9947 ++++ b/net/l2tp/l2tp_ppp.c
9948 +@@ -1793,7 +1793,8 @@ static const struct proto_ops pppol2tp_ops = {
9949 +
9950 + static const struct pppox_proto pppol2tp_proto = {
9951 + .create = pppol2tp_create,
9952 +- .ioctl = pppol2tp_ioctl
9953 ++ .ioctl = pppol2tp_ioctl,
9954 ++ .owner = THIS_MODULE,
9955 + };
9956 +
9957 + #ifdef CONFIG_L2TP_V3
9958 +diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
9959 +index d51852b..5792252 100644
9960 +--- a/net/sched/sch_qfq.c
9961 ++++ b/net/sched/sch_qfq.c
9962 +@@ -113,7 +113,6 @@
9963 +
9964 + #define FRAC_BITS 30 /* fixed point arithmetic */
9965 + #define ONE_FP (1UL << FRAC_BITS)
9966 +-#define IWSUM (ONE_FP/QFQ_MAX_WSUM)
9967 +
9968 + #define QFQ_MTU_SHIFT 16 /* to support TSO/GSO */
9969 + #define QFQ_MIN_LMAX 512 /* see qfq_slot_insert */
9970 +@@ -189,6 +188,7 @@ struct qfq_sched {
9971 + struct qfq_aggregate *in_serv_agg; /* Aggregate being served. */
9972 + u32 num_active_agg; /* Num. of active aggregates */
9973 + u32 wsum; /* weight sum */
9974 ++ u32 iwsum; /* inverse weight sum */
9975 +
9976 + unsigned long bitmaps[QFQ_MAX_STATE]; /* Group bitmaps. */
9977 + struct qfq_group groups[QFQ_MAX_INDEX + 1]; /* The groups. */
9978 +@@ -314,6 +314,7 @@ static void qfq_update_agg(struct qfq_sched *q, struct qfq_aggregate *agg,
9979 +
9980 + q->wsum +=
9981 + (int) agg->class_weight * (new_num_classes - agg->num_classes);
9982 ++ q->iwsum = ONE_FP / q->wsum;
9983 +
9984 + agg->num_classes = new_num_classes;
9985 + }
9986 +@@ -340,6 +341,10 @@ static void qfq_destroy_agg(struct qfq_sched *q, struct qfq_aggregate *agg)
9987 + {
9988 + if (!hlist_unhashed(&agg->nonfull_next))
9989 + hlist_del_init(&agg->nonfull_next);
9990 ++ q->wsum -= agg->class_weight;
9991 ++ if (q->wsum != 0)
9992 ++ q->iwsum = ONE_FP / q->wsum;
9993 ++
9994 + if (q->in_serv_agg == agg)
9995 + q->in_serv_agg = qfq_choose_next_agg(q);
9996 + kfree(agg);
9997 +@@ -827,38 +832,60 @@ static void qfq_make_eligible(struct qfq_sched *q)
9998 + }
9999 + }
10000 +
10001 +-
10002 + /*
10003 +- * The index of the slot in which the aggregate is to be inserted must
10004 +- * not be higher than QFQ_MAX_SLOTS-2. There is a '-2' and not a '-1'
10005 +- * because the start time of the group may be moved backward by one
10006 +- * slot after the aggregate has been inserted, and this would cause
10007 +- * non-empty slots to be right-shifted by one position.
10008 ++ * The index of the slot in which the input aggregate agg is to be
10009 ++ * inserted must not be higher than QFQ_MAX_SLOTS-2. There is a '-2'
10010 ++ * and not a '-1' because the start time of the group may be moved
10011 ++ * backward by one slot after the aggregate has been inserted, and
10012 ++ * this would cause non-empty slots to be right-shifted by one
10013 ++ * position.
10014 ++ *
10015 ++ * QFQ+ fully satisfies this bound to the slot index if the parameters
10016 ++ * of the classes are not changed dynamically, and if QFQ+ never
10017 ++ * happens to postpone the service of agg unjustly, i.e., it never
10018 ++ * happens that the aggregate becomes backlogged and eligible, or just
10019 ++ * eligible, while an aggregate with a higher approximated finish time
10020 ++ * is being served. In particular, in this case QFQ+ guarantees that
10021 ++ * the timestamps of agg are low enough that the slot index is never
10022 ++ * higher than 2. Unfortunately, QFQ+ cannot provide the same
10023 ++ * guarantee if it happens to unjustly postpone the service of agg, or
10024 ++ * if the parameters of some class are changed.
10025 ++ *
10026 ++ * As for the first event, i.e., an out-of-order service, the
10027 ++ * upper bound to the slot index guaranteed by QFQ+ grows to
10028 ++ * 2 +
10029 ++ * QFQ_MAX_AGG_CLASSES * ((1<<QFQ_MTU_SHIFT)/QFQ_MIN_LMAX) *
10030 ++ * (current_max_weight/current_wsum) <= 2 + 8 * 128 * 1.
10031 + *
10032 +- * If the weight and lmax (max_pkt_size) of the classes do not change,
10033 +- * then QFQ+ does meet the above contraint according to the current
10034 +- * values of its parameters. In fact, if the weight and lmax of the
10035 +- * classes do not change, then, from the theory, QFQ+ guarantees that
10036 +- * the slot index is never higher than
10037 +- * 2 + QFQ_MAX_AGG_CLASSES * ((1<<QFQ_MTU_SHIFT)/QFQ_MIN_LMAX) *
10038 +- * (QFQ_MAX_WEIGHT/QFQ_MAX_WSUM) = 2 + 8 * 128 * (1 / 64) = 18
10039 ++ * The following function deals with this problem by backward-shifting
10040 ++ * the timestamps of agg, if needed, so as to guarantee that the slot
10041 ++ * index is never higher than QFQ_MAX_SLOTS-2. This backward-shift may
10042 ++ * cause the service of other aggregates to be postponed, yet the
10043 ++ * worst-case guarantees of these aggregates are not violated. In
10044 ++ * fact, in case of no out-of-order service, the timestamps of agg
10045 ++ * would have been even lower than they are after the backward shift,
10046 ++ * because QFQ+ would have guaranteed a maximum value equal to 2 for
10047 ++ * the slot index, and 2 < QFQ_MAX_SLOTS-2. Hence the aggregates whose
10048 ++ * service is postponed because of the backward-shift would have
10049 ++ * however waited for the service of agg before being served.
10050 + *
10051 +- * When the weight of a class is increased or the lmax of the class is
10052 +- * decreased, a new aggregate with smaller slot size than the original
10053 +- * parent aggregate of the class may happen to be activated. The
10054 +- * activation of this aggregate should be properly delayed to when the
10055 +- * service of the class has finished in the ideal system tracked by
10056 +- * QFQ+. If the activation of the aggregate is not delayed to this
10057 +- * reference time instant, then this aggregate may be unjustly served
10058 +- * before other aggregates waiting for service. This may cause the
10059 +- * above bound to the slot index to be violated for some of these
10060 +- * unlucky aggregates.
10061 ++ * The other event that may cause the slot index to be higher than 2
10062 ++ * for agg is a recent change of the parameters of some class. If the
10063 ++ * weight of a class is increased or the lmax (max_pkt_size) of the
10064 ++ * class is decreased, then a new aggregate with smaller slot size
10065 ++ * than the original parent aggregate of the class may happen to be
10066 ++ * activated. The activation of this aggregate should be properly
10067 ++ * delayed to when the service of the class has finished in the ideal
10068 ++ * system tracked by QFQ+. If the activation of the aggregate is not
10069 ++ * delayed to this reference time instant, then this aggregate may be
10070 ++ * unjustly served before other aggregates waiting for service. This
10071 ++ * may cause the above bound to the slot index to be violated for some
10072 ++ * of these unlucky aggregates.
10073 + *
10074 + * Instead of delaying the activation of the new aggregate, which is
10075 +- * quite complex, the following inaccurate but simple solution is used:
10076 +- * if the slot index is higher than QFQ_MAX_SLOTS-2, then the
10077 +- * timestamps of the aggregate are shifted backward so as to let the
10078 +- * slot index become equal to QFQ_MAX_SLOTS-2.
10079 ++ * quite complex, the above-discussed capping of the slot index is
10080 ++ * used to handle also the consequences of a change of the parameters
10081 ++ * of a class.
10082 + */
10083 + static void qfq_slot_insert(struct qfq_group *grp, struct qfq_aggregate *agg,
10084 + u64 roundedS)
10085 +@@ -1077,7 +1104,7 @@ static struct sk_buff *qfq_dequeue(struct Qdisc *sch)
10086 + else
10087 + in_serv_agg->budget -= len;
10088 +
10089 +- q->V += (u64)len * IWSUM;
10090 ++ q->V += (u64)len * q->iwsum;
10091 + pr_debug("qfq dequeue: len %u F %lld now %lld\n",
10092 + len, (unsigned long long) in_serv_agg->F,
10093 + (unsigned long long) q->V);
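To make the bound in the rewritten comment above concrete: with the constants named in this file (QFQ_MTU_SHIFT = 16, QFQ_MIN_LMAX = 512, QFQ_MAX_AGG_CLASSES = 8), the worst-case slot index after an out-of-order service is 2 + 8 * 128 * 1 = 1026, far beyond QFQ_MAX_SLOTS - 2, which is why qfq_slot_insert() caps the index by shifting the aggregate's timestamps backward. A quick arithmetic check, restating the values from the comment rather than deriving them independently:

    #include <stdio.h>

    int main(void)
    {
            unsigned max_agg_classes = 8;                 /* QFQ_MAX_AGG_CLASSES */
            unsigned mtu_over_lmax   = (1u << 16) / 512;  /* (1<<QFQ_MTU_SHIFT)/QFQ_MIN_LMAX = 128 */

            /* the weight ratio current_max_weight/current_wsum is at most 1 */
            printf("slot index bound: %u\n",
                   2 + max_agg_classes * mtu_over_lmax * 1);   /* 1026 */
            return 0;
    }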
10094 +diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
10095 +index 37ca969..22c88d2 100644
10096 +--- a/net/x25/af_x25.c
10097 ++++ b/net/x25/af_x25.c
10098 +@@ -1583,11 +1583,11 @@ out_cud_release:
10099 + case SIOCX25CALLACCPTAPPRV: {
10100 + rc = -EINVAL;
10101 + lock_sock(sk);
10102 +- if (sk->sk_state != TCP_CLOSE)
10103 +- break;
10104 +- clear_bit(X25_ACCPT_APPRV_FLAG, &x25->flags);
10105 ++ if (sk->sk_state == TCP_CLOSE) {
10106 ++ clear_bit(X25_ACCPT_APPRV_FLAG, &x25->flags);
10107 ++ rc = 0;
10108 ++ }
10109 + release_sock(sk);
10110 +- rc = 0;
10111 + break;
10112 + }
10113 +
10114 +@@ -1595,14 +1595,15 @@ out_cud_release:
10115 + rc = -EINVAL;
10116 + lock_sock(sk);
10117 + if (sk->sk_state != TCP_ESTABLISHED)
10118 +- break;
10119 ++ goto out_sendcallaccpt_release;
10120 + /* must call accptapprv above */
10121 + if (test_bit(X25_ACCPT_APPRV_FLAG, &x25->flags))
10122 +- break;
10123 ++ goto out_sendcallaccpt_release;
10124 + x25_write_internal(sk, X25_CALL_ACCEPTED);
10125 + x25->state = X25_STATE_3;
10126 +- release_sock(sk);
10127 + rc = 0;
10128 ++out_sendcallaccpt_release:
10129 ++ release_sock(sk);
10130 + break;
10131 + }
10132 +
10133 +diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
10134 +index 1d9d642..e849e1e 100644
10135 +--- a/sound/pci/hda/patch_sigmatel.c
10136 ++++ b/sound/pci/hda/patch_sigmatel.c
10137 +@@ -417,9 +417,11 @@ static void stac_update_outputs(struct hda_codec *codec)
10138 + val &= ~spec->eapd_mask;
10139 + else
10140 + val |= spec->eapd_mask;
10141 +- if (spec->gpio_data != val)
10142 ++ if (spec->gpio_data != val) {
10143 ++ spec->gpio_data = val;
10144 + stac_gpio_set(codec, spec->gpio_mask, spec->gpio_dir,
10145 + val);
10146 ++ }
10147 + }
10148 + }
10149 +
10150 +@@ -3227,7 +3229,7 @@ static const struct hda_fixup stac927x_fixups[] = {
10151 + /* configure the analog microphone on some laptops */
10152 + { 0x0c, 0x90a79130 },
10153 + /* correct the front output jack as a hp out */
10154 +- { 0x0f, 0x0227011f },
10155 ++ { 0x0f, 0x0221101f },
10156 + /* correct the front input jack as a mic */
10157 + { 0x0e, 0x02a79130 },
10158 + {}
10159 +@@ -3608,20 +3610,18 @@ static int stac_parse_auto_config(struct hda_codec *codec)
10160 + static int stac_init(struct hda_codec *codec)
10161 + {
10162 + struct sigmatel_spec *spec = codec->spec;
10163 +- unsigned int gpio;
10164 + int i;
10165 +
10166 + /* override some hints */
10167 + stac_store_hints(codec);
10168 +
10169 + /* set up GPIO */
10170 +- gpio = spec->gpio_data;
10171 + /* turn on EAPD statically when spec->eapd_switch isn't set.
10172 + * otherwise, unsol event will turn it on/off dynamically
10173 + */
10174 + if (!spec->eapd_switch)
10175 +- gpio |= spec->eapd_mask;
10176 +- stac_gpio_set(codec, spec->gpio_mask, spec->gpio_dir, gpio);
10177 ++ spec->gpio_data |= spec->eapd_mask;
10178 ++ stac_gpio_set(codec, spec->gpio_mask, spec->gpio_dir, spec->gpio_data);
10179 +
10180 + snd_hda_gen_init(codec);
10181 +
10182 +@@ -3921,6 +3921,7 @@ static void stac_setup_gpio(struct hda_codec *codec)
10183 + {
10184 + struct sigmatel_spec *spec = codec->spec;
10185 +
10186 ++ spec->gpio_mask |= spec->eapd_mask;
10187 + if (spec->gpio_led) {
10188 + if (!spec->vref_mute_led_nid) {
10189 + spec->gpio_mask |= spec->gpio_led;
10190 +diff --git a/sound/usb/6fire/pcm.c b/sound/usb/6fire/pcm.c
10191 +index 8221ff2..074aaf7 100644
10192 +--- a/sound/usb/6fire/pcm.c
10193 ++++ b/sound/usb/6fire/pcm.c
10194 +@@ -543,7 +543,7 @@ static snd_pcm_uframes_t usb6fire_pcm_pointer(
10195 + snd_pcm_uframes_t ret;
10196 +
10197 + if (rt->panic || !sub)
10198 +- return SNDRV_PCM_STATE_XRUN;
10199 ++ return SNDRV_PCM_POS_XRUN;
10200 +
10201 + spin_lock_irqsave(&sub->lock, flags);
10202 + ret = sub->dma_off;
10203
10204 Added: genpatches-2.6/trunk/3.10.7/1004_linux-3.10.5.patch
10205 ===================================================================
10206 --- genpatches-2.6/trunk/3.10.7/1004_linux-3.10.5.patch (rev 0)
10207 +++ genpatches-2.6/trunk/3.10.7/1004_linux-3.10.5.patch 2013-08-29 12:09:12 UTC (rev 2497)
10208 @@ -0,0 +1,4826 @@
10209 +diff --git a/Makefile b/Makefile
10210 +index b4df9b2..f8349d0 100644
10211 +--- a/Makefile
10212 ++++ b/Makefile
10213 +@@ -1,6 +1,6 @@
10214 + VERSION = 3
10215 + PATCHLEVEL = 10
10216 +-SUBLEVEL = 4
10217 ++SUBLEVEL = 5
10218 + EXTRAVERSION =
10219 + NAME = Unicycling Gorilla
10220 +
10221 +diff --git a/arch/arm/boot/compressed/atags_to_fdt.c b/arch/arm/boot/compressed/atags_to_fdt.c
10222 +index aabc02a..d1153c8 100644
10223 +--- a/arch/arm/boot/compressed/atags_to_fdt.c
10224 ++++ b/arch/arm/boot/compressed/atags_to_fdt.c
10225 +@@ -53,6 +53,17 @@ static const void *getprop(const void *fdt, const char *node_path,
10226 + return fdt_getprop(fdt, offset, property, len);
10227 + }
10228 +
10229 ++static uint32_t get_cell_size(const void *fdt)
10230 ++{
10231 ++ int len;
10232 ++ uint32_t cell_size = 1;
10233 ++ const uint32_t *size_len = getprop(fdt, "/", "#size-cells", &len);
10234 ++
10235 ++ if (size_len)
10236 ++ cell_size = fdt32_to_cpu(*size_len);
10237 ++ return cell_size;
10238 ++}
10239 ++
10240 + static void merge_fdt_bootargs(void *fdt, const char *fdt_cmdline)
10241 + {
10242 + char cmdline[COMMAND_LINE_SIZE];
10243 +@@ -95,9 +106,11 @@ static void merge_fdt_bootargs(void *fdt, const char *fdt_cmdline)
10244 + int atags_to_fdt(void *atag_list, void *fdt, int total_space)
10245 + {
10246 + struct tag *atag = atag_list;
10247 +- uint32_t mem_reg_property[2 * NR_BANKS];
10248 ++ /* In the case of 64 bits memory size, need to reserve 2 cells for
10249 ++ * address and size for each bank */
10250 ++ uint32_t mem_reg_property[2 * 2 * NR_BANKS];
10251 + int memcount = 0;
10252 +- int ret;
10253 ++ int ret, memsize;
10254 +
10255 + /* make sure we've got an aligned pointer */
10256 + if ((u32)atag_list & 0x3)
10257 +@@ -137,8 +150,25 @@ int atags_to_fdt(void *atag_list, void *fdt, int total_space)
10258 + continue;
10259 + if (!atag->u.mem.size)
10260 + continue;
10261 +- mem_reg_property[memcount++] = cpu_to_fdt32(atag->u.mem.start);
10262 +- mem_reg_property[memcount++] = cpu_to_fdt32(atag->u.mem.size);
10263 ++ memsize = get_cell_size(fdt);
10264 ++
10265 ++ if (memsize == 2) {
10266 ++ /* if memsize is 2, that means that
10267 ++ * each data needs 2 cells of 32 bits,
10268 ++ * so the data are 64 bits */
10269 ++ uint64_t *mem_reg_prop64 =
10270 ++ (uint64_t *)mem_reg_property;
10271 ++ mem_reg_prop64[memcount++] =
10272 ++ cpu_to_fdt64(atag->u.mem.start);
10273 ++ mem_reg_prop64[memcount++] =
10274 ++ cpu_to_fdt64(atag->u.mem.size);
10275 ++ } else {
10276 ++ mem_reg_property[memcount++] =
10277 ++ cpu_to_fdt32(atag->u.mem.start);
10278 ++ mem_reg_property[memcount++] =
10279 ++ cpu_to_fdt32(atag->u.mem.size);
10280 ++ }
10281 ++
10282 + } else if (atag->hdr.tag == ATAG_INITRD2) {
10283 + uint32_t initrd_start, initrd_size;
10284 + initrd_start = atag->u.initrd.start;
10285 +@@ -150,8 +180,10 @@ int atags_to_fdt(void *atag_list, void *fdt, int total_space)
10286 + }
10287 + }
10288 +
10289 +- if (memcount)
10290 +- setprop(fdt, "/memory", "reg", mem_reg_property, 4*memcount);
10291 ++ if (memcount) {
10292 ++ setprop(fdt, "/memory", "reg", mem_reg_property,
10293 ++ 4 * memcount * memsize);
10294 ++ }
10295 +
10296 + return fdt_pack(fdt);
10297 + }
10298 +diff --git a/arch/powerpc/include/asm/module.h b/arch/powerpc/include/asm/module.h
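Illustrative sketch (not part of the patch): the atags_to_fdt() change above sizes the "reg" property from #size-cells, so with 2 cells per value each memory bank occupies four 32-bit cells, two for the address and two for the size. The layout for one hypothetical bank above 4 GiB looks like this; the byte-order conversion done by cpu_to_fdt64() in the real code is omitted:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t start = 0x100000000ULL;  /* hypothetical bank start, just above 4 GiB */
            uint64_t size  = 0x80000000ULL;   /* 2 GiB */

            /* #size-cells == 2: each 64-bit value is split into two 32-bit cells */
            uint32_t reg[4] = {
                    (uint32_t)(start >> 32), (uint32_t)start,  /* address cells */
                    (uint32_t)(size  >> 32), (uint32_t)size,   /* size cells */
            };

            for (int i = 0; i < 4; i++)
                    printf("cell[%d] = 0x%08x\n", i, reg[i]);
            return 0;
    }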
10299 +index c1df590..49fa55b 100644
10300 +--- a/arch/powerpc/include/asm/module.h
10301 ++++ b/arch/powerpc/include/asm/module.h
10302 +@@ -82,10 +82,9 @@ struct exception_table_entry;
10303 + void sort_ex_table(struct exception_table_entry *start,
10304 + struct exception_table_entry *finish);
10305 +
10306 +-#ifdef CONFIG_MODVERSIONS
10307 ++#if defined(CONFIG_MODVERSIONS) && defined(CONFIG_PPC64)
10308 + #define ARCH_RELOCATES_KCRCTAB
10309 +-
10310 +-extern const unsigned long reloc_start[];
10311 ++#define reloc_start PHYSICAL_START
10312 + #endif
10313 + #endif /* __KERNEL__ */
10314 + #endif /* _ASM_POWERPC_MODULE_H */
10315 +diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
10316 +index 654e479..f096e72 100644
10317 +--- a/arch/powerpc/kernel/vmlinux.lds.S
10318 ++++ b/arch/powerpc/kernel/vmlinux.lds.S
10319 +@@ -38,9 +38,6 @@ jiffies = jiffies_64 + 4;
10320 + #endif
10321 + SECTIONS
10322 + {
10323 +- . = 0;
10324 +- reloc_start = .;
10325 +-
10326 + . = KERNELBASE;
10327 +
10328 + /*
10329 +diff --git a/arch/x86/kernel/acpi/sleep.c b/arch/x86/kernel/acpi/sleep.c
10330 +index b44577b..ec94e11 100644
10331 +--- a/arch/x86/kernel/acpi/sleep.c
10332 ++++ b/arch/x86/kernel/acpi/sleep.c
10333 +@@ -48,9 +48,20 @@ int acpi_suspend_lowlevel(void)
10334 + #ifndef CONFIG_64BIT
10335 + native_store_gdt((struct desc_ptr *)&header->pmode_gdt);
10336 +
10337 ++ /*
10338 ++ * We have to check that we can write back the value, and not
10339 ++ * just read it. At least on 90 nm Pentium M (Family 6, Model
10340 ++ * 13), reading an invalid MSR is not guaranteed to trap, see
10341 ++ * Erratum X4 in "Intel Pentium M Processor on 90 nm Process
10342 ++ * with 2-MB L2 Cache and Intel® Processor A100 and A110 on 90
10343 ++ * nm process with 512-KB L2 Cache Specification Update".
10344 ++ */
10345 + if (!rdmsr_safe(MSR_EFER,
10346 + &header->pmode_efer_low,
10347 +- &header->pmode_efer_high))
10348 ++ &header->pmode_efer_high) &&
10349 ++ !wrmsr_safe(MSR_EFER,
10350 ++ header->pmode_efer_low,
10351 ++ header->pmode_efer_high))
10352 + header->pmode_behavior |= (1 << WAKEUP_BEHAVIOR_RESTORE_EFER);
10353 + #endif /* !CONFIG_64BIT */
10354 +
10355 +@@ -61,7 +72,10 @@ int acpi_suspend_lowlevel(void)
10356 + }
10357 + if (!rdmsr_safe(MSR_IA32_MISC_ENABLE,
10358 + &header->pmode_misc_en_low,
10359 +- &header->pmode_misc_en_high))
10360 ++ &header->pmode_misc_en_high) &&
10361 ++ !wrmsr_safe(MSR_IA32_MISC_ENABLE,
10362 ++ header->pmode_misc_en_low,
10363 ++ header->pmode_misc_en_high))
10364 + header->pmode_behavior |=
10365 + (1 << WAKEUP_BEHAVIOR_RESTORE_MISC_ENABLE);
10366 + header->realmode_flags = acpi_realmode_flags;
10367 +diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
10368 +index fa72a39..3982357 100644
10369 +--- a/arch/x86/kernel/cpu/mtrr/generic.c
10370 ++++ b/arch/x86/kernel/cpu/mtrr/generic.c
10371 +@@ -510,8 +510,9 @@ generic_get_free_region(unsigned long base, unsigned long size, int replace_reg)
10372 + static void generic_get_mtrr(unsigned int reg, unsigned long *base,
10373 + unsigned long *size, mtrr_type *type)
10374 + {
10375 +- unsigned int mask_lo, mask_hi, base_lo, base_hi;
10376 +- unsigned int tmp, hi;
10377 ++ u32 mask_lo, mask_hi, base_lo, base_hi;
10378 ++ unsigned int hi;
10379 ++ u64 tmp, mask;
10380 +
10381 + /*
10382 + * get_mtrr doesn't need to update mtrr_state, also it could be called
10383 +@@ -532,18 +533,18 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base,
10384 + rdmsr(MTRRphysBase_MSR(reg), base_lo, base_hi);
10385 +
10386 + /* Work out the shifted address mask: */
10387 +- tmp = mask_hi << (32 - PAGE_SHIFT) | mask_lo >> PAGE_SHIFT;
10388 +- mask_lo = size_or_mask | tmp;
10389 ++ tmp = (u64)mask_hi << (32 - PAGE_SHIFT) | mask_lo >> PAGE_SHIFT;
10390 ++ mask = size_or_mask | tmp;
10391 +
10392 + /* Expand tmp with high bits to all 1s: */
10393 +- hi = fls(tmp);
10394 ++ hi = fls64(tmp);
10395 + if (hi > 0) {
10396 +- tmp |= ~((1<<(hi - 1)) - 1);
10397 ++ tmp |= ~((1ULL<<(hi - 1)) - 1);
10398 +
10399 +- if (tmp != mask_lo) {
10400 ++ if (tmp != mask) {
10401 + printk(KERN_WARNING "mtrr: your BIOS has configured an incorrect mask, fixing it.\n");
10402 + add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
10403 +- mask_lo = tmp;
10404 ++ mask = tmp;
10405 + }
10406 + }
10407 +
10408 +@@ -551,8 +552,8 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base,
10409 + * This works correctly if size is a power of two, i.e. a
10410 + * contiguous range:
10411 + */
10412 +- *size = -mask_lo;
10413 +- *base = base_hi << (32 - PAGE_SHIFT) | base_lo >> PAGE_SHIFT;
10414 ++ *size = -mask;
10415 ++ *base = (u64)base_hi << (32 - PAGE_SHIFT) | base_lo >> PAGE_SHIFT;
10416 + *type = base_lo & 0xff;
10417 +
10418 + out_put_cpu:
10419 +diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
10420 +index 726bf96..ca22b73 100644
10421 +--- a/arch/x86/kernel/cpu/mtrr/main.c
10422 ++++ b/arch/x86/kernel/cpu/mtrr/main.c
10423 +@@ -305,7 +305,8 @@ int mtrr_add_page(unsigned long base, unsigned long size,
10424 + return -EINVAL;
10425 + }
10426 +
10427 +- if (base & size_or_mask || size & size_or_mask) {
10428 ++ if ((base | (base + size - 1)) >>
10429 ++ (boot_cpu_data.x86_phys_bits - PAGE_SHIFT)) {
10430 + pr_warning("mtrr: base or size exceeds the MTRR width\n");
10431 + return -EINVAL;
10432 + }
10433 +@@ -583,6 +584,7 @@ static struct syscore_ops mtrr_syscore_ops = {
10434 +
10435 + int __initdata changed_by_mtrr_cleanup;
10436 +
10437 ++#define SIZE_OR_MASK_BITS(n) (~((1ULL << ((n) - PAGE_SHIFT)) - 1))
10438 + /**
10439 + * mtrr_bp_init - initialize mtrrs on the boot CPU
10440 + *
10441 +@@ -600,7 +602,7 @@ void __init mtrr_bp_init(void)
10442 +
10443 + if (cpu_has_mtrr) {
10444 + mtrr_if = &generic_mtrr_ops;
10445 +- size_or_mask = 0xff000000; /* 36 bits */
10446 ++ size_or_mask = SIZE_OR_MASK_BITS(36);
10447 + size_and_mask = 0x00f00000;
10448 + phys_addr = 36;
10449 +
10450 +@@ -619,7 +621,7 @@ void __init mtrr_bp_init(void)
10451 + boot_cpu_data.x86_mask == 0x4))
10452 + phys_addr = 36;
10453 +
10454 +- size_or_mask = ~((1ULL << (phys_addr - PAGE_SHIFT)) - 1);
10455 ++ size_or_mask = SIZE_OR_MASK_BITS(phys_addr);
10456 + size_and_mask = ~size_or_mask & 0xfffff00000ULL;
10457 + } else if (boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR &&
10458 + boot_cpu_data.x86 == 6) {
10459 +@@ -627,7 +629,7 @@ void __init mtrr_bp_init(void)
10460 + * VIA C* family have Intel style MTRRs,
10461 + * but don't support PAE
10462 + */
10463 +- size_or_mask = 0xfff00000; /* 32 bits */
10464 ++ size_or_mask = SIZE_OR_MASK_BITS(32);
10465 + size_and_mask = 0;
10466 + phys_addr = 32;
10467 + }
10468 +@@ -637,21 +639,21 @@ void __init mtrr_bp_init(void)
10469 + if (cpu_has_k6_mtrr) {
10470 + /* Pre-Athlon (K6) AMD CPU MTRRs */
10471 + mtrr_if = mtrr_ops[X86_VENDOR_AMD];
10472 +- size_or_mask = 0xfff00000; /* 32 bits */
10473 ++ size_or_mask = SIZE_OR_MASK_BITS(32);
10474 + size_and_mask = 0;
10475 + }
10476 + break;
10477 + case X86_VENDOR_CENTAUR:
10478 + if (cpu_has_centaur_mcr) {
10479 + mtrr_if = mtrr_ops[X86_VENDOR_CENTAUR];
10480 +- size_or_mask = 0xfff00000; /* 32 bits */
10481 ++ size_or_mask = SIZE_OR_MASK_BITS(32);
10482 + size_and_mask = 0;
10483 + }
10484 + break;
10485 + case X86_VENDOR_CYRIX:
10486 + if (cpu_has_cyrix_arr) {
10487 + mtrr_if = mtrr_ops[X86_VENDOR_CYRIX];
10488 +- size_or_mask = 0xfff00000; /* 32 bits */
10489 ++ size_or_mask = SIZE_OR_MASK_BITS(32);
10490 + size_and_mask = 0;
10491 + }
10492 + break;
10493 +diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
10494 +index 321d65e..a836860 100644
10495 +--- a/arch/x86/kernel/head_64.S
10496 ++++ b/arch/x86/kernel/head_64.S
10497 +@@ -513,7 +513,7 @@ ENTRY(phys_base)
10498 + #include "../../x86/xen/xen-head.S"
10499 +
10500 + .section .bss, "aw", @nobits
10501 +- .align L1_CACHE_BYTES
10502 ++ .align PAGE_SIZE
10503 + ENTRY(idt_table)
10504 + .skip IDT_ENTRIES * 16
10505 +
10506 +diff --git a/drivers/acpi/acpi_memhotplug.c b/drivers/acpi/acpi_memhotplug.c
10507 +index 5e6301e..2cf0244 100644
10508 +--- a/drivers/acpi/acpi_memhotplug.c
10509 ++++ b/drivers/acpi/acpi_memhotplug.c
10510 +@@ -283,6 +283,7 @@ static int acpi_memory_device_add(struct acpi_device *device,
10511 + /* Get the range from the _CRS */
10512 + result = acpi_memory_get_device_resources(mem_device);
10513 + if (result) {
10514 ++ device->driver_data = NULL;
10515 + kfree(mem_device);
10516 + return result;
10517 + }
10518 +diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
10519 +index 14807e5..af658b2 100644
10520 +--- a/drivers/acpi/scan.c
10521 ++++ b/drivers/acpi/scan.c
10522 +@@ -237,10 +237,12 @@ static void acpi_scan_bus_device_check(acpi_handle handle, u32 ost_source)
10523 +
10524 + mutex_lock(&acpi_scan_lock);
10525 +
10526 +- acpi_bus_get_device(handle, &device);
10527 +- if (device) {
10528 +- dev_warn(&device->dev, "Attempt to re-insert\n");
10529 +- goto out;
10530 ++ if (ost_source != ACPI_NOTIFY_BUS_CHECK) {
10531 ++ acpi_bus_get_device(handle, &device);
10532 ++ if (device) {
10533 ++ dev_warn(&device->dev, "Attempt to re-insert\n");
10534 ++ goto out;
10535 ++ }
10536 + }
10537 + acpi_evaluate_hotplug_ost(handle, ost_source,
10538 + ACPI_OST_SC_INSERT_IN_PROGRESS, NULL);
10539 +@@ -1890,6 +1892,9 @@ static acpi_status acpi_bus_device_attach(acpi_handle handle, u32 lvl_not_used,
10540 + if (acpi_bus_get_device(handle, &device))
10541 + return AE_CTRL_DEPTH;
10542 +
10543 ++ if (device->handler)
10544 ++ return AE_OK;
10545 ++
10546 + ret = acpi_scan_attach_handler(device);
10547 + if (ret)
10548 + return ret > 0 ? AE_OK : AE_CTRL_DEPTH;
10549 +diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c
10550 +index 440eadf..0e4b96b 100644
10551 +--- a/drivers/acpi/video.c
10552 ++++ b/drivers/acpi/video.c
10553 +@@ -450,6 +450,14 @@ static struct dmi_system_id video_dmi_table[] __initdata = {
10554 + },
10555 + {
10556 + .callback = video_ignore_initial_backlight,
10557 ++ .ident = "Fujitsu E753",
10558 ++ .matches = {
10559 ++ DMI_MATCH(DMI_BOARD_VENDOR, "FUJITSU"),
10560 ++ DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E753"),
10561 ++ },
10562 ++ },
10563 ++ {
10564 ++ .callback = video_ignore_initial_backlight,
10565 + .ident = "HP Pavilion dm4",
10566 + .matches = {
10567 + DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"),
10568 +diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
10569 +index a5a3ebc..78eabff 100644
10570 +--- a/drivers/ata/Kconfig
10571 ++++ b/drivers/ata/Kconfig
10572 +@@ -107,7 +107,7 @@ config SATA_FSL
10573 + If unsure, say N.
10574 +
10575 + config SATA_INIC162X
10576 +- tristate "Initio 162x SATA support"
10577 ++ tristate "Initio 162x SATA support (Very Experimental)"
10578 + depends on PCI
10579 + help
10580 + This option enables support for Initio 162x Serial ATA.
10581 +diff --git a/drivers/ata/ata_piix.c b/drivers/ata/ata_piix.c
10582 +index 8eae659..b92913a 100644
10583 +--- a/drivers/ata/ata_piix.c
10584 ++++ b/drivers/ata/ata_piix.c
10585 +@@ -330,7 +330,7 @@ static const struct pci_device_id piix_pci_tbl[] = {
10586 + /* SATA Controller IDE (Wellsburg) */
10587 + { 0x8086, 0x8d00, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb },
10588 + /* SATA Controller IDE (Wellsburg) */
10589 +- { 0x8086, 0x8d08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
10590 ++ { 0x8086, 0x8d08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_snb },
10591 + /* SATA Controller IDE (Wellsburg) */
10592 + { 0x8086, 0x8d60, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb },
10593 + /* SATA Controller IDE (Wellsburg) */
10594 +diff --git a/drivers/ata/sata_inic162x.c b/drivers/ata/sata_inic162x.c
10595 +index 1e6827c..74456fa 100644
10596 +--- a/drivers/ata/sata_inic162x.c
10597 ++++ b/drivers/ata/sata_inic162x.c
10598 +@@ -6,6 +6,18 @@
10599 + *
10600 + * This file is released under GPL v2.
10601 + *
10602 ++ * **** WARNING ****
10603 ++ *
10604 ++ * This driver never worked properly and unfortunately data corruption is
10605 ++ * relatively common. There isn't anyone working on the driver and there's
10606 ++ * no support from the vendor. Do not use this driver in any production
10607 ++ * environment.
10608 ++ *
10609 ++ * http://thread.gmane.org/gmane.linux.debian.devel.bugs.rc/378525/focus=54491
10610 ++ * https://bugzilla.kernel.org/show_bug.cgi?id=60565
10611 ++ *
10612 ++ * *****************
10613 ++ *
10614 + * This controller is eccentric and easily locks up if something isn't
10615 + * right. Documentation is available at initio's website but it only
10616 + * documents registers (not programming model).
10617 +@@ -807,6 +819,8 @@ static int inic_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
10618 +
10619 + ata_print_version_once(&pdev->dev, DRV_VERSION);
10620 +
10621 ++ dev_alert(&pdev->dev, "inic162x support is broken with common data corruption issues and will be disabled by default, contact linux-ide@vger.kernel.org if in production use\n");
10622 ++
10623 + /* alloc host */
10624 + host = ata_host_alloc_pinfo(&pdev->dev, ppi, NR_PORTS);
10625 + hpriv = devm_kzalloc(&pdev->dev, sizeof(*hpriv), GFP_KERNEL);
10626 +diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
10627 +index a941dcf..d0c81d1 100644
10628 +--- a/drivers/base/regmap/regmap.c
10629 ++++ b/drivers/base/regmap/regmap.c
10630 +@@ -1717,7 +1717,7 @@ int regmap_async_complete(struct regmap *map)
10631 + int ret;
10632 +
10633 + /* Nothing to do with no async support */
10634 +- if (!map->bus->async_write)
10635 ++ if (!map->bus || !map->bus->async_write)
10636 + return 0;
10637 +
10638 + trace_regmap_async_complete_start(map->dev);
10639 +diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
10640 +index dd5b2fe..d81dfca 100644
10641 +--- a/drivers/block/xen-blkback/blkback.c
10642 ++++ b/drivers/block/xen-blkback/blkback.c
10643 +@@ -647,7 +647,18 @@ static int dispatch_discard_io(struct xen_blkif *blkif,
10644 + int status = BLKIF_RSP_OKAY;
10645 + struct block_device *bdev = blkif->vbd.bdev;
10646 + unsigned long secure;
10647 ++ struct phys_req preq;
10648 ++
10649 ++ preq.sector_number = req->u.discard.sector_number;
10650 ++ preq.nr_sects = req->u.discard.nr_sectors;
10651 +
10652 ++ err = xen_vbd_translate(&preq, blkif, WRITE);
10653 ++ if (err) {
10654 ++ pr_warn(DRV_PFX "access denied: DISCARD [%llu->%llu] on dev=%04x\n",
10655 ++ preq.sector_number,
10656 ++ preq.sector_number + preq.nr_sects, blkif->vbd.pdevice);
10657 ++ goto fail_response;
10658 ++ }
10659 + blkif->st_ds_req++;
10660 +
10661 + xen_blkif_get(blkif);
10662 +@@ -658,7 +669,7 @@ static int dispatch_discard_io(struct xen_blkif *blkif,
10663 + err = blkdev_issue_discard(bdev, req->u.discard.sector_number,
10664 + req->u.discard.nr_sectors,
10665 + GFP_KERNEL, secure);
10666 +-
10667 ++fail_response:
10668 + if (err == -EOPNOTSUPP) {
10669 + pr_debug(DRV_PFX "discard op failed, not supported\n");
10670 + status = BLKIF_RSP_EOPNOTSUPP;
10671 +diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
10672 +index 07f2840..6d6a0b4 100644
10673 +--- a/drivers/cpufreq/intel_pstate.c
10674 ++++ b/drivers/cpufreq/intel_pstate.c
10675 +@@ -103,10 +103,10 @@ struct pstate_adjust_policy {
10676 + static struct pstate_adjust_policy default_policy = {
10677 + .sample_rate_ms = 10,
10678 + .deadband = 0,
10679 +- .setpoint = 109,
10680 +- .p_gain_pct = 17,
10681 ++ .setpoint = 97,
10682 ++ .p_gain_pct = 20,
10683 + .d_gain_pct = 0,
10684 +- .i_gain_pct = 4,
10685 ++ .i_gain_pct = 0,
10686 + };
10687 +
10688 + struct perf_limits {
10689 +@@ -468,12 +468,12 @@ static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
10690 + static inline int intel_pstate_get_scaled_busy(struct cpudata *cpu)
10691 + {
10692 + int32_t busy_scaled;
10693 +- int32_t core_busy, turbo_pstate, current_pstate;
10694 ++ int32_t core_busy, max_pstate, current_pstate;
10695 +
10696 + core_busy = int_tofp(cpu->samples[cpu->sample_ptr].core_pct_busy);
10697 +- turbo_pstate = int_tofp(cpu->pstate.turbo_pstate);
10698 ++ max_pstate = int_tofp(cpu->pstate.max_pstate);
10699 + current_pstate = int_tofp(cpu->pstate.current_pstate);
10700 +- busy_scaled = mul_fp(core_busy, div_fp(turbo_pstate, current_pstate));
10701 ++ busy_scaled = mul_fp(core_busy, div_fp(max_pstate, current_pstate));
10702 +
10703 + return fp_toint(busy_scaled);
10704 + }
10705 +diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
10706 +index 5996521..84573b4 100644
10707 +--- a/drivers/crypto/caam/caamhash.c
10708 ++++ b/drivers/crypto/caam/caamhash.c
10709 +@@ -429,7 +429,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
10710 + dma_addr_t src_dma, dst_dma;
10711 + int ret = 0;
10712 +
10713 +- desc = kmalloc(CAAM_CMD_SZ * 6 + CAAM_PTR_SZ * 2, GFP_KERNEL | GFP_DMA);
10714 ++ desc = kmalloc(CAAM_CMD_SZ * 8 + CAAM_PTR_SZ * 2, GFP_KERNEL | GFP_DMA);
10715 + if (!desc) {
10716 + dev_err(jrdev, "unable to allocate key input memory\n");
10717 + return -ENOMEM;
10718 +diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
10719 +index 7ef316f..ac1b43a 100644
10720 +--- a/drivers/firewire/core-cdev.c
10721 ++++ b/drivers/firewire/core-cdev.c
10722 +@@ -54,6 +54,7 @@
10723 + #define FW_CDEV_KERNEL_VERSION 5
10724 + #define FW_CDEV_VERSION_EVENT_REQUEST2 4
10725 + #define FW_CDEV_VERSION_ALLOCATE_REGION_END 4
10726 ++#define FW_CDEV_VERSION_AUTO_FLUSH_ISO_OVERFLOW 5
10727 +
10728 + struct client {
10729 + u32 version;
10730 +@@ -1005,6 +1006,8 @@ static int ioctl_create_iso_context(struct client *client, union ioctl_arg *arg)
10731 + a->channel, a->speed, a->header_size, cb, client);
10732 + if (IS_ERR(context))
10733 + return PTR_ERR(context);
10734 ++ if (client->version < FW_CDEV_VERSION_AUTO_FLUSH_ISO_OVERFLOW)
10735 ++ context->drop_overflow_headers = true;
10736 +
10737 + /* We only support one context at this time. */
10738 + spin_lock_irq(&client->lock);
10739 +diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c
10740 +index 9e1db64..afb701e 100644
10741 +--- a/drivers/firewire/ohci.c
10742 ++++ b/drivers/firewire/ohci.c
10743 +@@ -2749,8 +2749,11 @@ static void copy_iso_headers(struct iso_context *ctx, const u32 *dma_hdr)
10744 + {
10745 + u32 *ctx_hdr;
10746 +
10747 +- if (ctx->header_length + ctx->base.header_size > PAGE_SIZE)
10748 ++ if (ctx->header_length + ctx->base.header_size > PAGE_SIZE) {
10749 ++ if (ctx->base.drop_overflow_headers)
10750 ++ return;
10751 + flush_iso_completions(ctx);
10752 ++ }
10753 +
10754 + ctx_hdr = ctx->header + ctx->header_length;
10755 + ctx->last_timestamp = (u16)le32_to_cpu((__force __le32)dma_hdr[0]);
10756 +@@ -2910,8 +2913,11 @@ static int handle_it_packet(struct context *context,
10757 +
10758 + sync_it_packet_for_cpu(context, d);
10759 +
10760 +- if (ctx->header_length + 4 > PAGE_SIZE)
10761 ++ if (ctx->header_length + 4 > PAGE_SIZE) {
10762 ++ if (ctx->base.drop_overflow_headers)
10763 ++ return 1;
10764 + flush_iso_completions(ctx);
10765 ++ }
10766 +
10767 + ctx_hdr = ctx->header + ctx->header_length;
10768 + ctx->last_timestamp = le16_to_cpu(last->res_count);
10769 +diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
10770 +index 3b315ba..f968590 100644
10771 +--- a/drivers/gpu/drm/i915/i915_dma.c
10772 ++++ b/drivers/gpu/drm/i915/i915_dma.c
10773 +@@ -1511,6 +1511,13 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
10774 + dev_priv->dev = dev;
10775 + dev_priv->info = info;
10776 +
10777 ++ spin_lock_init(&dev_priv->irq_lock);
10778 ++ spin_lock_init(&dev_priv->gpu_error.lock);
10779 ++ spin_lock_init(&dev_priv->rps.lock);
10780 ++ mutex_init(&dev_priv->dpio_lock);
10781 ++ mutex_init(&dev_priv->rps.hw_lock);
10782 ++ mutex_init(&dev_priv->modeset_restore_lock);
10783 ++
10784 + i915_dump_device_info(dev_priv);
10785 +
10786 + if (i915_get_bridge_dev(dev)) {
10787 +@@ -1601,6 +1608,8 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
10788 + intel_detect_pch(dev);
10789 +
10790 + intel_irq_init(dev);
10791 ++ intel_pm_init(dev);
10792 ++ intel_gt_sanitize(dev);
10793 + intel_gt_init(dev);
10794 +
10795 + /* Try to make sure MCHBAR is enabled before poking at it */
10796 +@@ -1626,14 +1635,6 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
10797 + if (!IS_I945G(dev) && !IS_I945GM(dev))
10798 + pci_enable_msi(dev->pdev);
10799 +
10800 +- spin_lock_init(&dev_priv->irq_lock);
10801 +- spin_lock_init(&dev_priv->gpu_error.lock);
10802 +- spin_lock_init(&dev_priv->rps.lock);
10803 +- mutex_init(&dev_priv->dpio_lock);
10804 +-
10805 +- mutex_init(&dev_priv->rps.hw_lock);
10806 +- mutex_init(&dev_priv->modeset_restore_lock);
10807 +-
10808 + dev_priv->num_plane = 1;
10809 + if (IS_VALLEYVIEW(dev))
10810 + dev_priv->num_plane = 2;
10811 +diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
10812 +index a2e4953..bc6cd31 100644
10813 +--- a/drivers/gpu/drm/i915/i915_drv.c
10814 ++++ b/drivers/gpu/drm/i915/i915_drv.c
10815 +@@ -685,7 +685,7 @@ static int i915_drm_thaw(struct drm_device *dev)
10816 + {
10817 + int error = 0;
10818 +
10819 +- intel_gt_reset(dev);
10820 ++ intel_gt_sanitize(dev);
10821 +
10822 + if (drm_core_check_feature(dev, DRIVER_MODESET)) {
10823 + mutex_lock(&dev->struct_mutex);
10824 +@@ -711,7 +711,7 @@ int i915_resume(struct drm_device *dev)
10825 +
10826 + pci_set_master(dev->pdev);
10827 +
10828 +- intel_gt_reset(dev);
10829 ++ intel_gt_sanitize(dev);
10830 +
10831 + /*
10832 + * Platforms with opregion should have sane BIOS, older ones (gen3 and
10833 +@@ -1247,21 +1247,21 @@ hsw_unclaimed_reg_check(struct drm_i915_private *dev_priv, u32 reg)
10834 +
10835 + #define __i915_read(x, y) \
10836 + u##x i915_read##x(struct drm_i915_private *dev_priv, u32 reg) { \
10837 ++ unsigned long irqflags; \
10838 + u##x val = 0; \
10839 ++ spin_lock_irqsave(&dev_priv->gt_lock, irqflags); \
10840 + if (IS_GEN5(dev_priv->dev)) \
10841 + ilk_dummy_write(dev_priv); \
10842 + if (NEEDS_FORCE_WAKE((dev_priv), (reg))) { \
10843 +- unsigned long irqflags; \
10844 +- spin_lock_irqsave(&dev_priv->gt_lock, irqflags); \
10845 + if (dev_priv->forcewake_count == 0) \
10846 + dev_priv->gt.force_wake_get(dev_priv); \
10847 + val = read##y(dev_priv->regs + reg); \
10848 + if (dev_priv->forcewake_count == 0) \
10849 + dev_priv->gt.force_wake_put(dev_priv); \
10850 +- spin_unlock_irqrestore(&dev_priv->gt_lock, irqflags); \
10851 + } else { \
10852 + val = read##y(dev_priv->regs + reg); \
10853 + } \
10854 ++ spin_unlock_irqrestore(&dev_priv->gt_lock, irqflags); \
10855 + trace_i915_reg_rw(false, reg, val, sizeof(val)); \
10856 + return val; \
10857 + }
10858 +@@ -1274,8 +1274,10 @@ __i915_read(64, q)
10859 +
10860 + #define __i915_write(x, y) \
10861 + void i915_write##x(struct drm_i915_private *dev_priv, u32 reg, u##x val) { \
10862 ++ unsigned long irqflags; \
10863 + u32 __fifo_ret = 0; \
10864 + trace_i915_reg_rw(true, reg, val, sizeof(val)); \
10865 ++ spin_lock_irqsave(&dev_priv->gt_lock, irqflags); \
10866 + if (NEEDS_FORCE_WAKE((dev_priv), (reg))) { \
10867 + __fifo_ret = __gen6_gt_wait_for_fifo(dev_priv); \
10868 + } \
10869 +@@ -1287,6 +1289,7 @@ void i915_write##x(struct drm_i915_private *dev_priv, u32 reg, u##x val) { \
10870 + gen6_gt_check_fifodbg(dev_priv); \
10871 + } \
10872 + hsw_unclaimed_reg_check(dev_priv, reg); \
10873 ++ spin_unlock_irqrestore(&dev_priv->gt_lock, irqflags); \
10874 + }
10875 + __i915_write(8, b)
10876 + __i915_write(16, w)
10877 +diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
10878 +index 9669a0b..47d8b68 100644
10879 +--- a/drivers/gpu/drm/i915/i915_drv.h
10880 ++++ b/drivers/gpu/drm/i915/i915_drv.h
10881 +@@ -491,6 +491,7 @@ enum intel_sbi_destination {
10882 + #define QUIRK_PIPEA_FORCE (1<<0)
10883 + #define QUIRK_LVDS_SSC_DISABLE (1<<1)
10884 + #define QUIRK_INVERT_BRIGHTNESS (1<<2)
10885 ++#define QUIRK_NO_PCH_PWM_ENABLE (1<<3)
10886 +
10887 + struct intel_fbdev;
10888 + struct intel_fbc_work;
10889 +@@ -1474,9 +1475,10 @@ void i915_hangcheck_elapsed(unsigned long data);
10890 + void i915_handle_error(struct drm_device *dev, bool wedged);
10891 +
10892 + extern void intel_irq_init(struct drm_device *dev);
10893 ++extern void intel_pm_init(struct drm_device *dev);
10894 + extern void intel_hpd_init(struct drm_device *dev);
10895 + extern void intel_gt_init(struct drm_device *dev);
10896 +-extern void intel_gt_reset(struct drm_device *dev);
10897 ++extern void intel_gt_sanitize(struct drm_device *dev);
10898 +
10899 + void i915_error_state_free(struct kref *error_ref);
10900 +
10901 +diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
10902 +index 34118b0..0a30088 100644
10903 +--- a/drivers/gpu/drm/i915/i915_gem.c
10904 ++++ b/drivers/gpu/drm/i915/i915_gem.c
10905 +@@ -1881,6 +1881,10 @@ i915_gem_object_move_to_active(struct drm_i915_gem_object *obj,
10906 + u32 seqno = intel_ring_get_seqno(ring);
10907 +
10908 + BUG_ON(ring == NULL);
10909 ++ if (obj->ring != ring && obj->last_write_seqno) {
10910 ++ /* Keep the seqno relative to the current ring */
10911 ++ obj->last_write_seqno = seqno;
10912 ++ }
10913 + obj->ring = ring;
10914 +
10915 + /* Add a reference if we're newly entering the active list. */
10916 +@@ -2134,7 +2138,17 @@ void i915_gem_restore_fences(struct drm_device *dev)
10917 +
10918 + for (i = 0; i < dev_priv->num_fence_regs; i++) {
10919 + struct drm_i915_fence_reg *reg = &dev_priv->fence_regs[i];
10920 +- i915_gem_write_fence(dev, i, reg->obj);
10921 ++
10922 ++ /*
10923 ++ * Commit delayed tiling changes if we have an object still
10924 ++ * attached to the fence, otherwise just clear the fence.
10925 ++ */
10926 ++ if (reg->obj) {
10927 ++ i915_gem_object_update_fence(reg->obj, reg,
10928 ++ reg->obj->tiling_mode);
10929 ++ } else {
10930 ++ i915_gem_write_fence(dev, i, NULL);
10931 ++ }
10932 + }
10933 + }
10934 +
10935 +@@ -2534,7 +2548,6 @@ static void i965_write_fence_reg(struct drm_device *dev, int reg,
10936 + drm_i915_private_t *dev_priv = dev->dev_private;
10937 + int fence_reg;
10938 + int fence_pitch_shift;
10939 +- uint64_t val;
10940 +
10941 + if (INTEL_INFO(dev)->gen >= 6) {
10942 + fence_reg = FENCE_REG_SANDYBRIDGE_0;
10943 +@@ -2544,8 +2557,23 @@ static void i965_write_fence_reg(struct drm_device *dev, int reg,
10944 + fence_pitch_shift = I965_FENCE_PITCH_SHIFT;
10945 + }
10946 +
10947 ++ fence_reg += reg * 8;
10948 ++
10949 ++ /* To w/a incoherency with non-atomic 64-bit register updates,
10950 ++ * we split the 64-bit update into two 32-bit writes. In order
10951 ++ * for a partial fence not to be evaluated between writes, we
10952 ++ * precede the update with write to turn off the fence register,
10953 ++ * and only enable the fence as the last step.
10954 ++ *
10955 ++ * For extra levels of paranoia, we make sure each step lands
10956 ++ * before applying the next step.
10957 ++ */
10958 ++ I915_WRITE(fence_reg, 0);
10959 ++ POSTING_READ(fence_reg);
10960 ++
10961 + if (obj) {
10962 + u32 size = obj->gtt_space->size;
10963 ++ uint64_t val;
10964 +
10965 + val = (uint64_t)((obj->gtt_offset + size - 4096) &
10966 + 0xfffff000) << 32;
10967 +@@ -2554,12 +2582,16 @@ static void i965_write_fence_reg(struct drm_device *dev, int reg,
10968 + if (obj->tiling_mode == I915_TILING_Y)
10969 + val |= 1 << I965_FENCE_TILING_Y_SHIFT;
10970 + val |= I965_FENCE_REG_VALID;
10971 +- } else
10972 +- val = 0;
10973 +
10974 +- fence_reg += reg * 8;
10975 +- I915_WRITE64(fence_reg, val);
10976 +- POSTING_READ(fence_reg);
10977 ++ I915_WRITE(fence_reg + 4, val >> 32);
10978 ++ POSTING_READ(fence_reg + 4);
10979 ++
10980 ++ I915_WRITE(fence_reg + 0, val);
10981 ++ POSTING_READ(fence_reg);
10982 ++ } else {
10983 ++ I915_WRITE(fence_reg + 4, 0);
10984 ++ POSTING_READ(fence_reg + 4);
10985 ++ }
10986 + }
10987 +
10988 + static void i915_write_fence_reg(struct drm_device *dev, int reg,
10989 +@@ -2654,6 +2686,10 @@ static void i915_gem_write_fence(struct drm_device *dev, int reg,
10990 + if (i915_gem_object_needs_mb(dev_priv->fence_regs[reg].obj))
10991 + mb();
10992 +
10993 ++ WARN(obj && (!obj->stride || !obj->tiling_mode),
10994 ++ "bogus fence setup with stride: 0x%x, tiling mode: %i\n",
10995 ++ obj->stride, obj->tiling_mode);
10996 ++
10997 + switch (INTEL_INFO(dev)->gen) {
10998 + case 7:
10999 + case 6:
11000 +@@ -2713,6 +2749,7 @@ static void i915_gem_object_update_fence(struct drm_i915_gem_object *obj,
11001 + fence->obj = NULL;
11002 + list_del_init(&fence->lru_list);
11003 + }
11004 ++ obj->fence_dirty = false;
11005 + }
11006 +
11007 + static int
11008 +@@ -2842,7 +2879,6 @@ i915_gem_object_get_fence(struct drm_i915_gem_object *obj)
11009 + return 0;
11010 +
11011 + i915_gem_object_update_fence(obj, reg, enable);
11012 +- obj->fence_dirty = false;
11013 +
11014 + return 0;
11015 + }
11016 +@@ -4457,7 +4493,7 @@ i915_gem_inactive_shrink(struct shrinker *shrinker, struct shrink_control *sc)
11017 + list_for_each_entry(obj, &dev_priv->mm.unbound_list, gtt_list)
11018 + if (obj->pages_pin_count == 0)
11019 + cnt += obj->base.size >> PAGE_SHIFT;
11020 +- list_for_each_entry(obj, &dev_priv->mm.inactive_list, gtt_list)
11021 ++ list_for_each_entry(obj, &dev_priv->mm.inactive_list, mm_list)
11022 + if (obj->pin_count == 0 && obj->pages_pin_count == 0)
11023 + cnt += obj->base.size >> PAGE_SHIFT;
11024 +
11025 +diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
11026 +index 56746dc..e1f4e6e 100644
11027 +--- a/drivers/gpu/drm/i915/intel_display.c
11028 ++++ b/drivers/gpu/drm/i915/intel_display.c
11029 +@@ -8146,15 +8146,20 @@ static void intel_set_config_restore_state(struct drm_device *dev,
11030 + }
11031 +
11032 + static bool
11033 +-is_crtc_connector_off(struct drm_crtc *crtc, struct drm_connector *connectors,
11034 +- int num_connectors)
11035 ++is_crtc_connector_off(struct drm_mode_set *set)
11036 + {
11037 + int i;
11038 +
11039 +- for (i = 0; i < num_connectors; i++)
11040 +- if (connectors[i].encoder &&
11041 +- connectors[i].encoder->crtc == crtc &&
11042 +- connectors[i].dpms != DRM_MODE_DPMS_ON)
11043 ++ if (set->num_connectors == 0)
11044 ++ return false;
11045 ++
11046 ++ if (WARN_ON(set->connectors == NULL))
11047 ++ return false;
11048 ++
11049 ++ for (i = 0; i < set->num_connectors; i++)
11050 ++ if (set->connectors[i]->encoder &&
11051 ++ set->connectors[i]->encoder->crtc == set->crtc &&
11052 ++ set->connectors[i]->dpms != DRM_MODE_DPMS_ON)
11053 + return true;
11054 +
11055 + return false;
11056 +@@ -8167,10 +8172,8 @@ intel_set_config_compute_mode_changes(struct drm_mode_set *set,
11057 +
11058 + /* We should be able to check here if the fb has the same properties
11059 + * and then just flip_or_move it */
11060 +- if (set->connectors != NULL &&
11061 +- is_crtc_connector_off(set->crtc, *set->connectors,
11062 +- set->num_connectors)) {
11063 +- config->mode_changed = true;
11064 ++ if (is_crtc_connector_off(set)) {
11065 ++ config->mode_changed = true;
11066 + } else if (set->crtc->fb != set->fb) {
11067 + /* If we have no fb then treat it as a full mode set */
11068 + if (set->crtc->fb == NULL) {
11069 +@@ -8914,6 +8917,17 @@ static void quirk_invert_brightness(struct drm_device *dev)
11070 + DRM_INFO("applying inverted panel brightness quirk\n");
11071 + }
11072 +
11073 ++/*
11074 ++ * Some machines (Dell XPS13) suffer broken backlight controls if
11075 ++ * BLM_PCH_PWM_ENABLE is set.
11076 ++ */
11077 ++static void quirk_no_pcm_pwm_enable(struct drm_device *dev)
11078 ++{
11079 ++ struct drm_i915_private *dev_priv = dev->dev_private;
11080 ++ dev_priv->quirks |= QUIRK_NO_PCH_PWM_ENABLE;
11081 ++ DRM_INFO("applying no-PCH_PWM_ENABLE quirk\n");
11082 ++}
11083 ++
11084 + struct intel_quirk {
11085 + int device;
11086 + int subsystem_vendor;
11087 +@@ -8983,6 +8997,11 @@ static struct intel_quirk intel_quirks[] = {
11088 +
11089 + /* Acer Aspire 4736Z */
11090 + { 0x2a42, 0x1025, 0x0260, quirk_invert_brightness },
11091 ++
11092 ++ /* Dell XPS13 HD Sandy Bridge */
11093 ++ { 0x0116, 0x1028, 0x052e, quirk_no_pcm_pwm_enable },
11094 ++ /* Dell XPS13 HD and XPS13 FHD Ivy Bridge */
11095 ++ { 0x0166, 0x1028, 0x058b, quirk_no_pcm_pwm_enable },
11096 + };
11097 +
11098 + static void intel_init_quirks(struct drm_device *dev)
11099 +diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c
11100 +index eb5e6e9..33cb87f 100644
11101 +--- a/drivers/gpu/drm/i915/intel_panel.c
11102 ++++ b/drivers/gpu/drm/i915/intel_panel.c
11103 +@@ -354,7 +354,8 @@ void intel_panel_enable_backlight(struct drm_device *dev,
11104 + POSTING_READ(reg);
11105 + I915_WRITE(reg, tmp | BLM_PWM_ENABLE);
11106 +
11107 +- if (HAS_PCH_SPLIT(dev)) {
11108 ++ if (HAS_PCH_SPLIT(dev) &&
11109 ++ !(dev_priv->quirks & QUIRK_NO_PCH_PWM_ENABLE)) {
11110 + tmp = I915_READ(BLC_PWM_PCH_CTL1);
11111 + tmp |= BLM_PCH_PWM_ENABLE;
11112 + tmp &= ~BLM_PCH_OVERRIDE_ENABLE;
11113 +diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
11114 +index aa01128..2cfe9f6 100644
11115 +--- a/drivers/gpu/drm/i915/intel_pm.c
11116 ++++ b/drivers/gpu/drm/i915/intel_pm.c
11117 +@@ -4486,7 +4486,7 @@ static void vlv_force_wake_put(struct drm_i915_private *dev_priv)
11118 + gen6_gt_check_fifodbg(dev_priv);
11119 + }
11120 +
11121 +-void intel_gt_reset(struct drm_device *dev)
11122 ++void intel_gt_sanitize(struct drm_device *dev)
11123 + {
11124 + struct drm_i915_private *dev_priv = dev->dev_private;
11125 +
11126 +@@ -4497,6 +4497,10 @@ void intel_gt_reset(struct drm_device *dev)
11127 + if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev))
11128 + __gen6_gt_force_wake_mt_reset(dev_priv);
11129 + }
11130 ++
11131 ++ /* BIOS often leaves RC6 enabled, but disable it for hw init */
11132 ++ if (INTEL_INFO(dev)->gen >= 6)
11133 ++ intel_disable_gt_powersave(dev);
11134 + }
11135 +
11136 + void intel_gt_init(struct drm_device *dev)
11137 +@@ -4505,18 +4509,51 @@ void intel_gt_init(struct drm_device *dev)
11138 +
11139 + spin_lock_init(&dev_priv->gt_lock);
11140 +
11141 +- intel_gt_reset(dev);
11142 +-
11143 + if (IS_VALLEYVIEW(dev)) {
11144 + dev_priv->gt.force_wake_get = vlv_force_wake_get;
11145 + dev_priv->gt.force_wake_put = vlv_force_wake_put;
11146 +- } else if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) {
11147 ++ } else if (IS_HASWELL(dev)) {
11148 + dev_priv->gt.force_wake_get = __gen6_gt_force_wake_mt_get;
11149 + dev_priv->gt.force_wake_put = __gen6_gt_force_wake_mt_put;
11150 ++ } else if (IS_IVYBRIDGE(dev)) {
11151 ++ u32 ecobus;
11152 ++
11153 ++ /* IVB configs may use multi-threaded forcewake */
11154 ++
11155 ++ /* A small trick here - if the bios hasn't configured
11156 ++ * MT forcewake, and if the device is in RC6, then
11157 ++ * force_wake_mt_get will not wake the device and the
11158 ++ * ECOBUS read will return zero. Which will be
11159 ++ * (correctly) interpreted by the test below as MT
11160 ++ * forcewake being disabled.
11161 ++ */
11162 ++ mutex_lock(&dev->struct_mutex);
11163 ++ __gen6_gt_force_wake_mt_get(dev_priv);
11164 ++ ecobus = I915_READ_NOTRACE(ECOBUS);
11165 ++ __gen6_gt_force_wake_mt_put(dev_priv);
11166 ++ mutex_unlock(&dev->struct_mutex);
11167 ++
11168 ++ if (ecobus & FORCEWAKE_MT_ENABLE) {
11169 ++ dev_priv->gt.force_wake_get =
11170 ++ __gen6_gt_force_wake_mt_get;
11171 ++ dev_priv->gt.force_wake_put =
11172 ++ __gen6_gt_force_wake_mt_put;
11173 ++ } else {
11174 ++ DRM_INFO("No MT forcewake available on Ivybridge, this can result in issues\n");
11175 ++ DRM_INFO("when using vblank-synced partial screen updates.\n");
11176 ++ dev_priv->gt.force_wake_get = __gen6_gt_force_wake_get;
11177 ++ dev_priv->gt.force_wake_put = __gen6_gt_force_wake_put;
11178 ++ }
11179 + } else if (IS_GEN6(dev)) {
11180 + dev_priv->gt.force_wake_get = __gen6_gt_force_wake_get;
11181 + dev_priv->gt.force_wake_put = __gen6_gt_force_wake_put;
11182 + }
11183 ++}
11184 ++
11185 ++void intel_pm_init(struct drm_device *dev)
11186 ++{
11187 ++ struct drm_i915_private *dev_priv = dev->dev_private;
11188 ++
11189 + INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work,
11190 + intel_gen6_powersave_work);
11191 + }
11192 +diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
11193 +index 1d5d613..1424f20 100644
11194 +--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
11195 ++++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
11196 +@@ -490,9 +490,6 @@ cleanup_pipe_control(struct intel_ring_buffer *ring)
11197 + struct pipe_control *pc = ring->private;
11198 + struct drm_i915_gem_object *obj;
11199 +
11200 +- if (!ring->private)
11201 +- return;
11202 +-
11203 + obj = pc->obj;
11204 +
11205 + kunmap(sg_page(obj->pages->sgl));
11206 +@@ -500,7 +497,6 @@ cleanup_pipe_control(struct intel_ring_buffer *ring)
11207 + drm_gem_object_unreference(&obj->base);
11208 +
11209 + kfree(pc);
11210 +- ring->private = NULL;
11211 + }
11212 +
11213 + static int init_render_ring(struct intel_ring_buffer *ring)
11214 +@@ -571,7 +567,10 @@ static void render_ring_cleanup(struct intel_ring_buffer *ring)
11215 + if (HAS_BROKEN_CS_TLB(dev))
11216 + drm_gem_object_unreference(to_gem_object(ring->private));
11217 +
11218 +- cleanup_pipe_control(ring);
11219 ++ if (INTEL_INFO(dev)->gen >= 5)
11220 ++ cleanup_pipe_control(ring);
11221 ++
11222 ++ ring->private = NULL;
11223 + }
11224 +
11225 + static void
11226 +diff --git a/drivers/gpu/drm/nouveau/nv17_fence.c b/drivers/gpu/drm/nouveau/nv17_fence.c
11227 +index 8e47a9b..22aa996 100644
11228 +--- a/drivers/gpu/drm/nouveau/nv17_fence.c
11229 ++++ b/drivers/gpu/drm/nouveau/nv17_fence.c
11230 +@@ -76,7 +76,7 @@ nv17_fence_context_new(struct nouveau_channel *chan)
11231 + struct ttm_mem_reg *mem = &priv->bo->bo.mem;
11232 + struct nouveau_object *object;
11233 + u32 start = mem->start * PAGE_SIZE;
11234 +- u32 limit = mem->start + mem->size - 1;
11235 ++ u32 limit = start + mem->size - 1;
11236 + int ret = 0;
11237 +
11238 + fctx = chan->fence = kzalloc(sizeof(*fctx), GFP_KERNEL);
11239 +diff --git a/drivers/gpu/drm/nouveau/nv50_fence.c b/drivers/gpu/drm/nouveau/nv50_fence.c
11240 +index f9701e5..0ee3638 100644
11241 +--- a/drivers/gpu/drm/nouveau/nv50_fence.c
11242 ++++ b/drivers/gpu/drm/nouveau/nv50_fence.c
11243 +@@ -39,6 +39,8 @@ nv50_fence_context_new(struct nouveau_channel *chan)
11244 + struct nv10_fence_chan *fctx;
11245 + struct ttm_mem_reg *mem = &priv->bo->bo.mem;
11246 + struct nouveau_object *object;
11247 ++ u32 start = mem->start * PAGE_SIZE;
11248 ++ u32 limit = start + mem->size - 1;
11249 + int ret, i;
11250 +
11251 + fctx = chan->fence = kzalloc(sizeof(*fctx), GFP_KERNEL);
11252 +@@ -51,26 +53,28 @@ nv50_fence_context_new(struct nouveau_channel *chan)
11253 + fctx->base.sync = nv17_fence_sync;
11254 +
11255 + ret = nouveau_object_new(nv_object(chan->cli), chan->handle,
11256 +- NvSema, 0x0002,
11257 ++ NvSema, 0x003d,
11258 + &(struct nv_dma_class) {
11259 + .flags = NV_DMA_TARGET_VRAM |
11260 + NV_DMA_ACCESS_RDWR,
11261 +- .start = mem->start * PAGE_SIZE,
11262 +- .limit = mem->size - 1,
11263 ++ .start = start,
11264 ++ .limit = limit,
11265 + }, sizeof(struct nv_dma_class),
11266 + &object);
11267 +
11268 + /* dma objects for display sync channel semaphore blocks */
11269 + for (i = 0; !ret && i < dev->mode_config.num_crtc; i++) {
11270 + struct nouveau_bo *bo = nv50_display_crtc_sema(dev, i);
11271 ++ u32 start = bo->bo.mem.start * PAGE_SIZE;
11272 ++ u32 limit = start + bo->bo.mem.size - 1;
11273 +
11274 + ret = nouveau_object_new(nv_object(chan->cli), chan->handle,
11275 + NvEvoSema0 + i, 0x003d,
11276 + &(struct nv_dma_class) {
11277 + .flags = NV_DMA_TARGET_VRAM |
11278 + NV_DMA_ACCESS_RDWR,
11279 +- .start = bo->bo.offset,
11280 +- .limit = bo->bo.offset + 0xfff,
11281 ++ .start = start,
11282 ++ .limit = limit,
11283 + }, sizeof(struct nv_dma_class),
11284 + &object);
11285 + }
11286 +diff --git a/drivers/gpu/drm/radeon/atom.c b/drivers/gpu/drm/radeon/atom.c
11287 +index fb441a7..15da7ef 100644
11288 +--- a/drivers/gpu/drm/radeon/atom.c
11289 ++++ b/drivers/gpu/drm/radeon/atom.c
11290 +@@ -1222,12 +1222,17 @@ int atom_execute_table(struct atom_context *ctx, int index, uint32_t * params)
11291 + int r;
11292 +
11293 + mutex_lock(&ctx->mutex);
11294 ++ /* reset data block */
11295 ++ ctx->data_block = 0;
11296 + /* reset reg block */
11297 + ctx->reg_block = 0;
11298 + /* reset fb window */
11299 + ctx->fb_base = 0;
11300 + /* reset io mode */
11301 + ctx->io_mode = ATOM_IO_MM;
11302 ++ /* reset divmul */
11303 ++ ctx->divmul[0] = 0;
11304 ++ ctx->divmul[1] = 0;
11305 + r = atom_execute_table_locked(ctx, index, params);
11306 + mutex_unlock(&ctx->mutex);
11307 + return r;
11308 +diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c
11309 +index 064023b..32501f6 100644
11310 +--- a/drivers/gpu/drm/radeon/atombios_dp.c
11311 ++++ b/drivers/gpu/drm/radeon/atombios_dp.c
11312 +@@ -44,6 +44,41 @@ static char *pre_emph_names[] = {
11313 + };
11314 +
11315 + /***** radeon AUX functions *****/
11316 ++
11317 ++/* Atom needs data in little endian format
11318 ++ * so swap as appropriate when copying data to
11319 ++ * or from atom. Note that atom operates on
11320 ++ * dw units.
11321 ++ */
11322 ++static void radeon_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le)
11323 ++{
11324 ++#ifdef __BIG_ENDIAN
11325 ++ u8 src_tmp[20], dst_tmp[20]; /* used for byteswapping */
11326 ++ u32 *dst32, *src32;
11327 ++ int i;
11328 ++
11329 ++ memcpy(src_tmp, src, num_bytes);
11330 ++ src32 = (u32 *)src_tmp;
11331 ++ dst32 = (u32 *)dst_tmp;
11332 ++ if (to_le) {
11333 ++ for (i = 0; i < ((num_bytes + 3) / 4); i++)
11334 ++ dst32[i] = cpu_to_le32(src32[i]);
11335 ++ memcpy(dst, dst_tmp, num_bytes);
11336 ++ } else {
11337 ++ u8 dws = num_bytes & ~3;
11338 ++ for (i = 0; i < ((num_bytes + 3) / 4); i++)
11339 ++ dst32[i] = le32_to_cpu(src32[i]);
11340 ++ memcpy(dst, dst_tmp, dws);
11341 ++ if (num_bytes % 4) {
11342 ++ for (i = 0; i < (num_bytes % 4); i++)
11343 ++ dst[dws+i] = dst_tmp[dws+i];
11344 ++ }
11345 ++ }
11346 ++#else
11347 ++ memcpy(dst, src, num_bytes);
11348 ++#endif
11349 ++}
11350 ++
11351 + union aux_channel_transaction {
11352 + PROCESS_AUX_CHANNEL_TRANSACTION_PS_ALLOCATION v1;
11353 + PROCESS_AUX_CHANNEL_TRANSACTION_PARAMETERS_V2 v2;
11354 +@@ -65,10 +100,10 @@ static int radeon_process_aux_ch(struct radeon_i2c_chan *chan,
11355 +
11356 + base = (unsigned char *)(rdev->mode_info.atom_context->scratch + 1);
11357 +
11358 +- memcpy(base, send, send_bytes);
11359 ++ radeon_copy_swap(base, send, send_bytes, true);
11360 +
11361 +- args.v1.lpAuxRequest = 0 + 4;
11362 +- args.v1.lpDataOut = 16 + 4;
11363 ++ args.v1.lpAuxRequest = cpu_to_le16((u16)(0 + 4));
11364 ++ args.v1.lpDataOut = cpu_to_le16((u16)(16 + 4));
11365 + args.v1.ucDataOutLen = 0;
11366 + args.v1.ucChannelID = chan->rec.i2c_id;
11367 + args.v1.ucDelay = delay / 10;
11368 +@@ -102,7 +137,7 @@ static int radeon_process_aux_ch(struct radeon_i2c_chan *chan,
11369 + recv_bytes = recv_size;
11370 +
11371 + if (recv && recv_size)
11372 +- memcpy(recv, base + 16, recv_bytes);
11373 ++ radeon_copy_swap(recv, base + 16, recv_bytes, false);
11374 +
11375 + return recv_bytes;
11376 + }
11377 +diff --git a/drivers/gpu/drm/radeon/evergreen_hdmi.c b/drivers/gpu/drm/radeon/evergreen_hdmi.c
11378 +index b9c6f76..bb9ea36 100644
11379 +--- a/drivers/gpu/drm/radeon/evergreen_hdmi.c
11380 ++++ b/drivers/gpu/drm/radeon/evergreen_hdmi.c
11381 +@@ -157,9 +157,9 @@ static void evergreen_audio_set_dto(struct drm_encoder *encoder, u32 clock)
11382 + * number (coefficient of two integer numbers. DCCG_AUDIO_DTOx_PHASE
11383 + * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator
11384 + */
11385 ++ WREG32(DCCG_AUDIO_DTO_SOURCE, DCCG_AUDIO_DTO0_SOURCE_SEL(radeon_crtc->crtc_id));
11386 + WREG32(DCCG_AUDIO_DTO0_PHASE, base_rate * 100);
11387 + WREG32(DCCG_AUDIO_DTO0_MODULE, clock * 100);
11388 +- WREG32(DCCG_AUDIO_DTO_SOURCE, DCCG_AUDIO_DTO0_SOURCE_SEL(radeon_crtc->crtc_id));
11389 + }
11390 +
11391 +
11392 +@@ -177,6 +177,9 @@ void evergreen_hdmi_setmode(struct drm_encoder *encoder, struct drm_display_mode
11393 + uint32_t offset;
11394 + ssize_t err;
11395 +
11396 ++ if (!dig || !dig->afmt)
11397 ++ return;
11398 ++
11399 + /* Silent, r600_hdmi_enable will raise WARN for us */
11400 + if (!dig->afmt->enabled)
11401 + return;
11402 +@@ -280,6 +283,9 @@ void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable)
11403 + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
11404 + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
11405 +
11406 ++ if (!dig || !dig->afmt)
11407 ++ return;
11408 ++
11409 + /* Silent, r600_hdmi_enable will raise WARN for us */
11410 + if (enable && dig->afmt->enabled)
11411 + return;
11412 +diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
11413 +index 6948eb8..b60004e 100644
11414 +--- a/drivers/gpu/drm/radeon/r600.c
11415 ++++ b/drivers/gpu/drm/radeon/r600.c
11416 +@@ -2986,7 +2986,7 @@ void r600_uvd_fence_emit(struct radeon_device *rdev,
11417 + struct radeon_fence *fence)
11418 + {
11419 + struct radeon_ring *ring = &rdev->ring[fence->ring];
11420 +- uint32_t addr = rdev->fence_drv[fence->ring].gpu_addr;
11421 ++ uint64_t addr = rdev->fence_drv[fence->ring].gpu_addr;
11422 +
11423 + radeon_ring_write(ring, PACKET0(UVD_CONTEXT_ID, 0));
11424 + radeon_ring_write(ring, fence->seq);
11425 +diff --git a/drivers/gpu/drm/radeon/r600_hdmi.c b/drivers/gpu/drm/radeon/r600_hdmi.c
11426 +index e73b2a7..f48240b 100644
11427 +--- a/drivers/gpu/drm/radeon/r600_hdmi.c
11428 ++++ b/drivers/gpu/drm/radeon/r600_hdmi.c
11429 +@@ -266,6 +266,9 @@ void r600_hdmi_setmode(struct drm_encoder *encoder, struct drm_display_mode *mod
11430 + uint32_t offset;
11431 + ssize_t err;
11432 +
11433 ++ if (!dig || !dig->afmt)
11434 ++ return;
11435 ++
11436 + /* Silent, r600_hdmi_enable will raise WARN for us */
11437 + if (!dig->afmt->enabled)
11438 + return;
11439 +@@ -448,6 +451,9 @@ void r600_hdmi_enable(struct drm_encoder *encoder, bool enable)
11440 + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
11441 + u32 hdmi = HDMI0_ERROR_ACK;
11442 +
11443 ++ if (!dig || !dig->afmt)
11444 ++ return;
11445 ++
11446 + /* Silent, r600_hdmi_enable will raise WARN for us */
11447 + if (enable && dig->afmt->enabled)
11448 + return;
11449 +diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
11450 +index 142ce6c..f4dcfdd 100644
11451 +--- a/drivers/gpu/drm/radeon/radeon.h
11452 ++++ b/drivers/gpu/drm/radeon/radeon.h
11453 +@@ -408,6 +408,7 @@ struct radeon_sa_manager {
11454 + uint64_t gpu_addr;
11455 + void *cpu_ptr;
11456 + uint32_t domain;
11457 ++ uint32_t align;
11458 + };
11459 +
11460 + struct radeon_sa_bo;
11461 +diff --git a/drivers/gpu/drm/radeon/radeon_combios.c b/drivers/gpu/drm/radeon/radeon_combios.c
11462 +index 78edadc..68ce360 100644
11463 +--- a/drivers/gpu/drm/radeon/radeon_combios.c
11464 ++++ b/drivers/gpu/drm/radeon/radeon_combios.c
11465 +@@ -147,7 +147,7 @@ static uint16_t combios_get_table_offset(struct drm_device *dev,
11466 + enum radeon_combios_table_offset table)
11467 + {
11468 + struct radeon_device *rdev = dev->dev_private;
11469 +- int rev;
11470 ++ int rev, size;
11471 + uint16_t offset = 0, check_offset;
11472 +
11473 + if (!rdev->bios)
11474 +@@ -156,174 +156,106 @@ static uint16_t combios_get_table_offset(struct drm_device *dev,
11475 + switch (table) {
11476 + /* absolute offset tables */
11477 + case COMBIOS_ASIC_INIT_1_TABLE:
11478 +- check_offset = RBIOS16(rdev->bios_header_start + 0xc);
11479 +- if (check_offset)
11480 +- offset = check_offset;
11481 ++ check_offset = 0xc;
11482 + break;
11483 + case COMBIOS_BIOS_SUPPORT_TABLE:
11484 +- check_offset = RBIOS16(rdev->bios_header_start + 0x14);
11485 +- if (check_offset)
11486 +- offset = check_offset;
11487 ++ check_offset = 0x14;
11488 + break;
11489 + case COMBIOS_DAC_PROGRAMMING_TABLE:
11490 +- check_offset = RBIOS16(rdev->bios_header_start + 0x2a);
11491 +- if (check_offset)
11492 +- offset = check_offset;
11493 ++ check_offset = 0x2a;
11494 + break;
11495 + case COMBIOS_MAX_COLOR_DEPTH_TABLE:
11496 +- check_offset = RBIOS16(rdev->bios_header_start + 0x2c);
11497 +- if (check_offset)
11498 +- offset = check_offset;
11499 ++ check_offset = 0x2c;
11500 + break;
11501 + case COMBIOS_CRTC_INFO_TABLE:
11502 +- check_offset = RBIOS16(rdev->bios_header_start + 0x2e);
11503 +- if (check_offset)
11504 +- offset = check_offset;
11505 ++ check_offset = 0x2e;
11506 + break;
11507 + case COMBIOS_PLL_INFO_TABLE:
11508 +- check_offset = RBIOS16(rdev->bios_header_start + 0x30);
11509 +- if (check_offset)
11510 +- offset = check_offset;
11511 ++ check_offset = 0x30;
11512 + break;
11513 + case COMBIOS_TV_INFO_TABLE:
11514 +- check_offset = RBIOS16(rdev->bios_header_start + 0x32);
11515 +- if (check_offset)
11516 +- offset = check_offset;
11517 ++ check_offset = 0x32;
11518 + break;
11519 + case COMBIOS_DFP_INFO_TABLE:
11520 +- check_offset = RBIOS16(rdev->bios_header_start + 0x34);
11521 +- if (check_offset)
11522 +- offset = check_offset;
11523 ++ check_offset = 0x34;
11524 + break;
11525 + case COMBIOS_HW_CONFIG_INFO_TABLE:
11526 +- check_offset = RBIOS16(rdev->bios_header_start + 0x36);
11527 +- if (check_offset)
11528 +- offset = check_offset;
11529 ++ check_offset = 0x36;
11530 + break;
11531 + case COMBIOS_MULTIMEDIA_INFO_TABLE:
11532 +- check_offset = RBIOS16(rdev->bios_header_start + 0x38);
11533 +- if (check_offset)
11534 +- offset = check_offset;
11535 ++ check_offset = 0x38;
11536 + break;
11537 + case COMBIOS_TV_STD_PATCH_TABLE:
11538 +- check_offset = RBIOS16(rdev->bios_header_start + 0x3e);
11539 +- if (check_offset)
11540 +- offset = check_offset;
11541 ++ check_offset = 0x3e;
11542 + break;
11543 + case COMBIOS_LCD_INFO_TABLE:
11544 +- check_offset = RBIOS16(rdev->bios_header_start + 0x40);
11545 +- if (check_offset)
11546 +- offset = check_offset;
11547 ++ check_offset = 0x40;
11548 + break;
11549 + case COMBIOS_MOBILE_INFO_TABLE:
11550 +- check_offset = RBIOS16(rdev->bios_header_start + 0x42);
11551 +- if (check_offset)
11552 +- offset = check_offset;
11553 ++ check_offset = 0x42;
11554 + break;
11555 + case COMBIOS_PLL_INIT_TABLE:
11556 +- check_offset = RBIOS16(rdev->bios_header_start + 0x46);
11557 +- if (check_offset)
11558 +- offset = check_offset;
11559 ++ check_offset = 0x46;
11560 + break;
11561 + case COMBIOS_MEM_CONFIG_TABLE:
11562 +- check_offset = RBIOS16(rdev->bios_header_start + 0x48);
11563 +- if (check_offset)
11564 +- offset = check_offset;
11565 ++ check_offset = 0x48;
11566 + break;
11567 + case COMBIOS_SAVE_MASK_TABLE:
11568 +- check_offset = RBIOS16(rdev->bios_header_start + 0x4a);
11569 +- if (check_offset)
11570 +- offset = check_offset;
11571 ++ check_offset = 0x4a;
11572 + break;
11573 + case COMBIOS_HARDCODED_EDID_TABLE:
11574 +- check_offset = RBIOS16(rdev->bios_header_start + 0x4c);
11575 +- if (check_offset)
11576 +- offset = check_offset;
11577 ++ check_offset = 0x4c;
11578 + break;
11579 + case COMBIOS_ASIC_INIT_2_TABLE:
11580 +- check_offset = RBIOS16(rdev->bios_header_start + 0x4e);
11581 +- if (check_offset)
11582 +- offset = check_offset;
11583 ++ check_offset = 0x4e;
11584 + break;
11585 + case COMBIOS_CONNECTOR_INFO_TABLE:
11586 +- check_offset = RBIOS16(rdev->bios_header_start + 0x50);
11587 +- if (check_offset)
11588 +- offset = check_offset;
11589 ++ check_offset = 0x50;
11590 + break;
11591 + case COMBIOS_DYN_CLK_1_TABLE:
11592 +- check_offset = RBIOS16(rdev->bios_header_start + 0x52);
11593 +- if (check_offset)
11594 +- offset = check_offset;
11595 ++ check_offset = 0x52;
11596 + break;
11597 + case COMBIOS_RESERVED_MEM_TABLE:
11598 +- check_offset = RBIOS16(rdev->bios_header_start + 0x54);
11599 +- if (check_offset)
11600 +- offset = check_offset;
11601 ++ check_offset = 0x54;
11602 + break;
11603 + case COMBIOS_EXT_TMDS_INFO_TABLE:
11604 +- check_offset = RBIOS16(rdev->bios_header_start + 0x58);
11605 +- if (check_offset)
11606 +- offset = check_offset;
11607 ++ check_offset = 0x58;
11608 + break;
11609 + case COMBIOS_MEM_CLK_INFO_TABLE:
11610 +- check_offset = RBIOS16(rdev->bios_header_start + 0x5a);
11611 +- if (check_offset)
11612 +- offset = check_offset;
11613 ++ check_offset = 0x5a;
11614 + break;
11615 + case COMBIOS_EXT_DAC_INFO_TABLE:
11616 +- check_offset = RBIOS16(rdev->bios_header_start + 0x5c);
11617 +- if (check_offset)
11618 +- offset = check_offset;
11619 ++ check_offset = 0x5c;
11620 + break;
11621 + case COMBIOS_MISC_INFO_TABLE:
11622 +- check_offset = RBIOS16(rdev->bios_header_start + 0x5e);
11623 +- if (check_offset)
11624 +- offset = check_offset;
11625 ++ check_offset = 0x5e;
11626 + break;
11627 + case COMBIOS_CRT_INFO_TABLE:
11628 +- check_offset = RBIOS16(rdev->bios_header_start + 0x60);
11629 +- if (check_offset)
11630 +- offset = check_offset;
11631 ++ check_offset = 0x60;
11632 + break;
11633 + case COMBIOS_INTEGRATED_SYSTEM_INFO_TABLE:
11634 +- check_offset = RBIOS16(rdev->bios_header_start + 0x62);
11635 +- if (check_offset)
11636 +- offset = check_offset;
11637 ++ check_offset = 0x62;
11638 + break;
11639 + case COMBIOS_COMPONENT_VIDEO_INFO_TABLE:
11640 +- check_offset = RBIOS16(rdev->bios_header_start + 0x64);
11641 +- if (check_offset)
11642 +- offset = check_offset;
11643 ++ check_offset = 0x64;
11644 + break;
11645 + case COMBIOS_FAN_SPEED_INFO_TABLE:
11646 +- check_offset = RBIOS16(rdev->bios_header_start + 0x66);
11647 +- if (check_offset)
11648 +- offset = check_offset;
11649 ++ check_offset = 0x66;
11650 + break;
11651 + case COMBIOS_OVERDRIVE_INFO_TABLE:
11652 +- check_offset = RBIOS16(rdev->bios_header_start + 0x68);
11653 +- if (check_offset)
11654 +- offset = check_offset;
11655 ++ check_offset = 0x68;
11656 + break;
11657 + case COMBIOS_OEM_INFO_TABLE:
11658 +- check_offset = RBIOS16(rdev->bios_header_start + 0x6a);
11659 +- if (check_offset)
11660 +- offset = check_offset;
11661 ++ check_offset = 0x6a;
11662 + break;
11663 + case COMBIOS_DYN_CLK_2_TABLE:
11664 +- check_offset = RBIOS16(rdev->bios_header_start + 0x6c);
11665 +- if (check_offset)
11666 +- offset = check_offset;
11667 ++ check_offset = 0x6c;
11668 + break;
11669 + case COMBIOS_POWER_CONNECTOR_INFO_TABLE:
11670 +- check_offset = RBIOS16(rdev->bios_header_start + 0x6e);
11671 +- if (check_offset)
11672 +- offset = check_offset;
11673 ++ check_offset = 0x6e;
11674 + break;
11675 + case COMBIOS_I2C_INFO_TABLE:
11676 +- check_offset = RBIOS16(rdev->bios_header_start + 0x70);
11677 +- if (check_offset)
11678 +- offset = check_offset;
11679 ++ check_offset = 0x70;
11680 + break;
11681 + /* relative offset tables */
11682 + case COMBIOS_ASIC_INIT_3_TABLE: /* offset from misc info */
11683 +@@ -439,11 +371,16 @@ static uint16_t combios_get_table_offset(struct drm_device *dev,
11684 + }
11685 + break;
11686 + default:
11687 ++ check_offset = 0;
11688 + break;
11689 + }
11690 +
11691 +- return offset;
11692 ++ size = RBIOS8(rdev->bios_header_start + 0x6);
11693 ++ /* check absolute offset tables */
11694 ++ if (table < COMBIOS_ASIC_INIT_3_TABLE && check_offset && check_offset < size)
11695 ++ offset = RBIOS16(rdev->bios_header_start + check_offset);
11696 +
11697 ++ return offset;
11698 + }
11699 +
11700 + bool radeon_combios_check_hardcoded_edid(struct radeon_device *rdev)
11701 +@@ -965,16 +902,22 @@ struct radeon_encoder_primary_dac *radeon_combios_get_primary_dac_info(struct
11702 + dac = RBIOS8(dac_info + 0x3) & 0xf;
11703 + p_dac->ps2_pdac_adj = (bg << 8) | (dac);
11704 + }
11705 +- /* if the values are all zeros, use the table */
11706 +- if (p_dac->ps2_pdac_adj)
11707 ++ /* if the values are zeros, use the table */
11708 ++ if ((dac == 0) || (bg == 0))
11709 ++ found = 0;
11710 ++ else
11711 + found = 1;
11712 + }
11713 +
11714 + /* quirks */
11715 ++ /* Radeon 7000 (RV100) */
11716 ++ if (((dev->pdev->device == 0x5159) &&
11717 ++ (dev->pdev->subsystem_vendor == 0x174B) &&
11718 ++ (dev->pdev->subsystem_device == 0x7c28)) ||
11719 + /* Radeon 9100 (R200) */
11720 +- if ((dev->pdev->device == 0x514D) &&
11721 ++ ((dev->pdev->device == 0x514D) &&
11722 + (dev->pdev->subsystem_vendor == 0x174B) &&
11723 +- (dev->pdev->subsystem_device == 0x7149)) {
11724 ++ (dev->pdev->subsystem_device == 0x7149))) {
11725 + /* vbios value is bad, use the default */
11726 + found = 0;
11727 + }
11728 +diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
11729 +index 43ec4a4..5ce190b 100644
11730 +--- a/drivers/gpu/drm/radeon/radeon_gart.c
11731 ++++ b/drivers/gpu/drm/radeon/radeon_gart.c
11732 +@@ -467,6 +467,7 @@ int radeon_vm_manager_init(struct radeon_device *rdev)
11733 + size *= 2;
11734 + r = radeon_sa_bo_manager_init(rdev, &rdev->vm_manager.sa_manager,
11735 + RADEON_GPU_PAGE_ALIGN(size),
11736 ++ RADEON_GPU_PAGE_SIZE,
11737 + RADEON_GEM_DOMAIN_VRAM);
11738 + if (r) {
11739 + dev_err(rdev->dev, "failed to allocate vm bo (%dKB)\n",
11740 +diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c
11741 +index 5a99d43..1fe12ab 100644
11742 +--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c
11743 ++++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c
11744 +@@ -241,9 +241,6 @@ int radeon_irq_kms_init(struct radeon_device *rdev)
11745 + {
11746 + int r = 0;
11747 +
11748 +- INIT_WORK(&rdev->hotplug_work, radeon_hotplug_work_func);
11749 +- INIT_WORK(&rdev->audio_work, r600_audio_update_hdmi);
11750 +-
11751 + spin_lock_init(&rdev->irq.lock);
11752 + r = drm_vblank_init(rdev->ddev, rdev->num_crtc);
11753 + if (r) {
11754 +@@ -265,6 +262,10 @@ int radeon_irq_kms_init(struct radeon_device *rdev)
11755 + rdev->irq.installed = false;
11756 + return r;
11757 + }
11758 ++
11759 ++ INIT_WORK(&rdev->hotplug_work, radeon_hotplug_work_func);
11760 ++ INIT_WORK(&rdev->audio_work, r600_audio_update_hdmi);
11761 ++
11762 + DRM_INFO("radeon: irq initialized.\n");
11763 + return 0;
11764 + }
11765 +@@ -284,8 +285,8 @@ void radeon_irq_kms_fini(struct radeon_device *rdev)
11766 + rdev->irq.installed = false;
11767 + if (rdev->msi_enabled)
11768 + pci_disable_msi(rdev->pdev);
11769 ++ flush_work(&rdev->hotplug_work);
11770 + }
11771 +- flush_work(&rdev->hotplug_work);
11772 + }
11773 +
11774 + /**
11775 +diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h
11776 +index e2cb80a..2943823 100644
11777 +--- a/drivers/gpu/drm/radeon/radeon_object.h
11778 ++++ b/drivers/gpu/drm/radeon/radeon_object.h
11779 +@@ -158,7 +158,7 @@ static inline void * radeon_sa_bo_cpu_addr(struct radeon_sa_bo *sa_bo)
11780 +
11781 + extern int radeon_sa_bo_manager_init(struct radeon_device *rdev,
11782 + struct radeon_sa_manager *sa_manager,
11783 +- unsigned size, u32 domain);
11784 ++ unsigned size, u32 align, u32 domain);
11785 + extern void radeon_sa_bo_manager_fini(struct radeon_device *rdev,
11786 + struct radeon_sa_manager *sa_manager);
11787 + extern int radeon_sa_bo_manager_start(struct radeon_device *rdev,
11788 +diff --git a/drivers/gpu/drm/radeon/radeon_ring.c b/drivers/gpu/drm/radeon/radeon_ring.c
11789 +index 82434018..83f6295 100644
11790 +--- a/drivers/gpu/drm/radeon/radeon_ring.c
11791 ++++ b/drivers/gpu/drm/radeon/radeon_ring.c
11792 +@@ -224,6 +224,7 @@ int radeon_ib_pool_init(struct radeon_device *rdev)
11793 + }
11794 + r = radeon_sa_bo_manager_init(rdev, &rdev->ring_tmp_bo,
11795 + RADEON_IB_POOL_SIZE*64*1024,
11796 ++ RADEON_GPU_PAGE_SIZE,
11797 + RADEON_GEM_DOMAIN_GTT);
11798 + if (r) {
11799 + return r;
11800 +diff --git a/drivers/gpu/drm/radeon/radeon_sa.c b/drivers/gpu/drm/radeon/radeon_sa.c
11801 +index 0abe5a9..f0bac68 100644
11802 +--- a/drivers/gpu/drm/radeon/radeon_sa.c
11803 ++++ b/drivers/gpu/drm/radeon/radeon_sa.c
11804 +@@ -49,7 +49,7 @@ static void radeon_sa_bo_try_free(struct radeon_sa_manager *sa_manager);
11805 +
11806 + int radeon_sa_bo_manager_init(struct radeon_device *rdev,
11807 + struct radeon_sa_manager *sa_manager,
11808 +- unsigned size, u32 domain)
11809 ++ unsigned size, u32 align, u32 domain)
11810 + {
11811 + int i, r;
11812 +
11813 +@@ -57,13 +57,14 @@ int radeon_sa_bo_manager_init(struct radeon_device *rdev,
11814 + sa_manager->bo = NULL;
11815 + sa_manager->size = size;
11816 + sa_manager->domain = domain;
11817 ++ sa_manager->align = align;
11818 + sa_manager->hole = &sa_manager->olist;
11819 + INIT_LIST_HEAD(&sa_manager->olist);
11820 + for (i = 0; i < RADEON_NUM_RINGS; ++i) {
11821 + INIT_LIST_HEAD(&sa_manager->flist[i]);
11822 + }
11823 +
11824 +- r = radeon_bo_create(rdev, size, RADEON_GPU_PAGE_SIZE, true,
11825 ++ r = radeon_bo_create(rdev, size, align, true,
11826 + domain, NULL, &sa_manager->bo);
11827 + if (r) {
11828 + dev_err(rdev->dev, "(%d) failed to allocate bo for manager\n", r);
11829 +@@ -317,7 +318,7 @@ int radeon_sa_bo_new(struct radeon_device *rdev,
11830 + unsigned tries[RADEON_NUM_RINGS];
11831 + int i, r;
11832 +
11833 +- BUG_ON(align > RADEON_GPU_PAGE_SIZE);
11834 ++ BUG_ON(align > sa_manager->align);
11835 + BUG_ON(size > sa_manager->size);
11836 +
11837 + *sa_bo = kmalloc(sizeof(struct radeon_sa_bo), GFP_KERNEL);
11838 +diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
11839 +index 4c605c7..deb5c25 100644
11840 +--- a/drivers/hv/hv_balloon.c
11841 ++++ b/drivers/hv/hv_balloon.c
11842 +@@ -562,7 +562,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
11843 + struct hv_hotadd_state *has)
11844 + {
11845 + int ret = 0;
11846 +- int i, nid, t;
11847 ++ int i, nid;
11848 + unsigned long start_pfn;
11849 + unsigned long processed_pfn;
11850 + unsigned long total_pfn = pfn_count;
11851 +@@ -607,14 +607,11 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
11852 +
11853 + /*
11854 + * Wait for the memory block to be onlined.
11855 ++ * Since the hot add has succeeded, it is ok to
11856 ++ * proceed even if the pages in the hot added region
11857 ++ * have not been "onlined" within the allowed time.
11858 + */
11859 +- t = wait_for_completion_timeout(&dm_device.ol_waitevent, 5*HZ);
11860 +- if (t == 0) {
11861 +- pr_info("hot_add memory timedout\n");
11862 +- has->ha_end_pfn -= HA_CHUNK;
11863 +- has->covered_end_pfn -= processed_pfn;
11864 +- break;
11865 +- }
11866 ++ wait_for_completion_timeout(&dm_device.ol_waitevent, 5*HZ);
11867 +
11868 + }
11869 +
11870 +@@ -978,6 +975,14 @@ static void post_status(struct hv_dynmem_device *dm)
11871 + dm->num_pages_ballooned +
11872 + compute_balloon_floor();
11873 +
11874 ++ /*
11875 ++ * If our transaction ID is no longer current, just don't
11876 ++ * send the status. This can happen if we were interrupted
11877 ++ * after we picked our transaction ID.
11878 ++ */
11879 ++ if (status.hdr.trans_id != atomic_read(&trans_id))
11880 ++ return;
11881 ++
11882 + vmbus_sendpacket(dm->dev->channel, &status,
11883 + sizeof(struct dm_status),
11884 + (unsigned long)NULL,
11885 +diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
11886 +index 41712f0..5849dc0 100644
11887 +--- a/drivers/infiniband/ulp/isert/ib_isert.c
11888 ++++ b/drivers/infiniband/ulp/isert/ib_isert.c
11889 +@@ -388,6 +388,7 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
11890 + init_waitqueue_head(&isert_conn->conn_wait_comp_err);
11891 + kref_init(&isert_conn->conn_kref);
11892 + kref_get(&isert_conn->conn_kref);
11893 ++ mutex_init(&isert_conn->conn_mutex);
11894 +
11895 + cma_id->context = isert_conn;
11896 + isert_conn->conn_cm_id = cma_id;
11897 +@@ -540,15 +541,32 @@ isert_disconnect_work(struct work_struct *work)
11898 + struct isert_conn, conn_logout_work);
11899 +
11900 + pr_debug("isert_disconnect_work(): >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n");
11901 +-
11902 ++ mutex_lock(&isert_conn->conn_mutex);
11903 + isert_conn->state = ISER_CONN_DOWN;
11904 +
11905 + if (isert_conn->post_recv_buf_count == 0 &&
11906 + atomic_read(&isert_conn->post_send_buf_count) == 0) {
11907 + pr_debug("Calling wake_up(&isert_conn->conn_wait);\n");
11908 +- wake_up(&isert_conn->conn_wait);
11909 ++ mutex_unlock(&isert_conn->conn_mutex);
11910 ++ goto wake_up;
11911 ++ }
11912 ++ if (!isert_conn->conn_cm_id) {
11913 ++ mutex_unlock(&isert_conn->conn_mutex);
11914 ++ isert_put_conn(isert_conn);
11915 ++ return;
11916 ++ }
11917 ++ if (!isert_conn->logout_posted) {
11918 ++ pr_debug("Calling rdma_disconnect for !logout_posted from"
11919 ++ " isert_disconnect_work\n");
11920 ++ rdma_disconnect(isert_conn->conn_cm_id);
11921 ++ mutex_unlock(&isert_conn->conn_mutex);
11922 ++ iscsit_cause_connection_reinstatement(isert_conn->conn, 0);
11923 ++ goto wake_up;
11924 + }
11925 ++ mutex_unlock(&isert_conn->conn_mutex);
11926 +
11927 ++wake_up:
11928 ++ wake_up(&isert_conn->conn_wait);
11929 + isert_put_conn(isert_conn);
11930 + }
11931 +
11932 +@@ -934,16 +952,11 @@ isert_handle_scsi_cmd(struct isert_conn *isert_conn,
11933 + }
11934 +
11935 + sequence_cmd:
11936 +- rc = iscsit_sequence_cmd(conn, cmd, hdr->cmdsn);
11937 ++ rc = iscsit_sequence_cmd(conn, cmd, buf, hdr->cmdsn);
11938 +
11939 + if (!rc && dump_payload == false && unsol_data)
11940 + iscsit_set_unsoliticed_dataout(cmd);
11941 +
11942 +- if (rc == CMDSN_ERROR_CANNOT_RECOVER)
11943 +- return iscsit_add_reject_from_cmd(
11944 +- ISCSI_REASON_PROTOCOL_ERROR,
11945 +- 1, 0, (unsigned char *)hdr, cmd);
11946 +-
11947 + return 0;
11948 + }
11949 +
11950 +@@ -1184,14 +1197,12 @@ isert_put_cmd(struct isert_cmd *isert_cmd)
11951 + {
11952 + struct iscsi_cmd *cmd = &isert_cmd->iscsi_cmd;
11953 + struct isert_conn *isert_conn = isert_cmd->conn;
11954 +- struct iscsi_conn *conn;
11955 ++ struct iscsi_conn *conn = isert_conn->conn;
11956 +
11957 + pr_debug("Entering isert_put_cmd: %p\n", isert_cmd);
11958 +
11959 + switch (cmd->iscsi_opcode) {
11960 + case ISCSI_OP_SCSI_CMD:
11961 +- conn = isert_conn->conn;
11962 +-
11963 + spin_lock_bh(&conn->cmd_lock);
11964 + if (!list_empty(&cmd->i_conn_node))
11965 + list_del(&cmd->i_conn_node);
11966 +@@ -1201,16 +1212,18 @@ isert_put_cmd(struct isert_cmd *isert_cmd)
11967 + iscsit_stop_dataout_timer(cmd);
11968 +
11969 + isert_unmap_cmd(isert_cmd, isert_conn);
11970 +- /*
11971 +- * Fall-through
11972 +- */
11973 ++ transport_generic_free_cmd(&cmd->se_cmd, 0);
11974 ++ break;
11975 + case ISCSI_OP_SCSI_TMFUNC:
11976 ++ spin_lock_bh(&conn->cmd_lock);
11977 ++ if (!list_empty(&cmd->i_conn_node))
11978 ++ list_del(&cmd->i_conn_node);
11979 ++ spin_unlock_bh(&conn->cmd_lock);
11980 ++
11981 + transport_generic_free_cmd(&cmd->se_cmd, 0);
11982 + break;
11983 + case ISCSI_OP_REJECT:
11984 + case ISCSI_OP_NOOP_OUT:
11985 +- conn = isert_conn->conn;
11986 +-
11987 + spin_lock_bh(&conn->cmd_lock);
11988 + if (!list_empty(&cmd->i_conn_node))
11989 + list_del(&cmd->i_conn_node);
11990 +@@ -1222,6 +1235,9 @@ isert_put_cmd(struct isert_cmd *isert_cmd)
11991 + * associated cmd->se_cmd needs to be released.
11992 + */
11993 + if (cmd->se_cmd.se_tfo != NULL) {
11994 ++ pr_debug("Calling transport_generic_free_cmd from"
11995 ++ " isert_put_cmd for 0x%02x\n",
11996 ++ cmd->iscsi_opcode);
11997 + transport_generic_free_cmd(&cmd->se_cmd, 0);
11998 + break;
11999 + }
12000 +@@ -1318,8 +1334,8 @@ isert_do_control_comp(struct work_struct *work)
12001 + atomic_dec(&isert_conn->post_send_buf_count);
12002 +
12003 + cmd->i_state = ISTATE_SENT_STATUS;
12004 +- complete(&cmd->reject_comp);
12005 + isert_completion_put(&isert_cmd->tx_desc, isert_cmd, ib_dev);
12006 ++ break;
12007 + case ISTATE_SEND_LOGOUTRSP:
12008 + pr_debug("Calling iscsit_logout_post_handler >>>>>>>>>>>>>>\n");
12009 + /*
12010 +@@ -1345,7 +1361,8 @@ isert_response_completion(struct iser_tx_desc *tx_desc,
12011 + struct iscsi_cmd *cmd = &isert_cmd->iscsi_cmd;
12012 +
12013 + if (cmd->i_state == ISTATE_SEND_TASKMGTRSP ||
12014 +- cmd->i_state == ISTATE_SEND_LOGOUTRSP) {
12015 ++ cmd->i_state == ISTATE_SEND_LOGOUTRSP ||
12016 ++ cmd->i_state == ISTATE_SEND_REJECT) {
12017 + isert_unmap_tx_desc(tx_desc, ib_dev);
12018 +
12019 + INIT_WORK(&isert_cmd->comp_work, isert_do_control_comp);
12020 +@@ -1419,7 +1436,11 @@ isert_cq_comp_err(struct iser_tx_desc *tx_desc, struct isert_conn *isert_conn)
12021 + pr_debug("isert_cq_comp_err >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n");
12022 + pr_debug("Calling wake_up from isert_cq_comp_err\n");
12023 +
12024 +- isert_conn->state = ISER_CONN_TERMINATING;
12025 ++ mutex_lock(&isert_conn->conn_mutex);
12026 ++ if (isert_conn->state != ISER_CONN_DOWN)
12027 ++ isert_conn->state = ISER_CONN_TERMINATING;
12028 ++ mutex_unlock(&isert_conn->conn_mutex);
12029 ++
12030 + wake_up(&isert_conn->conn_wait_comp_err);
12031 + }
12032 + }
12033 +@@ -1637,11 +1658,25 @@ isert_put_reject(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
12034 + struct isert_cmd, iscsi_cmd);
12035 + struct isert_conn *isert_conn = (struct isert_conn *)conn->context;
12036 + struct ib_send_wr *send_wr = &isert_cmd->tx_desc.send_wr;
12037 ++ struct ib_device *ib_dev = isert_conn->conn_cm_id->device;
12038 ++ struct ib_sge *tx_dsg = &isert_cmd->tx_desc.tx_sg[1];
12039 ++ struct iscsi_reject *hdr =
12040 ++ (struct iscsi_reject *)&isert_cmd->tx_desc.iscsi_header;
12041 +
12042 + isert_create_send_desc(isert_conn, isert_cmd, &isert_cmd->tx_desc);
12043 +- iscsit_build_reject(cmd, conn, (struct iscsi_reject *)
12044 +- &isert_cmd->tx_desc.iscsi_header);
12045 ++ iscsit_build_reject(cmd, conn, hdr);
12046 + isert_init_tx_hdrs(isert_conn, &isert_cmd->tx_desc);
12047 ++
12048 ++ hton24(hdr->dlength, ISCSI_HDR_LEN);
12049 ++ isert_cmd->sense_buf_dma = ib_dma_map_single(ib_dev,
12050 ++ (void *)cmd->buf_ptr, ISCSI_HDR_LEN,
12051 ++ DMA_TO_DEVICE);
12052 ++ isert_cmd->sense_buf_len = ISCSI_HDR_LEN;
12053 ++ tx_dsg->addr = isert_cmd->sense_buf_dma;
12054 ++ tx_dsg->length = ISCSI_HDR_LEN;
12055 ++ tx_dsg->lkey = isert_conn->conn_mr->lkey;
12056 ++ isert_cmd->tx_desc.num_sge = 2;
12057 ++
12058 + isert_init_send_wr(isert_cmd, send_wr);
12059 +
12060 + pr_debug("Posting Reject IB_WR_SEND >>>>>>>>>>>>>>>>>>>>>>\n");
12061 +@@ -2175,6 +2210,17 @@ isert_free_np(struct iscsi_np *np)
12062 + kfree(isert_np);
12063 + }
12064 +
12065 ++static int isert_check_state(struct isert_conn *isert_conn, int state)
12066 ++{
12067 ++ int ret;
12068 ++
12069 ++ mutex_lock(&isert_conn->conn_mutex);
12070 ++ ret = (isert_conn->state == state);
12071 ++ mutex_unlock(&isert_conn->conn_mutex);
12072 ++
12073 ++ return ret;
12074 ++}
12075 ++
12076 + static void isert_free_conn(struct iscsi_conn *conn)
12077 + {
12078 + struct isert_conn *isert_conn = conn->context;
12079 +@@ -2184,26 +2230,43 @@ static void isert_free_conn(struct iscsi_conn *conn)
12080 + * Decrement post_send_buf_count for special case when called
12081 + * from isert_do_control_comp() -> iscsit_logout_post_handler()
12082 + */
12083 ++ mutex_lock(&isert_conn->conn_mutex);
12084 + if (isert_conn->logout_posted)
12085 + atomic_dec(&isert_conn->post_send_buf_count);
12086 +
12087 +- if (isert_conn->conn_cm_id)
12088 ++ if (isert_conn->conn_cm_id && isert_conn->state != ISER_CONN_DOWN) {
12089 ++ pr_debug("Calling rdma_disconnect from isert_free_conn\n");
12090 + rdma_disconnect(isert_conn->conn_cm_id);
12091 ++ }
12092 + /*
12093 + * Only wait for conn_wait_comp_err if the isert_conn made it
12094 + * into full feature phase..
12095 + */
12096 +- if (isert_conn->state > ISER_CONN_INIT) {
12097 ++ if (isert_conn->state == ISER_CONN_UP) {
12098 + pr_debug("isert_free_conn: Before wait_event comp_err %d\n",
12099 + isert_conn->state);
12100 ++ mutex_unlock(&isert_conn->conn_mutex);
12101 ++
12102 + wait_event(isert_conn->conn_wait_comp_err,
12103 +- isert_conn->state == ISER_CONN_TERMINATING);
12104 +- pr_debug("isert_free_conn: After wait_event #1 >>>>>>>>>>>>\n");
12105 ++ (isert_check_state(isert_conn, ISER_CONN_TERMINATING)));
12106 ++
12107 ++ wait_event(isert_conn->conn_wait,
12108 ++ (isert_check_state(isert_conn, ISER_CONN_DOWN)));
12109 ++
12110 ++ isert_put_conn(isert_conn);
12111 ++ return;
12112 ++ }
12113 ++ if (isert_conn->state == ISER_CONN_INIT) {
12114 ++ mutex_unlock(&isert_conn->conn_mutex);
12115 ++ isert_put_conn(isert_conn);
12116 ++ return;
12117 + }
12118 ++ pr_debug("isert_free_conn: wait_event conn_wait %d\n",
12119 ++ isert_conn->state);
12120 ++ mutex_unlock(&isert_conn->conn_mutex);
12121 +
12122 +- pr_debug("isert_free_conn: wait_event conn_wait %d\n", isert_conn->state);
12123 +- wait_event(isert_conn->conn_wait, isert_conn->state == ISER_CONN_DOWN);
12124 +- pr_debug("isert_free_conn: After wait_event #2 >>>>>>>>>>>>>>>>>>>>\n");
12125 ++ wait_event(isert_conn->conn_wait,
12126 ++ (isert_check_state(isert_conn, ISER_CONN_DOWN)));
12127 +
12128 + isert_put_conn(isert_conn);
12129 + }
12130 +diff --git a/drivers/infiniband/ulp/isert/ib_isert.h b/drivers/infiniband/ulp/isert/ib_isert.h
12131 +index b104f4c..5795c82 100644
12132 +--- a/drivers/infiniband/ulp/isert/ib_isert.h
12133 ++++ b/drivers/infiniband/ulp/isert/ib_isert.h
12134 +@@ -102,6 +102,7 @@ struct isert_conn {
12135 + struct ib_qp *conn_qp;
12136 + struct isert_device *conn_device;
12137 + struct work_struct conn_logout_work;
12138 ++ struct mutex conn_mutex;
12139 + wait_queue_head_t conn_wait;
12140 + wait_queue_head_t conn_wait_comp_err;
12141 + struct kref conn_kref;
12142 +diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
12143 +index aa04f02..81a79b7 100644
12144 +--- a/drivers/md/dm-ioctl.c
12145 ++++ b/drivers/md/dm-ioctl.c
12146 +@@ -1644,7 +1644,10 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
12147 + }
12148 +
12149 + if (!dmi) {
12150 ++ unsigned noio_flag;
12151 ++ noio_flag = memalloc_noio_save();
12152 + dmi = __vmalloc(param_kernel->data_size, GFP_NOIO | __GFP_REPEAT | __GFP_HIGH, PAGE_KERNEL);
12153 ++ memalloc_noio_restore(noio_flag);
12154 + if (dmi)
12155 + *param_flags |= DM_PARAMS_VMALLOC;
12156 + }
12157 +diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
12158 +index bdf26f5..5adede1 100644
12159 +--- a/drivers/md/dm-mpath.c
12160 ++++ b/drivers/md/dm-mpath.c
12161 +@@ -1561,7 +1561,6 @@ static int multipath_ioctl(struct dm_target *ti, unsigned int cmd,
12162 + unsigned long flags;
12163 + int r;
12164 +
12165 +-again:
12166 + bdev = NULL;
12167 + mode = 0;
12168 + r = 0;
12169 +@@ -1579,7 +1578,7 @@ again:
12170 + }
12171 +
12172 + if ((pgpath && m->queue_io) || (!pgpath && m->queue_if_no_path))
12173 +- r = -EAGAIN;
12174 ++ r = -ENOTCONN;
12175 + else if (!bdev)
12176 + r = -EIO;
12177 +
12178 +@@ -1591,11 +1590,8 @@ again:
12179 + if (!r && ti->len != i_size_read(bdev->bd_inode) >> SECTOR_SHIFT)
12180 + r = scsi_verify_blk_ioctl(NULL, cmd);
12181 +
12182 +- if (r == -EAGAIN && !fatal_signal_pending(current)) {
12183 ++ if (r == -ENOTCONN && !fatal_signal_pending(current))
12184 + queue_work(kmultipathd, &m->process_queued_ios);
12185 +- msleep(10);
12186 +- goto again;
12187 +- }
12188 +
12189 + return r ? : __blkdev_driver_ioctl(bdev, mode, cmd, arg);
12190 + }
12191 +diff --git a/drivers/md/dm-verity.c b/drivers/md/dm-verity.c
12192 +index b948fd8..0d2e812 100644
12193 +--- a/drivers/md/dm-verity.c
12194 ++++ b/drivers/md/dm-verity.c
12195 +@@ -831,9 +831,8 @@ static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
12196 + for (i = v->levels - 1; i >= 0; i--) {
12197 + sector_t s;
12198 + v->hash_level_block[i] = hash_position;
12199 +- s = verity_position_at_level(v, v->data_blocks, i);
12200 +- s = (s >> v->hash_per_block_bits) +
12201 +- !!(s & ((1 << v->hash_per_block_bits) - 1));
12202 ++ s = (v->data_blocks + ((sector_t)1 << ((i + 1) * v->hash_per_block_bits)) - 1)
12203 ++ >> ((i + 1) * v->hash_per_block_bits);
12204 + if (hash_position + s < hash_position) {
12205 + ti->error = "Hash device offset overflow";
12206 + r = -E2BIG;
12207 +diff --git a/drivers/md/dm.c b/drivers/md/dm.c
12208 +index d5370a9..33f2010 100644
12209 +--- a/drivers/md/dm.c
12210 ++++ b/drivers/md/dm.c
12211 +@@ -386,10 +386,12 @@ static int dm_blk_ioctl(struct block_device *bdev, fmode_t mode,
12212 + unsigned int cmd, unsigned long arg)
12213 + {
12214 + struct mapped_device *md = bdev->bd_disk->private_data;
12215 +- struct dm_table *map = dm_get_live_table(md);
12216 ++ struct dm_table *map;
12217 + struct dm_target *tgt;
12218 + int r = -ENOTTY;
12219 +
12220 ++retry:
12221 ++ map = dm_get_live_table(md);
12222 + if (!map || !dm_table_get_size(map))
12223 + goto out;
12224 +
12225 +@@ -410,6 +412,11 @@ static int dm_blk_ioctl(struct block_device *bdev, fmode_t mode,
12226 + out:
12227 + dm_table_put(map);
12228 +
12229 ++ if (r == -ENOTCONN) {
12230 ++ msleep(10);
12231 ++ goto retry;
12232 ++ }
12233 ++
12234 + return r;
12235 + }
12236 +
12237 +diff --git a/drivers/md/md.c b/drivers/md/md.c
12238 +index 9b82377..51f0345 100644
12239 +--- a/drivers/md/md.c
12240 ++++ b/drivers/md/md.c
12241 +@@ -7697,20 +7697,6 @@ static int remove_and_add_spares(struct mddev *mddev,
12242 + continue;
12243 +
12244 + rdev->recovery_offset = 0;
12245 +- if (rdev->saved_raid_disk >= 0 && mddev->in_sync) {
12246 +- spin_lock_irq(&mddev->write_lock);
12247 +- if (mddev->in_sync)
12248 +- /* OK, this device, which is in_sync,
12249 +- * will definitely be noticed before
12250 +- * the next write, so recovery isn't
12251 +- * needed.
12252 +- */
12253 +- rdev->recovery_offset = mddev->recovery_cp;
12254 +- spin_unlock_irq(&mddev->write_lock);
12255 +- }
12256 +- if (mddev->ro && rdev->recovery_offset != MaxSector)
12257 +- /* not safe to add this disk now */
12258 +- continue;
12259 + if (mddev->pers->
12260 + hot_add_disk(mddev, rdev) == 0) {
12261 + if (sysfs_link_rdev(mddev, rdev))
12262 +diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
12263 +index 6e17f81..6f48244 100644
12264 +--- a/drivers/md/raid1.c
12265 ++++ b/drivers/md/raid1.c
12266 +@@ -1848,6 +1848,36 @@ static int process_checks(struct r1bio *r1_bio)
12267 + int i;
12268 + int vcnt;
12269 +
12270 ++ /* Fix variable parts of all bios */
12271 ++ vcnt = (r1_bio->sectors + PAGE_SIZE / 512 - 1) >> (PAGE_SHIFT - 9);
12272 ++ for (i = 0; i < conf->raid_disks * 2; i++) {
12273 ++ int j;
12274 ++ int size;
12275 ++ struct bio *b = r1_bio->bios[i];
12276 ++ if (b->bi_end_io != end_sync_read)
12277 ++ continue;
12278 ++ /* fixup the bio for reuse */
12279 ++ bio_reset(b);
12280 ++ b->bi_vcnt = vcnt;
12281 ++ b->bi_size = r1_bio->sectors << 9;
12282 ++ b->bi_sector = r1_bio->sector +
12283 ++ conf->mirrors[i].rdev->data_offset;
12284 ++ b->bi_bdev = conf->mirrors[i].rdev->bdev;
12285 ++ b->bi_end_io = end_sync_read;
12286 ++ b->bi_private = r1_bio;
12287 ++
12288 ++ size = b->bi_size;
12289 ++ for (j = 0; j < vcnt ; j++) {
12290 ++ struct bio_vec *bi;
12291 ++ bi = &b->bi_io_vec[j];
12292 ++ bi->bv_offset = 0;
12293 ++ if (size > PAGE_SIZE)
12294 ++ bi->bv_len = PAGE_SIZE;
12295 ++ else
12296 ++ bi->bv_len = size;
12297 ++ size -= PAGE_SIZE;
12298 ++ }
12299 ++ }
12300 + for (primary = 0; primary < conf->raid_disks * 2; primary++)
12301 + if (r1_bio->bios[primary]->bi_end_io == end_sync_read &&
12302 + test_bit(BIO_UPTODATE, &r1_bio->bios[primary]->bi_flags)) {
12303 +@@ -1856,12 +1886,10 @@ static int process_checks(struct r1bio *r1_bio)
12304 + break;
12305 + }
12306 + r1_bio->read_disk = primary;
12307 +- vcnt = (r1_bio->sectors + PAGE_SIZE / 512 - 1) >> (PAGE_SHIFT - 9);
12308 + for (i = 0; i < conf->raid_disks * 2; i++) {
12309 + int j;
12310 + struct bio *pbio = r1_bio->bios[primary];
12311 + struct bio *sbio = r1_bio->bios[i];
12312 +- int size;
12313 +
12314 + if (sbio->bi_end_io != end_sync_read)
12315 + continue;
12316 +@@ -1887,27 +1915,6 @@ static int process_checks(struct r1bio *r1_bio)
12317 + rdev_dec_pending(conf->mirrors[i].rdev, mddev);
12318 + continue;
12319 + }
12320 +- /* fixup the bio for reuse */
12321 +- bio_reset(sbio);
12322 +- sbio->bi_vcnt = vcnt;
12323 +- sbio->bi_size = r1_bio->sectors << 9;
12324 +- sbio->bi_sector = r1_bio->sector +
12325 +- conf->mirrors[i].rdev->data_offset;
12326 +- sbio->bi_bdev = conf->mirrors[i].rdev->bdev;
12327 +- sbio->bi_end_io = end_sync_read;
12328 +- sbio->bi_private = r1_bio;
12329 +-
12330 +- size = sbio->bi_size;
12331 +- for (j = 0; j < vcnt ; j++) {
12332 +- struct bio_vec *bi;
12333 +- bi = &sbio->bi_io_vec[j];
12334 +- bi->bv_offset = 0;
12335 +- if (size > PAGE_SIZE)
12336 +- bi->bv_len = PAGE_SIZE;
12337 +- else
12338 +- bi->bv_len = size;
12339 +- size -= PAGE_SIZE;
12340 +- }
12341 +
12342 + bio_copy_data(sbio, pbio);
12343 + }
12344 +diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
12345 +index d61eb7e..081bb33 100644
12346 +--- a/drivers/md/raid10.c
12347 ++++ b/drivers/md/raid10.c
12348 +@@ -2268,12 +2268,18 @@ static void recovery_request_write(struct mddev *mddev, struct r10bio *r10_bio)
12349 + d = r10_bio->devs[1].devnum;
12350 + wbio = r10_bio->devs[1].bio;
12351 + wbio2 = r10_bio->devs[1].repl_bio;
12352 ++ /* Need to test wbio2->bi_end_io before we call
12353 ++ * generic_make_request as if the former is NULL,
12354 ++ * the latter is free to free wbio2.
12355 ++ */
12356 ++ if (wbio2 && !wbio2->bi_end_io)
12357 ++ wbio2 = NULL;
12358 + if (wbio->bi_end_io) {
12359 + atomic_inc(&conf->mirrors[d].rdev->nr_pending);
12360 + md_sync_acct(conf->mirrors[d].rdev->bdev, bio_sectors(wbio));
12361 + generic_make_request(wbio);
12362 + }
12363 +- if (wbio2 && wbio2->bi_end_io) {
12364 ++ if (wbio2) {
12365 + atomic_inc(&conf->mirrors[d].replacement->nr_pending);
12366 + md_sync_acct(conf->mirrors[d].replacement->bdev,
12367 + bio_sectors(wbio2));
12368 +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
12369 +index 05e4a10..a35b846 100644
12370 +--- a/drivers/md/raid5.c
12371 ++++ b/drivers/md/raid5.c
12372 +@@ -3462,6 +3462,7 @@ static void handle_stripe(struct stripe_head *sh)
12373 + test_and_clear_bit(STRIPE_SYNC_REQUESTED, &sh->state)) {
12374 + set_bit(STRIPE_SYNCING, &sh->state);
12375 + clear_bit(STRIPE_INSYNC, &sh->state);
12376 ++ clear_bit(STRIPE_REPLACED, &sh->state);
12377 + }
12378 + spin_unlock(&sh->stripe_lock);
12379 + }
12380 +@@ -3607,19 +3608,23 @@ static void handle_stripe(struct stripe_head *sh)
12381 + handle_parity_checks5(conf, sh, &s, disks);
12382 + }
12383 +
12384 +- if (s.replacing && s.locked == 0
12385 +- && !test_bit(STRIPE_INSYNC, &sh->state)) {
12386 ++ if ((s.replacing || s.syncing) && s.locked == 0
12387 ++ && !test_bit(STRIPE_COMPUTE_RUN, &sh->state)
12388 ++ && !test_bit(STRIPE_REPLACED, &sh->state)) {
12389 + /* Write out to replacement devices where possible */
12390 + for (i = 0; i < conf->raid_disks; i++)
12391 +- if (test_bit(R5_UPTODATE, &sh->dev[i].flags) &&
12392 +- test_bit(R5_NeedReplace, &sh->dev[i].flags)) {
12393 ++ if (test_bit(R5_NeedReplace, &sh->dev[i].flags)) {
12394 ++ WARN_ON(!test_bit(R5_UPTODATE, &sh->dev[i].flags));
12395 + set_bit(R5_WantReplace, &sh->dev[i].flags);
12396 + set_bit(R5_LOCKED, &sh->dev[i].flags);
12397 + s.locked++;
12398 + }
12399 +- set_bit(STRIPE_INSYNC, &sh->state);
12400 ++ if (s.replacing)
12401 ++ set_bit(STRIPE_INSYNC, &sh->state);
12402 ++ set_bit(STRIPE_REPLACED, &sh->state);
12403 + }
12404 + if ((s.syncing || s.replacing) && s.locked == 0 &&
12405 ++ !test_bit(STRIPE_COMPUTE_RUN, &sh->state) &&
12406 + test_bit(STRIPE_INSYNC, &sh->state)) {
12407 + md_done_sync(conf->mddev, STRIPE_SECTORS, 1);
12408 + clear_bit(STRIPE_SYNCING, &sh->state);
12409 +diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
12410 +index b0b663b..70c4932 100644
12411 +--- a/drivers/md/raid5.h
12412 ++++ b/drivers/md/raid5.h
12413 +@@ -306,6 +306,7 @@ enum {
12414 + STRIPE_SYNC_REQUESTED,
12415 + STRIPE_SYNCING,
12416 + STRIPE_INSYNC,
12417 ++ STRIPE_REPLACED,
12418 + STRIPE_PREREAD_ACTIVE,
12419 + STRIPE_DELAYED,
12420 + STRIPE_DEGRADED,
12421 +diff --git a/drivers/net/wireless/rtlwifi/pci.c b/drivers/net/wireless/rtlwifi/pci.c
12422 +index c97e9d3..e70b4ff 100644
12423 +--- a/drivers/net/wireless/rtlwifi/pci.c
12424 ++++ b/drivers/net/wireless/rtlwifi/pci.c
12425 +@@ -1008,19 +1008,6 @@ static void _rtl_pci_prepare_bcn_tasklet(struct ieee80211_hw *hw)
12426 + return;
12427 + }
12428 +
12429 +-static void rtl_lps_change_work_callback(struct work_struct *work)
12430 +-{
12431 +- struct rtl_works *rtlworks =
12432 +- container_of(work, struct rtl_works, lps_change_work);
12433 +- struct ieee80211_hw *hw = rtlworks->hw;
12434 +- struct rtl_priv *rtlpriv = rtl_priv(hw);
12435 +-
12436 +- if (rtlpriv->enter_ps)
12437 +- rtl_lps_enter(hw);
12438 +- else
12439 +- rtl_lps_leave(hw);
12440 +-}
12441 +-
12442 + static void _rtl_pci_init_trx_var(struct ieee80211_hw *hw)
12443 + {
12444 + struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
12445 +diff --git a/drivers/net/wireless/rtlwifi/ps.c b/drivers/net/wireless/rtlwifi/ps.c
12446 +index 884bcea..71e917d 100644
12447 +--- a/drivers/net/wireless/rtlwifi/ps.c
12448 ++++ b/drivers/net/wireless/rtlwifi/ps.c
12449 +@@ -611,6 +611,18 @@ void rtl_swlps_rf_sleep(struct ieee80211_hw *hw)
12450 + MSECS(sleep_intv * mac->vif->bss_conf.beacon_int - 40));
12451 + }
12452 +
12453 ++void rtl_lps_change_work_callback(struct work_struct *work)
12454 ++{
12455 ++ struct rtl_works *rtlworks =
12456 ++ container_of(work, struct rtl_works, lps_change_work);
12457 ++ struct ieee80211_hw *hw = rtlworks->hw;
12458 ++ struct rtl_priv *rtlpriv = rtl_priv(hw);
12459 ++
12460 ++ if (rtlpriv->enter_ps)
12461 ++ rtl_lps_enter(hw);
12462 ++ else
12463 ++ rtl_lps_leave(hw);
12464 ++}
12465 +
12466 + void rtl_swlps_wq_callback(void *data)
12467 + {
12468 +diff --git a/drivers/net/wireless/rtlwifi/ps.h b/drivers/net/wireless/rtlwifi/ps.h
12469 +index 4d682b7..88bd76e 100644
12470 +--- a/drivers/net/wireless/rtlwifi/ps.h
12471 ++++ b/drivers/net/wireless/rtlwifi/ps.h
12472 +@@ -49,5 +49,6 @@ void rtl_swlps_rf_awake(struct ieee80211_hw *hw);
12473 + void rtl_swlps_rf_sleep(struct ieee80211_hw *hw);
12474 + void rtl_p2p_ps_cmd(struct ieee80211_hw *hw, u8 p2p_ps_state);
12475 + void rtl_p2p_info(struct ieee80211_hw *hw, void *data, unsigned int len);
12476 ++void rtl_lps_change_work_callback(struct work_struct *work);
12477 +
12478 + #endif
12479 +diff --git a/drivers/net/wireless/rtlwifi/usb.c b/drivers/net/wireless/rtlwifi/usb.c
12480 +index a3532e0..1feebdc 100644
12481 +--- a/drivers/net/wireless/rtlwifi/usb.c
12482 ++++ b/drivers/net/wireless/rtlwifi/usb.c
12483 +@@ -1070,6 +1070,8 @@ int rtl_usb_probe(struct usb_interface *intf,
12484 + spin_lock_init(&rtlpriv->locks.usb_lock);
12485 + INIT_WORK(&rtlpriv->works.fill_h2c_cmd,
12486 + rtl_fill_h2c_cmd_work_callback);
12487 ++ INIT_WORK(&rtlpriv->works.lps_change_work,
12488 ++ rtl_lps_change_work_callback);
12489 +
12490 + rtlpriv->usb_data_index = 0;
12491 + init_completion(&rtlpriv->firmware_loading_complete);
12492 +diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
12493 +index 1db10141..0c01b8e 100644
12494 +--- a/drivers/net/xen-netfront.c
12495 ++++ b/drivers/net/xen-netfront.c
12496 +@@ -276,8 +276,7 @@ no_skb:
12497 + break;
12498 + }
12499 +
12500 +- __skb_fill_page_desc(skb, 0, page, 0, 0);
12501 +- skb_shinfo(skb)->nr_frags = 1;
12502 ++ skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
12503 + __skb_queue_tail(&np->rx_batch, skb);
12504 + }
12505 +
12506 +@@ -822,7 +821,6 @@ static RING_IDX xennet_fill_frags(struct netfront_info *np,
12507 + struct sk_buff_head *list)
12508 + {
12509 + struct skb_shared_info *shinfo = skb_shinfo(skb);
12510 +- int nr_frags = shinfo->nr_frags;
12511 + RING_IDX cons = np->rx.rsp_cons;
12512 + struct sk_buff *nskb;
12513 +
12514 +@@ -831,19 +829,21 @@ static RING_IDX xennet_fill_frags(struct netfront_info *np,
12515 + RING_GET_RESPONSE(&np->rx, ++cons);
12516 + skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
12517 +
12518 +- __skb_fill_page_desc(skb, nr_frags,
12519 +- skb_frag_page(nfrag),
12520 +- rx->offset, rx->status);
12521 ++ if (shinfo->nr_frags == MAX_SKB_FRAGS) {
12522 ++ unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
12523 +
12524 +- skb->data_len += rx->status;
12525 ++ BUG_ON(pull_to <= skb_headlen(skb));
12526 ++ __pskb_pull_tail(skb, pull_to - skb_headlen(skb));
12527 ++ }
12528 ++ BUG_ON(shinfo->nr_frags >= MAX_SKB_FRAGS);
12529 ++
12530 ++ skb_add_rx_frag(skb, shinfo->nr_frags, skb_frag_page(nfrag),
12531 ++ rx->offset, rx->status, PAGE_SIZE);
12532 +
12533 + skb_shinfo(nskb)->nr_frags = 0;
12534 + kfree_skb(nskb);
12535 +-
12536 +- nr_frags++;
12537 + }
12538 +
12539 +- shinfo->nr_frags = nr_frags;
12540 + return cons;
12541 + }
12542 +
12543 +@@ -929,7 +929,8 @@ static int handle_incoming_queue(struct net_device *dev,
12544 + while ((skb = __skb_dequeue(rxq)) != NULL) {
12545 + int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
12546 +
12547 +- __pskb_pull_tail(skb, pull_to - skb_headlen(skb));
12548 ++ if (pull_to > skb_headlen(skb))
12549 ++ __pskb_pull_tail(skb, pull_to - skb_headlen(skb));
12550 +
12551 + /* Ethernet work: Delayed to here as it peeks the header. */
12552 + skb->protocol = eth_type_trans(skb, dev);
12553 +@@ -1015,16 +1016,10 @@ err:
12554 + skb_shinfo(skb)->frags[0].page_offset = rx->offset;
12555 + skb_frag_size_set(&skb_shinfo(skb)->frags[0], rx->status);
12556 + skb->data_len = rx->status;
12557 ++ skb->len += rx->status;
12558 +
12559 + i = xennet_fill_frags(np, skb, &tmpq);
12560 +
12561 +- /*
12562 +- * Truesize is the actual allocation size, even if the
12563 +- * allocation is only partially used.
12564 +- */
12565 +- skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags;
12566 +- skb->len += skb->data_len;
12567 +-
12568 + if (rx->flags & XEN_NETRXF_csum_blank)
12569 + skb->ip_summed = CHECKSUM_PARTIAL;
12570 + else if (rx->flags & XEN_NETRXF_data_validated)
12571 +diff --git a/drivers/scsi/isci/task.c b/drivers/scsi/isci/task.c
12572 +index 9bb020a..0d30ca8 100644
12573 +--- a/drivers/scsi/isci/task.c
12574 ++++ b/drivers/scsi/isci/task.c
12575 +@@ -491,6 +491,7 @@ int isci_task_abort_task(struct sas_task *task)
12576 + struct isci_tmf tmf;
12577 + int ret = TMF_RESP_FUNC_FAILED;
12578 + unsigned long flags;
12579 ++ int target_done_already = 0;
12580 +
12581 + /* Get the isci_request reference from the task. Note that
12582 + * this check does not depend on the pending request list
12583 +@@ -505,9 +506,11 @@ int isci_task_abort_task(struct sas_task *task)
12584 + /* If task is already done, the request isn't valid */
12585 + if (!(task->task_state_flags & SAS_TASK_STATE_DONE) &&
12586 + (task->task_state_flags & SAS_TASK_AT_INITIATOR) &&
12587 +- old_request)
12588 ++ old_request) {
12589 + idev = isci_get_device(task->dev->lldd_dev);
12590 +-
12591 ++ target_done_already = test_bit(IREQ_COMPLETE_IN_TARGET,
12592 ++ &old_request->flags);
12593 ++ }
12594 + spin_unlock(&task->task_state_lock);
12595 + spin_unlock_irqrestore(&ihost->scic_lock, flags);
12596 +
12597 +@@ -561,7 +564,7 @@ int isci_task_abort_task(struct sas_task *task)
12598 +
12599 + if (task->task_proto == SAS_PROTOCOL_SMP ||
12600 + sas_protocol_ata(task->task_proto) ||
12601 +- test_bit(IREQ_COMPLETE_IN_TARGET, &old_request->flags) ||
12602 ++ target_done_already ||
12603 + test_bit(IDEV_GONE, &idev->flags)) {
12604 +
12605 + spin_unlock_irqrestore(&ihost->scic_lock, flags);
12606 +diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
12607 +index 15e4080..51cd27a 100644
12608 +--- a/drivers/scsi/qla2xxx/qla_iocb.c
12609 ++++ b/drivers/scsi/qla2xxx/qla_iocb.c
12610 +@@ -419,6 +419,8 @@ qla2x00_start_scsi(srb_t *sp)
12611 + __constant_cpu_to_le16(CF_SIMPLE_TAG);
12612 + break;
12613 + }
12614 ++ } else {
12615 ++ cmd_pkt->control_flags = __constant_cpu_to_le16(CF_SIMPLE_TAG);
12616 + }
12617 +
12618 + /* Load SCSI command packet. */
12619 +@@ -1308,11 +1310,11 @@ qla24xx_build_scsi_crc_2_iocbs(srb_t *sp, struct cmd_type_crc_2 *cmd_pkt,
12620 + fcp_cmnd->task_attribute = TSK_ORDERED;
12621 + break;
12622 + default:
12623 +- fcp_cmnd->task_attribute = 0;
12624 ++ fcp_cmnd->task_attribute = TSK_SIMPLE;
12625 + break;
12626 + }
12627 + } else {
12628 +- fcp_cmnd->task_attribute = 0;
12629 ++ fcp_cmnd->task_attribute = TSK_SIMPLE;
12630 + }
12631 +
12632 + cmd_pkt->fcp_rsp_dseg_len = 0; /* Let response come in status iocb */
12633 +@@ -1527,7 +1529,12 @@ qla24xx_start_scsi(srb_t *sp)
12634 + case ORDERED_QUEUE_TAG:
12635 + cmd_pkt->task = TSK_ORDERED;
12636 + break;
12637 ++ default:
12638 ++ cmd_pkt->task = TSK_SIMPLE;
12639 ++ break;
12640 + }
12641 ++ } else {
12642 ++ cmd_pkt->task = TSK_SIMPLE;
12643 + }
12644 +
12645 + /* Load SCSI command packet. */
12646 +diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
12647 +index 1b1125e..610417e 100644
12648 +--- a/drivers/scsi/sd.c
12649 ++++ b/drivers/scsi/sd.c
12650 +@@ -828,10 +828,17 @@ static int scsi_setup_flush_cmnd(struct scsi_device *sdp, struct request *rq)
12651 +
12652 + static void sd_unprep_fn(struct request_queue *q, struct request *rq)
12653 + {
12654 ++ struct scsi_cmnd *SCpnt = rq->special;
12655 ++
12656 + if (rq->cmd_flags & REQ_DISCARD) {
12657 + free_page((unsigned long)rq->buffer);
12658 + rq->buffer = NULL;
12659 + }
12660 ++ if (SCpnt->cmnd != rq->cmd) {
12661 ++ mempool_free(SCpnt->cmnd, sd_cdb_pool);
12662 ++ SCpnt->cmnd = NULL;
12663 ++ SCpnt->cmd_len = 0;
12664 ++ }
12665 + }
12666 +
12667 + /**
12668 +@@ -1710,21 +1717,6 @@ static int sd_done(struct scsi_cmnd *SCpnt)
12669 + if (rq_data_dir(SCpnt->request) == READ && scsi_prot_sg_count(SCpnt))
12670 + sd_dif_complete(SCpnt, good_bytes);
12671 +
12672 +- if (scsi_host_dif_capable(sdkp->device->host, sdkp->protection_type)
12673 +- == SD_DIF_TYPE2_PROTECTION && SCpnt->cmnd != SCpnt->request->cmd) {
12674 +-
12675 +- /* We have to print a failed command here as the
12676 +- * extended CDB gets freed before scsi_io_completion()
12677 +- * is called.
12678 +- */
12679 +- if (result)
12680 +- scsi_print_command(SCpnt);
12681 +-
12682 +- mempool_free(SCpnt->cmnd, sd_cdb_pool);
12683 +- SCpnt->cmnd = NULL;
12684 +- SCpnt->cmd_len = 0;
12685 +- }
12686 +-
12687 + return good_bytes;
12688 + }
12689 +
12690 +diff --git a/drivers/staging/android/logger.c b/drivers/staging/android/logger.c
12691 +index 9bd8747..34519ea 100644
12692 +--- a/drivers/staging/android/logger.c
12693 ++++ b/drivers/staging/android/logger.c
12694 +@@ -469,7 +469,7 @@ static ssize_t logger_aio_write(struct kiocb *iocb, const struct iovec *iov,
12695 + unsigned long nr_segs, loff_t ppos)
12696 + {
12697 + struct logger_log *log = file_get_log(iocb->ki_filp);
12698 +- size_t orig = log->w_off;
12699 ++ size_t orig;
12700 + struct logger_entry header;
12701 + struct timespec now;
12702 + ssize_t ret = 0;
12703 +@@ -490,6 +490,8 @@ static ssize_t logger_aio_write(struct kiocb *iocb, const struct iovec *iov,
12704 +
12705 + mutex_lock(&log->mutex);
12706 +
12707 ++ orig = log->w_off;
12708 ++
12709 + /*
12710 + * Fix up any readers, pulling them forward to the first readable
12711 + * entry after (what will be) the new write offset. We do this now
12712 +diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c
12713 +index 924c54c..0ae406a 100644
12714 +--- a/drivers/staging/comedi/comedi_fops.c
12715 ++++ b/drivers/staging/comedi/comedi_fops.c
12716 +@@ -1401,22 +1401,19 @@ static int do_cmd_ioctl(struct comedi_device *dev,
12717 + DPRINTK("subdevice busy\n");
12718 + return -EBUSY;
12719 + }
12720 +- s->busy = file;
12721 +
12722 + /* make sure channel/gain list isn't too long */
12723 + if (cmd.chanlist_len > s->len_chanlist) {
12724 + DPRINTK("channel/gain list too long %u > %d\n",
12725 + cmd.chanlist_len, s->len_chanlist);
12726 +- ret = -EINVAL;
12727 +- goto cleanup;
12728 ++ return -EINVAL;
12729 + }
12730 +
12731 + /* make sure channel/gain list isn't too short */
12732 + if (cmd.chanlist_len < 1) {
12733 + DPRINTK("channel/gain list too short %u < 1\n",
12734 + cmd.chanlist_len);
12735 +- ret = -EINVAL;
12736 +- goto cleanup;
12737 ++ return -EINVAL;
12738 + }
12739 +
12740 + async->cmd = cmd;
12741 +@@ -1426,8 +1423,7 @@ static int do_cmd_ioctl(struct comedi_device *dev,
12742 + kmalloc(async->cmd.chanlist_len * sizeof(int), GFP_KERNEL);
12743 + if (!async->cmd.chanlist) {
12744 + DPRINTK("allocation failed\n");
12745 +- ret = -ENOMEM;
12746 +- goto cleanup;
12747 ++ return -ENOMEM;
12748 + }
12749 +
12750 + if (copy_from_user(async->cmd.chanlist, user_chanlist,
12751 +@@ -1479,6 +1475,9 @@ static int do_cmd_ioctl(struct comedi_device *dev,
12752 +
12753 + comedi_set_subdevice_runflags(s, ~0, SRF_USER | SRF_RUNNING);
12754 +
12755 ++ /* set s->busy _after_ setting SRF_RUNNING flag to avoid race with
12756 ++ * comedi_read() or comedi_write() */
12757 ++ s->busy = file;
12758 + ret = s->do_cmd(dev, s);
12759 + if (ret == 0)
12760 + return 0;
12761 +@@ -1693,6 +1692,7 @@ static int do_cancel_ioctl(struct comedi_device *dev, unsigned int arg,
12762 + void *file)
12763 + {
12764 + struct comedi_subdevice *s;
12765 ++ int ret;
12766 +
12767 + if (arg >= dev->n_subdevices)
12768 + return -EINVAL;
12769 +@@ -1709,7 +1709,11 @@ static int do_cancel_ioctl(struct comedi_device *dev, unsigned int arg,
12770 + if (s->busy != file)
12771 + return -EBUSY;
12772 +
12773 +- return do_cancel(dev, s);
12774 ++ ret = do_cancel(dev, s);
12775 ++ if (comedi_get_subdevice_runflags(s) & SRF_USER)
12776 ++ wake_up_interruptible(&s->async->wait_head);
12777 ++
12778 ++ return ret;
12779 + }
12780 +
12781 + /*
12782 +@@ -2041,11 +2045,13 @@ static ssize_t comedi_write(struct file *file, const char __user *buf,
12783 +
12784 + if (!comedi_is_subdevice_running(s)) {
12785 + if (count == 0) {
12786 ++ mutex_lock(&dev->mutex);
12787 + if (comedi_is_subdevice_in_error(s))
12788 + retval = -EPIPE;
12789 + else
12790 + retval = 0;
12791 + do_become_nonbusy(dev, s);
12792 ++ mutex_unlock(&dev->mutex);
12793 + }
12794 + break;
12795 + }
12796 +@@ -2144,11 +2150,13 @@ static ssize_t comedi_read(struct file *file, char __user *buf, size_t nbytes,
12797 +
12798 + if (n == 0) {
12799 + if (!comedi_is_subdevice_running(s)) {
12800 ++ mutex_lock(&dev->mutex);
12801 + do_become_nonbusy(dev, s);
12802 + if (comedi_is_subdevice_in_error(s))
12803 + retval = -EPIPE;
12804 + else
12805 + retval = 0;
12806 ++ mutex_unlock(&dev->mutex);
12807 + break;
12808 + }
12809 + if (file->f_flags & O_NONBLOCK) {
12810 +@@ -2186,9 +2194,11 @@ static ssize_t comedi_read(struct file *file, char __user *buf, size_t nbytes,
12811 + buf += n;
12812 + break; /* makes device work like a pipe */
12813 + }
12814 +- if (comedi_is_subdevice_idle(s) &&
12815 +- async->buf_read_count - async->buf_write_count == 0) {
12816 +- do_become_nonbusy(dev, s);
12817 ++ if (comedi_is_subdevice_idle(s)) {
12818 ++ mutex_lock(&dev->mutex);
12819 ++ if (async->buf_read_count - async->buf_write_count == 0)
12820 ++ do_become_nonbusy(dev, s);
12821 ++ mutex_unlock(&dev->mutex);
12822 + }
12823 + set_current_state(TASK_RUNNING);
12824 + remove_wait_queue(&async->wait_head, &wait);
12825 +diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
12826 +index d7705e5..012ff8b 100644
12827 +--- a/drivers/target/iscsi/iscsi_target.c
12828 ++++ b/drivers/target/iscsi/iscsi_target.c
12829 +@@ -628,25 +628,18 @@ static void __exit iscsi_target_cleanup_module(void)
12830 + }
12831 +
12832 + static int iscsit_add_reject(
12833 ++ struct iscsi_conn *conn,
12834 + u8 reason,
12835 +- int fail_conn,
12836 +- unsigned char *buf,
12837 +- struct iscsi_conn *conn)
12838 ++ unsigned char *buf)
12839 + {
12840 + struct iscsi_cmd *cmd;
12841 +- struct iscsi_reject *hdr;
12842 +- int ret;
12843 +
12844 + cmd = iscsit_allocate_cmd(conn, GFP_KERNEL);
12845 + if (!cmd)
12846 + return -1;
12847 +
12848 + cmd->iscsi_opcode = ISCSI_OP_REJECT;
12849 +- if (fail_conn)
12850 +- cmd->cmd_flags |= ICF_REJECT_FAIL_CONN;
12851 +-
12852 +- hdr = (struct iscsi_reject *) cmd->pdu;
12853 +- hdr->reason = reason;
12854 ++ cmd->reject_reason = reason;
12855 +
12856 + cmd->buf_ptr = kmemdup(buf, ISCSI_HDR_LEN, GFP_KERNEL);
12857 + if (!cmd->buf_ptr) {
12858 +@@ -662,23 +655,16 @@ static int iscsit_add_reject(
12859 + cmd->i_state = ISTATE_SEND_REJECT;
12860 + iscsit_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
12861 +
12862 +- ret = wait_for_completion_interruptible(&cmd->reject_comp);
12863 +- if (ret != 0)
12864 +- return -1;
12865 +-
12866 +- return (!fail_conn) ? 0 : -1;
12867 ++ return -1;
12868 + }
12869 +
12870 +-int iscsit_add_reject_from_cmd(
12871 ++static int iscsit_add_reject_from_cmd(
12872 ++ struct iscsi_cmd *cmd,
12873 + u8 reason,
12874 +- int fail_conn,
12875 +- int add_to_conn,
12876 +- unsigned char *buf,
12877 +- struct iscsi_cmd *cmd)
12878 ++ bool add_to_conn,
12879 ++ unsigned char *buf)
12880 + {
12881 + struct iscsi_conn *conn;
12882 +- struct iscsi_reject *hdr;
12883 +- int ret;
12884 +
12885 + if (!cmd->conn) {
12886 + pr_err("cmd->conn is NULL for ITT: 0x%08x\n",
12887 +@@ -688,11 +674,7 @@ int iscsit_add_reject_from_cmd(
12888 + conn = cmd->conn;
12889 +
12890 + cmd->iscsi_opcode = ISCSI_OP_REJECT;
12891 +- if (fail_conn)
12892 +- cmd->cmd_flags |= ICF_REJECT_FAIL_CONN;
12893 +-
12894 +- hdr = (struct iscsi_reject *) cmd->pdu;
12895 +- hdr->reason = reason;
12896 ++ cmd->reject_reason = reason;
12897 +
12898 + cmd->buf_ptr = kmemdup(buf, ISCSI_HDR_LEN, GFP_KERNEL);
12899 + if (!cmd->buf_ptr) {
12900 +@@ -709,8 +691,6 @@ int iscsit_add_reject_from_cmd(
12901 +
12902 + cmd->i_state = ISTATE_SEND_REJECT;
12903 + iscsit_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
12904 +-
12905 +- ret = wait_for_completion_interruptible(&cmd->reject_comp);
12906 + /*
12907 + * Perform the kref_put now if se_cmd has already been setup by
12908 + * scsit_setup_scsi_cmd()
12909 +@@ -719,12 +699,19 @@ int iscsit_add_reject_from_cmd(
12910 + pr_debug("iscsi reject: calling target_put_sess_cmd >>>>>>\n");
12911 + target_put_sess_cmd(conn->sess->se_sess, &cmd->se_cmd);
12912 + }
12913 +- if (ret != 0)
12914 +- return -1;
12915 ++ return -1;
12916 ++}
12917 +
12918 +- return (!fail_conn) ? 0 : -1;
12919 ++static int iscsit_add_reject_cmd(struct iscsi_cmd *cmd, u8 reason,
12920 ++ unsigned char *buf)
12921 ++{
12922 ++ return iscsit_add_reject_from_cmd(cmd, reason, true, buf);
12923 ++}
12924 ++
12925 ++int iscsit_reject_cmd(struct iscsi_cmd *cmd, u8 reason, unsigned char *buf)
12926 ++{
12927 ++ return iscsit_add_reject_from_cmd(cmd, reason, false, buf);
12928 + }
12929 +-EXPORT_SYMBOL(iscsit_add_reject_from_cmd);
12930 +
12931 + /*
12932 + * Map some portion of the allocated scatterlist to an iovec, suitable for
12933 +@@ -844,8 +831,8 @@ int iscsit_setup_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
12934 + !(hdr->flags & ISCSI_FLAG_CMD_FINAL)) {
12935 + pr_err("ISCSI_FLAG_CMD_WRITE & ISCSI_FLAG_CMD_FINAL"
12936 + " not set. Bad iSCSI Initiator.\n");
12937 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
12938 +- 1, 1, buf, cmd);
12939 ++ return iscsit_add_reject_cmd(cmd,
12940 ++ ISCSI_REASON_BOOKMARK_INVALID, buf);
12941 + }
12942 +
12943 + if (((hdr->flags & ISCSI_FLAG_CMD_READ) ||
12944 +@@ -865,8 +852,8 @@ int iscsit_setup_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
12945 + pr_err("ISCSI_FLAG_CMD_READ or ISCSI_FLAG_CMD_WRITE"
12946 + " set when Expected Data Transfer Length is 0 for"
12947 + " CDB: 0x%02x. Bad iSCSI Initiator.\n", hdr->cdb[0]);
12948 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
12949 +- 1, 1, buf, cmd);
12950 ++ return iscsit_add_reject_cmd(cmd,
12951 ++ ISCSI_REASON_BOOKMARK_INVALID, buf);
12952 + }
12953 + done:
12954 +
12955 +@@ -875,62 +862,62 @@ done:
12956 + pr_err("ISCSI_FLAG_CMD_READ and/or ISCSI_FLAG_CMD_WRITE"
12957 + " MUST be set if Expected Data Transfer Length is not 0."
12958 + " Bad iSCSI Initiator\n");
12959 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
12960 +- 1, 1, buf, cmd);
12961 ++ return iscsit_add_reject_cmd(cmd,
12962 ++ ISCSI_REASON_BOOKMARK_INVALID, buf);
12963 + }
12964 +
12965 + if ((hdr->flags & ISCSI_FLAG_CMD_READ) &&
12966 + (hdr->flags & ISCSI_FLAG_CMD_WRITE)) {
12967 + pr_err("Bidirectional operations not supported!\n");
12968 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
12969 +- 1, 1, buf, cmd);
12970 ++ return iscsit_add_reject_cmd(cmd,
12971 ++ ISCSI_REASON_BOOKMARK_INVALID, buf);
12972 + }
12973 +
12974 + if (hdr->opcode & ISCSI_OP_IMMEDIATE) {
12975 + pr_err("Illegally set Immediate Bit in iSCSI Initiator"
12976 + " Scsi Command PDU.\n");
12977 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
12978 +- 1, 1, buf, cmd);
12979 ++ return iscsit_add_reject_cmd(cmd,
12980 ++ ISCSI_REASON_BOOKMARK_INVALID, buf);
12981 + }
12982 +
12983 + if (payload_length && !conn->sess->sess_ops->ImmediateData) {
12984 + pr_err("ImmediateData=No but DataSegmentLength=%u,"
12985 + " protocol error.\n", payload_length);
12986 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_PROTOCOL_ERROR,
12987 +- 1, 1, buf, cmd);
12988 ++ return iscsit_add_reject_cmd(cmd,
12989 ++ ISCSI_REASON_PROTOCOL_ERROR, buf);
12990 + }
12991 +
12992 +- if ((be32_to_cpu(hdr->data_length )== payload_length) &&
12993 ++ if ((be32_to_cpu(hdr->data_length) == payload_length) &&
12994 + (!(hdr->flags & ISCSI_FLAG_CMD_FINAL))) {
12995 + pr_err("Expected Data Transfer Length and Length of"
12996 + " Immediate Data are the same, but ISCSI_FLAG_CMD_FINAL"
12997 + " bit is not set protocol error\n");
12998 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_PROTOCOL_ERROR,
12999 +- 1, 1, buf, cmd);
13000 ++ return iscsit_add_reject_cmd(cmd,
13001 ++ ISCSI_REASON_PROTOCOL_ERROR, buf);
13002 + }
13003 +
13004 + if (payload_length > be32_to_cpu(hdr->data_length)) {
13005 + pr_err("DataSegmentLength: %u is greater than"
13006 + " EDTL: %u, protocol error.\n", payload_length,
13007 + hdr->data_length);
13008 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_PROTOCOL_ERROR,
13009 +- 1, 1, buf, cmd);
13010 ++ return iscsit_add_reject_cmd(cmd,
13011 ++ ISCSI_REASON_PROTOCOL_ERROR, buf);
13012 + }
13013 +
13014 + if (payload_length > conn->conn_ops->MaxXmitDataSegmentLength) {
13015 + pr_err("DataSegmentLength: %u is greater than"
13016 + " MaxXmitDataSegmentLength: %u, protocol error.\n",
13017 + payload_length, conn->conn_ops->MaxXmitDataSegmentLength);
13018 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_PROTOCOL_ERROR,
13019 +- 1, 1, buf, cmd);
13020 ++ return iscsit_add_reject_cmd(cmd,
13021 ++ ISCSI_REASON_PROTOCOL_ERROR, buf);
13022 + }
13023 +
13024 + if (payload_length > conn->sess->sess_ops->FirstBurstLength) {
13025 + pr_err("DataSegmentLength: %u is greater than"
13026 + " FirstBurstLength: %u, protocol error.\n",
13027 + payload_length, conn->sess->sess_ops->FirstBurstLength);
13028 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
13029 +- 1, 1, buf, cmd);
13030 ++ return iscsit_add_reject_cmd(cmd,
13031 ++ ISCSI_REASON_BOOKMARK_INVALID, buf);
13032 + }
13033 +
13034 + data_direction = (hdr->flags & ISCSI_FLAG_CMD_WRITE) ? DMA_TO_DEVICE :
13035 +@@ -985,9 +972,8 @@ done:
13036 +
13037 + dr = iscsit_allocate_datain_req();
13038 + if (!dr)
13039 +- return iscsit_add_reject_from_cmd(
13040 +- ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13041 +- 1, 1, buf, cmd);
13042 ++ return iscsit_add_reject_cmd(cmd,
13043 ++ ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf);
13044 +
13045 + iscsit_attach_datain_req(cmd, dr);
13046 + }
13047 +@@ -1015,18 +1001,16 @@ done:
13048 + cmd->sense_reason = target_setup_cmd_from_cdb(&cmd->se_cmd, hdr->cdb);
13049 + if (cmd->sense_reason) {
13050 + if (cmd->sense_reason == TCM_OUT_OF_RESOURCES) {
13051 +- return iscsit_add_reject_from_cmd(
13052 +- ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13053 +- 1, 1, buf, cmd);
13054 ++ return iscsit_add_reject_cmd(cmd,
13055 ++ ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf);
13056 + }
13057 +
13058 + goto attach_cmd;
13059 + }
13060 +
13061 + if (iscsit_build_pdu_and_seq_lists(cmd, payload_length) < 0) {
13062 +- return iscsit_add_reject_from_cmd(
13063 +- ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13064 +- 1, 1, buf, cmd);
13065 ++ return iscsit_add_reject_cmd(cmd,
13066 ++ ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf);
13067 + }
13068 +
13069 + attach_cmd:
13070 +@@ -1068,17 +1052,13 @@ int iscsit_process_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13071 + * be acknowledged. (See below)
13072 + */
13073 + if (!cmd->immediate_data) {
13074 +- cmdsn_ret = iscsit_sequence_cmd(conn, cmd, hdr->cmdsn);
13075 +- if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
13076 +- if (!cmd->sense_reason)
13077 +- return 0;
13078 +-
13079 ++ cmdsn_ret = iscsit_sequence_cmd(conn, cmd,
13080 ++ (unsigned char *)hdr, hdr->cmdsn);
13081 ++ if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER)
13082 ++ return -1;
13083 ++ else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
13084 + target_put_sess_cmd(conn->sess->se_sess, &cmd->se_cmd);
13085 + return 0;
13086 +- } else if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) {
13087 +- return iscsit_add_reject_from_cmd(
13088 +- ISCSI_REASON_PROTOCOL_ERROR,
13089 +- 1, 0, (unsigned char *)hdr, cmd);
13090 + }
13091 + }
13092 +
13093 +@@ -1103,6 +1083,9 @@ int iscsit_process_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13094 + * iscsit_check_received_cmdsn() in iscsit_get_immediate_data() below.
13095 + */
13096 + if (cmd->sense_reason) {
13097 ++ if (cmd->reject_reason)
13098 ++ return 0;
13099 ++
13100 + target_put_sess_cmd(conn->sess->se_sess, &cmd->se_cmd);
13101 + return 1;
13102 + }
13103 +@@ -1111,10 +1094,8 @@ int iscsit_process_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13104 + * the backend memory allocation.
13105 + */
13106 + cmd->sense_reason = transport_generic_new_cmd(&cmd->se_cmd);
13107 +- if (cmd->sense_reason) {
13108 +- target_put_sess_cmd(conn->sess->se_sess, &cmd->se_cmd);
13109 ++ if (cmd->sense_reason)
13110 + return 1;
13111 +- }
13112 +
13113 + return 0;
13114 + }
13115 +@@ -1124,6 +1105,7 @@ static int
13116 + iscsit_get_immediate_data(struct iscsi_cmd *cmd, struct iscsi_scsi_req *hdr,
13117 + bool dump_payload)
13118 + {
13119 ++ struct iscsi_conn *conn = cmd->conn;
13120 + int cmdsn_ret = 0, immed_ret = IMMEDIATE_DATA_NORMAL_OPERATION;
13121 + /*
13122 + * Special case for Unsupported SAM WRITE Opcodes and ImmediateData=Yes.
13123 +@@ -1140,20 +1122,25 @@ after_immediate_data:
13124 + * DataCRC, check against ExpCmdSN/MaxCmdSN if
13125 + * Immediate Bit is not set.
13126 + */
13127 +- cmdsn_ret = iscsit_sequence_cmd(cmd->conn, cmd, hdr->cmdsn);
13128 ++ cmdsn_ret = iscsit_sequence_cmd(cmd->conn, cmd,
13129 ++ (unsigned char *)hdr, hdr->cmdsn);
13130 ++ if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) {
13131 ++ return -1;
13132 ++ } else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
13133 ++ target_put_sess_cmd(conn->sess->se_sess, &cmd->se_cmd);
13134 ++ return 0;
13135 ++ }
13136 +
13137 + if (cmd->sense_reason) {
13138 +- if (iscsit_dump_data_payload(cmd->conn,
13139 +- cmd->first_burst_len, 1) < 0)
13140 +- return -1;
13141 ++ int rc;
13142 ++
13143 ++ rc = iscsit_dump_data_payload(cmd->conn,
13144 ++ cmd->first_burst_len, 1);
13145 ++ target_put_sess_cmd(conn->sess->se_sess, &cmd->se_cmd);
13146 ++ return rc;
13147 + } else if (cmd->unsolicited_data)
13148 + iscsit_set_unsoliticed_dataout(cmd);
13149 +
13150 +- if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER)
13151 +- return iscsit_add_reject_from_cmd(
13152 +- ISCSI_REASON_PROTOCOL_ERROR,
13153 +- 1, 0, (unsigned char *)hdr, cmd);
13154 +-
13155 + } else if (immed_ret == IMMEDIATE_DATA_ERL1_CRC_FAILURE) {
13156 + /*
13157 + * Immediate Data failed DataCRC and ERL>=1,
13158 +@@ -1184,15 +1171,14 @@ iscsit_handle_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13159 +
13160 + rc = iscsit_setup_scsi_cmd(conn, cmd, buf);
13161 + if (rc < 0)
13162 +- return rc;
13163 ++ return 0;
13164 + /*
13165 + * Allocation iovecs needed for struct socket operations for
13166 + * traditional iSCSI block I/O.
13167 + */
13168 + if (iscsit_allocate_iovecs(cmd) < 0) {
13169 +- return iscsit_add_reject_from_cmd(
13170 +- ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13171 +- 1, 0, buf, cmd);
13172 ++ return iscsit_add_reject_cmd(cmd,
13173 ++ ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf);
13174 + }
13175 + immed_data = cmd->immediate_data;
13176 +
13177 +@@ -1283,8 +1269,8 @@ iscsit_check_dataout_hdr(struct iscsi_conn *conn, unsigned char *buf,
13178 +
13179 + if (!payload_length) {
13180 + pr_err("DataOUT payload is ZERO, protocol error.\n");
13181 +- return iscsit_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
13182 +- buf, conn);
13183 ++ return iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
13184 ++ buf);
13185 + }
13186 +
13187 + /* iSCSI write */
13188 +@@ -1301,8 +1287,8 @@ iscsit_check_dataout_hdr(struct iscsi_conn *conn, unsigned char *buf,
13189 + pr_err("DataSegmentLength: %u is greater than"
13190 + " MaxXmitDataSegmentLength: %u\n", payload_length,
13191 + conn->conn_ops->MaxXmitDataSegmentLength);
13192 +- return iscsit_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
13193 +- buf, conn);
13194 ++ return iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
13195 ++ buf);
13196 + }
13197 +
13198 + cmd = iscsit_find_cmd_from_itt_or_dump(conn, hdr->itt,
13199 +@@ -1325,8 +1311,7 @@ iscsit_check_dataout_hdr(struct iscsi_conn *conn, unsigned char *buf,
13200 + if (cmd->data_direction != DMA_TO_DEVICE) {
13201 + pr_err("Command ITT: 0x%08x received DataOUT for a"
13202 + " NON-WRITE command.\n", cmd->init_task_tag);
13203 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_PROTOCOL_ERROR,
13204 +- 1, 0, buf, cmd);
13205 ++ return iscsit_reject_cmd(cmd, ISCSI_REASON_PROTOCOL_ERROR, buf);
13206 + }
13207 + se_cmd = &cmd->se_cmd;
13208 + iscsit_mod_dataout_timer(cmd);
13209 +@@ -1335,8 +1320,7 @@ iscsit_check_dataout_hdr(struct iscsi_conn *conn, unsigned char *buf,
13210 + pr_err("DataOut Offset: %u, Length %u greater than"
13211 + " iSCSI Command EDTL %u, protocol error.\n",
13212 + hdr->offset, payload_length, cmd->se_cmd.data_length);
13213 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
13214 +- 1, 0, buf, cmd);
13215 ++ return iscsit_reject_cmd(cmd, ISCSI_REASON_BOOKMARK_INVALID, buf);
13216 + }
13217 +
13218 + if (cmd->unsolicited_data) {
13219 +@@ -1528,7 +1512,7 @@ static int iscsit_handle_data_out(struct iscsi_conn *conn, unsigned char *buf)
13220 +
13221 + rc = iscsit_check_dataout_hdr(conn, buf, &cmd);
13222 + if (rc < 0)
13223 +- return rc;
13224 ++ return 0;
13225 + else if (!cmd)
13226 + return 0;
13227 +
13228 +@@ -1557,8 +1541,8 @@ int iscsit_handle_nop_out(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13229 + if (hdr->itt == RESERVED_ITT && !(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
13230 + pr_err("NOPOUT ITT is reserved, but Immediate Bit is"
13231 + " not set, protocol error.\n");
13232 +- return iscsit_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
13233 +- buf, conn);
13234 ++ return iscsit_reject_cmd(cmd, ISCSI_REASON_PROTOCOL_ERROR,
13235 ++ (unsigned char *)hdr);
13236 + }
13237 +
13238 + if (payload_length > conn->conn_ops->MaxXmitDataSegmentLength) {
13239 +@@ -1566,8 +1550,8 @@ int iscsit_handle_nop_out(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13240 + " greater than MaxXmitDataSegmentLength: %u, protocol"
13241 + " error.\n", payload_length,
13242 + conn->conn_ops->MaxXmitDataSegmentLength);
13243 +- return iscsit_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
13244 +- buf, conn);
13245 ++ return iscsit_reject_cmd(cmd, ISCSI_REASON_PROTOCOL_ERROR,
13246 ++ (unsigned char *)hdr);
13247 + }
13248 +
13249 + pr_debug("Got NOPOUT Ping %s ITT: 0x%08x, TTT: 0x%08x,"
13250 +@@ -1584,9 +1568,9 @@ int iscsit_handle_nop_out(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13251 + */
13252 + if (hdr->ttt == cpu_to_be32(0xFFFFFFFF)) {
13253 + if (!cmd)
13254 +- return iscsit_add_reject(
13255 ++ return iscsit_reject_cmd(cmd,
13256 + ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13257 +- 1, buf, conn);
13258 ++ (unsigned char *)hdr);
13259 +
13260 + cmd->iscsi_opcode = ISCSI_OP_NOOP_OUT;
13261 + cmd->i_state = ISTATE_SEND_NOPIN;
13262 +@@ -1700,15 +1684,14 @@ int iscsit_handle_nop_out(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13263 + return 0;
13264 + }
13265 +
13266 +- cmdsn_ret = iscsit_sequence_cmd(conn, cmd, hdr->cmdsn);
13267 ++ cmdsn_ret = iscsit_sequence_cmd(conn, cmd,
13268 ++ (unsigned char *)hdr, hdr->cmdsn);
13269 + if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
13270 + ret = 0;
13271 + goto ping_out;
13272 + }
13273 + if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER)
13274 +- return iscsit_add_reject_from_cmd(
13275 +- ISCSI_REASON_PROTOCOL_ERROR,
13276 +- 1, 0, buf, cmd);
13277 ++ return -1;
13278 +
13279 + return 0;
13280 + }
13281 +@@ -1757,8 +1740,8 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13282 + struct se_tmr_req *se_tmr;
13283 + struct iscsi_tmr_req *tmr_req;
13284 + struct iscsi_tm *hdr;
13285 +- int out_of_order_cmdsn = 0;
13286 +- int ret;
13287 ++ int out_of_order_cmdsn = 0, ret;
13288 ++ bool sess_ref = false;
13289 + u8 function;
13290 +
13291 + hdr = (struct iscsi_tm *) buf;
13292 +@@ -1782,8 +1765,8 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13293 + pr_err("Task Management Request TASK_REASSIGN not"
13294 + " issued as immediate command, bad iSCSI Initiator"
13295 + "implementation\n");
13296 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_PROTOCOL_ERROR,
13297 +- 1, 1, buf, cmd);
13298 ++ return iscsit_add_reject_cmd(cmd,
13299 ++ ISCSI_REASON_PROTOCOL_ERROR, buf);
13300 + }
13301 + if ((function != ISCSI_TM_FUNC_ABORT_TASK) &&
13302 + be32_to_cpu(hdr->refcmdsn) != ISCSI_RESERVED_TAG)
13303 +@@ -1795,9 +1778,9 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13304 + if (!cmd->tmr_req) {
13305 + pr_err("Unable to allocate memory for"
13306 + " Task Management command!\n");
13307 +- return iscsit_add_reject_from_cmd(
13308 +- ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13309 +- 1, 1, buf, cmd);
13310 ++ return iscsit_add_reject_cmd(cmd,
13311 ++ ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13312 ++ buf);
13313 + }
13314 +
13315 + /*
13316 +@@ -1814,6 +1797,9 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13317 + conn->sess->se_sess, 0, DMA_NONE,
13318 + MSG_SIMPLE_TAG, cmd->sense_buffer + 2);
13319 +
13320 ++ target_get_sess_cmd(conn->sess->se_sess, &cmd->se_cmd, true);
13321 ++ sess_ref = true;
13322 ++
13323 + switch (function) {
13324 + case ISCSI_TM_FUNC_ABORT_TASK:
13325 + tcm_function = TMR_ABORT_TASK;
13326 +@@ -1839,17 +1825,15 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13327 + default:
13328 + pr_err("Unknown iSCSI TMR Function:"
13329 + " 0x%02x\n", function);
13330 +- return iscsit_add_reject_from_cmd(
13331 +- ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13332 +- 1, 1, buf, cmd);
13333 ++ return iscsit_add_reject_cmd(cmd,
13334 ++ ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf);
13335 + }
13336 +
13337 + ret = core_tmr_alloc_req(&cmd->se_cmd, cmd->tmr_req,
13338 + tcm_function, GFP_KERNEL);
13339 + if (ret < 0)
13340 +- return iscsit_add_reject_from_cmd(
13341 +- ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13342 +- 1, 1, buf, cmd);
13343 ++ return iscsit_add_reject_cmd(cmd,
13344 ++ ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf);
13345 +
13346 + cmd->tmr_req->se_tmr_req = cmd->se_cmd.se_tmr_req;
13347 + }
13348 +@@ -1908,9 +1892,8 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13349 + break;
13350 +
13351 + if (iscsit_check_task_reassign_expdatasn(tmr_req, conn) < 0)
13352 +- return iscsit_add_reject_from_cmd(
13353 +- ISCSI_REASON_BOOKMARK_INVALID, 1, 1,
13354 +- buf, cmd);
13355 ++ return iscsit_add_reject_cmd(cmd,
13356 ++ ISCSI_REASON_BOOKMARK_INVALID, buf);
13357 + break;
13358 + default:
13359 + pr_err("Unknown TMR function: 0x%02x, protocol"
13360 +@@ -1928,15 +1911,13 @@ attach:
13361 + spin_unlock_bh(&conn->cmd_lock);
13362 +
13363 + if (!(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
13364 +- int cmdsn_ret = iscsit_sequence_cmd(conn, cmd, hdr->cmdsn);
13365 ++ int cmdsn_ret = iscsit_sequence_cmd(conn, cmd, buf, hdr->cmdsn);
13366 + if (cmdsn_ret == CMDSN_HIGHER_THAN_EXP)
13367 + out_of_order_cmdsn = 1;
13368 + else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP)
13369 + return 0;
13370 + else if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER)
13371 +- return iscsit_add_reject_from_cmd(
13372 +- ISCSI_REASON_PROTOCOL_ERROR,
13373 +- 1, 0, buf, cmd);
13374 ++ return -1;
13375 + }
13376 + iscsit_ack_from_expstatsn(conn, be32_to_cpu(hdr->exp_statsn));
13377 +
13378 +@@ -1956,6 +1937,11 @@ attach:
13379 + * For connection recovery, this is also the default action for
13380 + * TMR TASK_REASSIGN.
13381 + */
13382 ++ if (sess_ref) {
13383 ++ pr_debug("Handle TMR, using sess_ref=true check\n");
13384 ++ target_put_sess_cmd(conn->sess->se_sess, &cmd->se_cmd);
13385 ++ }
13386 ++
13387 + iscsit_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
13388 + return 0;
13389 + }
13390 +@@ -1981,8 +1967,7 @@ static int iscsit_handle_text_cmd(
13391 + pr_err("Unable to accept text parameter length: %u"
13392 + "greater than MaxXmitDataSegmentLength %u.\n",
13393 + payload_length, conn->conn_ops->MaxXmitDataSegmentLength);
13394 +- return iscsit_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
13395 +- buf, conn);
13396 ++ return iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR, buf);
13397 + }
13398 +
13399 + pr_debug("Got Text Request: ITT: 0x%08x, CmdSN: 0x%08x,"
13400 +@@ -2084,8 +2069,8 @@ static int iscsit_handle_text_cmd(
13401 +
13402 + cmd = iscsit_allocate_cmd(conn, GFP_KERNEL);
13403 + if (!cmd)
13404 +- return iscsit_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13405 +- 1, buf, conn);
13406 ++ return iscsit_add_reject(conn,
13407 ++ ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf);
13408 +
13409 + cmd->iscsi_opcode = ISCSI_OP_TEXT;
13410 + cmd->i_state = ISTATE_SEND_TEXTRSP;
13411 +@@ -2103,11 +2088,10 @@ static int iscsit_handle_text_cmd(
13412 + iscsit_ack_from_expstatsn(conn, be32_to_cpu(hdr->exp_statsn));
13413 +
13414 + if (!(hdr->opcode & ISCSI_OP_IMMEDIATE)) {
13415 +- cmdsn_ret = iscsit_sequence_cmd(conn, cmd, hdr->cmdsn);
13416 ++ cmdsn_ret = iscsit_sequence_cmd(conn, cmd,
13417 ++ (unsigned char *)hdr, hdr->cmdsn);
13418 + if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER)
13419 +- return iscsit_add_reject_from_cmd(
13420 +- ISCSI_REASON_PROTOCOL_ERROR,
13421 +- 1, 0, buf, cmd);
13422 ++ return -1;
13423 +
13424 + return 0;
13425 + }
13426 +@@ -2292,14 +2276,11 @@ iscsit_handle_logout_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13427 + if (ret < 0)
13428 + return ret;
13429 + } else {
13430 +- cmdsn_ret = iscsit_sequence_cmd(conn, cmd, hdr->cmdsn);
13431 +- if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) {
13432 ++ cmdsn_ret = iscsit_sequence_cmd(conn, cmd, buf, hdr->cmdsn);
13433 ++ if (cmdsn_ret == CMDSN_LOWER_THAN_EXP)
13434 + logout_remove = 0;
13435 +- } else if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) {
13436 +- return iscsit_add_reject_from_cmd(
13437 +- ISCSI_REASON_PROTOCOL_ERROR,
13438 +- 1, 0, buf, cmd);
13439 +- }
13440 ++ else if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER)
13441 ++ return -1;
13442 + }
13443 +
13444 + return logout_remove;
13445 +@@ -2323,8 +2304,8 @@ static int iscsit_handle_snack(
13446 + if (!conn->sess->sess_ops->ErrorRecoveryLevel) {
13447 + pr_err("Initiator sent SNACK request while in"
13448 + " ErrorRecoveryLevel=0.\n");
13449 +- return iscsit_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
13450 +- buf, conn);
13451 ++ return iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
13452 ++ buf);
13453 + }
13454 + /*
13455 + * SNACK_DATA and SNACK_R2T are both 0, so check which function to
13456 +@@ -2348,13 +2329,13 @@ static int iscsit_handle_snack(
13457 + case ISCSI_FLAG_SNACK_TYPE_RDATA:
13458 + /* FIXME: Support R-Data SNACK */
13459 + pr_err("R-Data SNACK Not Supported.\n");
13460 +- return iscsit_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
13461 +- buf, conn);
13462 ++ return iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
13463 ++ buf);
13464 + default:
13465 + pr_err("Unknown SNACK type 0x%02x, protocol"
13466 + " error.\n", hdr->flags & 0x0f);
13467 +- return iscsit_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
13468 +- buf, conn);
13469 ++ return iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
13470 ++ buf);
13471 + }
13472 +
13473 + return 0;
13474 +@@ -2426,14 +2407,14 @@ static int iscsit_handle_immediate_data(
13475 + pr_err("Unable to recover from"
13476 + " Immediate Data digest failure while"
13477 + " in ERL=0.\n");
13478 +- iscsit_add_reject_from_cmd(
13479 ++ iscsit_reject_cmd(cmd,
13480 + ISCSI_REASON_DATA_DIGEST_ERROR,
13481 +- 1, 0, (unsigned char *)hdr, cmd);
13482 ++ (unsigned char *)hdr);
13483 + return IMMEDIATE_DATA_CANNOT_RECOVER;
13484 + } else {
13485 +- iscsit_add_reject_from_cmd(
13486 ++ iscsit_reject_cmd(cmd,
13487 + ISCSI_REASON_DATA_DIGEST_ERROR,
13488 +- 0, 0, (unsigned char *)hdr, cmd);
13489 ++ (unsigned char *)hdr);
13490 + return IMMEDIATE_DATA_ERL1_CRC_FAILURE;
13491 + }
13492 + } else {
13493 +@@ -3533,6 +3514,7 @@ iscsit_build_reject(struct iscsi_cmd *cmd, struct iscsi_conn *conn,
13494 + struct iscsi_reject *hdr)
13495 + {
13496 + hdr->opcode = ISCSI_OP_REJECT;
13497 ++ hdr->reason = cmd->reject_reason;
13498 + hdr->flags |= ISCSI_FLAG_CMD_FINAL;
13499 + hton24(hdr->dlength, ISCSI_HDR_LEN);
13500 + hdr->ffffffff = cpu_to_be32(0xffffffff);
13501 +@@ -3806,18 +3788,11 @@ check_rsp_state:
13502 + case ISTATE_SEND_STATUS_RECOVERY:
13503 + case ISTATE_SEND_TEXTRSP:
13504 + case ISTATE_SEND_TASKMGTRSP:
13505 ++ case ISTATE_SEND_REJECT:
13506 + spin_lock_bh(&cmd->istate_lock);
13507 + cmd->i_state = ISTATE_SENT_STATUS;
13508 + spin_unlock_bh(&cmd->istate_lock);
13509 + break;
13510 +- case ISTATE_SEND_REJECT:
13511 +- if (cmd->cmd_flags & ICF_REJECT_FAIL_CONN) {
13512 +- cmd->cmd_flags &= ~ICF_REJECT_FAIL_CONN;
13513 +- complete(&cmd->reject_comp);
13514 +- goto err;
13515 +- }
13516 +- complete(&cmd->reject_comp);
13517 +- break;
13518 + default:
13519 + pr_err("Unknown Opcode: 0x%02x ITT:"
13520 + " 0x%08x, i_state: %d on CID: %hu\n",
13521 +@@ -3922,8 +3897,7 @@ static int iscsi_target_rx_opcode(struct iscsi_conn *conn, unsigned char *buf)
13522 + case ISCSI_OP_SCSI_CMD:
13523 + cmd = iscsit_allocate_cmd(conn, GFP_KERNEL);
13524 + if (!cmd)
13525 +- return iscsit_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13526 +- 1, buf, conn);
13527 ++ goto reject;
13528 +
13529 + ret = iscsit_handle_scsi_cmd(conn, cmd, buf);
13530 + break;
13531 +@@ -3935,16 +3909,14 @@ static int iscsi_target_rx_opcode(struct iscsi_conn *conn, unsigned char *buf)
13532 + if (hdr->ttt == cpu_to_be32(0xFFFFFFFF)) {
13533 + cmd = iscsit_allocate_cmd(conn, GFP_KERNEL);
13534 + if (!cmd)
13535 +- return iscsit_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13536 +- 1, buf, conn);
13537 ++ goto reject;
13538 + }
13539 + ret = iscsit_handle_nop_out(conn, cmd, buf);
13540 + break;
13541 + case ISCSI_OP_SCSI_TMFUNC:
13542 + cmd = iscsit_allocate_cmd(conn, GFP_KERNEL);
13543 + if (!cmd)
13544 +- return iscsit_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13545 +- 1, buf, conn);
13546 ++ goto reject;
13547 +
13548 + ret = iscsit_handle_task_mgt_cmd(conn, cmd, buf);
13549 + break;
13550 +@@ -3954,8 +3926,7 @@ static int iscsi_target_rx_opcode(struct iscsi_conn *conn, unsigned char *buf)
13551 + case ISCSI_OP_LOGOUT:
13552 + cmd = iscsit_allocate_cmd(conn, GFP_KERNEL);
13553 + if (!cmd)
13554 +- return iscsit_add_reject(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13555 +- 1, buf, conn);
13556 ++ goto reject;
13557 +
13558 + ret = iscsit_handle_logout_cmd(conn, cmd, buf);
13559 + if (ret > 0)
13560 +@@ -3987,6 +3958,8 @@ static int iscsi_target_rx_opcode(struct iscsi_conn *conn, unsigned char *buf)
13561 + }
13562 +
13563 + return ret;
13564 ++reject:
13565 ++ return iscsit_add_reject(conn, ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf);
13566 + }
13567 +
13568 + int iscsi_target_rx_thread(void *arg)
13569 +@@ -4086,8 +4059,8 @@ restart:
13570 + (!(opcode & ISCSI_OP_LOGOUT)))) {
13571 + pr_err("Received illegal iSCSI Opcode: 0x%02x"
13572 + " while in Discovery Session, rejecting.\n", opcode);
13573 +- iscsit_add_reject(ISCSI_REASON_PROTOCOL_ERROR, 1,
13574 +- buffer, conn);
13575 ++ iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
13576 ++ buffer);
13577 + goto transport_err;
13578 + }
13579 +
13580 +diff --git a/drivers/target/iscsi/iscsi_target.h b/drivers/target/iscsi/iscsi_target.h
13581 +index a0050b2..2c437cb 100644
13582 +--- a/drivers/target/iscsi/iscsi_target.h
13583 ++++ b/drivers/target/iscsi/iscsi_target.h
13584 +@@ -15,7 +15,7 @@ extern struct iscsi_np *iscsit_add_np(struct __kernel_sockaddr_storage *,
13585 + extern int iscsit_reset_np_thread(struct iscsi_np *, struct iscsi_tpg_np *,
13586 + struct iscsi_portal_group *);
13587 + extern int iscsit_del_np(struct iscsi_np *);
13588 +-extern int iscsit_add_reject_from_cmd(u8, int, int, unsigned char *, struct iscsi_cmd *);
13589 ++extern int iscsit_reject_cmd(struct iscsi_cmd *cmd, u8, unsigned char *);
13590 + extern void iscsit_set_unsoliticed_dataout(struct iscsi_cmd *);
13591 + extern int iscsit_logout_closesession(struct iscsi_cmd *, struct iscsi_conn *);
13592 + extern int iscsit_logout_closeconnection(struct iscsi_cmd *, struct iscsi_conn *);
13593 +diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c
13594 +index 8d8b3ff..421344d 100644
13595 +--- a/drivers/target/iscsi/iscsi_target_configfs.c
13596 ++++ b/drivers/target/iscsi/iscsi_target_configfs.c
13597 +@@ -474,7 +474,7 @@ static ssize_t __iscsi_##prefix##_store_##name( \
13598 + if (!capable(CAP_SYS_ADMIN)) \
13599 + return -EPERM; \
13600 + \
13601 +- snprintf(auth->name, PAGE_SIZE, "%s", page); \
13602 ++ snprintf(auth->name, sizeof(auth->name), "%s", page); \
13603 + if (!strncmp("NULL", auth->name, 4)) \
13604 + auth->naf_flags &= ~flags; \
13605 + else \
13606 +diff --git a/drivers/target/iscsi/iscsi_target_core.h b/drivers/target/iscsi/iscsi_target_core.h
13607 +index 60ec4b9..8907dcd 100644
13608 +--- a/drivers/target/iscsi/iscsi_target_core.h
13609 ++++ b/drivers/target/iscsi/iscsi_target_core.h
13610 +@@ -132,7 +132,6 @@ enum cmd_flags_table {
13611 + ICF_CONTIG_MEMORY = 0x00000020,
13612 + ICF_ATTACHED_TO_RQUEUE = 0x00000040,
13613 + ICF_OOO_CMDSN = 0x00000080,
13614 +- ICF_REJECT_FAIL_CONN = 0x00000100,
13615 + };
13616 +
13617 + /* struct iscsi_cmd->i_state */
13618 +@@ -366,6 +365,8 @@ struct iscsi_cmd {
13619 + u8 maxcmdsn_inc;
13620 + /* Immediate Unsolicited Dataout */
13621 + u8 unsolicited_data;
13622 ++ /* Reject reason code */
13623 ++ u8 reject_reason;
13624 + /* CID contained in logout PDU when opcode == ISCSI_INIT_LOGOUT_CMND */
13625 + u16 logout_cid;
13626 + /* Command flags */
13627 +@@ -446,7 +447,6 @@ struct iscsi_cmd {
13628 + struct list_head datain_list;
13629 + /* R2T List */
13630 + struct list_head cmd_r2t_list;
13631 +- struct completion reject_comp;
13632 + /* Timer for DataOUT */
13633 + struct timer_list dataout_timer;
13634 + /* Iovecs for SCSI data payload RX/TX w/ kernel level sockets */
13635 +diff --git a/drivers/target/iscsi/iscsi_target_erl0.c b/drivers/target/iscsi/iscsi_target_erl0.c
13636 +index dcb199d..08bd878 100644
13637 +--- a/drivers/target/iscsi/iscsi_target_erl0.c
13638 ++++ b/drivers/target/iscsi/iscsi_target_erl0.c
13639 +@@ -746,13 +746,12 @@ int iscsit_check_post_dataout(
13640 + if (!conn->sess->sess_ops->ErrorRecoveryLevel) {
13641 + pr_err("Unable to recover from DataOUT CRC"
13642 + " failure while ERL=0, closing session.\n");
13643 +- iscsit_add_reject_from_cmd(ISCSI_REASON_DATA_DIGEST_ERROR,
13644 +- 1, 0, buf, cmd);
13645 ++ iscsit_reject_cmd(cmd, ISCSI_REASON_DATA_DIGEST_ERROR,
13646 ++ buf);
13647 + return DATAOUT_CANNOT_RECOVER;
13648 + }
13649 +
13650 +- iscsit_add_reject_from_cmd(ISCSI_REASON_DATA_DIGEST_ERROR,
13651 +- 0, 0, buf, cmd);
13652 ++ iscsit_reject_cmd(cmd, ISCSI_REASON_DATA_DIGEST_ERROR, buf);
13653 + return iscsit_dataout_post_crc_failed(cmd, buf);
13654 + }
13655 + }
13656 +@@ -909,6 +908,7 @@ void iscsit_cause_connection_reinstatement(struct iscsi_conn *conn, int sleep)
13657 + wait_for_completion(&conn->conn_wait_comp);
13658 + complete(&conn->conn_post_wait_comp);
13659 + }
13660 ++EXPORT_SYMBOL(iscsit_cause_connection_reinstatement);
13661 +
13662 + void iscsit_fall_back_to_erl0(struct iscsi_session *sess)
13663 + {
13664 +diff --git a/drivers/target/iscsi/iscsi_target_erl1.c b/drivers/target/iscsi/iscsi_target_erl1.c
13665 +index 40d9dbc..586c268 100644
13666 +--- a/drivers/target/iscsi/iscsi_target_erl1.c
13667 ++++ b/drivers/target/iscsi/iscsi_target_erl1.c
13668 +@@ -162,9 +162,8 @@ static int iscsit_handle_r2t_snack(
13669 + " protocol error.\n", cmd->init_task_tag, begrun,
13670 + (begrun + runlength), cmd->acked_data_sn);
13671 +
13672 +- return iscsit_add_reject_from_cmd(
13673 +- ISCSI_REASON_PROTOCOL_ERROR,
13674 +- 1, 0, buf, cmd);
13675 ++ return iscsit_reject_cmd(cmd,
13676 ++ ISCSI_REASON_PROTOCOL_ERROR, buf);
13677 + }
13678 +
13679 + if (runlength) {
13680 +@@ -173,8 +172,8 @@ static int iscsit_handle_r2t_snack(
13681 + " with BegRun: 0x%08x, RunLength: 0x%08x, exceeds"
13682 + " current R2TSN: 0x%08x, protocol error.\n",
13683 + cmd->init_task_tag, begrun, runlength, cmd->r2t_sn);
13684 +- return iscsit_add_reject_from_cmd(
13685 +- ISCSI_REASON_BOOKMARK_INVALID, 1, 0, buf, cmd);
13686 ++ return iscsit_reject_cmd(cmd,
13687 ++ ISCSI_REASON_BOOKMARK_INVALID, buf);
13688 + }
13689 + last_r2tsn = (begrun + runlength);
13690 + } else
13691 +@@ -433,8 +432,7 @@ static int iscsit_handle_recovery_datain(
13692 + " protocol error.\n", cmd->init_task_tag, begrun,
13693 + (begrun + runlength), cmd->acked_data_sn);
13694 +
13695 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_PROTOCOL_ERROR,
13696 +- 1, 0, buf, cmd);
13697 ++ return iscsit_reject_cmd(cmd, ISCSI_REASON_PROTOCOL_ERROR, buf);
13698 + }
13699 +
13700 + /*
13701 +@@ -445,14 +443,14 @@ static int iscsit_handle_recovery_datain(
13702 + pr_err("Initiator requesting BegRun: 0x%08x, RunLength"
13703 + ": 0x%08x greater than maximum DataSN: 0x%08x.\n",
13704 + begrun, runlength, (cmd->data_sn - 1));
13705 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_INVALID,
13706 +- 1, 0, buf, cmd);
13707 ++ return iscsit_reject_cmd(cmd, ISCSI_REASON_BOOKMARK_INVALID,
13708 ++ buf);
13709 + }
13710 +
13711 + dr = iscsit_allocate_datain_req();
13712 + if (!dr)
13713 +- return iscsit_add_reject_from_cmd(ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13714 +- 1, 0, buf, cmd);
13715 ++ return iscsit_reject_cmd(cmd, ISCSI_REASON_BOOKMARK_NO_RESOURCES,
13716 ++ buf);
13717 +
13718 + dr->data_sn = dr->begrun = begrun;
13719 + dr->runlength = runlength;
13720 +@@ -1090,7 +1088,7 @@ int iscsit_handle_ooo_cmdsn(
13721 +
13722 + ooo_cmdsn = iscsit_allocate_ooo_cmdsn();
13723 + if (!ooo_cmdsn)
13724 +- return CMDSN_ERROR_CANNOT_RECOVER;
13725 ++ return -ENOMEM;
13726 +
13727 + ooo_cmdsn->cmd = cmd;
13728 + ooo_cmdsn->batch_count = (batch) ?
13729 +@@ -1101,10 +1099,10 @@ int iscsit_handle_ooo_cmdsn(
13730 +
13731 + if (iscsit_attach_ooo_cmdsn(sess, ooo_cmdsn) < 0) {
13732 + kmem_cache_free(lio_ooo_cache, ooo_cmdsn);
13733 +- return CMDSN_ERROR_CANNOT_RECOVER;
13734 ++ return -ENOMEM;
13735 + }
13736 +
13737 +- return CMDSN_HIGHER_THAN_EXP;
13738 ++ return 0;
13739 + }
13740 +
13741 + static int iscsit_set_dataout_timeout_values(
13742 +diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
13743 +index 08a3bac..96e7fdb 100644
13744 +--- a/drivers/target/iscsi/iscsi_target_util.c
13745 ++++ b/drivers/target/iscsi/iscsi_target_util.c
13746 +@@ -178,7 +178,6 @@ struct iscsi_cmd *iscsit_allocate_cmd(struct iscsi_conn *conn, gfp_t gfp_mask)
13747 + INIT_LIST_HEAD(&cmd->i_conn_node);
13748 + INIT_LIST_HEAD(&cmd->datain_list);
13749 + INIT_LIST_HEAD(&cmd->cmd_r2t_list);
13750 +- init_completion(&cmd->reject_comp);
13751 + spin_lock_init(&cmd->datain_lock);
13752 + spin_lock_init(&cmd->dataout_timeout_lock);
13753 + spin_lock_init(&cmd->istate_lock);
13754 +@@ -284,13 +283,12 @@ static inline int iscsit_check_received_cmdsn(struct iscsi_session *sess, u32 cm
13755 + * Commands may be received out of order if MC/S is in use.
13756 + * Ensure they are executed in CmdSN order.
13757 + */
13758 +-int iscsit_sequence_cmd(
13759 +- struct iscsi_conn *conn,
13760 +- struct iscsi_cmd *cmd,
13761 +- __be32 cmdsn)
13762 ++int iscsit_sequence_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13763 ++ unsigned char *buf, __be32 cmdsn)
13764 + {
13765 +- int ret;
13766 +- int cmdsn_ret;
13767 ++ int ret, cmdsn_ret;
13768 ++ bool reject = false;
13769 ++ u8 reason = ISCSI_REASON_BOOKMARK_NO_RESOURCES;
13770 +
13771 + mutex_lock(&conn->sess->cmdsn_mutex);
13772 +
13773 +@@ -300,9 +298,19 @@ int iscsit_sequence_cmd(
13774 + ret = iscsit_execute_cmd(cmd, 0);
13775 + if ((ret >= 0) && !list_empty(&conn->sess->sess_ooo_cmdsn_list))
13776 + iscsit_execute_ooo_cmdsns(conn->sess);
13777 ++ else if (ret < 0) {
13778 ++ reject = true;
13779 ++ ret = CMDSN_ERROR_CANNOT_RECOVER;
13780 ++ }
13781 + break;
13782 + case CMDSN_HIGHER_THAN_EXP:
13783 + ret = iscsit_handle_ooo_cmdsn(conn->sess, cmd, be32_to_cpu(cmdsn));
13784 ++ if (ret < 0) {
13785 ++ reject = true;
13786 ++ ret = CMDSN_ERROR_CANNOT_RECOVER;
13787 ++ break;
13788 ++ }
13789 ++ ret = CMDSN_HIGHER_THAN_EXP;
13790 + break;
13791 + case CMDSN_LOWER_THAN_EXP:
13792 + cmd->i_state = ISTATE_REMOVE;
13793 +@@ -310,11 +318,16 @@ int iscsit_sequence_cmd(
13794 + ret = cmdsn_ret;
13795 + break;
13796 + default:
13797 ++ reason = ISCSI_REASON_PROTOCOL_ERROR;
13798 ++ reject = true;
13799 + ret = cmdsn_ret;
13800 + break;
13801 + }
13802 + mutex_unlock(&conn->sess->cmdsn_mutex);
13803 +
13804 ++ if (reject)
13805 ++ iscsit_reject_cmd(cmd, reason, buf);
13806 ++
13807 + return ret;
13808 + }
13809 + EXPORT_SYMBOL(iscsit_sequence_cmd);
13810 +diff --git a/drivers/target/iscsi/iscsi_target_util.h b/drivers/target/iscsi/iscsi_target_util.h
13811 +index a442265..e4fc34a 100644
13812 +--- a/drivers/target/iscsi/iscsi_target_util.h
13813 ++++ b/drivers/target/iscsi/iscsi_target_util.h
13814 +@@ -13,7 +13,8 @@ extern struct iscsi_cmd *iscsit_allocate_cmd(struct iscsi_conn *, gfp_t);
13815 + extern struct iscsi_seq *iscsit_get_seq_holder_for_datain(struct iscsi_cmd *, u32);
13816 + extern struct iscsi_seq *iscsit_get_seq_holder_for_r2t(struct iscsi_cmd *);
13817 + extern struct iscsi_r2t *iscsit_get_holder_for_r2tsn(struct iscsi_cmd *, u32);
13818 +-int iscsit_sequence_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd, __be32 cmdsn);
13819 ++extern int iscsit_sequence_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
13820 ++ unsigned char * ,__be32 cmdsn);
13821 + extern int iscsit_check_unsolicited_dataout(struct iscsi_cmd *, unsigned char *);
13822 + extern struct iscsi_cmd *iscsit_find_cmd_from_itt(struct iscsi_conn *, itt_t);
13823 + extern struct iscsi_cmd *iscsit_find_cmd_from_itt_or_dump(struct iscsi_conn *,
13824 +diff --git a/drivers/tty/tty_port.c b/drivers/tty/tty_port.c
13825 +index 121aeb9..f597e88 100644
13826 +--- a/drivers/tty/tty_port.c
13827 ++++ b/drivers/tty/tty_port.c
13828 +@@ -256,10 +256,9 @@ void tty_port_tty_hangup(struct tty_port *port, bool check_clocal)
13829 + {
13830 + struct tty_struct *tty = tty_port_tty_get(port);
13831 +
13832 +- if (tty && (!check_clocal || !C_CLOCAL(tty))) {
13833 ++ if (tty && (!check_clocal || !C_CLOCAL(tty)))
13834 + tty_hangup(tty);
13835 +- tty_kref_put(tty);
13836 +- }
13837 ++ tty_kref_put(tty);
13838 + }
13839 + EXPORT_SYMBOL_GPL(tty_port_tty_hangup);
13840 +
13841 +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
13842 +index feef935..b93fc88 100644
13843 +--- a/drivers/usb/core/hub.c
13844 ++++ b/drivers/usb/core/hub.c
13845 +@@ -668,6 +668,15 @@ resubmit:
13846 + static inline int
13847 + hub_clear_tt_buffer (struct usb_device *hdev, u16 devinfo, u16 tt)
13848 + {
13849 ++ /* Need to clear both directions for control ep */
13850 ++ if (((devinfo >> 11) & USB_ENDPOINT_XFERTYPE_MASK) ==
13851 ++ USB_ENDPOINT_XFER_CONTROL) {
13852 ++ int status = usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
13853 ++ HUB_CLEAR_TT_BUFFER, USB_RT_PORT,
13854 ++ devinfo ^ 0x8000, tt, NULL, 0, 1000);
13855 ++ if (status)
13856 ++ return status;
13857 ++ }
13858 + return usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
13859 + HUB_CLEAR_TT_BUFFER, USB_RT_PORT, devinfo,
13860 + tt, NULL, 0, 1000);
13861 +@@ -2846,6 +2855,15 @@ static int usb_disable_function_remotewakeup(struct usb_device *udev)
13862 + USB_CTRL_SET_TIMEOUT);
13863 + }
13864 +
13865 ++/* Count of wakeup-enabled devices at or below udev */
13866 ++static unsigned wakeup_enabled_descendants(struct usb_device *udev)
13867 ++{
13868 ++ struct usb_hub *hub = usb_hub_to_struct_hub(udev);
13869 ++
13870 ++ return udev->do_remote_wakeup +
13871 ++ (hub ? hub->wakeup_enabled_descendants : 0);
13872 ++}
13873 ++
13874 + /*
13875 + * usb_port_suspend - suspend a usb device's upstream port
13876 + * @udev: device that's no longer in active use, not a root hub
13877 +@@ -2886,8 +2904,8 @@ static int usb_disable_function_remotewakeup(struct usb_device *udev)
13878 + * Linux (2.6) currently has NO mechanisms to initiate that: no khubd
13879 + * timer, no SRP, no requests through sysfs.
13880 + *
13881 +- * If Runtime PM isn't enabled or used, non-SuperSpeed devices really get
13882 +- * suspended only when their bus goes into global suspend (i.e., the root
13883 ++ * If Runtime PM isn't enabled or used, non-SuperSpeed devices may not get
13884 ++ * suspended until their bus goes into global suspend (i.e., the root
13885 + * hub is suspended). Nevertheless, we change @udev->state to
13886 + * USB_STATE_SUSPENDED as this is the device's "logical" state. The actual
13887 + * upstream port setting is stored in @udev->port_is_suspended.
13888 +@@ -2958,15 +2976,21 @@ int usb_port_suspend(struct usb_device *udev, pm_message_t msg)
13889 + /* see 7.1.7.6 */
13890 + if (hub_is_superspeed(hub->hdev))
13891 + status = hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_U3);
13892 +- else if (PMSG_IS_AUTO(msg))
13893 +- status = set_port_feature(hub->hdev, port1,
13894 +- USB_PORT_FEAT_SUSPEND);
13895 ++
13896 + /*
13897 + * For system suspend, we do not need to enable the suspend feature
13898 + * on individual USB-2 ports. The devices will automatically go
13899 + * into suspend a few ms after the root hub stops sending packets.
13900 + * The USB 2.0 spec calls this "global suspend".
13901 ++ *
13902 ++ * However, many USB hubs have a bug: They don't relay wakeup requests
13903 ++ * from a downstream port if the port's suspend feature isn't on.
13904 ++ * Therefore we will turn on the suspend feature if udev or any of its
13905 ++ * descendants is enabled for remote wakeup.
13906 + */
13907 ++ else if (PMSG_IS_AUTO(msg) || wakeup_enabled_descendants(udev) > 0)
13908 ++ status = set_port_feature(hub->hdev, port1,
13909 ++ USB_PORT_FEAT_SUSPEND);
13910 + else {
13911 + really_suspend = false;
13912 + status = 0;
13913 +@@ -3001,15 +3025,16 @@ int usb_port_suspend(struct usb_device *udev, pm_message_t msg)
13914 + if (!PMSG_IS_AUTO(msg))
13915 + status = 0;
13916 + } else {
13917 +- /* device has up to 10 msec to fully suspend */
13918 + dev_dbg(&udev->dev, "usb %ssuspend, wakeup %d\n",
13919 + (PMSG_IS_AUTO(msg) ? "auto-" : ""),
13920 + udev->do_remote_wakeup);
13921 +- usb_set_device_state(udev, USB_STATE_SUSPENDED);
13922 + if (really_suspend) {
13923 + udev->port_is_suspended = 1;
13924 ++
13925 ++ /* device has up to 10 msec to fully suspend */
13926 + msleep(10);
13927 + }
13928 ++ usb_set_device_state(udev, USB_STATE_SUSPENDED);
13929 + }
13930 +
13931 + /*
13932 +@@ -3291,7 +3316,11 @@ static int hub_suspend(struct usb_interface *intf, pm_message_t msg)
13933 + unsigned port1;
13934 + int status;
13935 +
13936 +- /* Warn if children aren't already suspended */
13937 ++ /*
13938 ++ * Warn if children aren't already suspended.
13939 ++ * Also, add up the number of wakeup-enabled descendants.
13940 ++ */
13941 ++ hub->wakeup_enabled_descendants = 0;
13942 + for (port1 = 1; port1 <= hdev->maxchild; port1++) {
13943 + struct usb_device *udev;
13944 +
13945 +@@ -3301,6 +3330,9 @@ static int hub_suspend(struct usb_interface *intf, pm_message_t msg)
13946 + if (PMSG_IS_AUTO(msg))
13947 + return -EBUSY;
13948 + }
13949 ++ if (udev)
13950 ++ hub->wakeup_enabled_descendants +=
13951 ++ wakeup_enabled_descendants(udev);
13952 + }
13953 +
13954 + if (hdev->do_remote_wakeup && hub->quirk_check_port_auto_suspend) {
13955 +diff --git a/drivers/usb/core/hub.h b/drivers/usb/core/hub.h
13956 +index 80ab9ee..f608b39 100644
13957 +--- a/drivers/usb/core/hub.h
13958 ++++ b/drivers/usb/core/hub.h
13959 +@@ -59,6 +59,9 @@ struct usb_hub {
13960 + struct usb_tt tt; /* Transaction Translator */
13961 +
13962 + unsigned mA_per_port; /* current for each child */
13963 ++#ifdef CONFIG_PM
13964 ++ unsigned wakeup_enabled_descendants;
13965 ++#endif
13966 +
13967 + unsigned limited_power:1;
13968 + unsigned quiescing:1;
13969 +diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
13970 +index c35d49d..358375e 100644
13971 +--- a/drivers/usb/dwc3/core.c
13972 ++++ b/drivers/usb/dwc3/core.c
13973 +@@ -450,7 +450,7 @@ static int dwc3_probe(struct platform_device *pdev)
13974 + }
13975 +
13976 + if (IS_ERR(dwc->usb3_phy)) {
13977 +- ret = PTR_ERR(dwc->usb2_phy);
13978 ++ ret = PTR_ERR(dwc->usb3_phy);
13979 +
13980 + /*
13981 + * if -ENXIO is returned, it means PHY layer wasn't
13982 +diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
13983 +index b69d322..27dad99 100644
13984 +--- a/drivers/usb/dwc3/core.h
13985 ++++ b/drivers/usb/dwc3/core.h
13986 +@@ -759,8 +759,8 @@ struct dwc3 {
13987 +
13988 + struct dwc3_event_type {
13989 + u32 is_devspec:1;
13990 +- u32 type:6;
13991 +- u32 reserved8_31:25;
13992 ++ u32 type:7;
13993 ++ u32 reserved8_31:24;
13994 + } __packed;
13995 +
13996 + #define DWC3_DEPEVT_XFERCOMPLETE 0x01
13997 +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
13998 +index b5e5b35..f77083f 100644
13999 +--- a/drivers/usb/dwc3/gadget.c
14000 ++++ b/drivers/usb/dwc3/gadget.c
14001 +@@ -1584,6 +1584,7 @@ err1:
14002 + __dwc3_gadget_ep_disable(dwc->eps[0]);
14003 +
14004 + err0:
14005 ++ dwc->gadget_driver = NULL;
14006 + spin_unlock_irqrestore(&dwc->lock, flags);
14007 +
14008 + return ret;
14009 +diff --git a/drivers/usb/gadget/udc-core.c b/drivers/usb/gadget/udc-core.c
14010 +index ffd8fa5..5514822 100644
14011 +--- a/drivers/usb/gadget/udc-core.c
14012 ++++ b/drivers/usb/gadget/udc-core.c
14013 +@@ -105,7 +105,7 @@ void usb_gadget_set_state(struct usb_gadget *gadget,
14014 + enum usb_device_state state)
14015 + {
14016 + gadget->state = state;
14017 +- sysfs_notify(&gadget->dev.kobj, NULL, "status");
14018 ++ sysfs_notify(&gadget->dev.kobj, NULL, "state");
14019 + }
14020 + EXPORT_SYMBOL_GPL(usb_gadget_set_state);
14021 +
14022 +diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
14023 +index 9ab4a4d..ca6289b 100644
14024 +--- a/drivers/usb/host/ehci-hub.c
14025 ++++ b/drivers/usb/host/ehci-hub.c
14026 +@@ -858,6 +858,7 @@ static int ehci_hub_control (
14027 + ehci->reset_done[wIndex] = jiffies
14028 + + msecs_to_jiffies(20);
14029 + usb_hcd_start_port_resume(&hcd->self, wIndex);
14030 ++ set_bit(wIndex, &ehci->resuming_ports);
14031 + /* check the port again */
14032 + mod_timer(&ehci_to_hcd(ehci)->rh_timer,
14033 + ehci->reset_done[wIndex]);
14034 +diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
14035 +index cc24e39..f00cb20 100644
14036 +--- a/drivers/usb/host/xhci-pci.c
14037 ++++ b/drivers/usb/host/xhci-pci.c
14038 +@@ -93,7 +93,6 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
14039 + }
14040 + if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
14041 + pdev->device == PCI_DEVICE_ID_INTEL_PANTHERPOINT_XHCI) {
14042 +- xhci->quirks |= XHCI_SPURIOUS_SUCCESS;
14043 + xhci->quirks |= XHCI_EP_LIMIT_QUIRK;
14044 + xhci->limit_active_eps = 64;
14045 + xhci->quirks |= XHCI_SW_BW_CHECKING;
14046 +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
14047 +index 1969c00..cc3bfc5 100644
14048 +--- a/drivers/usb/host/xhci-ring.c
14049 ++++ b/drivers/usb/host/xhci-ring.c
14050 +@@ -434,7 +434,7 @@ static void ring_doorbell_for_active_rings(struct xhci_hcd *xhci,
14051 +
14052 + /* A ring has pending URBs if its TD list is not empty */
14053 + if (!(ep->ep_state & EP_HAS_STREAMS)) {
14054 +- if (!(list_empty(&ep->ring->td_list)))
14055 ++ if (ep->ring && !(list_empty(&ep->ring->td_list)))
14056 + xhci_ring_ep_doorbell(xhci, slot_id, ep_index, 0);
14057 + return;
14058 + }
14059 +diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
14060 +index d8f640b..9a550b6 100644
14061 +--- a/drivers/usb/host/xhci.c
14062 ++++ b/drivers/usb/host/xhci.c
14063 +@@ -1171,9 +1171,6 @@ static int xhci_check_args(struct usb_hcd *hcd, struct usb_device *udev,
14064 + }
14065 +
14066 + xhci = hcd_to_xhci(hcd);
14067 +- if (xhci->xhc_state & XHCI_STATE_HALTED)
14068 +- return -ENODEV;
14069 +-
14070 + if (check_virt_dev) {
14071 + if (!udev->slot_id || !xhci->devs[udev->slot_id]) {
14072 + printk(KERN_DEBUG "xHCI %s called with unaddressed "
14073 +@@ -1189,6 +1186,9 @@ static int xhci_check_args(struct usb_hcd *hcd, struct usb_device *udev,
14074 + }
14075 + }
14076 +
14077 ++ if (xhci->xhc_state & XHCI_STATE_HALTED)
14078 ++ return -ENODEV;
14079 ++
14080 + return 1;
14081 + }
14082 +
14083 +@@ -4697,6 +4697,13 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
14084 +
14085 + get_quirks(dev, xhci);
14086 +
14087 ++ /* In xhci controllers which follow xhci 1.0 spec gives a spurious
14088 ++ * success event after a short transfer. This quirk will ignore such
14089 ++ * spurious event.
14090 ++ */
14091 ++ if (xhci->hci_version > 0x96)
14092 ++ xhci->quirks |= XHCI_SPURIOUS_SUCCESS;
14093 ++
14094 + /* Make sure the HC is halted. */
14095 + retval = xhci_halt(xhci);
14096 + if (retval)
14097 +diff --git a/drivers/usb/misc/sisusbvga/sisusb.c b/drivers/usb/misc/sisusbvga/sisusb.c
14098 +index c21386e..de98906 100644
14099 +--- a/drivers/usb/misc/sisusbvga/sisusb.c
14100 ++++ b/drivers/usb/misc/sisusbvga/sisusb.c
14101 +@@ -3247,6 +3247,7 @@ static const struct usb_device_id sisusb_table[] = {
14102 + { USB_DEVICE(0x0711, 0x0903) },
14103 + { USB_DEVICE(0x0711, 0x0918) },
14104 + { USB_DEVICE(0x0711, 0x0920) },
14105 ++ { USB_DEVICE(0x0711, 0x0950) },
14106 + { USB_DEVICE(0x182d, 0x021c) },
14107 + { USB_DEVICE(0x182d, 0x0269) },
14108 + { }
14109 +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
14110 +index 7260ec6..b65e657 100644
14111 +--- a/drivers/usb/serial/ftdi_sio.c
14112 ++++ b/drivers/usb/serial/ftdi_sio.c
14113 +@@ -735,9 +735,34 @@ static struct usb_device_id id_table_combined [] = {
14114 + { USB_DEVICE(FTDI_VID, FTDI_NDI_AURORA_SCU_PID),
14115 + .driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
14116 + { USB_DEVICE(TELLDUS_VID, TELLDUS_TELLSTICK_PID) },
14117 +- { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_SERIAL_VX7_PID) },
14118 +- { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_CT29B_PID) },
14119 +- { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_RTS01_PID) },
14120 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_S03_PID) },
14121 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_59_PID) },
14122 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_57A_PID) },
14123 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_57B_PID) },
14124 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_29A_PID) },
14125 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_29B_PID) },
14126 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_29F_PID) },
14127 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_62B_PID) },
14128 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_S01_PID) },
14129 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_63_PID) },
14130 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_29C_PID) },
14131 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_81B_PID) },
14132 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_82B_PID) },
14133 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_K5D_PID) },
14134 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_K4Y_PID) },
14135 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_K5G_PID) },
14136 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_S05_PID) },
14137 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_60_PID) },
14138 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_61_PID) },
14139 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_62_PID) },
14140 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_63B_PID) },
14141 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_64_PID) },
14142 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_65_PID) },
14143 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_92_PID) },
14144 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_92D_PID) },
14145 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_W5R_PID) },
14146 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_A5R_PID) },
14147 ++ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_PW1_PID) },
14148 + { USB_DEVICE(FTDI_VID, FTDI_MAXSTREAM_PID) },
14149 + { USB_DEVICE(FTDI_VID, FTDI_PHI_FISCO_PID) },
14150 + { USB_DEVICE(TML_VID, TML_USB_SERIAL_PID) },
14151 +diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
14152 +index 6dd7925..1b8af46 100644
14153 +--- a/drivers/usb/serial/ftdi_sio_ids.h
14154 ++++ b/drivers/usb/serial/ftdi_sio_ids.h
14155 +@@ -815,11 +815,35 @@
14156 + /*
14157 + * RT Systems programming cables for various ham radios
14158 + */
14159 +-#define RTSYSTEMS_VID 0x2100 /* Vendor ID */
14160 +-#define RTSYSTEMS_SERIAL_VX7_PID 0x9e52 /* Serial converter for VX-7 Radios using FT232RL */
14161 +-#define RTSYSTEMS_CT29B_PID 0x9e54 /* CT29B Radio Cable */
14162 +-#define RTSYSTEMS_RTS01_PID 0x9e57 /* USB-RTS01 Radio Cable */
14163 +-
14164 ++#define RTSYSTEMS_VID 0x2100 /* Vendor ID */
14165 ++#define RTSYSTEMS_USB_S03_PID 0x9001 /* RTS-03 USB to Serial Adapter */
14166 ++#define RTSYSTEMS_USB_59_PID 0x9e50 /* USB-59 USB to 8 pin plug */
14167 ++#define RTSYSTEMS_USB_57A_PID 0x9e51 /* USB-57A USB to 4pin 3.5mm plug */
14168 ++#define RTSYSTEMS_USB_57B_PID 0x9e52 /* USB-57B USB to extended 4pin 3.5mm plug */
14169 ++#define RTSYSTEMS_USB_29A_PID 0x9e53 /* USB-29A USB to 3.5mm stereo plug */
14170 ++#define RTSYSTEMS_USB_29B_PID 0x9e54 /* USB-29B USB to 6 pin mini din */
14171 ++#define RTSYSTEMS_USB_29F_PID 0x9e55 /* USB-29F USB to 6 pin modular plug */
14172 ++#define RTSYSTEMS_USB_62B_PID 0x9e56 /* USB-62B USB to 8 pin mini din plug*/
14173 ++#define RTSYSTEMS_USB_S01_PID 0x9e57 /* USB-RTS01 USB to 3.5 mm stereo plug*/
14174 ++#define RTSYSTEMS_USB_63_PID 0x9e58 /* USB-63 USB to 9 pin female*/
14175 ++#define RTSYSTEMS_USB_29C_PID 0x9e59 /* USB-29C USB to 4 pin modular plug*/
14176 ++#define RTSYSTEMS_USB_81B_PID 0x9e5A /* USB-81 USB to 8 pin mini din plug*/
14177 ++#define RTSYSTEMS_USB_82B_PID 0x9e5B /* USB-82 USB to 2.5 mm stereo plug*/
14178 ++#define RTSYSTEMS_USB_K5D_PID 0x9e5C /* USB-K5D USB to 8 pin modular plug*/
14179 ++#define RTSYSTEMS_USB_K4Y_PID 0x9e5D /* USB-K4Y USB to 2.5/3.5 mm plugs*/
14180 ++#define RTSYSTEMS_USB_K5G_PID 0x9e5E /* USB-K5G USB to 8 pin modular plug*/
14181 ++#define RTSYSTEMS_USB_S05_PID 0x9e5F /* USB-RTS05 USB to 2.5 mm stereo plug*/
14182 ++#define RTSYSTEMS_USB_60_PID 0x9e60 /* USB-60 USB to 6 pin din*/
14183 ++#define RTSYSTEMS_USB_61_PID 0x9e61 /* USB-61 USB to 6 pin mini din*/
14184 ++#define RTSYSTEMS_USB_62_PID 0x9e62 /* USB-62 USB to 8 pin mini din*/
14185 ++#define RTSYSTEMS_USB_63B_PID 0x9e63 /* USB-63 USB to 9 pin female*/
14186 ++#define RTSYSTEMS_USB_64_PID 0x9e64 /* USB-64 USB to 9 pin male*/
14187 ++#define RTSYSTEMS_USB_65_PID 0x9e65 /* USB-65 USB to 9 pin female null modem*/
14188 ++#define RTSYSTEMS_USB_92_PID 0x9e66 /* USB-92 USB to 12 pin plug*/
14189 ++#define RTSYSTEMS_USB_92D_PID 0x9e67 /* USB-92D USB to 12 pin plug data*/
14190 ++#define RTSYSTEMS_USB_W5R_PID 0x9e68 /* USB-W5R USB to 8 pin modular plug*/
14191 ++#define RTSYSTEMS_USB_A5R_PID 0x9e69 /* USB-A5R USB to 8 pin modular plug*/
14192 ++#define RTSYSTEMS_USB_PW1_PID 0x9e6A /* USB-PW1 USB to 8 pin modular plug*/
14193 +
14194 + /*
14195 + * Physik Instrumente
14196 +diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
14197 +index 7e99808..62b86a6 100644
14198 +--- a/drivers/usb/serial/mos7840.c
14199 ++++ b/drivers/usb/serial/mos7840.c
14200 +@@ -914,20 +914,20 @@ static int mos7840_open(struct tty_struct *tty, struct usb_serial_port *port)
14201 + status = mos7840_get_reg_sync(port, mos7840_port->SpRegOffset, &Data);
14202 + if (status < 0) {
14203 + dev_dbg(&port->dev, "Reading Spreg failed\n");
14204 +- return -1;
14205 ++ goto err;
14206 + }
14207 + Data |= 0x80;
14208 + status = mos7840_set_reg_sync(port, mos7840_port->SpRegOffset, Data);
14209 + if (status < 0) {
14210 + dev_dbg(&port->dev, "writing Spreg failed\n");
14211 +- return -1;
14212 ++ goto err;
14213 + }
14214 +
14215 + Data &= ~0x80;
14216 + status = mos7840_set_reg_sync(port, mos7840_port->SpRegOffset, Data);
14217 + if (status < 0) {
14218 + dev_dbg(&port->dev, "writing Spreg failed\n");
14219 +- return -1;
14220 ++ goto err;
14221 + }
14222 + /* End of block to be checked */
14223 +
14224 +@@ -936,7 +936,7 @@ static int mos7840_open(struct tty_struct *tty, struct usb_serial_port *port)
14225 + &Data);
14226 + if (status < 0) {
14227 + dev_dbg(&port->dev, "Reading Controlreg failed\n");
14228 +- return -1;
14229 ++ goto err;
14230 + }
14231 + Data |= 0x08; /* Driver done bit */
14232 + Data |= 0x20; /* rx_disable */
14233 +@@ -944,7 +944,7 @@ static int mos7840_open(struct tty_struct *tty, struct usb_serial_port *port)
14234 + mos7840_port->ControlRegOffset, Data);
14235 + if (status < 0) {
14236 + dev_dbg(&port->dev, "writing Controlreg failed\n");
14237 +- return -1;
14238 ++ goto err;
14239 + }
14240 + /* do register settings here */
14241 + /* Set all regs to the device default values. */
14242 +@@ -955,21 +955,21 @@ static int mos7840_open(struct tty_struct *tty, struct usb_serial_port *port)
14243 + status = mos7840_set_uart_reg(port, INTERRUPT_ENABLE_REGISTER, Data);
14244 + if (status < 0) {
14245 + dev_dbg(&port->dev, "disabling interrupts failed\n");
14246 +- return -1;
14247 ++ goto err;
14248 + }
14249 + /* Set FIFO_CONTROL_REGISTER to the default value */
14250 + Data = 0x00;
14251 + status = mos7840_set_uart_reg(port, FIFO_CONTROL_REGISTER, Data);
14252 + if (status < 0) {
14253 + dev_dbg(&port->dev, "Writing FIFO_CONTROL_REGISTER failed\n");
14254 +- return -1;
14255 ++ goto err;
14256 + }
14257 +
14258 + Data = 0xcf;
14259 + status = mos7840_set_uart_reg(port, FIFO_CONTROL_REGISTER, Data);
14260 + if (status < 0) {
14261 + dev_dbg(&port->dev, "Writing FIFO_CONTROL_REGISTER failed\n");
14262 +- return -1;
14263 ++ goto err;
14264 + }
14265 +
14266 + Data = 0x03;
14267 +@@ -1114,6 +1114,15 @@ static int mos7840_open(struct tty_struct *tty, struct usb_serial_port *port)
14268 + /* mos7840_change_port_settings(mos7840_port,old_termios); */
14269 +
14270 + return 0;
14271 ++err:
14272 ++ for (j = 0; j < NUM_URBS; ++j) {
14273 ++ urb = mos7840_port->write_urb_pool[j];
14274 ++ if (!urb)
14275 ++ continue;
14276 ++ kfree(urb->transfer_buffer);
14277 ++ usb_free_urb(urb);
14278 ++ }
14279 ++ return status;
14280 + }
14281 +
14282 + /*****************************************************************************
14283 +diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
14284 +index e581c25..01f79f1 100644
14285 +--- a/drivers/usb/serial/ti_usb_3410_5052.c
14286 ++++ b/drivers/usb/serial/ti_usb_3410_5052.c
14287 +@@ -371,7 +371,7 @@ static int ti_startup(struct usb_serial *serial)
14288 + usb_set_serial_data(serial, tdev);
14289 +
14290 + /* determine device type */
14291 +- if (usb_match_id(serial->interface, ti_id_table_3410))
14292 ++ if (serial->type == &ti_1port_device)
14293 + tdev->td_is_3410 = 1;
14294 + dev_dbg(&dev->dev, "%s - device type is %s\n", __func__,
14295 + tdev->td_is_3410 ? "3410" : "5052");
14296 +diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
14297 +index 179933528..c015f2c 100644
14298 +--- a/drivers/usb/storage/unusual_devs.h
14299 ++++ b/drivers/usb/storage/unusual_devs.h
14300 +@@ -665,6 +665,13 @@ UNUSUAL_DEV( 0x054c, 0x016a, 0x0000, 0x9999,
14301 + USB_SC_DEVICE, USB_PR_DEVICE, NULL,
14302 + US_FL_FIX_INQUIRY ),
14303 +
14304 ++/* Submitted by Ren Bigcren <bigcren.ren@××××××××××.com> */
14305 ++UNUSUAL_DEV( 0x054c, 0x02a5, 0x0100, 0x0100,
14306 ++ "Sony Corp.",
14307 ++ "MicroVault Flash Drive",
14308 ++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
14309 ++ US_FL_NO_READ_CAPACITY_16 ),
14310 ++
14311 + /* floppy reports multiple luns */
14312 + UNUSUAL_DEV( 0x055d, 0x2020, 0x0000, 0x0210,
14313 + "SAMSUNG",
14314 +diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
14315 +index 45c8efa..34924fb 100644
14316 +--- a/drivers/xen/evtchn.c
14317 ++++ b/drivers/xen/evtchn.c
14318 +@@ -377,18 +377,12 @@ static long evtchn_ioctl(struct file *file,
14319 + if (unbind.port >= NR_EVENT_CHANNELS)
14320 + break;
14321 +
14322 +- spin_lock_irq(&port_user_lock);
14323 +-
14324 + rc = -ENOTCONN;
14325 +- if (get_port_user(unbind.port) != u) {
14326 +- spin_unlock_irq(&port_user_lock);
14327 ++ if (get_port_user(unbind.port) != u)
14328 + break;
14329 +- }
14330 +
14331 + disable_irq(irq_from_evtchn(unbind.port));
14332 +
14333 +- spin_unlock_irq(&port_user_lock);
14334 +-
14335 + evtchn_unbind_from_user(u, unbind.port);
14336 +
14337 + rc = 0;
14338 +@@ -488,26 +482,15 @@ static int evtchn_release(struct inode *inode, struct file *filp)
14339 + int i;
14340 + struct per_user_data *u = filp->private_data;
14341 +
14342 +- spin_lock_irq(&port_user_lock);
14343 +-
14344 +- free_page((unsigned long)u->ring);
14345 +-
14346 + for (i = 0; i < NR_EVENT_CHANNELS; i++) {
14347 + if (get_port_user(i) != u)
14348 + continue;
14349 +
14350 + disable_irq(irq_from_evtchn(i));
14351 +- }
14352 +-
14353 +- spin_unlock_irq(&port_user_lock);
14354 +-
14355 +- for (i = 0; i < NR_EVENT_CHANNELS; i++) {
14356 +- if (get_port_user(i) != u)
14357 +- continue;
14358 +-
14359 + evtchn_unbind_from_user(get_port_user(i), i);
14360 + }
14361 +
14362 ++ free_page((unsigned long)u->ring);
14363 + kfree(u->name);
14364 + kfree(u);
14365 +
14366 +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
14367 +index df472ab..0b272d0 100644
14368 +--- a/fs/btrfs/extent-tree.c
14369 ++++ b/fs/btrfs/extent-tree.c
14370 +@@ -7298,6 +7298,7 @@ int btrfs_drop_snapshot(struct btrfs_root *root,
14371 + int err = 0;
14372 + int ret;
14373 + int level;
14374 ++ bool root_dropped = false;
14375 +
14376 + path = btrfs_alloc_path();
14377 + if (!path) {
14378 +@@ -7355,6 +7356,7 @@ int btrfs_drop_snapshot(struct btrfs_root *root,
14379 + while (1) {
14380 + btrfs_tree_lock(path->nodes[level]);
14381 + btrfs_set_lock_blocking(path->nodes[level]);
14382 ++ path->locks[level] = BTRFS_WRITE_LOCK_BLOCKING;
14383 +
14384 + ret = btrfs_lookup_extent_info(trans, root,
14385 + path->nodes[level]->start,
14386 +@@ -7370,6 +7372,7 @@ int btrfs_drop_snapshot(struct btrfs_root *root,
14387 + break;
14388 +
14389 + btrfs_tree_unlock(path->nodes[level]);
14390 ++ path->locks[level] = 0;
14391 + WARN_ON(wc->refs[level] != 1);
14392 + level--;
14393 + }
14394 +@@ -7471,12 +7474,22 @@ int btrfs_drop_snapshot(struct btrfs_root *root,
14395 + free_extent_buffer(root->commit_root);
14396 + kfree(root);
14397 + }
14398 ++ root_dropped = true;
14399 + out_end_trans:
14400 + btrfs_end_transaction_throttle(trans, tree_root);
14401 + out_free:
14402 + kfree(wc);
14403 + btrfs_free_path(path);
14404 + out:
14405 ++ /*
14406 ++ * So if we need to stop dropping the snapshot for whatever reason we
14407 ++ * need to make sure to add it back to the dead root list so that we
14408 ++ * keep trying to do the work later. This also cleans up roots if we
14409 ++ * don't have it in the radix (like when we recover after a power fail
14410 ++ * or unmount) so we don't leak memory.
14411 ++ */
14412 ++ if (root_dropped == false)
14413 ++ btrfs_add_dead_root(root);
14414 + if (err)
14415 + btrfs_std_error(root->fs_info, err);
14416 + return err;
14417 +diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
14418 +index 79bd479..eb84c2d 100644
14419 +--- a/fs/btrfs/scrub.c
14420 ++++ b/fs/btrfs/scrub.c
14421 +@@ -2501,7 +2501,7 @@ again:
14422 + ret = scrub_extent(sctx, extent_logical, extent_len,
14423 + extent_physical, extent_dev, flags,
14424 + generation, extent_mirror_num,
14425 +- extent_physical);
14426 ++ extent_logical - logical + physical);
14427 + if (ret)
14428 + goto out;
14429 +
14430 +diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
14431 +index 84ce601..baf149a 100644
14432 +--- a/fs/nfsd/vfs.c
14433 ++++ b/fs/nfsd/vfs.c
14434 +@@ -802,9 +802,10 @@ nfsd_open(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type,
14435 + flags = O_WRONLY|O_LARGEFILE;
14436 + }
14437 + *filp = dentry_open(&path, flags, current_cred());
14438 +- if (IS_ERR(*filp))
14439 ++ if (IS_ERR(*filp)) {
14440 + host_err = PTR_ERR(*filp);
14441 +- else {
14442 ++ *filp = NULL;
14443 ++ } else {
14444 + host_err = ima_file_check(*filp, may_flags);
14445 +
14446 + if (may_flags & NFSD_MAY_64BIT_COOKIE)
14447 +diff --git a/fs/super.c b/fs/super.c
14448 +index 7465d43..68307c0 100644
14449 +--- a/fs/super.c
14450 ++++ b/fs/super.c
14451 +@@ -336,19 +336,19 @@ EXPORT_SYMBOL(deactivate_super);
14452 + * and want to turn it into a full-blown active reference. grab_super()
14453 + * is called with sb_lock held and drops it. Returns 1 in case of
14454 + * success, 0 if we had failed (superblock contents was already dead or
14455 +- * dying when grab_super() had been called).
14456 ++ * dying when grab_super() had been called). Note that this is only
14457 ++ * called for superblocks not in rundown mode (== ones still on ->fs_supers
14458 ++ * of their type), so increment of ->s_count is OK here.
14459 + */
14460 + static int grab_super(struct super_block *s) __releases(sb_lock)
14461 + {
14462 +- if (atomic_inc_not_zero(&s->s_active)) {
14463 +- spin_unlock(&sb_lock);
14464 +- return 1;
14465 +- }
14466 +- /* it's going away */
14467 + s->s_count++;
14468 + spin_unlock(&sb_lock);
14469 +- /* wait for it to die */
14470 + down_write(&s->s_umount);
14471 ++ if ((s->s_flags & MS_BORN) && atomic_inc_not_zero(&s->s_active)) {
14472 ++ put_super(s);
14473 ++ return 1;
14474 ++ }
14475 + up_write(&s->s_umount);
14476 + put_super(s);
14477 + return 0;
14478 +@@ -463,11 +463,6 @@ retry:
14479 + destroy_super(s);
14480 + s = NULL;
14481 + }
14482 +- down_write(&old->s_umount);
14483 +- if (unlikely(!(old->s_flags & MS_BORN))) {
14484 +- deactivate_locked_super(old);
14485 +- goto retry;
14486 +- }
14487 + return old;
14488 + }
14489 + }
14490 +@@ -660,10 +655,10 @@ restart:
14491 + if (hlist_unhashed(&sb->s_instances))
14492 + continue;
14493 + if (sb->s_bdev == bdev) {
14494 +- if (grab_super(sb)) /* drops sb_lock */
14495 +- return sb;
14496 +- else
14497 ++ if (!grab_super(sb))
14498 + goto restart;
14499 ++ up_write(&sb->s_umount);
14500 ++ return sb;
14501 + }
14502 + }
14503 + spin_unlock(&sb_lock);
14504 +diff --git a/include/linux/firewire.h b/include/linux/firewire.h
14505 +index 191501a..217e4b4 100644
14506 +--- a/include/linux/firewire.h
14507 ++++ b/include/linux/firewire.h
14508 +@@ -434,6 +434,7 @@ struct fw_iso_context {
14509 + int type;
14510 + int channel;
14511 + int speed;
14512 ++ bool drop_overflow_headers;
14513 + size_t header_size;
14514 + union {
14515 + fw_iso_callback_t sc;
14516 +diff --git a/include/target/iscsi/iscsi_transport.h b/include/target/iscsi/iscsi_transport.h
14517 +index 23a87d0..c5aade5 100644
14518 +--- a/include/target/iscsi/iscsi_transport.h
14519 ++++ b/include/target/iscsi/iscsi_transport.h
14520 +@@ -34,8 +34,6 @@ extern void iscsit_put_transport(struct iscsit_transport *);
14521 + /*
14522 + * From iscsi_target.c
14523 + */
14524 +-extern int iscsit_add_reject_from_cmd(u8, int, int, unsigned char *,
14525 +- struct iscsi_cmd *);
14526 + extern int iscsit_setup_scsi_cmd(struct iscsi_conn *, struct iscsi_cmd *,
14527 + unsigned char *);
14528 + extern void iscsit_set_unsoliticed_dataout(struct iscsi_cmd *);
14529 +@@ -67,6 +65,10 @@ extern int iscsit_logout_post_handler(struct iscsi_cmd *, struct iscsi_conn *);
14530 + */
14531 + extern void iscsit_increment_maxcmdsn(struct iscsi_cmd *, struct iscsi_session *);
14532 + /*
14533 ++ * From iscsi_target_erl0.c
14534 ++ */
14535 ++extern void iscsit_cause_connection_reinstatement(struct iscsi_conn *, int);
14536 ++/*
14537 + * From iscsi_target_erl1.c
14538 + */
14539 + extern void iscsit_stop_dataout_timer(struct iscsi_cmd *);
14540 +@@ -80,4 +82,5 @@ extern int iscsit_tmr_post_handler(struct iscsi_cmd *, struct iscsi_conn *);
14541 + * From iscsi_target_util.c
14542 + */
14543 + extern struct iscsi_cmd *iscsit_allocate_cmd(struct iscsi_conn *, gfp_t);
14544 +-extern int iscsit_sequence_cmd(struct iscsi_conn *, struct iscsi_cmd *, __be32);
14545 ++extern int iscsit_sequence_cmd(struct iscsi_conn *, struct iscsi_cmd *,
14546 ++ unsigned char *, __be32);
14547 +diff --git a/include/uapi/linux/firewire-cdev.h b/include/uapi/linux/firewire-cdev.h
14548 +index d500369..1db453e 100644
14549 +--- a/include/uapi/linux/firewire-cdev.h
14550 ++++ b/include/uapi/linux/firewire-cdev.h
14551 +@@ -215,8 +215,8 @@ struct fw_cdev_event_request2 {
14552 + * with the %FW_CDEV_ISO_INTERRUPT bit set, when explicitly requested with
14553 + * %FW_CDEV_IOC_FLUSH_ISO, or when there have been so many completed packets
14554 + * without the interrupt bit set that the kernel's internal buffer for @header
14555 +- * is about to overflow. (In the last case, kernels with ABI version < 5 drop
14556 +- * header data up to the next interrupt packet.)
14557 ++ * is about to overflow. (In the last case, ABI versions < 5 drop header data
14558 ++ * up to the next interrupt packet.)
14559 + *
14560 + * Isochronous transmit events (context type %FW_CDEV_ISO_CONTEXT_TRANSMIT):
14561 + *
14562 +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
14563 +index 0b936d8..f7bc3ce 100644
14564 +--- a/kernel/trace/trace.c
14565 ++++ b/kernel/trace/trace.c
14566 +@@ -1163,18 +1163,17 @@ void tracing_reset_current(int cpu)
14567 + tracing_reset(&global_trace.trace_buffer, cpu);
14568 + }
14569 +
14570 ++/* Must have trace_types_lock held */
14571 + void tracing_reset_all_online_cpus(void)
14572 + {
14573 + struct trace_array *tr;
14574 +
14575 +- mutex_lock(&trace_types_lock);
14576 + list_for_each_entry(tr, &ftrace_trace_arrays, list) {
14577 + tracing_reset_online_cpus(&tr->trace_buffer);
14578 + #ifdef CONFIG_TRACER_MAX_TRACE
14579 + tracing_reset_online_cpus(&tr->max_buffer);
14580 + #endif
14581 + }
14582 +- mutex_unlock(&trace_types_lock);
14583 + }
14584 +
14585 + #define SAVED_CMDLINES 128
14586 +@@ -2956,7 +2955,6 @@ static int tracing_release(struct inode *inode, struct file *file)
14587 +
14588 + iter = m->private;
14589 + tr = iter->tr;
14590 +- trace_array_put(tr);
14591 +
14592 + mutex_lock(&trace_types_lock);
14593 +
14594 +@@ -2971,6 +2969,9 @@ static int tracing_release(struct inode *inode, struct file *file)
14595 + if (!iter->snapshot)
14596 + /* reenable tracing if it was previously enabled */
14597 + tracing_start_tr(tr);
14598 ++
14599 ++ __trace_array_put(tr);
14600 ++
14601 + mutex_unlock(&trace_types_lock);
14602 +
14603 + mutex_destroy(&iter->mutex);
14604 +@@ -3395,6 +3396,7 @@ tracing_trace_options_write(struct file *filp, const char __user *ubuf,
14605 + static int tracing_trace_options_open(struct inode *inode, struct file *file)
14606 + {
14607 + struct trace_array *tr = inode->i_private;
14608 ++ int ret;
14609 +
14610 + if (tracing_disabled)
14611 + return -ENODEV;
14612 +@@ -3402,7 +3404,11 @@ static int tracing_trace_options_open(struct inode *inode, struct file *file)
14613 + if (trace_array_get(tr) < 0)
14614 + return -ENODEV;
14615 +
14616 +- return single_open(file, tracing_trace_options_show, inode->i_private);
14617 ++ ret = single_open(file, tracing_trace_options_show, inode->i_private);
14618 ++ if (ret < 0)
14619 ++ trace_array_put(tr);
14620 ++
14621 ++ return ret;
14622 + }
14623 +
14624 + static const struct file_operations tracing_iter_fops = {
14625 +@@ -3906,6 +3912,7 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp)
14626 + iter = kzalloc(sizeof(*iter), GFP_KERNEL);
14627 + if (!iter) {
14628 + ret = -ENOMEM;
14629 ++ __trace_array_put(tr);
14630 + goto out;
14631 + }
14632 +
14633 +@@ -4652,21 +4659,24 @@ static int tracing_snapshot_open(struct inode *inode, struct file *file)
14634 + ret = PTR_ERR(iter);
14635 + } else {
14636 + /* Writes still need the seq_file to hold the private data */
14637 ++ ret = -ENOMEM;
14638 + m = kzalloc(sizeof(*m), GFP_KERNEL);
14639 + if (!m)
14640 +- return -ENOMEM;
14641 ++ goto out;
14642 + iter = kzalloc(sizeof(*iter), GFP_KERNEL);
14643 + if (!iter) {
14644 + kfree(m);
14645 +- return -ENOMEM;
14646 ++ goto out;
14647 + }
14648 ++ ret = 0;
14649 ++
14650 + iter->tr = tr;
14651 + iter->trace_buffer = &tc->tr->max_buffer;
14652 + iter->cpu_file = tc->cpu;
14653 + m->private = iter;
14654 + file->private_data = m;
14655 + }
14656 +-
14657 ++out:
14658 + if (ret < 0)
14659 + trace_array_put(tr);
14660 +
14661 +@@ -4896,8 +4906,6 @@ static int tracing_buffers_open(struct inode *inode, struct file *filp)
14662 +
14663 + mutex_lock(&trace_types_lock);
14664 +
14665 +- tr->ref++;
14666 +-
14667 + info->iter.tr = tr;
14668 + info->iter.cpu_file = tc->cpu;
14669 + info->iter.trace = tr->current_trace;
14670 +@@ -5276,9 +5284,10 @@ tracing_stats_read(struct file *filp, char __user *ubuf,
14671 + }
14672 +
14673 + static const struct file_operations tracing_stats_fops = {
14674 +- .open = tracing_open_generic,
14675 ++ .open = tracing_open_generic_tc,
14676 + .read = tracing_stats_read,
14677 + .llseek = generic_file_llseek,
14678 ++ .release = tracing_release_generic_tc,
14679 + };
14680 +
14681 + #ifdef CONFIG_DYNAMIC_FTRACE
14682 +@@ -5926,8 +5935,10 @@ static int new_instance_create(const char *name)
14683 + goto out_free_tr;
14684 +
14685 + ret = event_trace_add_tracer(tr->dir, tr);
14686 +- if (ret)
14687 ++ if (ret) {
14688 ++ debugfs_remove_recursive(tr->dir);
14689 + goto out_free_tr;
14690 ++ }
14691 +
14692 + init_tracer_debugfs(tr, tr->dir);
14693 +
14694 +diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
14695 +index 6dfd48b..6953263 100644
14696 +--- a/kernel/trace/trace_events.c
14697 ++++ b/kernel/trace/trace_events.c
14698 +@@ -1221,6 +1221,7 @@ show_header(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
14699 +
14700 + static int ftrace_event_avail_open(struct inode *inode, struct file *file);
14701 + static int ftrace_event_set_open(struct inode *inode, struct file *file);
14702 ++static int ftrace_event_release(struct inode *inode, struct file *file);
14703 +
14704 + static const struct seq_operations show_event_seq_ops = {
14705 + .start = t_start,
14706 +@@ -1248,7 +1249,7 @@ static const struct file_operations ftrace_set_event_fops = {
14707 + .read = seq_read,
14708 + .write = ftrace_event_write,
14709 + .llseek = seq_lseek,
14710 +- .release = seq_release,
14711 ++ .release = ftrace_event_release,
14712 + };
14713 +
14714 + static const struct file_operations ftrace_enable_fops = {
14715 +@@ -1326,6 +1327,15 @@ ftrace_event_open(struct inode *inode, struct file *file,
14716 + return ret;
14717 + }
14718 +
14719 ++static int ftrace_event_release(struct inode *inode, struct file *file)
14720 ++{
14721 ++ struct trace_array *tr = inode->i_private;
14722 ++
14723 ++ trace_array_put(tr);
14724 ++
14725 ++ return seq_release(inode, file);
14726 ++}
14727 ++
14728 + static int
14729 + ftrace_event_avail_open(struct inode *inode, struct file *file)
14730 + {
14731 +@@ -1339,12 +1349,19 @@ ftrace_event_set_open(struct inode *inode, struct file *file)
14732 + {
14733 + const struct seq_operations *seq_ops = &show_set_event_seq_ops;
14734 + struct trace_array *tr = inode->i_private;
14735 ++ int ret;
14736 ++
14737 ++ if (trace_array_get(tr) < 0)
14738 ++ return -ENODEV;
14739 +
14740 + if ((file->f_mode & FMODE_WRITE) &&
14741 + (file->f_flags & O_TRUNC))
14742 + ftrace_clear_events(tr);
14743 +
14744 +- return ftrace_event_open(inode, file, seq_ops);
14745 ++ ret = ftrace_event_open(inode, file, seq_ops);
14746 ++ if (ret < 0)
14747 ++ trace_array_put(tr);
14748 ++ return ret;
14749 + }
14750 +
14751 + static struct event_subsystem *
14752 +diff --git a/mm/memory.c b/mm/memory.c
14753 +index 61a262b..5e50800 100644
14754 +--- a/mm/memory.c
14755 ++++ b/mm/memory.c
14756 +@@ -1101,6 +1101,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
14757 + spinlock_t *ptl;
14758 + pte_t *start_pte;
14759 + pte_t *pte;
14760 ++ unsigned long range_start = addr;
14761 +
14762 + again:
14763 + init_rss_vec(rss);
14764 +@@ -1206,12 +1207,14 @@ again:
14765 + force_flush = 0;
14766 +
14767 + #ifdef HAVE_GENERIC_MMU_GATHER
14768 +- tlb->start = addr;
14769 +- tlb->end = end;
14770 ++ tlb->start = range_start;
14771 ++ tlb->end = addr;
14772 + #endif
14773 + tlb_flush_mmu(tlb);
14774 +- if (addr != end)
14775 ++ if (addr != end) {
14776 ++ range_start = addr;
14777 + goto again;
14778 ++ }
14779 + }
14780 +
14781 + return addr;
14782 +diff --git a/mm/mempolicy.c b/mm/mempolicy.c
14783 +index 7431001..4baf12e 100644
14784 +--- a/mm/mempolicy.c
14785 ++++ b/mm/mempolicy.c
14786 +@@ -732,7 +732,10 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
14787 + if (prev) {
14788 + vma = prev;
14789 + next = vma->vm_next;
14790 +- continue;
14791 ++ if (mpol_equal(vma_policy(vma), new_pol))
14792 ++ continue;
14793 ++ /* vma_merge() joined vma && vma->next, case 8 */
14794 ++ goto replace;
14795 + }
14796 + if (vma->vm_start != vmstart) {
14797 + err = split_vma(vma->vm_mm, vma, vmstart, 1);
14798 +@@ -744,6 +747,7 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
14799 + if (err)
14800 + goto out;
14801 + }
14802 ++ replace:
14803 + err = vma_replace_policy(vma, new_pol);
14804 + if (err)
14805 + goto out;
14806 +diff --git a/mm/mmap.c b/mm/mmap.c
14807 +index f681e18..7dbe397 100644
14808 +--- a/mm/mmap.c
14809 ++++ b/mm/mmap.c
14810 +@@ -865,7 +865,7 @@ again: remove_next = 1 + (end > next->vm_end);
14811 + if (next->anon_vma)
14812 + anon_vma_merge(vma, next);
14813 + mm->map_count--;
14814 +- vma_set_policy(vma, vma_policy(next));
14815 ++ mpol_put(vma_policy(next));
14816 + kmem_cache_free(vm_area_cachep, next);
14817 + /*
14818 + * In mprotect's case 6 (see comments on vma_merge),
14819 +diff --git a/net/sunrpc/xprtrdma/svc_rdma_marshal.c b/net/sunrpc/xprtrdma/svc_rdma_marshal.c
14820 +index 8d2eddd..65b1462 100644
14821 +--- a/net/sunrpc/xprtrdma/svc_rdma_marshal.c
14822 ++++ b/net/sunrpc/xprtrdma/svc_rdma_marshal.c
14823 +@@ -98,6 +98,7 @@ void svc_rdma_rcl_chunk_counts(struct rpcrdma_read_chunk *ch,
14824 + */
14825 + static u32 *decode_write_list(u32 *va, u32 *vaend)
14826 + {
14827 ++ unsigned long start, end;
14828 + int nchunks;
14829 +
14830 + struct rpcrdma_write_array *ary =
14831 +@@ -113,9 +114,12 @@ static u32 *decode_write_list(u32 *va, u32 *vaend)
14832 + return NULL;
14833 + }
14834 + nchunks = ntohl(ary->wc_nchunks);
14835 +- if (((unsigned long)&ary->wc_array[0] +
14836 +- (sizeof(struct rpcrdma_write_chunk) * nchunks)) >
14837 +- (unsigned long)vaend) {
14838 ++
14839 ++ start = (unsigned long)&ary->wc_array[0];
14840 ++ end = (unsigned long)vaend;
14841 ++ if (nchunks < 0 ||
14842 ++ nchunks > (SIZE_MAX - start) / sizeof(struct rpcrdma_write_chunk) ||
14843 ++ (start + (sizeof(struct rpcrdma_write_chunk) * nchunks)) > end) {
14844 + dprintk("svcrdma: ary=%p, wc_nchunks=%d, vaend=%p\n",
14845 + ary, nchunks, vaend);
14846 + return NULL;
14847 +@@ -129,6 +133,7 @@ static u32 *decode_write_list(u32 *va, u32 *vaend)
14848 +
14849 + static u32 *decode_reply_array(u32 *va, u32 *vaend)
14850 + {
14851 ++ unsigned long start, end;
14852 + int nchunks;
14853 + struct rpcrdma_write_array *ary =
14854 + (struct rpcrdma_write_array *)va;
14855 +@@ -143,9 +148,12 @@ static u32 *decode_reply_array(u32 *va, u32 *vaend)
14856 + return NULL;
14857 + }
14858 + nchunks = ntohl(ary->wc_nchunks);
14859 +- if (((unsigned long)&ary->wc_array[0] +
14860 +- (sizeof(struct rpcrdma_write_chunk) * nchunks)) >
14861 +- (unsigned long)vaend) {
14862 ++
14863 ++ start = (unsigned long)&ary->wc_array[0];
14864 ++ end = (unsigned long)vaend;
14865 ++ if (nchunks < 0 ||
14866 ++ nchunks > (SIZE_MAX - start) / sizeof(struct rpcrdma_write_chunk) ||
14867 ++ (start + (sizeof(struct rpcrdma_write_chunk) * nchunks)) > end) {
14868 + dprintk("svcrdma: ary=%p, wc_nchunks=%d, vaend=%p\n",
14869 + ary, nchunks, vaend);
14870 + return NULL;
14871 +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
14872 +index 403010c..051c03d 100644
14873 +--- a/sound/pci/hda/patch_realtek.c
14874 ++++ b/sound/pci/hda/patch_realtek.c
14875 +@@ -3495,9 +3495,12 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
14876 + SND_PCI_QUIRK(0x1028, 0x05f5, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
14877 + SND_PCI_QUIRK(0x1028, 0x05f6, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
14878 + SND_PCI_QUIRK(0x1028, 0x05f8, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
14879 ++ SND_PCI_QUIRK(0x1028, 0x05f9, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
14880 ++ SND_PCI_QUIRK(0x1028, 0x05fb, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
14881 + SND_PCI_QUIRK(0x1028, 0x0606, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
14882 + SND_PCI_QUIRK(0x1028, 0x0608, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
14883 + SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
14884 ++ SND_PCI_QUIRK(0x1028, 0x0613, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
14885 + SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
14886 + SND_PCI_QUIRK(0x103c, 0x18e6, "HP", ALC269_FIXUP_HP_GPIO_LED),
14887 + SND_PCI_QUIRK(0x103c, 0x1973, "HP Pavilion", ALC269_FIXUP_HP_MUTE_LED_MIC1),
14888 +diff --git a/sound/soc/codecs/max98088.c b/sound/soc/codecs/max98088.c
14889 +index 3eeada5..566a367 100644
14890 +--- a/sound/soc/codecs/max98088.c
14891 ++++ b/sound/soc/codecs/max98088.c
14892 +@@ -1612,7 +1612,7 @@ static int max98088_dai2_digital_mute(struct snd_soc_dai *codec_dai, int mute)
14893 +
14894 + static void max98088_sync_cache(struct snd_soc_codec *codec)
14895 + {
14896 +- u16 *reg_cache = codec->reg_cache;
14897 ++ u8 *reg_cache = codec->reg_cache;
14898 + int i;
14899 +
14900 + if (!codec->cache_sync)
14901 +diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c
14902 +index 92bbfec..ea47938 100644
14903 +--- a/sound/soc/codecs/sgtl5000.c
14904 ++++ b/sound/soc/codecs/sgtl5000.c
14905 +@@ -37,7 +37,7 @@
14906 + static const u16 sgtl5000_regs[SGTL5000_MAX_REG_OFFSET] = {
14907 + [SGTL5000_CHIP_CLK_CTRL] = 0x0008,
14908 + [SGTL5000_CHIP_I2S_CTRL] = 0x0010,
14909 +- [SGTL5000_CHIP_SSS_CTRL] = 0x0008,
14910 ++ [SGTL5000_CHIP_SSS_CTRL] = 0x0010,
14911 + [SGTL5000_CHIP_DAC_VOL] = 0x3c3c,
14912 + [SGTL5000_CHIP_PAD_STRENGTH] = 0x015f,
14913 + [SGTL5000_CHIP_ANA_HP_CTRL] = 0x1818,
14914 +diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c
14915 +index e971028..730dd0c 100644
14916 +--- a/sound/soc/codecs/wm8962.c
14917 ++++ b/sound/soc/codecs/wm8962.c
14918 +@@ -1600,7 +1600,6 @@ static int wm8962_put_hp_sw(struct snd_kcontrol *kcontrol,
14919 + struct snd_ctl_elem_value *ucontrol)
14920 + {
14921 + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
14922 +- u16 *reg_cache = codec->reg_cache;
14923 + int ret;
14924 +
14925 + /* Apply the update (if any) */
14926 +@@ -1609,16 +1608,19 @@ static int wm8962_put_hp_sw(struct snd_kcontrol *kcontrol,
14927 + return 0;
14928 +
14929 + /* If the left PGA is enabled hit that VU bit... */
14930 +- if (snd_soc_read(codec, WM8962_PWR_MGMT_2) & WM8962_HPOUTL_PGA_ENA)
14931 +- return snd_soc_write(codec, WM8962_HPOUTL_VOLUME,
14932 +- reg_cache[WM8962_HPOUTL_VOLUME]);
14933 ++ ret = snd_soc_read(codec, WM8962_PWR_MGMT_2);
14934 ++ if (ret & WM8962_HPOUTL_PGA_ENA) {
14935 ++ snd_soc_write(codec, WM8962_HPOUTL_VOLUME,
14936 ++ snd_soc_read(codec, WM8962_HPOUTL_VOLUME));
14937 ++ return 1;
14938 ++ }
14939 +
14940 + /* ...otherwise the right. The VU is stereo. */
14941 +- if (snd_soc_read(codec, WM8962_PWR_MGMT_2) & WM8962_HPOUTR_PGA_ENA)
14942 +- return snd_soc_write(codec, WM8962_HPOUTR_VOLUME,
14943 +- reg_cache[WM8962_HPOUTR_VOLUME]);
14944 ++ if (ret & WM8962_HPOUTR_PGA_ENA)
14945 ++ snd_soc_write(codec, WM8962_HPOUTR_VOLUME,
14946 ++ snd_soc_read(codec, WM8962_HPOUTR_VOLUME));
14947 +
14948 +- return 0;
14949 ++ return 1;
14950 + }
14951 +
14952 + /* The VU bits for the speakers are in a different register to the mute
14953 +@@ -3374,7 +3376,6 @@ static int wm8962_probe(struct snd_soc_codec *codec)
14954 + int ret;
14955 + struct wm8962_priv *wm8962 = snd_soc_codec_get_drvdata(codec);
14956 + struct wm8962_pdata *pdata = dev_get_platdata(codec->dev);
14957 +- u16 *reg_cache = codec->reg_cache;
14958 + int i, trigger, irq_pol;
14959 + bool dmicclk, dmicdat;
14960 +
14961 +@@ -3432,8 +3433,9 @@ static int wm8962_probe(struct snd_soc_codec *codec)
14962 +
14963 + /* Put the speakers into mono mode? */
14964 + if (pdata->spk_mono)
14965 +- reg_cache[WM8962_CLASS_D_CONTROL_2]
14966 +- |= WM8962_SPK_MONO;
14967 ++ snd_soc_update_bits(codec, WM8962_CLASS_D_CONTROL_2,
14968 ++ WM8962_SPK_MONO_MASK, WM8962_SPK_MONO);
14969 ++
14970 +
14971 + /* Micbias setup, detection enable and detection
14972 + * threasholds. */
14973 +diff --git a/sound/soc/tegra/tegra20_ac97.c b/sound/soc/tegra/tegra20_ac97.c
14974 +index 2f70ea7..05676c0 100644
14975 +--- a/sound/soc/tegra/tegra20_ac97.c
14976 ++++ b/sound/soc/tegra/tegra20_ac97.c
14977 +@@ -399,9 +399,9 @@ static int tegra20_ac97_platform_probe(struct platform_device *pdev)
14978 + ac97->capture_dma_data.slave_id = of_dma[1];
14979 +
14980 + ac97->playback_dma_data.addr = mem->start + TEGRA20_AC97_FIFO_TX1;
14981 +- ac97->capture_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
14982 +- ac97->capture_dma_data.maxburst = 4;
14983 +- ac97->capture_dma_data.slave_id = of_dma[0];
14984 ++ ac97->playback_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
14985 ++ ac97->playback_dma_data.maxburst = 4;
14986 ++ ac97->playback_dma_data.slave_id = of_dma[1];
14987 +
14988 + ret = snd_soc_register_component(&pdev->dev, &tegra20_ac97_component,
14989 + &tegra20_ac97_dai, 1);
14990 +diff --git a/sound/soc/tegra/tegra20_spdif.c b/sound/soc/tegra/tegra20_spdif.c
14991 +index 5eaa12c..551b3c9 100644
14992 +--- a/sound/soc/tegra/tegra20_spdif.c
14993 ++++ b/sound/soc/tegra/tegra20_spdif.c
14994 +@@ -323,8 +323,8 @@ static int tegra20_spdif_platform_probe(struct platform_device *pdev)
14995 + }
14996 +
14997 + spdif->playback_dma_data.addr = mem->start + TEGRA20_SPDIF_DATA_OUT;
14998 +- spdif->capture_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
14999 +- spdif->capture_dma_data.maxburst = 4;
15000 ++ spdif->playback_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
15001 ++ spdif->playback_dma_data.maxburst = 4;
15002 + spdif->playback_dma_data.slave_id = dmareq->start;
15003 +
15004 + pm_runtime_enable(&pdev->dev);
15005 +diff --git a/tools/hv/hv_kvp_daemon.c b/tools/hv/hv_kvp_daemon.c
15006 +index 5a1f648..274e178 100644
15007 +--- a/tools/hv/hv_kvp_daemon.c
15008 ++++ b/tools/hv/hv_kvp_daemon.c
15009 +@@ -1016,9 +1016,10 @@ kvp_get_ip_info(int family, char *if_name, int op,
15010 +
15011 + if (sn_offset == 0)
15012 + strcpy(sn_str, cidr_mask);
15013 +- else
15014 ++ else {
15015 ++ strcat((char *)ip_buffer->sub_net, ";");
15016 + strcat(sn_str, cidr_mask);
15017 +- strcat((char *)ip_buffer->sub_net, ";");
15018 ++ }
15019 + sn_offset += strlen(sn_str) + 1;
15020 + }
15021 +
15022 +diff --git a/tools/perf/config/utilities.mak b/tools/perf/config/utilities.mak
15023 +index 8ef3bd3..3e89719 100644
15024 +--- a/tools/perf/config/utilities.mak
15025 ++++ b/tools/perf/config/utilities.mak
15026 +@@ -173,7 +173,7 @@ _ge-abspath = $(if $(is-executable),$(1))
15027 + # Usage: absolute-executable-path-or-empty = $(call get-executable-or-default,variable,default)
15028 + #
15029 + define get-executable-or-default
15030 +-$(if $($(1)),$(call _ge_attempt,$($(1)),$(1)),$(call _ge_attempt,$(2),$(1)))
15031 ++$(if $($(1)),$(call _ge_attempt,$($(1)),$(1)),$(call _ge_attempt,$(2)))
15032 + endef
15033 + _ge_attempt = $(if $(get-executable),$(get-executable),$(_gea_warn)$(call _gea_err,$(2)))
15034 + _gea_warn = $(warning The path '$(1)' is not executable.)
15035
15036 Added: genpatches-2.6/trunk/3.10.7/1005_linux-3.10.6.patch
15037 ===================================================================
15038 --- genpatches-2.6/trunk/3.10.7/1005_linux-3.10.6.patch (rev 0)
15039 +++ genpatches-2.6/trunk/3.10.7/1005_linux-3.10.6.patch 2013-08-29 12:09:12 UTC (rev 2497)
15040 @@ -0,0 +1,4439 @@
15041 +diff --git a/Makefile b/Makefile
15042 +index f8349d0..fd92ffb 100644
15043 +--- a/Makefile
15044 ++++ b/Makefile
15045 +@@ -1,8 +1,8 @@
15046 + VERSION = 3
15047 + PATCHLEVEL = 10
15048 +-SUBLEVEL = 5
15049 ++SUBLEVEL = 6
15050 + EXTRAVERSION =
15051 +-NAME = Unicycling Gorilla
15052 ++NAME = TOSSUG Baby Fish
15053 +
15054 + # *DOCUMENTATION*
15055 + # To see a list of typical targets execute "make help"
15056 +diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
15057 +index 136f263..18a9f5e 100644
15058 +--- a/arch/arm/Kconfig
15059 ++++ b/arch/arm/Kconfig
15060 +@@ -19,7 +19,6 @@ config ARM
15061 + select GENERIC_STRNCPY_FROM_USER
15062 + select GENERIC_STRNLEN_USER
15063 + select HARDIRQS_SW_RESEND
15064 +- select HAVE_AOUT
15065 + select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
15066 + select HAVE_ARCH_KGDB
15067 + select HAVE_ARCH_SECCOMP_FILTER
15068 +@@ -213,7 +212,8 @@ config VECTORS_BASE
15069 + default DRAM_BASE if REMAP_VECTORS_TO_RAM
15070 + default 0x00000000
15071 + help
15072 +- The base address of exception vectors.
15073 ++ The base address of exception vectors. This must be two pages
15074 ++ in size.
15075 +
15076 + config ARM_PATCH_PHYS_VIRT
15077 + bool "Patch physical to virtual translations at runtime" if EMBEDDED
15078 +diff --git a/arch/arm/include/asm/a.out-core.h b/arch/arm/include/asm/a.out-core.h
15079 +deleted file mode 100644
15080 +index 92f10cb..0000000
15081 +--- a/arch/arm/include/asm/a.out-core.h
15082 ++++ /dev/null
15083 +@@ -1,45 +0,0 @@
15084 +-/* a.out coredump register dumper
15085 +- *
15086 +- * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
15087 +- * Written by David Howells (dhowells@××××××.com)
15088 +- *
15089 +- * This program is free software; you can redistribute it and/or
15090 +- * modify it under the terms of the GNU General Public Licence
15091 +- * as published by the Free Software Foundation; either version
15092 +- * 2 of the Licence, or (at your option) any later version.
15093 +- */
15094 +-
15095 +-#ifndef _ASM_A_OUT_CORE_H
15096 +-#define _ASM_A_OUT_CORE_H
15097 +-
15098 +-#ifdef __KERNEL__
15099 +-
15100 +-#include <linux/user.h>
15101 +-#include <linux/elfcore.h>
15102 +-
15103 +-/*
15104 +- * fill in the user structure for an a.out core dump
15105 +- */
15106 +-static inline void aout_dump_thread(struct pt_regs *regs, struct user *dump)
15107 +-{
15108 +- struct task_struct *tsk = current;
15109 +-
15110 +- dump->magic = CMAGIC;
15111 +- dump->start_code = tsk->mm->start_code;
15112 +- dump->start_stack = regs->ARM_sp & ~(PAGE_SIZE - 1);
15113 +-
15114 +- dump->u_tsize = (tsk->mm->end_code - tsk->mm->start_code) >> PAGE_SHIFT;
15115 +- dump->u_dsize = (tsk->mm->brk - tsk->mm->start_data + PAGE_SIZE - 1) >> PAGE_SHIFT;
15116 +- dump->u_ssize = 0;
15117 +-
15118 +- memset(dump->u_debugreg, 0, sizeof(dump->u_debugreg));
15119 +-
15120 +- if (dump->start_stack < 0x04000000)
15121 +- dump->u_ssize = (0x04000000 - dump->start_stack) >> PAGE_SHIFT;
15122 +-
15123 +- dump->regs = *regs;
15124 +- dump->u_fpvalid = dump_fpu (regs, &dump->u_fp);
15125 +-}
15126 +-
15127 +-#endif /* __KERNEL__ */
15128 +-#endif /* _ASM_A_OUT_CORE_H */
15129 +diff --git a/arch/arm/include/asm/elf.h b/arch/arm/include/asm/elf.h
15130 +index 38050b1..56211f2 100644
15131 +--- a/arch/arm/include/asm/elf.h
15132 ++++ b/arch/arm/include/asm/elf.h
15133 +@@ -130,4 +130,10 @@ struct mm_struct;
15134 + extern unsigned long arch_randomize_brk(struct mm_struct *mm);
15135 + #define arch_randomize_brk arch_randomize_brk
15136 +
15137 ++#ifdef CONFIG_MMU
15138 ++#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
15139 ++struct linux_binprm;
15140 ++int arch_setup_additional_pages(struct linux_binprm *, int);
15141 ++#endif
15142 ++
15143 + #endif
15144 +diff --git a/arch/arm/include/asm/mmu.h b/arch/arm/include/asm/mmu.h
15145 +index e3d5554..6f18da0 100644
15146 +--- a/arch/arm/include/asm/mmu.h
15147 ++++ b/arch/arm/include/asm/mmu.h
15148 +@@ -6,8 +6,11 @@
15149 + typedef struct {
15150 + #ifdef CONFIG_CPU_HAS_ASID
15151 + atomic64_t id;
15152 ++#else
15153 ++ int switch_pending;
15154 + #endif
15155 + unsigned int vmalloc_seq;
15156 ++ unsigned long sigpage;
15157 + } mm_context_t;
15158 +
15159 + #ifdef CONFIG_CPU_HAS_ASID
15160 +diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
15161 +index dc90203..e0b10f1 100644
15162 +--- a/arch/arm/include/asm/mmu_context.h
15163 ++++ b/arch/arm/include/asm/mmu_context.h
15164 +@@ -55,7 +55,7 @@ static inline void check_and_switch_context(struct mm_struct *mm,
15165 + * on non-ASID CPUs, the old mm will remain valid until the
15166 + * finish_arch_post_lock_switch() call.
15167 + */
15168 +- set_ti_thread_flag(task_thread_info(tsk), TIF_SWITCH_MM);
15169 ++ mm->context.switch_pending = 1;
15170 + else
15171 + cpu_switch_mm(mm->pgd, mm);
15172 + }
15173 +@@ -64,9 +64,21 @@ static inline void check_and_switch_context(struct mm_struct *mm,
15174 + finish_arch_post_lock_switch
15175 + static inline void finish_arch_post_lock_switch(void)
15176 + {
15177 +- if (test_and_clear_thread_flag(TIF_SWITCH_MM)) {
15178 +- struct mm_struct *mm = current->mm;
15179 +- cpu_switch_mm(mm->pgd, mm);
15180 ++ struct mm_struct *mm = current->mm;
15181 ++
15182 ++ if (mm && mm->context.switch_pending) {
15183 ++ /*
15184 ++ * Preemption must be disabled during cpu_switch_mm() as we
15185 ++ * have some stateful cache flush implementations. Check
15186 ++ * switch_pending again in case we were preempted and the
15187 ++ * switch to this mm was already done.
15188 ++ */
15189 ++ preempt_disable();
15190 ++ if (mm->context.switch_pending) {
15191 ++ mm->context.switch_pending = 0;
15192 ++ cpu_switch_mm(mm->pgd, mm);
15193 ++ }
15194 ++ preempt_enable_no_resched();
15195 + }
15196 + }
15197 +
15198 +diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
15199 +index 812a494..cbdc7a2 100644
15200 +--- a/arch/arm/include/asm/page.h
15201 ++++ b/arch/arm/include/asm/page.h
15202 +@@ -142,7 +142,9 @@ extern void __cpu_copy_user_highpage(struct page *to, struct page *from,
15203 + #define clear_page(page) memset((void *)(page), 0, PAGE_SIZE)
15204 + extern void copy_page(void *to, const void *from);
15205 +
15206 ++#ifdef CONFIG_KUSER_HELPERS
15207 + #define __HAVE_ARCH_GATE_AREA 1
15208 ++#endif
15209 +
15210 + #ifdef CONFIG_ARM_LPAE
15211 + #include <asm/pgtable-3level-types.h>
15212 +diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
15213 +index 06e7d50..413f387 100644
15214 +--- a/arch/arm/include/asm/processor.h
15215 ++++ b/arch/arm/include/asm/processor.h
15216 +@@ -54,7 +54,6 @@ struct thread_struct {
15217 +
15218 + #define start_thread(regs,pc,sp) \
15219 + ({ \
15220 +- unsigned long *stack = (unsigned long *)sp; \
15221 + memset(regs->uregs, 0, sizeof(regs->uregs)); \
15222 + if (current->personality & ADDR_LIMIT_32BIT) \
15223 + regs->ARM_cpsr = USR_MODE; \
15224 +@@ -65,9 +64,6 @@ struct thread_struct {
15225 + regs->ARM_cpsr |= PSR_ENDSTATE; \
15226 + regs->ARM_pc = pc & ~1; /* pc */ \
15227 + regs->ARM_sp = sp; /* sp */ \
15228 +- regs->ARM_r2 = stack[2]; /* r2 (envp) */ \
15229 +- regs->ARM_r1 = stack[1]; /* r1 (argv) */ \
15230 +- regs->ARM_r0 = stack[0]; /* r0 (argc) */ \
15231 + nommu_start_thread(regs); \
15232 + })
15233 +
15234 +diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
15235 +index 1995d1a..f00b569 100644
15236 +--- a/arch/arm/include/asm/thread_info.h
15237 ++++ b/arch/arm/include/asm/thread_info.h
15238 +@@ -156,7 +156,6 @@ extern int vfp_restore_user_hwstate(struct user_vfp __user *,
15239 + #define TIF_USING_IWMMXT 17
15240 + #define TIF_MEMDIE 18 /* is terminating due to OOM killer */
15241 + #define TIF_RESTORE_SIGMASK 20
15242 +-#define TIF_SWITCH_MM 22 /* deferred switch_mm */
15243 +
15244 + #define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
15245 + #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
15246 +diff --git a/arch/arm/include/uapi/asm/Kbuild b/arch/arm/include/uapi/asm/Kbuild
15247 +index 47bcb2d..18d76fd 100644
15248 +--- a/arch/arm/include/uapi/asm/Kbuild
15249 ++++ b/arch/arm/include/uapi/asm/Kbuild
15250 +@@ -1,7 +1,6 @@
15251 + # UAPI Header export list
15252 + include include/uapi/asm-generic/Kbuild.asm
15253 +
15254 +-header-y += a.out.h
15255 + header-y += byteorder.h
15256 + header-y += fcntl.h
15257 + header-y += hwcap.h
15258 +diff --git a/arch/arm/include/uapi/asm/a.out.h b/arch/arm/include/uapi/asm/a.out.h
15259 +deleted file mode 100644
15260 +index 083894b..0000000
15261 +--- a/arch/arm/include/uapi/asm/a.out.h
15262 ++++ /dev/null
15263 +@@ -1,34 +0,0 @@
15264 +-#ifndef __ARM_A_OUT_H__
15265 +-#define __ARM_A_OUT_H__
15266 +-
15267 +-#include <linux/personality.h>
15268 +-#include <linux/types.h>
15269 +-
15270 +-struct exec
15271 +-{
15272 +- __u32 a_info; /* Use macros N_MAGIC, etc for access */
15273 +- __u32 a_text; /* length of text, in bytes */
15274 +- __u32 a_data; /* length of data, in bytes */
15275 +- __u32 a_bss; /* length of uninitialized data area for file, in bytes */
15276 +- __u32 a_syms; /* length of symbol table data in file, in bytes */
15277 +- __u32 a_entry; /* start address */
15278 +- __u32 a_trsize; /* length of relocation info for text, in bytes */
15279 +- __u32 a_drsize; /* length of relocation info for data, in bytes */
15280 +-};
15281 +-
15282 +-/*
15283 +- * This is always the same
15284 +- */
15285 +-#define N_TXTADDR(a) (0x00008000)
15286 +-
15287 +-#define N_TRSIZE(a) ((a).a_trsize)
15288 +-#define N_DRSIZE(a) ((a).a_drsize)
15289 +-#define N_SYMSIZE(a) ((a).a_syms)
15290 +-
15291 +-#define M_ARM 103
15292 +-
15293 +-#ifndef LIBRARY_START_TEXT
15294 +-#define LIBRARY_START_TEXT (0x00c00000)
15295 +-#endif
15296 +-
15297 +-#endif /* __A_OUT_GNU_H__ */
15298 +diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
15299 +index 582b405..d43c7e5 100644
15300 +--- a/arch/arm/kernel/entry-armv.S
15301 ++++ b/arch/arm/kernel/entry-armv.S
15302 +@@ -741,6 +741,18 @@ ENDPROC(__switch_to)
15303 + #endif
15304 + .endm
15305 +
15306 ++ .macro kuser_pad, sym, size
15307 ++ .if (. - \sym) & 3
15308 ++ .rept 4 - (. - \sym) & 3
15309 ++ .byte 0
15310 ++ .endr
15311 ++ .endif
15312 ++ .rept (\size - (. - \sym)) / 4
15313 ++ .word 0xe7fddef1
15314 ++ .endr
15315 ++ .endm
15316 ++
15317 ++#ifdef CONFIG_KUSER_HELPERS
15318 + .align 5
15319 + .globl __kuser_helper_start
15320 + __kuser_helper_start:
15321 +@@ -831,18 +843,13 @@ kuser_cmpxchg64_fixup:
15322 + #error "incoherent kernel configuration"
15323 + #endif
15324 +
15325 +- /* pad to next slot */
15326 +- .rept (16 - (. - __kuser_cmpxchg64)/4)
15327 +- .word 0
15328 +- .endr
15329 +-
15330 +- .align 5
15331 ++ kuser_pad __kuser_cmpxchg64, 64
15332 +
15333 + __kuser_memory_barrier: @ 0xffff0fa0
15334 + smp_dmb arm
15335 + usr_ret lr
15336 +
15337 +- .align 5
15338 ++ kuser_pad __kuser_memory_barrier, 32
15339 +
15340 + __kuser_cmpxchg: @ 0xffff0fc0
15341 +
15342 +@@ -915,13 +922,14 @@ kuser_cmpxchg32_fixup:
15343 +
15344 + #endif
15345 +
15346 +- .align 5
15347 ++ kuser_pad __kuser_cmpxchg, 32
15348 +
15349 + __kuser_get_tls: @ 0xffff0fe0
15350 + ldr r0, [pc, #(16 - 8)] @ read TLS, set in kuser_get_tls_init
15351 + usr_ret lr
15352 + mrc p15, 0, r0, c13, c0, 3 @ 0xffff0fe8 hardware TLS code
15353 +- .rep 4
15354 ++ kuser_pad __kuser_get_tls, 16
15355 ++ .rep 3
15356 + .word 0 @ 0xffff0ff0 software TLS value, then
15357 + .endr @ pad up to __kuser_helper_version
15358 +
15359 +@@ -931,14 +939,16 @@ __kuser_helper_version: @ 0xffff0ffc
15360 + .globl __kuser_helper_end
15361 + __kuser_helper_end:
15362 +
15363 ++#endif
15364 ++
15365 + THUMB( .thumb )
15366 +
15367 + /*
15368 + * Vector stubs.
15369 + *
15370 +- * This code is copied to 0xffff0200 so we can use branches in the
15371 +- * vectors, rather than ldr's. Note that this code must not
15372 +- * exceed 0x300 bytes.
15373 ++ * This code is copied to 0xffff1000 so we can use branches in the
15374 ++ * vectors, rather than ldr's. Note that this code must not exceed
15375 ++ * a page size.
15376 + *
15377 + * Common stub entry macro:
15378 + * Enter in IRQ mode, spsr = SVC/USR CPSR, lr = SVC/USR PC
15379 +@@ -985,8 +995,17 @@ ENDPROC(vector_\name)
15380 + 1:
15381 + .endm
15382 +
15383 +- .globl __stubs_start
15384 ++ .section .stubs, "ax", %progbits
15385 + __stubs_start:
15386 ++ @ This must be the first word
15387 ++ .word vector_swi
15388 ++
15389 ++vector_rst:
15390 ++ ARM( swi SYS_ERROR0 )
15391 ++ THUMB( svc #0 )
15392 ++ THUMB( nop )
15393 ++ b vector_und
15394 ++
15395 + /*
15396 + * Interrupt dispatcher
15397 + */
15398 +@@ -1081,6 +1100,16 @@ __stubs_start:
15399 + .align 5
15400 +
15401 + /*=============================================================================
15402 ++ * Address exception handler
15403 ++ *-----------------------------------------------------------------------------
15404 ++ * These aren't too critical.
15405 ++ * (they're not supposed to happen, and won't happen in 32-bit data mode).
15406 ++ */
15407 ++
15408 ++vector_addrexcptn:
15409 ++ b vector_addrexcptn
15410 ++
15411 ++/*=============================================================================
15412 + * Undefined FIQs
15413 + *-----------------------------------------------------------------------------
15414 + * Enter in FIQ mode, spsr = ANY CPSR, lr = ANY PC
15415 +@@ -1093,45 +1122,19 @@ __stubs_start:
15416 + vector_fiq:
15417 + subs pc, lr, #4
15418 +
15419 +-/*=============================================================================
15420 +- * Address exception handler
15421 +- *-----------------------------------------------------------------------------
15422 +- * These aren't too critical.
15423 +- * (they're not supposed to happen, and won't happen in 32-bit data mode).
15424 +- */
15425 +-
15426 +-vector_addrexcptn:
15427 +- b vector_addrexcptn
15428 +-
15429 +-/*
15430 +- * We group all the following data together to optimise
15431 +- * for CPUs with separate I & D caches.
15432 +- */
15433 +- .align 5
15434 +-
15435 +-.LCvswi:
15436 +- .word vector_swi
15437 +-
15438 +- .globl __stubs_end
15439 +-__stubs_end:
15440 +-
15441 +- .equ stubs_offset, __vectors_start + 0x200 - __stubs_start
15442 ++ .globl vector_fiq_offset
15443 ++ .equ vector_fiq_offset, vector_fiq
15444 +
15445 +- .globl __vectors_start
15446 ++ .section .vectors, "ax", %progbits
15447 + __vectors_start:
15448 +- ARM( swi SYS_ERROR0 )
15449 +- THUMB( svc #0 )
15450 +- THUMB( nop )
15451 +- W(b) vector_und + stubs_offset
15452 +- W(ldr) pc, .LCvswi + stubs_offset
15453 +- W(b) vector_pabt + stubs_offset
15454 +- W(b) vector_dabt + stubs_offset
15455 +- W(b) vector_addrexcptn + stubs_offset
15456 +- W(b) vector_irq + stubs_offset
15457 +- W(b) vector_fiq + stubs_offset
15458 +-
15459 +- .globl __vectors_end
15460 +-__vectors_end:
15461 ++ W(b) vector_rst
15462 ++ W(b) vector_und
15463 ++ W(ldr) pc, __vectors_start + 0x1000
15464 ++ W(b) vector_pabt
15465 ++ W(b) vector_dabt
15466 ++ W(b) vector_addrexcptn
15467 ++ W(b) vector_irq
15468 ++ W(b) vector_fiq
15469 +
15470 + .data
15471 +
15472 +diff --git a/arch/arm/kernel/fiq.c b/arch/arm/kernel/fiq.c
15473 +index 2adda11..25442f4 100644
15474 +--- a/arch/arm/kernel/fiq.c
15475 ++++ b/arch/arm/kernel/fiq.c
15476 +@@ -47,6 +47,11 @@
15477 + #include <asm/irq.h>
15478 + #include <asm/traps.h>
15479 +
15480 ++#define FIQ_OFFSET ({ \
15481 ++ extern void *vector_fiq_offset; \
15482 ++ (unsigned)&vector_fiq_offset; \
15483 ++ })
15484 ++
15485 + static unsigned long no_fiq_insn;
15486 +
15487 + /* Default reacquire function
15488 +@@ -80,13 +85,16 @@ int show_fiq_list(struct seq_file *p, int prec)
15489 + void set_fiq_handler(void *start, unsigned int length)
15490 + {
15491 + #if defined(CONFIG_CPU_USE_DOMAINS)
15492 +- memcpy((void *)0xffff001c, start, length);
15493 ++ void *base = (void *)0xffff0000;
15494 + #else
15495 +- memcpy(vectors_page + 0x1c, start, length);
15496 ++ void *base = vectors_page;
15497 + #endif
15498 +- flush_icache_range(0xffff001c, 0xffff001c + length);
15499 ++ unsigned offset = FIQ_OFFSET;
15500 ++
15501 ++ memcpy(base + offset, start, length);
15502 ++ flush_icache_range(0xffff0000 + offset, 0xffff0000 + offset + length);
15503 + if (!vectors_high())
15504 +- flush_icache_range(0x1c, 0x1c + length);
15505 ++ flush_icache_range(offset, offset + length);
15506 + }
15507 +
15508 + int claim_fiq(struct fiq_handler *f)
15509 +@@ -144,6 +152,7 @@ EXPORT_SYMBOL(disable_fiq);
15510 +
15511 + void __init init_FIQ(int start)
15512 + {
15513 +- no_fiq_insn = *(unsigned long *)0xffff001c;
15514 ++ unsigned offset = FIQ_OFFSET;
15515 ++ no_fiq_insn = *(unsigned long *)(0xffff0000 + offset);
15516 + fiq_start = start;
15517 + }
15518 +diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
15519 +index 6e8931c..5bc2615 100644
15520 +--- a/arch/arm/kernel/process.c
15521 ++++ b/arch/arm/kernel/process.c
15522 +@@ -433,10 +433,11 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
15523 + }
15524 +
15525 + #ifdef CONFIG_MMU
15526 ++#ifdef CONFIG_KUSER_HELPERS
15527 + /*
15528 + * The vectors page is always readable from user space for the
15529 +- * atomic helpers and the signal restart code. Insert it into the
15530 +- * gate_vma so that it is visible through ptrace and /proc/<pid>/mem.
15531 ++ * atomic helpers. Insert it into the gate_vma so that it is visible
15532 ++ * through ptrace and /proc/<pid>/mem.
15533 + */
15534 + static struct vm_area_struct gate_vma = {
15535 + .vm_start = 0xffff0000,
15536 +@@ -465,9 +466,48 @@ int in_gate_area_no_mm(unsigned long addr)
15537 + {
15538 + return in_gate_area(NULL, addr);
15539 + }
15540 ++#define is_gate_vma(vma) ((vma) == &gate_vma)
15541 ++#else
15542 ++#define is_gate_vma(vma) 0
15543 ++#endif
15544 +
15545 + const char *arch_vma_name(struct vm_area_struct *vma)
15546 + {
15547 +- return (vma == &gate_vma) ? "[vectors]" : NULL;
15548 ++ return is_gate_vma(vma) ? "[vectors]" :
15549 ++ (vma->vm_mm && vma->vm_start == vma->vm_mm->context.sigpage) ?
15550 ++ "[sigpage]" : NULL;
15551 ++}
15552 ++
15553 ++static struct page *signal_page;
15554 ++extern struct page *get_signal_page(void);
15555 ++
15556 ++int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
15557 ++{
15558 ++ struct mm_struct *mm = current->mm;
15559 ++ unsigned long addr;
15560 ++ int ret;
15561 ++
15562 ++ if (!signal_page)
15563 ++ signal_page = get_signal_page();
15564 ++ if (!signal_page)
15565 ++ return -ENOMEM;
15566 ++
15567 ++ down_write(&mm->mmap_sem);
15568 ++ addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
15569 ++ if (IS_ERR_VALUE(addr)) {
15570 ++ ret = addr;
15571 ++ goto up_fail;
15572 ++ }
15573 ++
15574 ++ ret = install_special_mapping(mm, addr, PAGE_SIZE,
15575 ++ VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
15576 ++ &signal_page);
15577 ++
15578 ++ if (ret == 0)
15579 ++ mm->context.sigpage = addr;
15580 ++
15581 ++ up_fail:
15582 ++ up_write(&mm->mmap_sem);
15583 ++ return ret;
15584 + }
15585 + #endif
15586 +diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
15587 +index 296786b..5a42c12 100644
15588 +--- a/arch/arm/kernel/signal.c
15589 ++++ b/arch/arm/kernel/signal.c
15590 +@@ -8,6 +8,7 @@
15591 + * published by the Free Software Foundation.
15592 + */
15593 + #include <linux/errno.h>
15594 ++#include <linux/random.h>
15595 + #include <linux/signal.h>
15596 + #include <linux/personality.h>
15597 + #include <linux/uaccess.h>
15598 +@@ -15,12 +16,11 @@
15599 +
15600 + #include <asm/elf.h>
15601 + #include <asm/cacheflush.h>
15602 ++#include <asm/traps.h>
15603 + #include <asm/ucontext.h>
15604 + #include <asm/unistd.h>
15605 + #include <asm/vfp.h>
15606 +
15607 +-#include "signal.h"
15608 +-
15609 + /*
15610 + * For ARM syscalls, we encode the syscall number into the instruction.
15611 + */
15612 +@@ -40,11 +40,13 @@
15613 + #define SWI_THUMB_SIGRETURN (0xdf00 << 16 | 0x2700 | (__NR_sigreturn - __NR_SYSCALL_BASE))
15614 + #define SWI_THUMB_RT_SIGRETURN (0xdf00 << 16 | 0x2700 | (__NR_rt_sigreturn - __NR_SYSCALL_BASE))
15615 +
15616 +-const unsigned long sigreturn_codes[7] = {
15617 ++static const unsigned long sigreturn_codes[7] = {
15618 + MOV_R7_NR_SIGRETURN, SWI_SYS_SIGRETURN, SWI_THUMB_SIGRETURN,
15619 + MOV_R7_NR_RT_SIGRETURN, SWI_SYS_RT_SIGRETURN, SWI_THUMB_RT_SIGRETURN,
15620 + };
15621 +
15622 ++static unsigned long signal_return_offset;
15623 ++
15624 + #ifdef CONFIG_CRUNCH
15625 + static int preserve_crunch_context(struct crunch_sigframe __user *frame)
15626 + {
15627 +@@ -396,13 +398,19 @@ setup_return(struct pt_regs *regs, struct ksignal *ksig,
15628 + __put_user(sigreturn_codes[idx+1], rc+1))
15629 + return 1;
15630 +
15631 ++#ifdef CONFIG_MMU
15632 + if (cpsr & MODE32_BIT) {
15633 ++ struct mm_struct *mm = current->mm;
15634 + /*
15635 +- * 32-bit code can use the new high-page
15636 +- * signal return code support.
15637 ++ * 32-bit code can use the signal return page
15638 ++ * except when the MPU has protected the vectors
15639 ++ * page from PL0
15640 + */
15641 +- retcode = KERN_SIGRETURN_CODE + (idx << 2) + thumb;
15642 +- } else {
15643 ++ retcode = mm->context.sigpage + signal_return_offset +
15644 ++ (idx << 2) + thumb;
15645 ++ } else
15646 ++#endif
15647 ++ {
15648 + /*
15649 + * Ensure that the instruction cache sees
15650 + * the return code written onto the stack.
15651 +@@ -603,3 +611,33 @@ do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall)
15652 + } while (thread_flags & _TIF_WORK_MASK);
15653 + return 0;
15654 + }
15655 ++
15656 ++struct page *get_signal_page(void)
15657 ++{
15658 ++ unsigned long ptr;
15659 ++ unsigned offset;
15660 ++ struct page *page;
15661 ++ void *addr;
15662 ++
15663 ++ page = alloc_pages(GFP_KERNEL, 0);
15664 ++
15665 ++ if (!page)
15666 ++ return NULL;
15667 ++
15668 ++ addr = page_address(page);
15669 ++
15670 ++ /* Give the signal return code some randomness */
15671 ++ offset = 0x200 + (get_random_int() & 0x7fc);
15672 ++ signal_return_offset = offset;
15673 ++
15674 ++ /*
15675 ++ * Copy signal return handlers into the vector page, and
15676 ++ * set sigreturn to be a pointer to these.
15677 ++ */
15678 ++ memcpy(addr + offset, sigreturn_codes, sizeof(sigreturn_codes));
15679 ++
15680 ++ ptr = (unsigned long)addr + offset;
15681 ++ flush_icache_range(ptr, ptr + sizeof(sigreturn_codes));
15682 ++
15683 ++ return page;
15684 ++}
15685 +diff --git a/arch/arm/kernel/signal.h b/arch/arm/kernel/signal.h
15686 +deleted file mode 100644
15687 +index 5ff067b7..0000000
15688 +--- a/arch/arm/kernel/signal.h
15689 ++++ /dev/null
15690 +@@ -1,12 +0,0 @@
15691 +-/*
15692 +- * linux/arch/arm/kernel/signal.h
15693 +- *
15694 +- * Copyright (C) 2005-2009 Russell King.
15695 +- *
15696 +- * This program is free software; you can redistribute it and/or modify
15697 +- * it under the terms of the GNU General Public License version 2 as
15698 +- * published by the Free Software Foundation.
15699 +- */
15700 +-#define KERN_SIGRETURN_CODE (CONFIG_VECTORS_BASE + 0x00000500)
15701 +-
15702 +-extern const unsigned long sigreturn_codes[7];
15703 +diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
15704 +index 18b32e8..6b9567e 100644
15705 +--- a/arch/arm/kernel/traps.c
15706 ++++ b/arch/arm/kernel/traps.c
15707 +@@ -35,8 +35,6 @@
15708 + #include <asm/tls.h>
15709 + #include <asm/system_misc.h>
15710 +
15711 +-#include "signal.h"
15712 +-
15713 + static const char *handler[]= { "prefetch abort", "data abort", "address exception", "interrupt" };
15714 +
15715 + void *vectors_page;
15716 +@@ -800,47 +798,55 @@ void __init trap_init(void)
15717 + return;
15718 + }
15719 +
15720 +-static void __init kuser_get_tls_init(unsigned long vectors)
15721 ++#ifdef CONFIG_KUSER_HELPERS
15722 ++static void __init kuser_init(void *vectors)
15723 + {
15724 ++ extern char __kuser_helper_start[], __kuser_helper_end[];
15725 ++ int kuser_sz = __kuser_helper_end - __kuser_helper_start;
15726 ++
15727 ++ memcpy(vectors + 0x1000 - kuser_sz, __kuser_helper_start, kuser_sz);
15728 ++
15729 + /*
15730 + * vectors + 0xfe0 = __kuser_get_tls
15731 + * vectors + 0xfe8 = hardware TLS instruction at 0xffff0fe8
15732 + */
15733 + if (tls_emu || has_tls_reg)
15734 +- memcpy((void *)vectors + 0xfe0, (void *)vectors + 0xfe8, 4);
15735 ++ memcpy(vectors + 0xfe0, vectors + 0xfe8, 4);
15736 + }
15737 ++#else
15738 ++static void __init kuser_init(void *vectors)
15739 ++{
15740 ++}
15741 ++#endif
15742 +
15743 + void __init early_trap_init(void *vectors_base)
15744 + {
15745 + unsigned long vectors = (unsigned long)vectors_base;
15746 + extern char __stubs_start[], __stubs_end[];
15747 + extern char __vectors_start[], __vectors_end[];
15748 +- extern char __kuser_helper_start[], __kuser_helper_end[];
15749 +- int kuser_sz = __kuser_helper_end - __kuser_helper_start;
15750 ++ unsigned i;
15751 +
15752 + vectors_page = vectors_base;
15753 +
15754 + /*
15755 ++ * Poison the vectors page with an undefined instruction. This
15756 ++ * instruction is chosen to be undefined for both ARM and Thumb
15757 ++ * ISAs. The Thumb version is an undefined instruction with a
15758 ++ * branch back to the undefined instruction.
15759 ++ */
15760 ++ for (i = 0; i < PAGE_SIZE / sizeof(u32); i++)
15761 ++ ((u32 *)vectors_base)[i] = 0xe7fddef1;
15762 ++
15763 ++ /*
15764 + * Copy the vectors, stubs and kuser helpers (in entry-armv.S)
15765 + * into the vector page, mapped at 0xffff0000, and ensure these
15766 + * are visible to the instruction stream.
15767 + */
15768 + memcpy((void *)vectors, __vectors_start, __vectors_end - __vectors_start);
15769 +- memcpy((void *)vectors + 0x200, __stubs_start, __stubs_end - __stubs_start);
15770 +- memcpy((void *)vectors + 0x1000 - kuser_sz, __kuser_helper_start, kuser_sz);
15771 ++ memcpy((void *)vectors + 0x1000, __stubs_start, __stubs_end - __stubs_start);
15772 +
15773 +- /*
15774 +- * Do processor specific fixups for the kuser helpers
15775 +- */
15776 +- kuser_get_tls_init(vectors);
15777 +-
15778 +- /*
15779 +- * Copy signal return handlers into the vector page, and
15780 +- * set sigreturn to be a pointer to these.
15781 +- */
15782 +- memcpy((void *)(vectors + KERN_SIGRETURN_CODE - CONFIG_VECTORS_BASE),
15783 +- sigreturn_codes, sizeof(sigreturn_codes));
15784 ++ kuser_init(vectors_base);
15785 +
15786 +- flush_icache_range(vectors, vectors + PAGE_SIZE);
15787 ++ flush_icache_range(vectors, vectors + PAGE_SIZE * 2);
15788 + modify_domain(DOMAIN_USER, DOMAIN_CLIENT);
15789 + }
15790 +diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
15791 +index a871b8e..33f2ea3 100644
15792 +--- a/arch/arm/kernel/vmlinux.lds.S
15793 ++++ b/arch/arm/kernel/vmlinux.lds.S
15794 +@@ -152,6 +152,23 @@ SECTIONS
15795 + . = ALIGN(PAGE_SIZE);
15796 + __init_begin = .;
15797 + #endif
15798 ++ /*
15799 ++ * The vectors and stubs are relocatable code, and the
15800 ++ * only thing that matters is their relative offsets
15801 ++ */
15802 ++ __vectors_start = .;
15803 ++ .vectors 0 : AT(__vectors_start) {
15804 ++ *(.vectors)
15805 ++ }
15806 ++ . = __vectors_start + SIZEOF(.vectors);
15807 ++ __vectors_end = .;
15808 ++
15809 ++ __stubs_start = .;
15810 ++ .stubs 0x1000 : AT(__stubs_start) {
15811 ++ *(.stubs)
15812 ++ }
15813 ++ . = __stubs_start + SIZEOF(.stubs);
15814 ++ __stubs_end = .;
15815 +
15816 + INIT_TEXT_SECTION(8)
15817 + .exit.text : {
15818 +diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
15819 +index 35955b5..2950082 100644
15820 +--- a/arch/arm/mm/Kconfig
15821 ++++ b/arch/arm/mm/Kconfig
15822 +@@ -411,24 +411,28 @@ config CPU_32v3
15823 + select CPU_USE_DOMAINS if MMU
15824 + select NEEDS_SYSCALL_FOR_CMPXCHG if SMP
15825 + select TLS_REG_EMUL if SMP || !MMU
15826 ++ select NEED_KUSER_HELPERS
15827 +
15828 + config CPU_32v4
15829 + bool
15830 + select CPU_USE_DOMAINS if MMU
15831 + select NEEDS_SYSCALL_FOR_CMPXCHG if SMP
15832 + select TLS_REG_EMUL if SMP || !MMU
15833 ++ select NEED_KUSER_HELPERS
15834 +
15835 + config CPU_32v4T
15836 + bool
15837 + select CPU_USE_DOMAINS if MMU
15838 + select NEEDS_SYSCALL_FOR_CMPXCHG if SMP
15839 + select TLS_REG_EMUL if SMP || !MMU
15840 ++ select NEED_KUSER_HELPERS
15841 +
15842 + config CPU_32v5
15843 + bool
15844 + select CPU_USE_DOMAINS if MMU
15845 + select NEEDS_SYSCALL_FOR_CMPXCHG if SMP
15846 + select TLS_REG_EMUL if SMP || !MMU
15847 ++ select NEED_KUSER_HELPERS
15848 +
15849 + config CPU_32v6
15850 + bool
15851 +@@ -756,6 +760,7 @@ config CPU_BPREDICT_DISABLE
15852 +
15853 + config TLS_REG_EMUL
15854 + bool
15855 ++ select NEED_KUSER_HELPERS
15856 + help
15857 + An SMP system using a pre-ARMv6 processor (there are apparently
15858 + a few prototypes like that in existence) and therefore access to
15859 +@@ -763,11 +768,40 @@ config TLS_REG_EMUL
15860 +
15861 + config NEEDS_SYSCALL_FOR_CMPXCHG
15862 + bool
15863 ++ select NEED_KUSER_HELPERS
15864 + help
15865 + SMP on a pre-ARMv6 processor? Well OK then.
15866 + Forget about fast user space cmpxchg support.
15867 + It is just not possible.
15868 +
15869 ++config NEED_KUSER_HELPERS
15870 ++ bool
15871 ++
15872 ++config KUSER_HELPERS
15873 ++ bool "Enable kuser helpers in vector page" if !NEED_KUSER_HELPERS
15874 ++ default y
15875 ++ help
15876 ++ Warning: disabling this option may break user programs.
15877 ++
15878 ++ Provide kuser helpers in the vector page. The kernel provides
15879 ++ helper code to userspace in read only form at a fixed location
15880 ++ in the high vector page to allow userspace to be independent of
15881 ++ the CPU type fitted to the system. This permits binaries to be
15882 ++ run on ARMv4 through to ARMv7 without modification.
15883 ++
15884 ++ However, the fixed address nature of these helpers can be used
15885 ++ by ROP (return orientated programming) authors when creating
15886 ++ exploits.
15887 ++
15888 ++ If all of the binaries and libraries which run on your platform
15889 ++ are built specifically for your platform, and make no use of
15890 ++ these helpers, then you can turn this option off. However,
15891 ++ when such an binary or library is run, it will receive a SIGILL
15892 ++ signal, which will terminate the program.
15893 ++
15894 ++ Say N here only if you are absolutely certain that you do not
15895 ++ need these helpers; otherwise, the safe option is to say Y.
15896 ++
15897 + config DMA_CACHE_RWFO
15898 + bool "Enable read/write for ownership DMA cache maintenance"
15899 + depends on CPU_V6K && SMP
15900 +diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
15901 +index 4d409e6..daf336f 100644
15902 +--- a/arch/arm/mm/mmu.c
15903 ++++ b/arch/arm/mm/mmu.c
15904 +@@ -1175,7 +1175,7 @@ static void __init devicemaps_init(struct machine_desc *mdesc)
15905 + /*
15906 + * Allocate the vector page early.
15907 + */
15908 +- vectors = early_alloc(PAGE_SIZE);
15909 ++ vectors = early_alloc(PAGE_SIZE * 2);
15910 +
15911 + early_trap_init(vectors);
15912 +
15913 +@@ -1220,15 +1220,27 @@ static void __init devicemaps_init(struct machine_desc *mdesc)
15914 + map.pfn = __phys_to_pfn(virt_to_phys(vectors));
15915 + map.virtual = 0xffff0000;
15916 + map.length = PAGE_SIZE;
15917 ++#ifdef CONFIG_KUSER_HELPERS
15918 + map.type = MT_HIGH_VECTORS;
15919 ++#else
15920 ++ map.type = MT_LOW_VECTORS;
15921 ++#endif
15922 + create_mapping(&map);
15923 +
15924 + if (!vectors_high()) {
15925 + map.virtual = 0;
15926 ++ map.length = PAGE_SIZE * 2;
15927 + map.type = MT_LOW_VECTORS;
15928 + create_mapping(&map);
15929 + }
15930 +
15931 ++ /* Now create a kernel read-only mapping */
15932 ++ map.pfn += 1;
15933 ++ map.virtual = 0xffff0000 + PAGE_SIZE;
15934 ++ map.length = PAGE_SIZE;
15935 ++ map.type = MT_LOW_VECTORS;
15936 ++ create_mapping(&map);
15937 ++
15938 + /*
15939 + * Ask the machine support to map in the statically mapped devices.
15940 + */
15941 +diff --git a/arch/arm/mm/proc-v7-2level.S b/arch/arm/mm/proc-v7-2level.S
15942 +index 9704097..b3997c7 100644
15943 +--- a/arch/arm/mm/proc-v7-2level.S
15944 ++++ b/arch/arm/mm/proc-v7-2level.S
15945 +@@ -110,7 +110,7 @@ ENTRY(cpu_v7_set_pte_ext)
15946 + ARM( str r3, [r0, #2048]! )
15947 + THUMB( add r0, r0, #2048 )
15948 + THUMB( str r3, [r0] )
15949 +- ALT_SMP(mov pc,lr)
15950 ++ ALT_SMP(W(nop))
15951 + ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte
15952 + #endif
15953 + mov pc, lr
15954 +diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
15955 +index 363027e..6ba4bd9 100644
15956 +--- a/arch/arm/mm/proc-v7-3level.S
15957 ++++ b/arch/arm/mm/proc-v7-3level.S
15958 +@@ -73,7 +73,7 @@ ENTRY(cpu_v7_set_pte_ext)
15959 + tst r3, #1 << (55 - 32) @ L_PTE_DIRTY
15960 + orreq r2, #L_PTE_RDONLY
15961 + 1: strd r2, r3, [r0]
15962 +- ALT_SMP(mov pc, lr)
15963 ++ ALT_SMP(W(nop))
15964 + ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte
15965 + #endif
15966 + mov pc, lr
15967 +diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
15968 +index e35fec3..5fbccee 100644
15969 +--- a/arch/arm/mm/proc-v7.S
15970 ++++ b/arch/arm/mm/proc-v7.S
15971 +@@ -75,13 +75,14 @@ ENTRY(cpu_v7_do_idle)
15972 + ENDPROC(cpu_v7_do_idle)
15973 +
15974 + ENTRY(cpu_v7_dcache_clean_area)
15975 +- ALT_SMP(mov pc, lr) @ MP extensions imply L1 PTW
15976 +- ALT_UP(W(nop))
15977 +- dcache_line_size r2, r3
15978 +-1: mcr p15, 0, r0, c7, c10, 1 @ clean D entry
15979 ++ ALT_SMP(W(nop)) @ MP extensions imply L1 PTW
15980 ++ ALT_UP_B(1f)
15981 ++ mov pc, lr
15982 ++1: dcache_line_size r2, r3
15983 ++2: mcr p15, 0, r0, c7, c10, 1 @ clean D entry
15984 + add r0, r0, r2
15985 + subs r1, r1, r2
15986 +- bhi 1b
15987 ++ bhi 2b
15988 + dsb
15989 + mov pc, lr
15990 + ENDPROC(cpu_v7_dcache_clean_area)
15991 +diff --git a/arch/parisc/include/asm/parisc-device.h b/arch/parisc/include/asm/parisc-device.h
15992 +index 9afdad6..eaf4dc1 100644
15993 +--- a/arch/parisc/include/asm/parisc-device.h
15994 ++++ b/arch/parisc/include/asm/parisc-device.h
15995 +@@ -23,6 +23,7 @@ struct parisc_device {
15996 + /* generic info returned from pdc_pat_cell_module() */
15997 + unsigned long mod_info; /* PAT specific - Misc Module info */
15998 + unsigned long pmod_loc; /* physical Module location */
15999 ++ unsigned long mod0;
16000 + #endif
16001 + u64 dma_mask; /* DMA mask for I/O */
16002 + struct device dev;
16003 +@@ -61,4 +62,6 @@ parisc_get_drvdata(struct parisc_device *d)
16004 +
16005 + extern struct bus_type parisc_bus_type;
16006 +
16007 ++int iosapic_serial_irq(struct parisc_device *dev);
16008 ++
16009 + #endif /*_ASM_PARISC_PARISC_DEVICE_H_*/
16010 +diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
16011 +index 2e65aa5..c035673 100644
16012 +--- a/arch/parisc/kernel/cache.c
16013 ++++ b/arch/parisc/kernel/cache.c
16014 +@@ -71,18 +71,27 @@ flush_cache_all_local(void)
16015 + }
16016 + EXPORT_SYMBOL(flush_cache_all_local);
16017 +
16018 ++/* Virtual address of pfn. */
16019 ++#define pfn_va(pfn) __va(PFN_PHYS(pfn))
16020 ++
16021 + void
16022 + update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
16023 + {
16024 +- struct page *page = pte_page(*ptep);
16025 ++ unsigned long pfn = pte_pfn(*ptep);
16026 ++ struct page *page;
16027 +
16028 +- if (pfn_valid(page_to_pfn(page)) && page_mapping(page) &&
16029 +- test_bit(PG_dcache_dirty, &page->flags)) {
16030 ++ /* We don't have pte special. As a result, we can be called with
16031 ++ an invalid pfn and we don't need to flush the kernel dcache page.
16032 ++ This occurs with FireGL card in C8000. */
16033 ++ if (!pfn_valid(pfn))
16034 ++ return;
16035 +
16036 +- flush_kernel_dcache_page(page);
16037 ++ page = pfn_to_page(pfn);
16038 ++ if (page_mapping(page) && test_bit(PG_dcache_dirty, &page->flags)) {
16039 ++ flush_kernel_dcache_page_addr(pfn_va(pfn));
16040 + clear_bit(PG_dcache_dirty, &page->flags);
16041 + } else if (parisc_requires_coherency())
16042 +- flush_kernel_dcache_page(page);
16043 ++ flush_kernel_dcache_page_addr(pfn_va(pfn));
16044 + }
16045 +
16046 + void
16047 +@@ -495,44 +504,42 @@ static inline pte_t *get_ptep(pgd_t *pgd, unsigned long addr)
16048 +
16049 + void flush_cache_mm(struct mm_struct *mm)
16050 + {
16051 ++ struct vm_area_struct *vma;
16052 ++ pgd_t *pgd;
16053 ++
16054 + /* Flushing the whole cache on each cpu takes forever on
16055 + rp3440, etc. So, avoid it if the mm isn't too big. */
16056 +- if (mm_total_size(mm) < parisc_cache_flush_threshold) {
16057 +- struct vm_area_struct *vma;
16058 +-
16059 +- if (mm->context == mfsp(3)) {
16060 +- for (vma = mm->mmap; vma; vma = vma->vm_next) {
16061 +- flush_user_dcache_range_asm(vma->vm_start,
16062 +- vma->vm_end);
16063 +- if (vma->vm_flags & VM_EXEC)
16064 +- flush_user_icache_range_asm(
16065 +- vma->vm_start, vma->vm_end);
16066 +- }
16067 +- } else {
16068 +- pgd_t *pgd = mm->pgd;
16069 +-
16070 +- for (vma = mm->mmap; vma; vma = vma->vm_next) {
16071 +- unsigned long addr;
16072 +-
16073 +- for (addr = vma->vm_start; addr < vma->vm_end;
16074 +- addr += PAGE_SIZE) {
16075 +- pte_t *ptep = get_ptep(pgd, addr);
16076 +- if (ptep != NULL) {
16077 +- pte_t pte = *ptep;
16078 +- __flush_cache_page(vma, addr,
16079 +- page_to_phys(pte_page(pte)));
16080 +- }
16081 +- }
16082 +- }
16083 ++ if (mm_total_size(mm) >= parisc_cache_flush_threshold) {
16084 ++ flush_cache_all();
16085 ++ return;
16086 ++ }
16087 ++
16088 ++ if (mm->context == mfsp(3)) {
16089 ++ for (vma = mm->mmap; vma; vma = vma->vm_next) {
16090 ++ flush_user_dcache_range_asm(vma->vm_start, vma->vm_end);
16091 ++ if ((vma->vm_flags & VM_EXEC) == 0)
16092 ++ continue;
16093 ++ flush_user_icache_range_asm(vma->vm_start, vma->vm_end);
16094 + }
16095 + return;
16096 + }
16097 +
16098 +-#ifdef CONFIG_SMP
16099 +- flush_cache_all();
16100 +-#else
16101 +- flush_cache_all_local();
16102 +-#endif
16103 ++ pgd = mm->pgd;
16104 ++ for (vma = mm->mmap; vma; vma = vma->vm_next) {
16105 ++ unsigned long addr;
16106 ++
16107 ++ for (addr = vma->vm_start; addr < vma->vm_end;
16108 ++ addr += PAGE_SIZE) {
16109 ++ unsigned long pfn;
16110 ++ pte_t *ptep = get_ptep(pgd, addr);
16111 ++ if (!ptep)
16112 ++ continue;
16113 ++ pfn = pte_pfn(*ptep);
16114 ++ if (!pfn_valid(pfn))
16115 ++ continue;
16116 ++ __flush_cache_page(vma, addr, PFN_PHYS(pfn));
16117 ++ }
16118 ++ }
16119 + }
16120 +
16121 + void
16122 +@@ -556,33 +563,32 @@ flush_user_icache_range(unsigned long start, unsigned long end)
16123 + void flush_cache_range(struct vm_area_struct *vma,
16124 + unsigned long start, unsigned long end)
16125 + {
16126 ++ unsigned long addr;
16127 ++ pgd_t *pgd;
16128 ++
16129 + BUG_ON(!vma->vm_mm->context);
16130 +
16131 +- if ((end - start) < parisc_cache_flush_threshold) {
16132 +- if (vma->vm_mm->context == mfsp(3)) {
16133 +- flush_user_dcache_range_asm(start, end);
16134 +- if (vma->vm_flags & VM_EXEC)
16135 +- flush_user_icache_range_asm(start, end);
16136 +- } else {
16137 +- unsigned long addr;
16138 +- pgd_t *pgd = vma->vm_mm->pgd;
16139 +-
16140 +- for (addr = start & PAGE_MASK; addr < end;
16141 +- addr += PAGE_SIZE) {
16142 +- pte_t *ptep = get_ptep(pgd, addr);
16143 +- if (ptep != NULL) {
16144 +- pte_t pte = *ptep;
16145 +- flush_cache_page(vma,
16146 +- addr, pte_pfn(pte));
16147 +- }
16148 +- }
16149 +- }
16150 +- } else {
16151 +-#ifdef CONFIG_SMP
16152 ++ if ((end - start) >= parisc_cache_flush_threshold) {
16153 + flush_cache_all();
16154 +-#else
16155 +- flush_cache_all_local();
16156 +-#endif
16157 ++ return;
16158 ++ }
16159 ++
16160 ++ if (vma->vm_mm->context == mfsp(3)) {
16161 ++ flush_user_dcache_range_asm(start, end);
16162 ++ if (vma->vm_flags & VM_EXEC)
16163 ++ flush_user_icache_range_asm(start, end);
16164 ++ return;
16165 ++ }
16166 ++
16167 ++ pgd = vma->vm_mm->pgd;
16168 ++ for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE) {
16169 ++ unsigned long pfn;
16170 ++ pte_t *ptep = get_ptep(pgd, addr);
16171 ++ if (!ptep)
16172 ++ continue;
16173 ++ pfn = pte_pfn(*ptep);
16174 ++ if (pfn_valid(pfn))
16175 ++ __flush_cache_page(vma, addr, PFN_PHYS(pfn));
16176 + }
16177 + }
16178 +
16179 +@@ -591,9 +597,10 @@ flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long
16180 + {
16181 + BUG_ON(!vma->vm_mm->context);
16182 +
16183 +- flush_tlb_page(vma, vmaddr);
16184 +- __flush_cache_page(vma, vmaddr, page_to_phys(pfn_to_page(pfn)));
16185 +-
16186 ++ if (pfn_valid(pfn)) {
16187 ++ flush_tlb_page(vma, vmaddr);
16188 ++ __flush_cache_page(vma, vmaddr, PFN_PHYS(pfn));
16189 ++ }
16190 + }
16191 +
16192 + #ifdef CONFIG_PARISC_TMPALIAS
16193 +diff --git a/arch/parisc/kernel/inventory.c b/arch/parisc/kernel/inventory.c
16194 +index 3295ef4..f0b6722 100644
16195 +--- a/arch/parisc/kernel/inventory.c
16196 ++++ b/arch/parisc/kernel/inventory.c
16197 +@@ -211,6 +211,7 @@ pat_query_module(ulong pcell_loc, ulong mod_index)
16198 + /* REVISIT: who is the consumer of this? not sure yet... */
16199 + dev->mod_info = pa_pdc_cell->mod_info; /* pass to PAT_GET_ENTITY() */
16200 + dev->pmod_loc = pa_pdc_cell->mod_location;
16201 ++ dev->mod0 = pa_pdc_cell->mod[0];
16202 +
16203 + register_parisc_device(dev); /* advertise device */
16204 +
16205 +diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
16206 +index ffbaabe..48cfc85 100644
16207 +--- a/arch/powerpc/include/asm/smp.h
16208 ++++ b/arch/powerpc/include/asm/smp.h
16209 +@@ -145,6 +145,10 @@ extern void __cpu_die(unsigned int cpu);
16210 + #define smp_setup_cpu_maps()
16211 + static inline void inhibit_secondary_onlining(void) {}
16212 + static inline void uninhibit_secondary_onlining(void) {}
16213 ++static inline const struct cpumask *cpu_sibling_mask(int cpu)
16214 ++{
16215 ++ return cpumask_of(cpu);
16216 ++}
16217 +
16218 + #endif /* CONFIG_SMP */
16219 +
16220 +diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
16221 +index 2859a1f..cafad40 100644
16222 +--- a/arch/powerpc/mm/numa.c
16223 ++++ b/arch/powerpc/mm/numa.c
16224 +@@ -27,6 +27,7 @@
16225 + #include <linux/seq_file.h>
16226 + #include <linux/uaccess.h>
16227 + #include <linux/slab.h>
16228 ++#include <asm/cputhreads.h>
16229 + #include <asm/sparsemem.h>
16230 + #include <asm/prom.h>
16231 + #include <asm/smp.h>
16232 +@@ -1319,7 +1320,8 @@ static int update_cpu_associativity_changes_mask(void)
16233 + }
16234 + }
16235 + if (changed) {
16236 +- cpumask_set_cpu(cpu, changes);
16237 ++ cpumask_or(changes, changes, cpu_sibling_mask(cpu));
16238 ++ cpu = cpu_last_thread_sibling(cpu);
16239 + }
16240 + }
16241 +
16242 +@@ -1427,7 +1429,7 @@ static int update_cpu_topology(void *data)
16243 + if (!data)
16244 + return -EINVAL;
16245 +
16246 +- cpu = get_cpu();
16247 ++ cpu = smp_processor_id();
16248 +
16249 + for (update = data; update; update = update->next) {
16250 + if (cpu != update->cpu)
16251 +@@ -1447,12 +1449,12 @@ static int update_cpu_topology(void *data)
16252 + */
16253 + int arch_update_cpu_topology(void)
16254 + {
16255 +- unsigned int cpu, changed = 0;
16256 ++ unsigned int cpu, sibling, changed = 0;
16257 + struct topology_update_data *updates, *ud;
16258 + unsigned int associativity[VPHN_ASSOC_BUFSIZE] = {0};
16259 + cpumask_t updated_cpus;
16260 + struct device *dev;
16261 +- int weight, i = 0;
16262 ++ int weight, new_nid, i = 0;
16263 +
16264 + weight = cpumask_weight(&cpu_associativity_changes_mask);
16265 + if (!weight)
16266 +@@ -1465,19 +1467,46 @@ int arch_update_cpu_topology(void)
16267 + cpumask_clear(&updated_cpus);
16268 +
16269 + for_each_cpu(cpu, &cpu_associativity_changes_mask) {
16270 +- ud = &updates[i++];
16271 +- ud->cpu = cpu;
16272 +- vphn_get_associativity(cpu, associativity);
16273 +- ud->new_nid = associativity_to_nid(associativity);
16274 +-
16275 +- if (ud->new_nid < 0 || !node_online(ud->new_nid))
16276 +- ud->new_nid = first_online_node;
16277 ++ /*
16278 ++ * If siblings aren't flagged for changes, updates list
16279 ++ * will be too short. Skip on this update and set for next
16280 ++ * update.
16281 ++ */
16282 ++ if (!cpumask_subset(cpu_sibling_mask(cpu),
16283 ++ &cpu_associativity_changes_mask)) {
16284 ++ pr_info("Sibling bits not set for associativity "
16285 ++ "change, cpu%d\n", cpu);
16286 ++ cpumask_or(&cpu_associativity_changes_mask,
16287 ++ &cpu_associativity_changes_mask,
16288 ++ cpu_sibling_mask(cpu));
16289 ++ cpu = cpu_last_thread_sibling(cpu);
16290 ++ continue;
16291 ++ }
16292 +
16293 +- ud->old_nid = numa_cpu_lookup_table[cpu];
16294 +- cpumask_set_cpu(cpu, &updated_cpus);
16295 ++ /* Use associativity from first thread for all siblings */
16296 ++ vphn_get_associativity(cpu, associativity);
16297 ++ new_nid = associativity_to_nid(associativity);
16298 ++ if (new_nid < 0 || !node_online(new_nid))
16299 ++ new_nid = first_online_node;
16300 ++
16301 ++ if (new_nid == numa_cpu_lookup_table[cpu]) {
16302 ++ cpumask_andnot(&cpu_associativity_changes_mask,
16303 ++ &cpu_associativity_changes_mask,
16304 ++ cpu_sibling_mask(cpu));
16305 ++ cpu = cpu_last_thread_sibling(cpu);
16306 ++ continue;
16307 ++ }
16308 +
16309 +- if (i < weight)
16310 +- ud->next = &updates[i];
16311 ++ for_each_cpu(sibling, cpu_sibling_mask(cpu)) {
16312 ++ ud = &updates[i++];
16313 ++ ud->cpu = sibling;
16314 ++ ud->new_nid = new_nid;
16315 ++ ud->old_nid = numa_cpu_lookup_table[sibling];
16316 ++ cpumask_set_cpu(sibling, &updated_cpus);
16317 ++ if (i < weight)
16318 ++ ud->next = &updates[i];
16319 ++ }
16320 ++ cpu = cpu_last_thread_sibling(cpu);
16321 + }
16322 +
16323 + stop_machine(update_cpu_topology, &updates[0], &updated_cpus);
16324 +diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
16325 +index da183c5..97dcbea 100644
16326 +--- a/arch/s390/Kconfig
16327 ++++ b/arch/s390/Kconfig
16328 +@@ -227,11 +227,12 @@ config MARCH_Z196
16329 + not work on older machines.
16330 +
16331 + config MARCH_ZEC12
16332 +- bool "IBM zEC12"
16333 ++ bool "IBM zBC12 and zEC12"
16334 + select HAVE_MARCH_ZEC12_FEATURES if 64BIT
16335 + help
16336 +- Select this to enable optimizations for IBM zEC12 (2827 series). The
16337 +- kernel will be slightly faster but will not work on older machines.
16338 ++ Select this to enable optimizations for IBM zBC12 and zEC12 (2828 and
16339 ++ 2827 series). The kernel will be slightly faster but will not work on
16340 ++ older machines.
16341 +
16342 + endchoice
16343 +
16344 +diff --git a/arch/s390/include/asm/bitops.h b/arch/s390/include/asm/bitops.h
16345 +index 4d8604e..7d46767 100644
16346 +--- a/arch/s390/include/asm/bitops.h
16347 ++++ b/arch/s390/include/asm/bitops.h
16348 +@@ -693,7 +693,7 @@ static inline int find_next_bit_left(const unsigned long *addr,
16349 + size -= offset;
16350 + p = addr + offset / BITS_PER_LONG;
16351 + if (bit) {
16352 +- set = __flo_word(0, *p & (~0UL << bit));
16353 ++ set = __flo_word(0, *p & (~0UL >> bit));
16354 + if (set >= size)
16355 + return size + offset;
16356 + if (set < BITS_PER_LONG)
16357 +diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
16358 +index 0a49095..8ad9413 100644
16359 +--- a/arch/s390/kernel/setup.c
16360 ++++ b/arch/s390/kernel/setup.c
16361 +@@ -998,6 +998,7 @@ static void __init setup_hwcaps(void)
16362 + strcpy(elf_platform, "z196");
16363 + break;
16364 + case 0x2827:
16365 ++ case 0x2828:
16366 + strcpy(elf_platform, "zEC12");
16367 + break;
16368 + }
16369 +diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
16370 +index 89ebae4..eba15f1 100644
16371 +--- a/arch/s390/mm/init.c
16372 ++++ b/arch/s390/mm/init.c
16373 +@@ -69,6 +69,7 @@ static void __init setup_zero_pages(void)
16374 + order = 2;
16375 + break;
16376 + case 0x2827: /* zEC12 */
16377 ++ case 0x2828: /* zEC12 */
16378 + default:
16379 + order = 5;
16380 + break;
16381 +diff --git a/arch/s390/oprofile/init.c b/arch/s390/oprofile/init.c
16382 +index ffeb17c..930783d 100644
16383 +--- a/arch/s390/oprofile/init.c
16384 ++++ b/arch/s390/oprofile/init.c
16385 +@@ -440,7 +440,7 @@ static int oprofile_hwsampler_init(struct oprofile_operations *ops)
16386 + switch (id.machine) {
16387 + case 0x2097: case 0x2098: ops->cpu_type = "s390/z10"; break;
16388 + case 0x2817: case 0x2818: ops->cpu_type = "s390/z196"; break;
16389 +- case 0x2827: ops->cpu_type = "s390/zEC12"; break;
16390 ++ case 0x2827: case 0x2828: ops->cpu_type = "s390/zEC12"; break;
16391 + default: return -ENODEV;
16392 + }
16393 + }
16394 +diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
16395 +index 94ab6b9..63bdb29 100644
16396 +--- a/arch/x86/kernel/early-quirks.c
16397 ++++ b/arch/x86/kernel/early-quirks.c
16398 +@@ -196,15 +196,23 @@ static void __init ati_bugs_contd(int num, int slot, int func)
16399 + static void __init intel_remapping_check(int num, int slot, int func)
16400 + {
16401 + u8 revision;
16402 ++ u16 device;
16403 +
16404 ++ device = read_pci_config_16(num, slot, func, PCI_DEVICE_ID);
16405 + revision = read_pci_config_byte(num, slot, func, PCI_REVISION_ID);
16406 +
16407 + /*
16408 +- * Revision 0x13 of this chipset supports irq remapping
16409 +- * but has an erratum that breaks its behavior, flag it as such
16410 ++ * Revision 13 of all triggering devices id in this quirk have
16411 ++ * a problem draining interrupts when irq remapping is enabled,
16412 ++ * and should be flagged as broken. Additionally revisions 0x12
16413 ++ * and 0x22 of device id 0x3405 has this problem.
16414 + */
16415 + if (revision == 0x13)
16416 + set_irq_remapping_broken();
16417 ++ else if ((device == 0x3405) &&
16418 ++ ((revision == 0x12) ||
16419 ++ (revision == 0x22)))
16420 ++ set_irq_remapping_broken();
16421 +
16422 + }
16423 +
16424 +@@ -239,6 +247,8 @@ static struct chipset early_qrk[] __initdata = {
16425 + PCI_CLASS_SERIAL_SMBUS, PCI_ANY_ID, 0, ati_bugs_contd },
16426 + { PCI_VENDOR_ID_INTEL, 0x3403, PCI_CLASS_BRIDGE_HOST,
16427 + PCI_BASE_CLASS_BRIDGE, 0, intel_remapping_check },
16428 ++ { PCI_VENDOR_ID_INTEL, 0x3405, PCI_CLASS_BRIDGE_HOST,
16429 ++ PCI_BASE_CLASS_BRIDGE, 0, intel_remapping_check },
16430 + { PCI_VENDOR_ID_INTEL, 0x3406, PCI_CLASS_BRIDGE_HOST,
16431 + PCI_BASE_CLASS_BRIDGE, 0, intel_remapping_check },
16432 + {}
16433 +diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
16434 +index cb33909..f7ea30d 100644
16435 +--- a/arch/x86/kernel/i387.c
16436 ++++ b/arch/x86/kernel/i387.c
16437 +@@ -116,7 +116,7 @@ static void __cpuinit mxcsr_feature_mask_init(void)
16438 +
16439 + if (cpu_has_fxsr) {
16440 + memset(&fx_scratch, 0, sizeof(struct i387_fxsave_struct));
16441 +- asm volatile("fxsave %0" : : "m" (fx_scratch));
16442 ++ asm volatile("fxsave %0" : "+m" (fx_scratch));
16443 + mask = fx_scratch.mxcsr_mask;
16444 + if (mask == 0)
16445 + mask = 0x0000ffbf;
16446 +diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
16447 +index e710045..9533271 100644
16448 +--- a/drivers/acpi/battery.c
16449 ++++ b/drivers/acpi/battery.c
16450 +@@ -117,6 +117,7 @@ struct acpi_battery {
16451 + struct acpi_device *device;
16452 + struct notifier_block pm_nb;
16453 + unsigned long update_time;
16454 ++ int revision;
16455 + int rate_now;
16456 + int capacity_now;
16457 + int voltage_now;
16458 +@@ -359,6 +360,7 @@ static struct acpi_offsets info_offsets[] = {
16459 + };
16460 +
16461 + static struct acpi_offsets extended_info_offsets[] = {
16462 ++ {offsetof(struct acpi_battery, revision), 0},
16463 + {offsetof(struct acpi_battery, power_unit), 0},
16464 + {offsetof(struct acpi_battery, design_capacity), 0},
16465 + {offsetof(struct acpi_battery, full_charge_capacity), 0},
16466 +diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
16467 +index d89ef86..69b45fc 100644
16468 +--- a/drivers/block/xen-blkfront.c
16469 ++++ b/drivers/block/xen-blkfront.c
16470 +@@ -75,6 +75,7 @@ struct blk_shadow {
16471 + struct blkif_request req;
16472 + struct request *request;
16473 + struct grant *grants_used[BLKIF_MAX_SEGMENTS_PER_REQUEST];
16474 ++ struct scatterlist sg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
16475 + };
16476 +
16477 + static DEFINE_MUTEX(blkfront_mutex);
16478 +@@ -98,7 +99,6 @@ struct blkfront_info
16479 + enum blkif_state connected;
16480 + int ring_ref;
16481 + struct blkif_front_ring ring;
16482 +- struct scatterlist sg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
16483 + unsigned int evtchn, irq;
16484 + struct request_queue *rq;
16485 + struct work_struct work;
16486 +@@ -422,11 +422,11 @@ static int blkif_queue_request(struct request *req)
16487 + ring_req->u.discard.flag = 0;
16488 + } else {
16489 + ring_req->u.rw.nr_segments = blk_rq_map_sg(req->q, req,
16490 +- info->sg);
16491 ++ info->shadow[id].sg);
16492 + BUG_ON(ring_req->u.rw.nr_segments >
16493 + BLKIF_MAX_SEGMENTS_PER_REQUEST);
16494 +
16495 +- for_each_sg(info->sg, sg, ring_req->u.rw.nr_segments, i) {
16496 ++ for_each_sg(info->shadow[id].sg, sg, ring_req->u.rw.nr_segments, i) {
16497 + fsect = sg->offset >> 9;
16498 + lsect = fsect + (sg->length >> 9) - 1;
16499 +
16500 +@@ -867,12 +867,12 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
16501 + struct blkif_response *bret)
16502 + {
16503 + int i = 0;
16504 +- struct bio_vec *bvec;
16505 +- struct req_iterator iter;
16506 +- unsigned long flags;
16507 ++ struct scatterlist *sg;
16508 + char *bvec_data;
16509 + void *shared_data;
16510 +- unsigned int offset = 0;
16511 ++ int nseg;
16512 ++
16513 ++ nseg = s->req.u.rw.nr_segments;
16514 +
16515 + if (bret->operation == BLKIF_OP_READ) {
16516 + /*
16517 +@@ -881,19 +881,16 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
16518 + * than PAGE_SIZE, we have to keep track of the current offset,
16519 + * to be sure we are copying the data from the right shared page.
16520 + */
16521 +- rq_for_each_segment(bvec, s->request, iter) {
16522 +- BUG_ON((bvec->bv_offset + bvec->bv_len) > PAGE_SIZE);
16523 +- if (bvec->bv_offset < offset)
16524 +- i++;
16525 +- BUG_ON(i >= s->req.u.rw.nr_segments);
16526 ++ for_each_sg(s->sg, sg, nseg, i) {
16527 ++ BUG_ON(sg->offset + sg->length > PAGE_SIZE);
16528 + shared_data = kmap_atomic(
16529 + pfn_to_page(s->grants_used[i]->pfn));
16530 +- bvec_data = bvec_kmap_irq(bvec, &flags);
16531 +- memcpy(bvec_data, shared_data + bvec->bv_offset,
16532 +- bvec->bv_len);
16533 +- bvec_kunmap_irq(bvec_data, &flags);
16534 ++ bvec_data = kmap_atomic(sg_page(sg));
16535 ++ memcpy(bvec_data + sg->offset,
16536 ++ shared_data + sg->offset,
16537 ++ sg->length);
16538 ++ kunmap_atomic(bvec_data);
16539 + kunmap_atomic(shared_data);
16540 +- offset = bvec->bv_offset + bvec->bv_len;
16541 + }
16542 + }
16543 + /* Add the persistent grant into the list of free grants */
16544 +@@ -1022,7 +1019,7 @@ static int setup_blkring(struct xenbus_device *dev,
16545 + struct blkfront_info *info)
16546 + {
16547 + struct blkif_sring *sring;
16548 +- int err;
16549 ++ int err, i;
16550 +
16551 + info->ring_ref = GRANT_INVALID_REF;
16552 +
16553 +@@ -1034,7 +1031,8 @@ static int setup_blkring(struct xenbus_device *dev,
16554 + SHARED_RING_INIT(sring);
16555 + FRONT_RING_INIT(&info->ring, sring, PAGE_SIZE);
16556 +
16557 +- sg_init_table(info->sg, BLKIF_MAX_SEGMENTS_PER_REQUEST);
16558 ++ for (i = 0; i < BLK_RING_SIZE; i++)
16559 ++ sg_init_table(info->shadow[i].sg, BLKIF_MAX_SEGMENTS_PER_REQUEST);
16560 +
16561 + /* Allocate memory for grants */
16562 + err = fill_grant_buffer(info, BLK_RING_SIZE *
16563 +diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
16564 +index 11f467c..a12b923 100644
16565 +--- a/drivers/bluetooth/ath3k.c
16566 ++++ b/drivers/bluetooth/ath3k.c
16567 +@@ -91,6 +91,10 @@ static struct usb_device_id ath3k_table[] = {
16568 + { USB_DEVICE(0x0489, 0xe04e) },
16569 + { USB_DEVICE(0x0489, 0xe056) },
16570 + { USB_DEVICE(0x0489, 0xe04d) },
16571 ++ { USB_DEVICE(0x04c5, 0x1330) },
16572 ++ { USB_DEVICE(0x13d3, 0x3402) },
16573 ++ { USB_DEVICE(0x0cf3, 0x3121) },
16574 ++ { USB_DEVICE(0x0cf3, 0xe003) },
16575 +
16576 + /* Atheros AR5BBU12 with sflash firmware */
16577 + { USB_DEVICE(0x0489, 0xE02C) },
16578 +@@ -128,6 +132,10 @@ static struct usb_device_id ath3k_blist_tbl[] = {
16579 + { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 },
16580 + { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 },
16581 + { USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 },
16582 ++ { USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 },
16583 ++ { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 },
16584 ++ { USB_DEVICE(0x0cf3, 0x3121), .driver_info = BTUSB_ATH3012 },
16585 ++ { USB_DEVICE(0x0cf3, 0xe003), .driver_info = BTUSB_ATH3012 },
16586 +
16587 + /* Atheros AR5BBU22 with sflash firmware */
16588 + { USB_DEVICE(0x0489, 0xE03C), .driver_info = BTUSB_ATH3012 },
16589 +@@ -193,24 +201,44 @@ error:
16590 +
16591 + static int ath3k_get_state(struct usb_device *udev, unsigned char *state)
16592 + {
16593 +- int pipe = 0;
16594 ++ int ret, pipe = 0;
16595 ++ char *buf;
16596 ++
16597 ++ buf = kmalloc(sizeof(*buf), GFP_KERNEL);
16598 ++ if (!buf)
16599 ++ return -ENOMEM;
16600 +
16601 + pipe = usb_rcvctrlpipe(udev, 0);
16602 +- return usb_control_msg(udev, pipe, ATH3K_GETSTATE,
16603 +- USB_TYPE_VENDOR | USB_DIR_IN, 0, 0,
16604 +- state, 0x01, USB_CTRL_SET_TIMEOUT);
16605 ++ ret = usb_control_msg(udev, pipe, ATH3K_GETSTATE,
16606 ++ USB_TYPE_VENDOR | USB_DIR_IN, 0, 0,
16607 ++ buf, sizeof(*buf), USB_CTRL_SET_TIMEOUT);
16608 ++
16609 ++ *state = *buf;
16610 ++ kfree(buf);
16611 ++
16612 ++ return ret;
16613 + }
16614 +
16615 + static int ath3k_get_version(struct usb_device *udev,
16616 + struct ath3k_version *version)
16617 + {
16618 +- int pipe = 0;
16619 ++ int ret, pipe = 0;
16620 ++ struct ath3k_version *buf;
16621 ++ const int size = sizeof(*buf);
16622 ++
16623 ++ buf = kmalloc(size, GFP_KERNEL);
16624 ++ if (!buf)
16625 ++ return -ENOMEM;
16626 +
16627 + pipe = usb_rcvctrlpipe(udev, 0);
16628 +- return usb_control_msg(udev, pipe, ATH3K_GETVERSION,
16629 +- USB_TYPE_VENDOR | USB_DIR_IN, 0, 0, version,
16630 +- sizeof(struct ath3k_version),
16631 +- USB_CTRL_SET_TIMEOUT);
16632 ++ ret = usb_control_msg(udev, pipe, ATH3K_GETVERSION,
16633 ++ USB_TYPE_VENDOR | USB_DIR_IN, 0, 0,
16634 ++ buf, size, USB_CTRL_SET_TIMEOUT);
16635 ++
16636 ++ memcpy(version, buf, size);
16637 ++ kfree(buf);
16638 ++
16639 ++ return ret;
16640 + }
16641 +
16642 + static int ath3k_load_fwfile(struct usb_device *udev,
16643 +diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
16644 +index 7a7e5f8..d0b3d90 100644
16645 +--- a/drivers/bluetooth/btusb.c
16646 ++++ b/drivers/bluetooth/btusb.c
16647 +@@ -57,6 +57,9 @@ static struct usb_device_id btusb_table[] = {
16648 + /* Apple-specific (Broadcom) devices */
16649 + { USB_VENDOR_AND_INTERFACE_INFO(0x05ac, 0xff, 0x01, 0x01) },
16650 +
16651 ++ /* MediaTek MT76x0E */
16652 ++ { USB_DEVICE(0x0e8d, 0x763f) },
16653 ++
16654 + /* Broadcom SoftSailing reporting vendor specific */
16655 + { USB_DEVICE(0x0a5c, 0x21e1) },
16656 +
16657 +@@ -151,6 +154,10 @@ static struct usb_device_id blacklist_table[] = {
16658 + { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 },
16659 + { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 },
16660 + { USB_DEVICE(0x0489, 0xe04d), .driver_info = BTUSB_ATH3012 },
16661 ++ { USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 },
16662 ++ { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 },
16663 ++ { USB_DEVICE(0x0cf3, 0x3121), .driver_info = BTUSB_ATH3012 },
16664 ++ { USB_DEVICE(0x0cf3, 0xe003), .driver_info = BTUSB_ATH3012 },
16665 +
16666 + /* Atheros AR5BBU12 with sflash firmware */
16667 + { USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE },
16668 +@@ -1092,7 +1099,7 @@ static int btusb_setup_intel_patching(struct hci_dev *hdev,
16669 + if (IS_ERR(skb)) {
16670 + BT_ERR("%s sending Intel patch command (0x%4.4x) failed (%ld)",
16671 + hdev->name, cmd->opcode, PTR_ERR(skb));
16672 +- return -PTR_ERR(skb);
16673 ++ return PTR_ERR(skb);
16674 + }
16675 +
16676 + /* It ensures that the returned event matches the event data read from
16677 +@@ -1144,7 +1151,7 @@ static int btusb_setup_intel(struct hci_dev *hdev)
16678 + if (IS_ERR(skb)) {
16679 + BT_ERR("%s sending initial HCI reset command failed (%ld)",
16680 + hdev->name, PTR_ERR(skb));
16681 +- return -PTR_ERR(skb);
16682 ++ return PTR_ERR(skb);
16683 + }
16684 + kfree_skb(skb);
16685 +
16686 +@@ -1158,7 +1165,7 @@ static int btusb_setup_intel(struct hci_dev *hdev)
16687 + if (IS_ERR(skb)) {
16688 + BT_ERR("%s reading Intel fw version command failed (%ld)",
16689 + hdev->name, PTR_ERR(skb));
16690 +- return -PTR_ERR(skb);
16691 ++ return PTR_ERR(skb);
16692 + }
16693 +
16694 + if (skb->len != sizeof(*ver)) {
16695 +@@ -1216,7 +1223,7 @@ static int btusb_setup_intel(struct hci_dev *hdev)
16696 + BT_ERR("%s entering Intel manufacturer mode failed (%ld)",
16697 + hdev->name, PTR_ERR(skb));
16698 + release_firmware(fw);
16699 +- return -PTR_ERR(skb);
16700 ++ return PTR_ERR(skb);
16701 + }
16702 +
16703 + if (skb->data[0]) {
16704 +@@ -1273,7 +1280,7 @@ static int btusb_setup_intel(struct hci_dev *hdev)
16705 + if (IS_ERR(skb)) {
16706 + BT_ERR("%s exiting Intel manufacturer mode failed (%ld)",
16707 + hdev->name, PTR_ERR(skb));
16708 +- return -PTR_ERR(skb);
16709 ++ return PTR_ERR(skb);
16710 + }
16711 + kfree_skb(skb);
16712 +
16713 +@@ -1289,7 +1296,7 @@ exit_mfg_disable:
16714 + if (IS_ERR(skb)) {
16715 + BT_ERR("%s exiting Intel manufacturer mode failed (%ld)",
16716 + hdev->name, PTR_ERR(skb));
16717 +- return -PTR_ERR(skb);
16718 ++ return PTR_ERR(skb);
16719 + }
16720 + kfree_skb(skb);
16721 +
16722 +@@ -1307,7 +1314,7 @@ exit_mfg_deactivate:
16723 + if (IS_ERR(skb)) {
16724 + BT_ERR("%s exiting Intel manufacturer mode failed (%ld)",
16725 + hdev->name, PTR_ERR(skb));
16726 +- return -PTR_ERR(skb);
16727 ++ return PTR_ERR(skb);
16728 + }
16729 + kfree_skb(skb);
16730 +
16731 +diff --git a/drivers/char/agp/parisc-agp.c b/drivers/char/agp/parisc-agp.c
16732 +index 94821ab..9576fad 100644
16733 +--- a/drivers/char/agp/parisc-agp.c
16734 ++++ b/drivers/char/agp/parisc-agp.c
16735 +@@ -129,7 +129,8 @@ parisc_agp_insert_memory(struct agp_memory *mem, off_t pg_start, int type)
16736 + off_t j, io_pg_start;
16737 + int io_pg_count;
16738 +
16739 +- if (type != 0 || mem->type != 0) {
16740 ++ if (type != mem->type ||
16741 ++ agp_bridge->driver->agp_type_to_mask_type(agp_bridge, type)) {
16742 + return -EINVAL;
16743 + }
16744 +
16745 +@@ -175,7 +176,8 @@ parisc_agp_remove_memory(struct agp_memory *mem, off_t pg_start, int type)
16746 + struct _parisc_agp_info *info = &parisc_agp_info;
16747 + int i, io_pg_start, io_pg_count;
16748 +
16749 +- if (type != 0 || mem->type != 0) {
16750 ++ if (type != mem->type ||
16751 ++ agp_bridge->driver->agp_type_to_mask_type(agp_bridge, type)) {
16752 + return -EINVAL;
16753 + }
16754 +
16755 +diff --git a/drivers/char/hw_random/bcm2835-rng.c b/drivers/char/hw_random/bcm2835-rng.c
16756 +index eb7f147..43577ca 100644
16757 +--- a/drivers/char/hw_random/bcm2835-rng.c
16758 ++++ b/drivers/char/hw_random/bcm2835-rng.c
16759 +@@ -110,4 +110,4 @@ module_platform_driver(bcm2835_rng_driver);
16760 +
16761 + MODULE_AUTHOR("Lubomir Rintel <lkundrak@××.sk>");
16762 + MODULE_DESCRIPTION("BCM2835 Random Number Generator (RNG) driver");
16763 +-MODULE_LICENSE("GPLv2");
16764 ++MODULE_LICENSE("GPL v2");
16765 +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
16766 +index 178fe7a..6485547 100644
16767 +--- a/drivers/cpufreq/cpufreq.c
16768 ++++ b/drivers/cpufreq/cpufreq.c
16769 +@@ -1075,14 +1075,11 @@ static int __cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif
16770 + __func__, cpu_dev->id, cpu);
16771 + }
16772 +
16773 +- if ((cpus == 1) && (cpufreq_driver->target))
16774 +- __cpufreq_governor(data, CPUFREQ_GOV_POLICY_EXIT);
16775 +-
16776 +- pr_debug("%s: removing link, cpu: %d\n", __func__, cpu);
16777 +- cpufreq_cpu_put(data);
16778 +-
16779 + /* If cpu is last user of policy, free policy */
16780 + if (cpus == 1) {
16781 ++ if (cpufreq_driver->target)
16782 ++ __cpufreq_governor(data, CPUFREQ_GOV_POLICY_EXIT);
16783 ++
16784 + lock_policy_rwsem_read(cpu);
16785 + kobj = &data->kobj;
16786 + cmp = &data->kobj_unregister;
16787 +@@ -1103,9 +1100,13 @@ static int __cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif
16788 + free_cpumask_var(data->related_cpus);
16789 + free_cpumask_var(data->cpus);
16790 + kfree(data);
16791 +- } else if (cpufreq_driver->target) {
16792 +- __cpufreq_governor(data, CPUFREQ_GOV_START);
16793 +- __cpufreq_governor(data, CPUFREQ_GOV_LIMITS);
16794 ++ } else {
16795 ++ pr_debug("%s: removing link, cpu: %d\n", __func__, cpu);
16796 ++ cpufreq_cpu_put(data);
16797 ++ if (cpufreq_driver->target) {
16798 ++ __cpufreq_governor(data, CPUFREQ_GOV_START);
16799 ++ __cpufreq_governor(data, CPUFREQ_GOV_LIMITS);
16800 ++ }
16801 + }
16802 +
16803 + per_cpu(cpufreq_policy_cpu, cpu) = -1;
16804 +diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
16805 +index fe343a0..bc580b6 100644
16806 +--- a/drivers/cpuidle/governors/menu.c
16807 ++++ b/drivers/cpuidle/governors/menu.c
16808 +@@ -28,13 +28,6 @@
16809 + #define MAX_INTERESTING 50000
16810 + #define STDDEV_THRESH 400
16811 +
16812 +-/* 60 * 60 > STDDEV_THRESH * INTERVALS = 400 * 8 */
16813 +-#define MAX_DEVIATION 60
16814 +-
16815 +-static DEFINE_PER_CPU(struct hrtimer, menu_hrtimer);
16816 +-static DEFINE_PER_CPU(int, hrtimer_status);
16817 +-/* menu hrtimer mode */
16818 +-enum {MENU_HRTIMER_STOP, MENU_HRTIMER_REPEAT, MENU_HRTIMER_GENERAL};
16819 +
16820 + /*
16821 + * Concepts and ideas behind the menu governor
16822 +@@ -116,13 +109,6 @@ enum {MENU_HRTIMER_STOP, MENU_HRTIMER_REPEAT, MENU_HRTIMER_GENERAL};
16823 + *
16824 + */
16825 +
16826 +-/*
16827 +- * The C-state residency is so long that is is worthwhile to exit
16828 +- * from the shallow C-state and re-enter into a deeper C-state.
16829 +- */
16830 +-static unsigned int perfect_cstate_ms __read_mostly = 30;
16831 +-module_param(perfect_cstate_ms, uint, 0000);
16832 +-
16833 + struct menu_device {
16834 + int last_state_idx;
16835 + int needs_update;
16836 +@@ -205,52 +191,17 @@ static u64 div_round64(u64 dividend, u32 divisor)
16837 + return div_u64(dividend + (divisor / 2), divisor);
16838 + }
16839 +
16840 +-/* Cancel the hrtimer if it is not triggered yet */
16841 +-void menu_hrtimer_cancel(void)
16842 +-{
16843 +- int cpu = smp_processor_id();
16844 +- struct hrtimer *hrtmr = &per_cpu(menu_hrtimer, cpu);
16845 +-
16846 +- /* The timer is still not time out*/
16847 +- if (per_cpu(hrtimer_status, cpu)) {
16848 +- hrtimer_cancel(hrtmr);
16849 +- per_cpu(hrtimer_status, cpu) = MENU_HRTIMER_STOP;
16850 +- }
16851 +-}
16852 +-EXPORT_SYMBOL_GPL(menu_hrtimer_cancel);
16853 +-
16854 +-/* Call back for hrtimer is triggered */
16855 +-static enum hrtimer_restart menu_hrtimer_notify(struct hrtimer *hrtimer)
16856 +-{
16857 +- int cpu = smp_processor_id();
16858 +- struct menu_device *data = &per_cpu(menu_devices, cpu);
16859 +-
16860 +- /* In general case, the expected residency is much larger than
16861 +- * deepest C-state target residency, but prediction logic still
16862 +- * predicts a small predicted residency, so the prediction
16863 +- * history is totally broken if the timer is triggered.
16864 +- * So reset the correction factor.
16865 +- */
16866 +- if (per_cpu(hrtimer_status, cpu) == MENU_HRTIMER_GENERAL)
16867 +- data->correction_factor[data->bucket] = RESOLUTION * DECAY;
16868 +-
16869 +- per_cpu(hrtimer_status, cpu) = MENU_HRTIMER_STOP;
16870 +-
16871 +- return HRTIMER_NORESTART;
16872 +-}
16873 +-
16874 + /*
16875 + * Try detecting repeating patterns by keeping track of the last 8
16876 + * intervals, and checking if the standard deviation of that set
16877 + * of points is below a threshold. If it is... then use the
16878 + * average of these 8 points as the estimated value.
16879 + */
16880 +-static u32 get_typical_interval(struct menu_device *data)
16881 ++static void get_typical_interval(struct menu_device *data)
16882 + {
16883 + int i = 0, divisor = 0;
16884 + uint64_t max = 0, avg = 0, stddev = 0;
16885 + int64_t thresh = LLONG_MAX; /* Discard outliers above this value. */
16886 +- unsigned int ret = 0;
16887 +
16888 + again:
16889 +
16890 +@@ -291,16 +242,13 @@ again:
16891 + if (((avg > stddev * 6) && (divisor * 4 >= INTERVALS * 3))
16892 + || stddev <= 20) {
16893 + data->predicted_us = avg;
16894 +- ret = 1;
16895 +- return ret;
16896 ++ return;
16897 +
16898 + } else if ((divisor * 4) > INTERVALS * 3) {
16899 + /* Exclude the max interval */
16900 + thresh = max - 1;
16901 + goto again;
16902 + }
16903 +-
16904 +- return ret;
16905 + }
16906 +
16907 + /**
16908 +@@ -315,9 +263,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
16909 + int i;
16910 + int multiplier;
16911 + struct timespec t;
16912 +- int repeat = 0, low_predicted = 0;
16913 +- int cpu = smp_processor_id();
16914 +- struct hrtimer *hrtmr = &per_cpu(menu_hrtimer, cpu);
16915 +
16916 + if (data->needs_update) {
16917 + menu_update(drv, dev);
16918 +@@ -352,7 +297,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
16919 + data->predicted_us = div_round64(data->expected_us * data->correction_factor[data->bucket],
16920 + RESOLUTION * DECAY);
16921 +
16922 +- repeat = get_typical_interval(data);
16923 ++ get_typical_interval(data);
16924 +
16925 + /*
16926 + * We want to default to C1 (hlt), not to busy polling
16927 +@@ -373,10 +318,8 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
16928 +
16929 + if (s->disabled || su->disable)
16930 + continue;
16931 +- if (s->target_residency > data->predicted_us) {
16932 +- low_predicted = 1;
16933 ++ if (s->target_residency > data->predicted_us)
16934 + continue;
16935 +- }
16936 + if (s->exit_latency > latency_req)
16937 + continue;
16938 + if (s->exit_latency * multiplier > data->predicted_us)
16939 +@@ -386,44 +329,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
16940 + data->exit_us = s->exit_latency;
16941 + }
16942 +
16943 +- /* not deepest C-state chosen for low predicted residency */
16944 +- if (low_predicted) {
16945 +- unsigned int timer_us = 0;
16946 +- unsigned int perfect_us = 0;
16947 +-
16948 +- /*
16949 +- * Set a timer to detect whether this sleep is much
16950 +- * longer than repeat mode predicted. If the timer
16951 +- * triggers, the code will evaluate whether to put
16952 +- * the CPU into a deeper C-state.
16953 +- * The timer is cancelled on CPU wakeup.
16954 +- */
16955 +- timer_us = 2 * (data->predicted_us + MAX_DEVIATION);
16956 +-
16957 +- perfect_us = perfect_cstate_ms * 1000;
16958 +-
16959 +- if (repeat && (4 * timer_us < data->expected_us)) {
16960 +- RCU_NONIDLE(hrtimer_start(hrtmr,
16961 +- ns_to_ktime(1000 * timer_us),
16962 +- HRTIMER_MODE_REL_PINNED));
16963 +- /* In repeat case, menu hrtimer is started */
16964 +- per_cpu(hrtimer_status, cpu) = MENU_HRTIMER_REPEAT;
16965 +- } else if (perfect_us < data->expected_us) {
16966 +- /*
16967 +- * The next timer is long. This could be because
16968 +- * we did not make a useful prediction.
16969 +- * In that case, it makes sense to re-enter
16970 +- * into a deeper C-state after some time.
16971 +- */
16972 +- RCU_NONIDLE(hrtimer_start(hrtmr,
16973 +- ns_to_ktime(1000 * timer_us),
16974 +- HRTIMER_MODE_REL_PINNED));
16975 +- /* In general case, menu hrtimer is started */
16976 +- per_cpu(hrtimer_status, cpu) = MENU_HRTIMER_GENERAL;
16977 +- }
16978 +-
16979 +- }
16980 +-
16981 + return data->last_state_idx;
16982 + }
16983 +
16984 +@@ -514,9 +419,6 @@ static int menu_enable_device(struct cpuidle_driver *drv,
16985 + struct cpuidle_device *dev)
16986 + {
16987 + struct menu_device *data = &per_cpu(menu_devices, dev->cpu);
16988 +- struct hrtimer *t = &per_cpu(menu_hrtimer, dev->cpu);
16989 +- hrtimer_init(t, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
16990 +- t->function = menu_hrtimer_notify;
16991 +
16992 + memset(data, 0, sizeof(struct menu_device));
16993 +
16994 +diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
16995 +index 7ec82f0..4c2f465 100644
16996 +--- a/drivers/dma/pl330.c
16997 ++++ b/drivers/dma/pl330.c
16998 +@@ -2527,6 +2527,10 @@ static dma_cookie_t pl330_tx_submit(struct dma_async_tx_descriptor *tx)
16999 + /* Assign cookies to all nodes */
17000 + while (!list_empty(&last->node)) {
17001 + desc = list_entry(last->node.next, struct dma_pl330_desc, node);
17002 ++ if (pch->cyclic) {
17003 ++ desc->txd.callback = last->txd.callback;
17004 ++ desc->txd.callback_param = last->txd.callback_param;
17005 ++ }
17006 +
17007 + dma_cookie_assign(&desc->txd);
17008 +
17009 +@@ -2710,45 +2714,82 @@ static struct dma_async_tx_descriptor *pl330_prep_dma_cyclic(
17010 + size_t period_len, enum dma_transfer_direction direction,
17011 + unsigned long flags, void *context)
17012 + {
17013 +- struct dma_pl330_desc *desc;
17014 ++ struct dma_pl330_desc *desc = NULL, *first = NULL;
17015 + struct dma_pl330_chan *pch = to_pchan(chan);
17016 ++ struct dma_pl330_dmac *pdmac = pch->dmac;
17017 ++ unsigned int i;
17018 + dma_addr_t dst;
17019 + dma_addr_t src;
17020 +
17021 +- desc = pl330_get_desc(pch);
17022 +- if (!desc) {
17023 +- dev_err(pch->dmac->pif.dev, "%s:%d Unable to fetch desc\n",
17024 +- __func__, __LINE__);
17025 ++ if (len % period_len != 0)
17026 + return NULL;
17027 +- }
17028 +
17029 +- switch (direction) {
17030 +- case DMA_MEM_TO_DEV:
17031 +- desc->rqcfg.src_inc = 1;
17032 +- desc->rqcfg.dst_inc = 0;
17033 +- desc->req.rqtype = MEMTODEV;
17034 +- src = dma_addr;
17035 +- dst = pch->fifo_addr;
17036 +- break;
17037 +- case DMA_DEV_TO_MEM:
17038 +- desc->rqcfg.src_inc = 0;
17039 +- desc->rqcfg.dst_inc = 1;
17040 +- desc->req.rqtype = DEVTOMEM;
17041 +- src = pch->fifo_addr;
17042 +- dst = dma_addr;
17043 +- break;
17044 +- default:
17045 ++ if (!is_slave_direction(direction)) {
17046 + dev_err(pch->dmac->pif.dev, "%s:%d Invalid dma direction\n",
17047 + __func__, __LINE__);
17048 + return NULL;
17049 + }
17050 +
17051 +- desc->rqcfg.brst_size = pch->burst_sz;
17052 +- desc->rqcfg.brst_len = 1;
17053 ++ for (i = 0; i < len / period_len; i++) {
17054 ++ desc = pl330_get_desc(pch);
17055 ++ if (!desc) {
17056 ++ dev_err(pch->dmac->pif.dev, "%s:%d Unable to fetch desc\n",
17057 ++ __func__, __LINE__);
17058 +
17059 +- pch->cyclic = true;
17060 ++ if (!first)
17061 ++ return NULL;
17062 ++
17063 ++ spin_lock_irqsave(&pdmac->pool_lock, flags);
17064 ++
17065 ++ while (!list_empty(&first->node)) {
17066 ++ desc = list_entry(first->node.next,
17067 ++ struct dma_pl330_desc, node);
17068 ++ list_move_tail(&desc->node, &pdmac->desc_pool);
17069 ++ }
17070 ++
17071 ++ list_move_tail(&first->node, &pdmac->desc_pool);
17072 +
17073 +- fill_px(&desc->px, dst, src, period_len);
17074 ++ spin_unlock_irqrestore(&pdmac->pool_lock, flags);
17075 ++
17076 ++ return NULL;
17077 ++ }
17078 ++
17079 ++ switch (direction) {
17080 ++ case DMA_MEM_TO_DEV:
17081 ++ desc->rqcfg.src_inc = 1;
17082 ++ desc->rqcfg.dst_inc = 0;
17083 ++ desc->req.rqtype = MEMTODEV;
17084 ++ src = dma_addr;
17085 ++ dst = pch->fifo_addr;
17086 ++ break;
17087 ++ case DMA_DEV_TO_MEM:
17088 ++ desc->rqcfg.src_inc = 0;
17089 ++ desc->rqcfg.dst_inc = 1;
17090 ++ desc->req.rqtype = DEVTOMEM;
17091 ++ src = pch->fifo_addr;
17092 ++ dst = dma_addr;
17093 ++ break;
17094 ++ default:
17095 ++ break;
17096 ++ }
17097 ++
17098 ++ desc->rqcfg.brst_size = pch->burst_sz;
17099 ++ desc->rqcfg.brst_len = 1;
17100 ++ fill_px(&desc->px, dst, src, period_len);
17101 ++
17102 ++ if (!first)
17103 ++ first = desc;
17104 ++ else
17105 ++ list_add_tail(&desc->node, &first->node);
17106 ++
17107 ++ dma_addr += period_len;
17108 ++ }
17109 ++
17110 ++ if (!desc)
17111 ++ return NULL;
17112 ++
17113 ++ pch->cyclic = true;
17114 ++ desc->txd.flags = flags;
17115 +
17116 + return &desc->txd;
17117 + }
17118 +diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
17119 +index fb961bb..16e674a 100644
17120 +--- a/drivers/gpu/drm/i915/intel_ddi.c
17121 ++++ b/drivers/gpu/drm/i915/intel_ddi.c
17122 +@@ -684,7 +684,7 @@ static void intel_ddi_mode_set(struct drm_encoder *encoder,
17123 + struct intel_digital_port *intel_dig_port =
17124 + enc_to_dig_port(encoder);
17125 +
17126 +- intel_dp->DP = intel_dig_port->port_reversal |
17127 ++ intel_dp->DP = intel_dig_port->saved_port_bits |
17128 + DDI_BUF_CTL_ENABLE | DDI_BUF_EMP_400MV_0DB_HSW;
17129 + switch (intel_dp->lane_count) {
17130 + case 1:
17131 +@@ -1324,7 +1324,8 @@ static void intel_enable_ddi(struct intel_encoder *intel_encoder)
17132 + * enabling the port.
17133 + */
17134 + I915_WRITE(DDI_BUF_CTL(port),
17135 +- intel_dig_port->port_reversal | DDI_BUF_CTL_ENABLE);
17136 ++ intel_dig_port->saved_port_bits |
17137 ++ DDI_BUF_CTL_ENABLE);
17138 + } else if (type == INTEL_OUTPUT_EDP) {
17139 + struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
17140 +
17141 +@@ -1543,8 +1544,9 @@ void intel_ddi_init(struct drm_device *dev, enum port port)
17142 + intel_encoder->get_hw_state = intel_ddi_get_hw_state;
17143 +
17144 + intel_dig_port->port = port;
17145 +- intel_dig_port->port_reversal = I915_READ(DDI_BUF_CTL(port)) &
17146 +- DDI_BUF_PORT_REVERSAL;
17147 ++ intel_dig_port->saved_port_bits = I915_READ(DDI_BUF_CTL(port)) &
17148 ++ (DDI_BUF_PORT_REVERSAL |
17149 ++ DDI_A_4_LANES);
17150 + if (hdmi_connector)
17151 + intel_dig_port->hdmi.hdmi_reg = DDI_BUF_CTL(port);
17152 + intel_dig_port->dp.output_reg = DDI_BUF_CTL(port);
17153 +diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
17154 +index e1f4e6e..eea5982 100644
17155 +--- a/drivers/gpu/drm/i915/intel_display.c
17156 ++++ b/drivers/gpu/drm/i915/intel_display.c
17157 +@@ -4333,7 +4333,8 @@ static void vlv_update_pll(struct intel_crtc *crtc)
17158 +
17159 + static void i9xx_update_pll(struct intel_crtc *crtc,
17160 + intel_clock_t *reduced_clock,
17161 +- int num_connectors)
17162 ++ int num_connectors,
17163 ++ bool needs_tv_clock)
17164 + {
17165 + struct drm_device *dev = crtc->base.dev;
17166 + struct drm_i915_private *dev_priv = dev->dev_private;
17167 +@@ -4391,7 +4392,7 @@ static void i9xx_update_pll(struct intel_crtc *crtc,
17168 + if (INTEL_INFO(dev)->gen >= 4)
17169 + dpll |= (6 << PLL_LOAD_PULSE_PHASE_SHIFT);
17170 +
17171 +- if (is_sdvo && intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_TVOUT))
17172 ++ if (is_sdvo && needs_tv_clock)
17173 + dpll |= PLL_REF_INPUT_TVCLKINBC;
17174 + else if (intel_pipe_has_type(&crtc->base, INTEL_OUTPUT_TVOUT))
17175 + /* XXX: just matching BIOS for now */
17176 +@@ -4716,7 +4717,8 @@ static int i9xx_crtc_mode_set(struct drm_crtc *crtc,
17177 + else
17178 + i9xx_update_pll(intel_crtc,
17179 + has_reduced_clock ? &reduced_clock : NULL,
17180 +- num_connectors);
17181 ++ num_connectors,
17182 ++ is_sdvo && is_tv);
17183 +
17184 + /* Set up the display plane register */
17185 + dspcntr = DISPPLANE_GAMMA_ENABLE;
17186 +diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
17187 +index 624a9e6..7cd5584 100644
17188 +--- a/drivers/gpu/drm/i915/intel_drv.h
17189 ++++ b/drivers/gpu/drm/i915/intel_drv.h
17190 +@@ -426,7 +426,7 @@ struct intel_dp {
17191 + struct intel_digital_port {
17192 + struct intel_encoder base;
17193 + enum port port;
17194 +- u32 port_reversal;
17195 ++ u32 saved_port_bits;
17196 + struct intel_dp dp;
17197 + struct intel_hdmi hdmi;
17198 + };
17199 +diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
17200 +index f4dcfdd..aad18e6 100644
17201 +--- a/drivers/gpu/drm/radeon/radeon.h
17202 ++++ b/drivers/gpu/drm/radeon/radeon.h
17203 +@@ -1145,6 +1145,8 @@ struct radeon_uvd {
17204 + struct radeon_bo *vcpu_bo;
17205 + void *cpu_addr;
17206 + uint64_t gpu_addr;
17207 ++ void *saved_bo;
17208 ++ unsigned fw_size;
17209 + atomic_t handles[RADEON_MAX_UVD_HANDLES];
17210 + struct drm_file *filp[RADEON_MAX_UVD_HANDLES];
17211 + struct delayed_work idle_work;
17212 +@@ -1684,7 +1686,6 @@ struct radeon_device {
17213 + const struct firmware *rlc_fw; /* r6/700 RLC firmware */
17214 + const struct firmware *mc_fw; /* NI MC firmware */
17215 + const struct firmware *ce_fw; /* SI CE firmware */
17216 +- const struct firmware *uvd_fw; /* UVD firmware */
17217 + struct r600_blit r600_blit;
17218 + struct r600_vram_scratch vram_scratch;
17219 + int msi_enabled; /* msi enabled */
17220 +diff --git a/drivers/gpu/drm/radeon/radeon_asic.c b/drivers/gpu/drm/radeon/radeon_asic.c
17221 +index a2802b47..de36c47 100644
17222 +--- a/drivers/gpu/drm/radeon/radeon_asic.c
17223 ++++ b/drivers/gpu/drm/radeon/radeon_asic.c
17224 +@@ -986,8 +986,8 @@ static struct radeon_asic r600_asic = {
17225 + .blit_ring_index = RADEON_RING_TYPE_GFX_INDEX,
17226 + .dma = &r600_copy_dma,
17227 + .dma_ring_index = R600_RING_TYPE_DMA_INDEX,
17228 +- .copy = &r600_copy_dma,
17229 +- .copy_ring_index = R600_RING_TYPE_DMA_INDEX,
17230 ++ .copy = &r600_copy_blit,
17231 ++ .copy_ring_index = RADEON_RING_TYPE_GFX_INDEX,
17232 + },
17233 + .surface = {
17234 + .set_reg = r600_set_surface_reg,
17235 +@@ -1074,8 +1074,8 @@ static struct radeon_asic rs780_asic = {
17236 + .blit_ring_index = RADEON_RING_TYPE_GFX_INDEX,
17237 + .dma = &r600_copy_dma,
17238 + .dma_ring_index = R600_RING_TYPE_DMA_INDEX,
17239 +- .copy = &r600_copy_dma,
17240 +- .copy_ring_index = R600_RING_TYPE_DMA_INDEX,
17241 ++ .copy = &r600_copy_blit,
17242 ++ .copy_ring_index = RADEON_RING_TYPE_GFX_INDEX,
17243 + },
17244 + .surface = {
17245 + .set_reg = r600_set_surface_reg,
17246 +diff --git a/drivers/gpu/drm/radeon/radeon_fence.c b/drivers/gpu/drm/radeon/radeon_fence.c
17247 +index ddb8f8e..7ddb0ef 100644
17248 +--- a/drivers/gpu/drm/radeon/radeon_fence.c
17249 ++++ b/drivers/gpu/drm/radeon/radeon_fence.c
17250 +@@ -782,7 +782,7 @@ int radeon_fence_driver_start_ring(struct radeon_device *rdev, int ring)
17251 +
17252 + } else {
17253 + /* put fence directly behind firmware */
17254 +- index = ALIGN(rdev->uvd_fw->size, 8);
17255 ++ index = ALIGN(rdev->uvd.fw_size, 8);
17256 + rdev->fence_drv[ring].cpu_addr = rdev->uvd.cpu_addr + index;
17257 + rdev->fence_drv[ring].gpu_addr = rdev->uvd.gpu_addr + index;
17258 + }
17259 +diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c
17260 +index cad735d..1b3a91b 100644
17261 +--- a/drivers/gpu/drm/radeon/radeon_uvd.c
17262 ++++ b/drivers/gpu/drm/radeon/radeon_uvd.c
17263 +@@ -55,6 +55,7 @@ static void radeon_uvd_idle_work_handler(struct work_struct *work);
17264 + int radeon_uvd_init(struct radeon_device *rdev)
17265 + {
17266 + struct platform_device *pdev;
17267 ++ const struct firmware *fw;
17268 + unsigned long bo_size;
17269 + const char *fw_name;
17270 + int i, r;
17271 +@@ -104,7 +105,7 @@ int radeon_uvd_init(struct radeon_device *rdev)
17272 + return -EINVAL;
17273 + }
17274 +
17275 +- r = request_firmware(&rdev->uvd_fw, fw_name, &pdev->dev);
17276 ++ r = request_firmware(&fw, fw_name, &pdev->dev);
17277 + if (r) {
17278 + dev_err(rdev->dev, "radeon_uvd: Can't load firmware \"%s\"\n",
17279 + fw_name);
17280 +@@ -114,7 +115,7 @@ int radeon_uvd_init(struct radeon_device *rdev)
17281 +
17282 + platform_device_unregister(pdev);
17283 +
17284 +- bo_size = RADEON_GPU_PAGE_ALIGN(rdev->uvd_fw->size + 8) +
17285 ++ bo_size = RADEON_GPU_PAGE_ALIGN(fw->size + 8) +
17286 + RADEON_UVD_STACK_SIZE + RADEON_UVD_HEAP_SIZE;
17287 + r = radeon_bo_create(rdev, bo_size, PAGE_SIZE, true,
17288 + RADEON_GEM_DOMAIN_VRAM, NULL, &rdev->uvd.vcpu_bo);
17289 +@@ -123,16 +124,35 @@ int radeon_uvd_init(struct radeon_device *rdev)
17290 + return r;
17291 + }
17292 +
17293 +- r = radeon_uvd_resume(rdev);
17294 +- if (r)
17295 ++ r = radeon_bo_reserve(rdev->uvd.vcpu_bo, false);
17296 ++ if (r) {
17297 ++ radeon_bo_unref(&rdev->uvd.vcpu_bo);
17298 ++ dev_err(rdev->dev, "(%d) failed to reserve UVD bo\n", r);
17299 + return r;
17300 ++ }
17301 +
17302 +- memset(rdev->uvd.cpu_addr, 0, bo_size);
17303 +- memcpy(rdev->uvd.cpu_addr, rdev->uvd_fw->data, rdev->uvd_fw->size);
17304 ++ r = radeon_bo_pin(rdev->uvd.vcpu_bo, RADEON_GEM_DOMAIN_VRAM,
17305 ++ &rdev->uvd.gpu_addr);
17306 ++ if (r) {
17307 ++ radeon_bo_unreserve(rdev->uvd.vcpu_bo);
17308 ++ radeon_bo_unref(&rdev->uvd.vcpu_bo);
17309 ++ dev_err(rdev->dev, "(%d) UVD bo pin failed\n", r);
17310 ++ return r;
17311 ++ }
17312 +
17313 +- r = radeon_uvd_suspend(rdev);
17314 +- if (r)
17315 ++ r = radeon_bo_kmap(rdev->uvd.vcpu_bo, &rdev->uvd.cpu_addr);
17316 ++ if (r) {
17317 ++ dev_err(rdev->dev, "(%d) UVD map failed\n", r);
17318 + return r;
17319 ++ }
17320 ++
17321 ++ radeon_bo_unreserve(rdev->uvd.vcpu_bo);
17322 ++
17323 ++ rdev->uvd.fw_size = fw->size;
17324 ++ memset(rdev->uvd.cpu_addr, 0, bo_size);
17325 ++ memcpy(rdev->uvd.cpu_addr, fw->data, fw->size);
17326 ++
17327 ++ release_firmware(fw);
17328 +
17329 + for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) {
17330 + atomic_set(&rdev->uvd.handles[i], 0);
17331 +@@ -144,71 +164,47 @@ int radeon_uvd_init(struct radeon_device *rdev)
17332 +
17333 + void radeon_uvd_fini(struct radeon_device *rdev)
17334 + {
17335 +- radeon_uvd_suspend(rdev);
17336 +- radeon_bo_unref(&rdev->uvd.vcpu_bo);
17337 +-}
17338 +-
17339 +-int radeon_uvd_suspend(struct radeon_device *rdev)
17340 +-{
17341 + int r;
17342 +
17343 + if (rdev->uvd.vcpu_bo == NULL)
17344 +- return 0;
17345 ++ return;
17346 +
17347 + r = radeon_bo_reserve(rdev->uvd.vcpu_bo, false);
17348 + if (!r) {
17349 + radeon_bo_kunmap(rdev->uvd.vcpu_bo);
17350 + radeon_bo_unpin(rdev->uvd.vcpu_bo);
17351 +- rdev->uvd.cpu_addr = NULL;
17352 +- if (!radeon_bo_pin(rdev->uvd.vcpu_bo, RADEON_GEM_DOMAIN_CPU, NULL)) {
17353 +- radeon_bo_kmap(rdev->uvd.vcpu_bo, &rdev->uvd.cpu_addr);
17354 +- }
17355 + radeon_bo_unreserve(rdev->uvd.vcpu_bo);
17356 +-
17357 +- if (rdev->uvd.cpu_addr) {
17358 +- radeon_fence_driver_start_ring(rdev, R600_RING_TYPE_UVD_INDEX);
17359 +- } else {
17360 +- rdev->fence_drv[R600_RING_TYPE_UVD_INDEX].cpu_addr = NULL;
17361 +- }
17362 + }
17363 +- return r;
17364 ++
17365 ++ radeon_bo_unref(&rdev->uvd.vcpu_bo);
17366 + }
17367 +
17368 +-int radeon_uvd_resume(struct radeon_device *rdev)
17369 ++int radeon_uvd_suspend(struct radeon_device *rdev)
17370 + {
17371 +- int r;
17372 ++ unsigned size;
17373 +
17374 + if (rdev->uvd.vcpu_bo == NULL)
17375 +- return -EINVAL;
17376 ++ return 0;
17377 +
17378 +- r = radeon_bo_reserve(rdev->uvd.vcpu_bo, false);
17379 +- if (r) {
17380 +- radeon_bo_unref(&rdev->uvd.vcpu_bo);
17381 +- dev_err(rdev->dev, "(%d) failed to reserve UVD bo\n", r);
17382 +- return r;
17383 +- }
17384 ++ size = radeon_bo_size(rdev->uvd.vcpu_bo);
17385 ++ rdev->uvd.saved_bo = kmalloc(size, GFP_KERNEL);
17386 ++ memcpy(rdev->uvd.saved_bo, rdev->uvd.cpu_addr, size);
17387 +
17388 +- /* Have been pin in cpu unmap unpin */
17389 +- radeon_bo_kunmap(rdev->uvd.vcpu_bo);
17390 +- radeon_bo_unpin(rdev->uvd.vcpu_bo);
17391 ++ return 0;
17392 ++}
17393 +
17394 +- r = radeon_bo_pin(rdev->uvd.vcpu_bo, RADEON_GEM_DOMAIN_VRAM,
17395 +- &rdev->uvd.gpu_addr);
17396 +- if (r) {
17397 +- radeon_bo_unreserve(rdev->uvd.vcpu_bo);
17398 +- radeon_bo_unref(&rdev->uvd.vcpu_bo);
17399 +- dev_err(rdev->dev, "(%d) UVD bo pin failed\n", r);
17400 +- return r;
17401 +- }
17402 ++int radeon_uvd_resume(struct radeon_device *rdev)
17403 ++{
17404 ++ if (rdev->uvd.vcpu_bo == NULL)
17405 ++ return -EINVAL;
17406 +
17407 +- r = radeon_bo_kmap(rdev->uvd.vcpu_bo, &rdev->uvd.cpu_addr);
17408 +- if (r) {
17409 +- dev_err(rdev->dev, "(%d) UVD map failed\n", r);
17410 +- return r;
17411 ++ if (rdev->uvd.saved_bo != NULL) {
17412 ++ unsigned size = radeon_bo_size(rdev->uvd.vcpu_bo);
17413 ++ memcpy(rdev->uvd.cpu_addr, rdev->uvd.saved_bo, size);
17414 ++ kfree(rdev->uvd.saved_bo);
17415 ++ rdev->uvd.saved_bo = NULL;
17416 + }
17417 +
17418 +- radeon_bo_unreserve(rdev->uvd.vcpu_bo);
17419 +-
17420 + return 0;
17421 + }
17422 +
17423 +diff --git a/drivers/gpu/drm/radeon/rv770.c b/drivers/gpu/drm/radeon/rv770.c
17424 +index 4a62ad2..30ea14e 100644
17425 +--- a/drivers/gpu/drm/radeon/rv770.c
17426 ++++ b/drivers/gpu/drm/radeon/rv770.c
17427 +@@ -813,7 +813,7 @@ int rv770_uvd_resume(struct radeon_device *rdev)
17428 +
17429 + /* programm the VCPU memory controller bits 0-27 */
17430 + addr = rdev->uvd.gpu_addr >> 3;
17431 +- size = RADEON_GPU_PAGE_ALIGN(rdev->uvd_fw->size + 4) >> 3;
17432 ++ size = RADEON_GPU_PAGE_ALIGN(rdev->uvd.fw_size + 4) >> 3;
17433 + WREG32(UVD_VCPU_CACHE_OFFSET0, addr);
17434 + WREG32(UVD_VCPU_CACHE_SIZE0, size);
17435 +
17436 +diff --git a/drivers/hwmon/max6697.c b/drivers/hwmon/max6697.c
17437 +index 328fb03..a41b5f3 100644
17438 +--- a/drivers/hwmon/max6697.c
17439 ++++ b/drivers/hwmon/max6697.c
17440 +@@ -605,12 +605,12 @@ static int max6697_init_chip(struct i2c_client *client)
17441 + if (ret < 0)
17442 + return ret;
17443 + ret = i2c_smbus_write_byte_data(client, MAX6581_REG_IDEALITY,
17444 +- pdata->ideality_mask >> 1);
17445 ++ pdata->ideality_value);
17446 + if (ret < 0)
17447 + return ret;
17448 + ret = i2c_smbus_write_byte_data(client,
17449 + MAX6581_REG_IDEALITY_SELECT,
17450 +- pdata->ideality_value);
17451 ++ pdata->ideality_mask >> 1);
17452 + if (ret < 0)
17453 + return ret;
17454 + }
17455 +diff --git a/drivers/macintosh/windfarm_rm31.c b/drivers/macintosh/windfarm_rm31.c
17456 +index 0b9a79b..82fc86a 100644
17457 +--- a/drivers/macintosh/windfarm_rm31.c
17458 ++++ b/drivers/macintosh/windfarm_rm31.c
17459 +@@ -439,15 +439,15 @@ static void backside_setup_pid(void)
17460 +
17461 + /* Slots fan */
17462 + static const struct wf_pid_param slots_param = {
17463 +- .interval = 5,
17464 +- .history_len = 2,
17465 +- .gd = 30 << 20,
17466 +- .gp = 5 << 20,
17467 +- .gr = 0,
17468 +- .itarget = 40 << 16,
17469 +- .additive = 1,
17470 +- .min = 300,
17471 +- .max = 4000,
17472 ++ .interval = 1,
17473 ++ .history_len = 20,
17474 ++ .gd = 0,
17475 ++ .gp = 0,
17476 ++ .gr = 0x00100000,
17477 ++ .itarget = 3200000,
17478 ++ .additive = 0,
17479 ++ .min = 20,
17480 ++ .max = 100,
17481 + };
17482 +
17483 + static void slots_fan_tick(void)
17484 +diff --git a/drivers/net/arcnet/arcnet.c b/drivers/net/arcnet/arcnet.c
17485 +index a746ba2..a956053 100644
17486 +--- a/drivers/net/arcnet/arcnet.c
17487 ++++ b/drivers/net/arcnet/arcnet.c
17488 +@@ -1007,7 +1007,7 @@ static void arcnet_rx(struct net_device *dev, int bufnum)
17489 +
17490 + soft = &pkt.soft.rfc1201;
17491 +
17492 +- lp->hw.copy_from_card(dev, bufnum, 0, &pkt, sizeof(ARC_HDR_SIZE));
17493 ++ lp->hw.copy_from_card(dev, bufnum, 0, &pkt, ARC_HDR_SIZE);
17494 + if (pkt.hard.offset[0]) {
17495 + ofs = pkt.hard.offset[0];
17496 + length = 256 - ofs;
17497 +diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c.h b/drivers/net/ethernet/atheros/atl1c/atl1c.h
17498 +index b2bf324..0f05565 100644
17499 +--- a/drivers/net/ethernet/atheros/atl1c/atl1c.h
17500 ++++ b/drivers/net/ethernet/atheros/atl1c/atl1c.h
17501 +@@ -520,6 +520,9 @@ struct atl1c_adapter {
17502 + struct net_device *netdev;
17503 + struct pci_dev *pdev;
17504 + struct napi_struct napi;
17505 ++ struct page *rx_page;
17506 ++ unsigned int rx_page_offset;
17507 ++ unsigned int rx_frag_size;
17508 + struct atl1c_hw hw;
17509 + struct atl1c_hw_stats hw_stats;
17510 + struct mii_if_info mii; /* MII interface info */
17511 +diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
17512 +index 0ba9007..11cdf1d 100644
17513 +--- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
17514 ++++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
17515 +@@ -481,10 +481,15 @@ static int atl1c_set_mac_addr(struct net_device *netdev, void *p)
17516 + static void atl1c_set_rxbufsize(struct atl1c_adapter *adapter,
17517 + struct net_device *dev)
17518 + {
17519 ++ unsigned int head_size;
17520 + int mtu = dev->mtu;
17521 +
17522 + adapter->rx_buffer_len = mtu > AT_RX_BUF_SIZE ?
17523 + roundup(mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN, 8) : AT_RX_BUF_SIZE;
17524 ++
17525 ++ head_size = SKB_DATA_ALIGN(adapter->rx_buffer_len + NET_SKB_PAD) +
17526 ++ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
17527 ++ adapter->rx_frag_size = roundup_pow_of_two(head_size);
17528 + }
17529 +
17530 + static netdev_features_t atl1c_fix_features(struct net_device *netdev,
17531 +@@ -952,6 +957,10 @@ static void atl1c_free_ring_resources(struct atl1c_adapter *adapter)
17532 + kfree(adapter->tpd_ring[0].buffer_info);
17533 + adapter->tpd_ring[0].buffer_info = NULL;
17534 + }
17535 ++ if (adapter->rx_page) {
17536 ++ put_page(adapter->rx_page);
17537 ++ adapter->rx_page = NULL;
17538 ++ }
17539 + }
17540 +
17541 + /**
17542 +@@ -1639,6 +1648,35 @@ static inline void atl1c_rx_checksum(struct atl1c_adapter *adapter,
17543 + skb_checksum_none_assert(skb);
17544 + }
17545 +
17546 ++static struct sk_buff *atl1c_alloc_skb(struct atl1c_adapter *adapter)
17547 ++{
17548 ++ struct sk_buff *skb;
17549 ++ struct page *page;
17550 ++
17551 ++ if (adapter->rx_frag_size > PAGE_SIZE)
17552 ++ return netdev_alloc_skb(adapter->netdev,
17553 ++ adapter->rx_buffer_len);
17554 ++
17555 ++ page = adapter->rx_page;
17556 ++ if (!page) {
17557 ++ adapter->rx_page = page = alloc_page(GFP_ATOMIC);
17558 ++ if (unlikely(!page))
17559 ++ return NULL;
17560 ++ adapter->rx_page_offset = 0;
17561 ++ }
17562 ++
17563 ++ skb = build_skb(page_address(page) + adapter->rx_page_offset,
17564 ++ adapter->rx_frag_size);
17565 ++ if (likely(skb)) {
17566 ++ adapter->rx_page_offset += adapter->rx_frag_size;
17567 ++ if (adapter->rx_page_offset >= PAGE_SIZE)
17568 ++ adapter->rx_page = NULL;
17569 ++ else
17570 ++ get_page(page);
17571 ++ }
17572 ++ return skb;
17573 ++}
17574 ++
17575 + static int atl1c_alloc_rx_buffer(struct atl1c_adapter *adapter)
17576 + {
17577 + struct atl1c_rfd_ring *rfd_ring = &adapter->rfd_ring;
17578 +@@ -1660,7 +1698,7 @@ static int atl1c_alloc_rx_buffer(struct atl1c_adapter *adapter)
17579 + while (next_info->flags & ATL1C_BUFFER_FREE) {
17580 + rfd_desc = ATL1C_RFD_DESC(rfd_ring, rfd_next_to_use);
17581 +
17582 +- skb = netdev_alloc_skb(adapter->netdev, adapter->rx_buffer_len);
17583 ++ skb = atl1c_alloc_skb(adapter);
17584 + if (unlikely(!skb)) {
17585 + if (netif_msg_rx_err(adapter))
17586 + dev_warn(&pdev->dev, "alloc rx buffer failed\n");
17587 +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82598.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82598.c
17588 +index ac78077..7a77f37 100644
17589 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82598.c
17590 ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82598.c
17591 +@@ -108,9 +108,8 @@ s32 ixgbe_dcb_config_tx_desc_arbiter_82598(struct ixgbe_hw *hw,
17592 +
17593 + /* Enable arbiter */
17594 + reg &= ~IXGBE_DPMCS_ARBDIS;
17595 +- /* Enable DFP and Recycle mode */
17596 +- reg |= (IXGBE_DPMCS_TDPAC | IXGBE_DPMCS_TRM);
17597 + reg |= IXGBE_DPMCS_TSOEF;
17598 ++
17599 + /* Configure Max TSO packet size 34KB including payload and headers */
17600 + reg |= (0x4 << IXGBE_DPMCS_MTSOS_SHIFT);
17601 +
17602 +diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
17603 +index 2c97901..593177d 100644
17604 +--- a/drivers/net/ethernet/mellanox/mlx4/fw.c
17605 ++++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
17606 +@@ -840,16 +840,7 @@ int mlx4_QUERY_PORT_wrapper(struct mlx4_dev *dev, int slave,
17607 + MLX4_CMD_NATIVE);
17608 +
17609 + if (!err && dev->caps.function != slave) {
17610 +- /* if config MAC in DB use it */
17611 +- if (priv->mfunc.master.vf_oper[slave].vport[vhcr->in_modifier].state.mac)
17612 +- def_mac = priv->mfunc.master.vf_oper[slave].vport[vhcr->in_modifier].state.mac;
17613 +- else {
17614 +- /* set slave default_mac address */
17615 +- MLX4_GET(def_mac, outbox->buf, QUERY_PORT_MAC_OFFSET);
17616 +- def_mac += slave << 8;
17617 +- priv->mfunc.master.vf_admin[slave].vport[vhcr->in_modifier].mac = def_mac;
17618 +- }
17619 +-
17620 ++ def_mac = priv->mfunc.master.vf_oper[slave].vport[vhcr->in_modifier].state.mac;
17621 + MLX4_PUT(outbox->buf, def_mac, QUERY_PORT_MAC_OFFSET);
17622 +
17623 + /* get port type - currently only eth is enabled */
17624 +diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
17625 +index 8a43499..1b195fc 100644
17626 +--- a/drivers/net/ethernet/mellanox/mlx4/main.c
17627 ++++ b/drivers/net/ethernet/mellanox/mlx4/main.c
17628 +@@ -371,7 +371,7 @@ static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
17629 +
17630 + dev->caps.sqp_demux = (mlx4_is_master(dev)) ? MLX4_MAX_NUM_SLAVES : 0;
17631 +
17632 +- if (!enable_64b_cqe_eqe) {
17633 ++ if (!enable_64b_cqe_eqe && !mlx4_is_slave(dev)) {
17634 + if (dev_cap->flags &
17635 + (MLX4_DEV_CAP_FLAG_64B_CQE | MLX4_DEV_CAP_FLAG_64B_EQE)) {
17636 + mlx4_warn(dev, "64B EQEs/CQEs supported by the device but not enabled\n");
17637 +diff --git a/drivers/net/ethernet/realtek/8139cp.c b/drivers/net/ethernet/realtek/8139cp.c
17638 +index 0352345..887aebe 100644
17639 +--- a/drivers/net/ethernet/realtek/8139cp.c
17640 ++++ b/drivers/net/ethernet/realtek/8139cp.c
17641 +@@ -478,7 +478,7 @@ rx_status_loop:
17642 +
17643 + while (1) {
17644 + u32 status, len;
17645 +- dma_addr_t mapping;
17646 ++ dma_addr_t mapping, new_mapping;
17647 + struct sk_buff *skb, *new_skb;
17648 + struct cp_desc *desc;
17649 + const unsigned buflen = cp->rx_buf_sz;
17650 +@@ -520,6 +520,13 @@ rx_status_loop:
17651 + goto rx_next;
17652 + }
17653 +
17654 ++ new_mapping = dma_map_single(&cp->pdev->dev, new_skb->data, buflen,
17655 ++ PCI_DMA_FROMDEVICE);
17656 ++ if (dma_mapping_error(&cp->pdev->dev, new_mapping)) {
17657 ++ dev->stats.rx_dropped++;
17658 ++ goto rx_next;
17659 ++ }
17660 ++
17661 + dma_unmap_single(&cp->pdev->dev, mapping,
17662 + buflen, PCI_DMA_FROMDEVICE);
17663 +
17664 +@@ -531,12 +538,11 @@ rx_status_loop:
17665 +
17666 + skb_put(skb, len);
17667 +
17668 +- mapping = dma_map_single(&cp->pdev->dev, new_skb->data, buflen,
17669 +- PCI_DMA_FROMDEVICE);
17670 + cp->rx_skb[rx_tail] = new_skb;
17671 +
17672 + cp_rx_skb(cp, skb, desc);
17673 + rx++;
17674 ++ mapping = new_mapping;
17675 +
17676 + rx_next:
17677 + cp->rx_ring[rx_tail].opts2 = 0;
17678 +@@ -716,6 +722,22 @@ static inline u32 cp_tx_vlan_tag(struct sk_buff *skb)
17679 + TxVlanTag | swab16(vlan_tx_tag_get(skb)) : 0x00;
17680 + }
17681 +
17682 ++static void unwind_tx_frag_mapping(struct cp_private *cp, struct sk_buff *skb,
17683 ++ int first, int entry_last)
17684 ++{
17685 ++ int frag, index;
17686 ++ struct cp_desc *txd;
17687 ++ skb_frag_t *this_frag;
17688 ++ for (frag = 0; frag+first < entry_last; frag++) {
17689 ++ index = first+frag;
17690 ++ cp->tx_skb[index] = NULL;
17691 ++ txd = &cp->tx_ring[index];
17692 ++ this_frag = &skb_shinfo(skb)->frags[frag];
17693 ++ dma_unmap_single(&cp->pdev->dev, le64_to_cpu(txd->addr),
17694 ++ skb_frag_size(this_frag), PCI_DMA_TODEVICE);
17695 ++ }
17696 ++}
17697 ++
17698 + static netdev_tx_t cp_start_xmit (struct sk_buff *skb,
17699 + struct net_device *dev)
17700 + {
17701 +@@ -749,6 +771,9 @@ static netdev_tx_t cp_start_xmit (struct sk_buff *skb,
17702 +
17703 + len = skb->len;
17704 + mapping = dma_map_single(&cp->pdev->dev, skb->data, len, PCI_DMA_TODEVICE);
17705 ++ if (dma_mapping_error(&cp->pdev->dev, mapping))
17706 ++ goto out_dma_error;
17707 ++
17708 + txd->opts2 = opts2;
17709 + txd->addr = cpu_to_le64(mapping);
17710 + wmb();
17711 +@@ -786,6 +811,9 @@ static netdev_tx_t cp_start_xmit (struct sk_buff *skb,
17712 + first_len = skb_headlen(skb);
17713 + first_mapping = dma_map_single(&cp->pdev->dev, skb->data,
17714 + first_len, PCI_DMA_TODEVICE);
17715 ++ if (dma_mapping_error(&cp->pdev->dev, first_mapping))
17716 ++ goto out_dma_error;
17717 ++
17718 + cp->tx_skb[entry] = skb;
17719 + entry = NEXT_TX(entry);
17720 +
17721 +@@ -799,6 +827,11 @@ static netdev_tx_t cp_start_xmit (struct sk_buff *skb,
17722 + mapping = dma_map_single(&cp->pdev->dev,
17723 + skb_frag_address(this_frag),
17724 + len, PCI_DMA_TODEVICE);
17725 ++ if (dma_mapping_error(&cp->pdev->dev, mapping)) {
17726 ++ unwind_tx_frag_mapping(cp, skb, first_entry, entry);
17727 ++ goto out_dma_error;
17728 ++ }
17729 ++
17730 + eor = (entry == (CP_TX_RING_SIZE - 1)) ? RingEnd : 0;
17731 +
17732 + ctrl = eor | len | DescOwn;
17733 +@@ -859,11 +892,16 @@ static netdev_tx_t cp_start_xmit (struct sk_buff *skb,
17734 + if (TX_BUFFS_AVAIL(cp) <= (MAX_SKB_FRAGS + 1))
17735 + netif_stop_queue(dev);
17736 +
17737 ++out_unlock:
17738 + spin_unlock_irqrestore(&cp->lock, intr_flags);
17739 +
17740 + cpw8(TxPoll, NormalTxPoll);
17741 +
17742 + return NETDEV_TX_OK;
17743 ++out_dma_error:
17744 ++ kfree_skb(skb);
17745 ++ cp->dev->stats.tx_dropped++;
17746 ++ goto out_unlock;
17747 + }
17748 +
17749 + /* Set or clear the multicast filter for this adaptor.
17750 +@@ -1054,6 +1092,10 @@ static int cp_refill_rx(struct cp_private *cp)
17751 +
17752 + mapping = dma_map_single(&cp->pdev->dev, skb->data,
17753 + cp->rx_buf_sz, PCI_DMA_FROMDEVICE);
17754 ++ if (dma_mapping_error(&cp->pdev->dev, mapping)) {
17755 ++ kfree_skb(skb);
17756 ++ goto err_out;
17757 ++ }
17758 + cp->rx_skb[i] = skb;
17759 +
17760 + cp->rx_ring[i].opts2 = 0;
17761 +diff --git a/drivers/net/ethernet/sfc/filter.c b/drivers/net/ethernet/sfc/filter.c
17762 +index 2397f0e..2738b5f 100644
17763 +--- a/drivers/net/ethernet/sfc/filter.c
17764 ++++ b/drivers/net/ethernet/sfc/filter.c
17765 +@@ -1196,7 +1196,9 @@ int efx_filter_rfs(struct net_device *net_dev, const struct sk_buff *skb,
17766 + EFX_BUG_ON_PARANOID(skb_headlen(skb) < nhoff + 4 * ip->ihl + 4);
17767 + ports = (const __be16 *)(skb->data + nhoff + 4 * ip->ihl);
17768 +
17769 +- efx_filter_init_rx(&spec, EFX_FILTER_PRI_HINT, 0, rxq_index);
17770 ++ efx_filter_init_rx(&spec, EFX_FILTER_PRI_HINT,
17771 ++ efx->rx_scatter ? EFX_FILTER_FLAG_RX_SCATTER : 0,
17772 ++ rxq_index);
17773 + rc = efx_filter_set_ipv4_full(&spec, ip->protocol,
17774 + ip->daddr, ports[1], ip->saddr, ports[0]);
17775 + if (rc)
17776 +diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
17777 +index bd8758f..cea1f3d 100644
17778 +--- a/drivers/net/usb/ax88179_178a.c
17779 ++++ b/drivers/net/usb/ax88179_178a.c
17780 +@@ -1029,10 +1029,10 @@ static int ax88179_bind(struct usbnet *dev, struct usb_interface *intf)
17781 + dev->mii.supports_gmii = 1;
17782 +
17783 + dev->net->features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
17784 +- NETIF_F_RXCSUM | NETIF_F_SG | NETIF_F_TSO;
17785 ++ NETIF_F_RXCSUM;
17786 +
17787 + dev->net->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
17788 +- NETIF_F_RXCSUM | NETIF_F_SG | NETIF_F_TSO;
17789 ++ NETIF_F_RXCSUM;
17790 +
17791 + /* Enable checksum offload */
17792 + *tmp = AX_RXCOE_IP | AX_RXCOE_TCP | AX_RXCOE_UDP |
17793 +@@ -1173,7 +1173,6 @@ ax88179_tx_fixup(struct usbnet *dev, struct sk_buff *skb, gfp_t flags)
17794 + if (((skb->len + 8) % frame_size) == 0)
17795 + tx_hdr2 |= 0x80008000; /* Enable padding */
17796 +
17797 +- skb_linearize(skb);
17798 + headroom = skb_headroom(skb);
17799 + tailroom = skb_tailroom(skb);
17800 +
17801 +@@ -1317,10 +1316,10 @@ static int ax88179_reset(struct usbnet *dev)
17802 + 1, 1, tmp);
17803 +
17804 + dev->net->features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
17805 +- NETIF_F_RXCSUM | NETIF_F_SG | NETIF_F_TSO;
17806 ++ NETIF_F_RXCSUM;
17807 +
17808 + dev->net->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
17809 +- NETIF_F_RXCSUM | NETIF_F_SG | NETIF_F_TSO;
17810 ++ NETIF_F_RXCSUM;
17811 +
17812 + /* Enable checksum offload */
17813 + *tmp = AX_RXCOE_IP | AX_RXCOE_TCP | AX_RXCOE_UDP |
17814 +diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
17815 +index 7540974..66ebbac 100644
17816 +--- a/drivers/net/usb/smsc75xx.c
17817 ++++ b/drivers/net/usb/smsc75xx.c
17818 +@@ -45,7 +45,6 @@
17819 + #define EEPROM_MAC_OFFSET (0x01)
17820 + #define DEFAULT_TX_CSUM_ENABLE (true)
17821 + #define DEFAULT_RX_CSUM_ENABLE (true)
17822 +-#define DEFAULT_TSO_ENABLE (true)
17823 + #define SMSC75XX_INTERNAL_PHY_ID (1)
17824 + #define SMSC75XX_TX_OVERHEAD (8)
17825 + #define MAX_RX_FIFO_SIZE (20 * 1024)
17826 +@@ -1410,17 +1409,14 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
17827 +
17828 + INIT_WORK(&pdata->set_multicast, smsc75xx_deferred_multicast_write);
17829 +
17830 +- if (DEFAULT_TX_CSUM_ENABLE) {
17831 ++ if (DEFAULT_TX_CSUM_ENABLE)
17832 + dev->net->features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
17833 +- if (DEFAULT_TSO_ENABLE)
17834 +- dev->net->features |= NETIF_F_SG |
17835 +- NETIF_F_TSO | NETIF_F_TSO6;
17836 +- }
17837 ++
17838 + if (DEFAULT_RX_CSUM_ENABLE)
17839 + dev->net->features |= NETIF_F_RXCSUM;
17840 +
17841 + dev->net->hw_features = NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
17842 +- NETIF_F_SG | NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_RXCSUM;
17843 ++ NETIF_F_RXCSUM;
17844 +
17845 + ret = smsc75xx_wait_ready(dev, 0);
17846 + if (ret < 0) {
17847 +@@ -2200,8 +2196,6 @@ static struct sk_buff *smsc75xx_tx_fixup(struct usbnet *dev,
17848 + {
17849 + u32 tx_cmd_a, tx_cmd_b;
17850 +
17851 +- skb_linearize(skb);
17852 +-
17853 + if (skb_headroom(skb) < SMSC75XX_TX_OVERHEAD) {
17854 + struct sk_buff *skb2 =
17855 + skb_copy_expand(skb, SMSC75XX_TX_OVERHEAD, 0, flags);
17856 +diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
17857 +index f5dda84..75a6376 100644
17858 +--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
17859 ++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
17860 +@@ -1289,7 +1289,9 @@ static void ath9k_hif_usb_disconnect(struct usb_interface *interface)
17861 +
17862 + usb_set_intfdata(interface, NULL);
17863 +
17864 +- if (!unplugged && (hif_dev->flags & HIF_USB_START))
17865 ++ /* If firmware was loaded we should drop it
17866 ++ * go back to first stage bootloader. */
17867 ++ if (!unplugged && (hif_dev->flags & HIF_USB_READY))
17868 + ath9k_hif_usb_reboot(udev);
17869 +
17870 + kfree(hif_dev);
17871 +diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
17872 +index a47f5e0..3b202ff 100644
17873 +--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
17874 ++++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
17875 +@@ -846,6 +846,7 @@ static int ath9k_init_device(struct ath9k_htc_priv *priv,
17876 + if (error != 0)
17877 + goto err_rx;
17878 +
17879 ++ ath9k_hw_disable(priv->ah);
17880 + #ifdef CONFIG_MAC80211_LEDS
17881 + /* must be initialized before ieee80211_register_hw */
17882 + priv->led_cdev.default_trigger = ieee80211_create_tpt_led_trigger(priv->hw,
17883 +diff --git a/drivers/net/wireless/ath/wil6210/debugfs.c b/drivers/net/wireless/ath/wil6210/debugfs.c
17884 +index 727b1f5..d57e5be 100644
17885 +--- a/drivers/net/wireless/ath/wil6210/debugfs.c
17886 ++++ b/drivers/net/wireless/ath/wil6210/debugfs.c
17887 +@@ -145,7 +145,7 @@ static void wil_print_ring(struct seq_file *s, const char *prefix,
17888 + le16_to_cpu(hdr.type), hdr.flags);
17889 + if (len <= MAX_MBOXITEM_SIZE) {
17890 + int n = 0;
17891 +- unsigned char printbuf[16 * 3 + 2];
17892 ++ char printbuf[16 * 3 + 2];
17893 + unsigned char databuf[MAX_MBOXITEM_SIZE];
17894 + void __iomem *src = wmi_buffer(wil, d.addr) +
17895 + sizeof(struct wil6210_mbox_hdr);
17896 +@@ -416,7 +416,7 @@ static int wil_txdesc_debugfs_show(struct seq_file *s, void *data)
17897 + seq_printf(s, " SKB = %p\n", skb);
17898 +
17899 + if (skb) {
17900 +- unsigned char printbuf[16 * 3 + 2];
17901 ++ char printbuf[16 * 3 + 2];
17902 + int i = 0;
17903 + int len = skb_headlen(skb);
17904 + void *p = skb->data;
17905 +diff --git a/drivers/net/wireless/iwlwifi/dvm/main.c b/drivers/net/wireless/iwlwifi/dvm/main.c
17906 +index 74d7572..a8afc7b 100644
17907 +--- a/drivers/net/wireless/iwlwifi/dvm/main.c
17908 ++++ b/drivers/net/wireless/iwlwifi/dvm/main.c
17909 +@@ -758,7 +758,7 @@ int iwl_alive_start(struct iwl_priv *priv)
17910 + BT_COEX_PRIO_TBL_EVT_INIT_CALIB2);
17911 + if (ret)
17912 + return ret;
17913 +- } else {
17914 ++ } else if (priv->cfg->bt_params) {
17915 + /*
17916 + * default is 2-wire BT coexexistence support
17917 + */
17918 +diff --git a/drivers/net/wireless/iwlwifi/mvm/fw-api-scan.h b/drivers/net/wireless/iwlwifi/mvm/fw-api-scan.h
17919 +index b60d141..365095a 100644
17920 +--- a/drivers/net/wireless/iwlwifi/mvm/fw-api-scan.h
17921 ++++ b/drivers/net/wireless/iwlwifi/mvm/fw-api-scan.h
17922 +@@ -69,7 +69,6 @@
17923 + /* Scan Commands, Responses, Notifications */
17924 +
17925 + /* Masks for iwl_scan_channel.type flags */
17926 +-#define SCAN_CHANNEL_TYPE_PASSIVE 0
17927 + #define SCAN_CHANNEL_TYPE_ACTIVE BIT(0)
17928 + #define SCAN_CHANNEL_NARROW_BAND BIT(22)
17929 +
17930 +diff --git a/drivers/net/wireless/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/iwlwifi/mvm/mac80211.c
17931 +index a5eb8c8..b7e95b0 100644
17932 +--- a/drivers/net/wireless/iwlwifi/mvm/mac80211.c
17933 ++++ b/drivers/net/wireless/iwlwifi/mvm/mac80211.c
17934 +@@ -987,6 +987,21 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
17935 + mutex_lock(&mvm->mutex);
17936 + if (old_state == IEEE80211_STA_NOTEXIST &&
17937 + new_state == IEEE80211_STA_NONE) {
17938 ++ /*
17939 ++ * Firmware bug - it'll crash if the beacon interval is less
17940 ++ * than 16. We can't avoid connecting at all, so refuse the
17941 ++ * station state change, this will cause mac80211 to abandon
17942 ++ * attempts to connect to this AP, and eventually wpa_s will
17943 ++ * blacklist the AP...
17944 ++ */
17945 ++ if (vif->type == NL80211_IFTYPE_STATION &&
17946 ++ vif->bss_conf.beacon_int < 16) {
17947 ++ IWL_ERR(mvm,
17948 ++ "AP %pM beacon interval is %d, refusing due to firmware bug!\n",
17949 ++ sta->addr, vif->bss_conf.beacon_int);
17950 ++ ret = -EINVAL;
17951 ++ goto out_unlock;
17952 ++ }
17953 + ret = iwl_mvm_add_sta(mvm, vif, sta);
17954 + } else if (old_state == IEEE80211_STA_NONE &&
17955 + new_state == IEEE80211_STA_AUTH) {
17956 +@@ -1015,6 +1030,7 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
17957 + } else {
17958 + ret = -EIO;
17959 + }
17960 ++ out_unlock:
17961 + mutex_unlock(&mvm->mutex);
17962 +
17963 + return ret;
17964 +diff --git a/drivers/net/wireless/iwlwifi/mvm/scan.c b/drivers/net/wireless/iwlwifi/mvm/scan.c
17965 +index 2476e43..8e1f6c0 100644
17966 +--- a/drivers/net/wireless/iwlwifi/mvm/scan.c
17967 ++++ b/drivers/net/wireless/iwlwifi/mvm/scan.c
17968 +@@ -137,8 +137,8 @@ static void iwl_mvm_scan_fill_ssids(struct iwl_scan_cmd *cmd,
17969 + {
17970 + int fw_idx, req_idx;
17971 +
17972 +- fw_idx = 0;
17973 +- for (req_idx = req->n_ssids - 1; req_idx > 0; req_idx--) {
17974 ++ for (req_idx = req->n_ssids - 1, fw_idx = 0; req_idx > 0;
17975 ++ req_idx--, fw_idx++) {
17976 + cmd->direct_scan[fw_idx].id = WLAN_EID_SSID;
17977 + cmd->direct_scan[fw_idx].len = req->ssids[req_idx].ssid_len;
17978 + memcpy(cmd->direct_scan[fw_idx].ssid,
17979 +@@ -176,19 +176,12 @@ static void iwl_mvm_scan_fill_channels(struct iwl_scan_cmd *cmd,
17980 + struct iwl_scan_channel *chan = (struct iwl_scan_channel *)
17981 + (cmd->data + le16_to_cpu(cmd->tx_cmd.len));
17982 + int i;
17983 +- __le32 chan_type_value;
17984 +-
17985 +- if (req->n_ssids > 0)
17986 +- chan_type_value = cpu_to_le32(BIT(req->n_ssids + 1) - 1);
17987 +- else
17988 +- chan_type_value = SCAN_CHANNEL_TYPE_PASSIVE;
17989 +
17990 + for (i = 0; i < cmd->channel_count; i++) {
17991 + chan->channel = cpu_to_le16(req->channels[i]->hw_value);
17992 ++ chan->type = cpu_to_le32(BIT(req->n_ssids) - 1);
17993 + if (req->channels[i]->flags & IEEE80211_CHAN_PASSIVE_SCAN)
17994 +- chan->type = SCAN_CHANNEL_TYPE_PASSIVE;
17995 +- else
17996 +- chan->type = chan_type_value;
17997 ++ chan->type &= cpu_to_le32(~SCAN_CHANNEL_TYPE_ACTIVE);
17998 + chan->active_dwell = cpu_to_le16(active_dwell);
17999 + chan->passive_dwell = cpu_to_le16(passive_dwell);
18000 + chan->iteration_count = cpu_to_le16(1);
18001 +diff --git a/drivers/net/wireless/iwlwifi/mvm/sta.c b/drivers/net/wireless/iwlwifi/mvm/sta.c
18002 +index 5c664ed..736b50b 100644
18003 +--- a/drivers/net/wireless/iwlwifi/mvm/sta.c
18004 ++++ b/drivers/net/wireless/iwlwifi/mvm/sta.c
18005 +@@ -621,8 +621,12 @@ int iwl_mvm_sta_rx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
18006 + cmd.mac_id_n_color = cpu_to_le32(mvm_sta->mac_id_n_color);
18007 + cmd.sta_id = mvm_sta->sta_id;
18008 + cmd.add_modify = STA_MODE_MODIFY;
18009 +- cmd.add_immediate_ba_tid = (u8) tid;
18010 +- cmd.add_immediate_ba_ssn = cpu_to_le16(ssn);
18011 ++ if (start) {
18012 ++ cmd.add_immediate_ba_tid = (u8) tid;
18013 ++ cmd.add_immediate_ba_ssn = cpu_to_le16(ssn);
18014 ++ } else {
18015 ++ cmd.remove_immediate_ba_tid = (u8) tid;
18016 ++ }
18017 + cmd.modify_mask = start ? STA_MODIFY_ADD_BA_TID :
18018 + STA_MODIFY_REMOVE_BA_TID;
18019 +
18020 +@@ -894,6 +898,7 @@ int iwl_mvm_sta_tx_agg_flush(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
18021 + struct iwl_mvm_sta *mvmsta = (void *)sta->drv_priv;
18022 + struct iwl_mvm_tid_data *tid_data = &mvmsta->tid_data[tid];
18023 + u16 txq_id;
18024 ++ enum iwl_mvm_agg_state old_state;
18025 +
18026 + /*
18027 + * First set the agg state to OFF to avoid calling
18028 +@@ -903,13 +908,17 @@ int iwl_mvm_sta_tx_agg_flush(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
18029 + txq_id = tid_data->txq_id;
18030 + IWL_DEBUG_TX_QUEUES(mvm, "Flush AGG: sta %d tid %d q %d state %d\n",
18031 + mvmsta->sta_id, tid, txq_id, tid_data->state);
18032 ++ old_state = tid_data->state;
18033 + tid_data->state = IWL_AGG_OFF;
18034 + spin_unlock_bh(&mvmsta->lock);
18035 +
18036 +- if (iwl_mvm_flush_tx_path(mvm, BIT(txq_id), true))
18037 +- IWL_ERR(mvm, "Couldn't flush the AGG queue\n");
18038 ++ if (old_state >= IWL_AGG_ON) {
18039 ++ if (iwl_mvm_flush_tx_path(mvm, BIT(txq_id), true))
18040 ++ IWL_ERR(mvm, "Couldn't flush the AGG queue\n");
18041 ++
18042 ++ iwl_trans_txq_disable(mvm->trans, tid_data->txq_id);
18043 ++ }
18044 +
18045 +- iwl_trans_txq_disable(mvm->trans, tid_data->txq_id);
18046 + mvm->queue_to_mac80211[tid_data->txq_id] =
18047 + IWL_INVALID_MAC80211_QUEUE;
18048 +
18049 +diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c
18050 +index 8cb53ec..5283b55 100644
18051 +--- a/drivers/net/wireless/iwlwifi/pcie/drv.c
18052 ++++ b/drivers/net/wireless/iwlwifi/pcie/drv.c
18053 +@@ -129,6 +129,7 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = {
18054 + {IWL_PCI_DEVICE(0x423C, 0x1306, iwl5150_abg_cfg)}, /* Half Mini Card */
18055 + {IWL_PCI_DEVICE(0x423C, 0x1221, iwl5150_agn_cfg)}, /* Mini Card */
18056 + {IWL_PCI_DEVICE(0x423C, 0x1321, iwl5150_agn_cfg)}, /* Half Mini Card */
18057 ++ {IWL_PCI_DEVICE(0x423C, 0x1326, iwl5150_abg_cfg)}, /* Half Mini Card */
18058 +
18059 + {IWL_PCI_DEVICE(0x423D, 0x1211, iwl5150_agn_cfg)}, /* Mini Card */
18060 + {IWL_PCI_DEVICE(0x423D, 0x1311, iwl5150_agn_cfg)}, /* Half Mini Card */
18061 +diff --git a/drivers/net/wireless/mwifiex/cfg80211.c b/drivers/net/wireless/mwifiex/cfg80211.c
18062 +index e42b266..e7f7cdf 100644
18063 +--- a/drivers/net/wireless/mwifiex/cfg80211.c
18064 ++++ b/drivers/net/wireless/mwifiex/cfg80211.c
18065 +@@ -1668,9 +1668,9 @@ mwifiex_cfg80211_connect(struct wiphy *wiphy, struct net_device *dev,
18066 + struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
18067 + int ret;
18068 +
18069 +- if (priv->bss_mode != NL80211_IFTYPE_STATION) {
18070 ++ if (GET_BSS_ROLE(priv) != MWIFIEX_BSS_ROLE_STA) {
18071 + wiphy_err(wiphy,
18072 +- "%s: reject infra assoc request in non-STA mode\n",
18073 ++ "%s: reject infra assoc request in non-STA role\n",
18074 + dev->name);
18075 + return -EINVAL;
18076 + }
18077 +diff --git a/drivers/net/wireless/mwifiex/cfp.c b/drivers/net/wireless/mwifiex/cfp.c
18078 +index 988552d..5178c46 100644
18079 +--- a/drivers/net/wireless/mwifiex/cfp.c
18080 ++++ b/drivers/net/wireless/mwifiex/cfp.c
18081 +@@ -415,7 +415,8 @@ u32 mwifiex_get_supported_rates(struct mwifiex_private *priv, u8 *rates)
18082 + u32 k = 0;
18083 + struct mwifiex_adapter *adapter = priv->adapter;
18084 +
18085 +- if (priv->bss_mode == NL80211_IFTYPE_STATION) {
18086 ++ if (priv->bss_mode == NL80211_IFTYPE_STATION ||
18087 ++ priv->bss_mode == NL80211_IFTYPE_P2P_CLIENT) {
18088 + switch (adapter->config_bands) {
18089 + case BAND_B:
18090 + dev_dbg(adapter->dev, "info: infra band=%d "
18091 +diff --git a/drivers/net/wireless/mwifiex/join.c b/drivers/net/wireless/mwifiex/join.c
18092 +index 6bcb66e..96bda6c 100644
18093 +--- a/drivers/net/wireless/mwifiex/join.c
18094 ++++ b/drivers/net/wireless/mwifiex/join.c
18095 +@@ -1290,8 +1290,10 @@ int mwifiex_associate(struct mwifiex_private *priv,
18096 + {
18097 + u8 current_bssid[ETH_ALEN];
18098 +
18099 +- /* Return error if the adapter or table entry is not marked as infra */
18100 +- if ((priv->bss_mode != NL80211_IFTYPE_STATION) ||
18101 ++ /* Return error if the adapter is not STA role or table entry
18102 ++ * is not marked as infra.
18103 ++ */
18104 ++ if ((GET_BSS_ROLE(priv) != MWIFIEX_BSS_ROLE_STA) ||
18105 + (bss_desc->bss_mode != NL80211_IFTYPE_STATION))
18106 + return -1;
18107 +
18108 +diff --git a/drivers/net/wireless/mwifiex/sdio.c b/drivers/net/wireless/mwifiex/sdio.c
18109 +index 363ba31..139c958 100644
18110 +--- a/drivers/net/wireless/mwifiex/sdio.c
18111 ++++ b/drivers/net/wireless/mwifiex/sdio.c
18112 +@@ -1441,8 +1441,8 @@ static int mwifiex_sdio_host_to_card(struct mwifiex_adapter *adapter,
18113 + /* Allocate buffer and copy payload */
18114 + blk_size = MWIFIEX_SDIO_BLOCK_SIZE;
18115 + buf_block_len = (pkt_len + blk_size - 1) / blk_size;
18116 +- *(u16 *) &payload[0] = (u16) pkt_len;
18117 +- *(u16 *) &payload[2] = type;
18118 ++ *(__le16 *)&payload[0] = cpu_to_le16((u16)pkt_len);
18119 ++ *(__le16 *)&payload[2] = cpu_to_le16(type);
18120 +
18121 + /*
18122 + * This is SDIO specific header
18123 +diff --git a/drivers/net/wireless/rt2x00/rt2x00queue.c b/drivers/net/wireless/rt2x00/rt2x00queue.c
18124 +index 2c12311..d955741 100644
18125 +--- a/drivers/net/wireless/rt2x00/rt2x00queue.c
18126 ++++ b/drivers/net/wireless/rt2x00/rt2x00queue.c
18127 +@@ -936,13 +936,8 @@ void rt2x00queue_index_inc(struct queue_entry *entry, enum queue_index index)
18128 + spin_unlock_irqrestore(&queue->index_lock, irqflags);
18129 + }
18130 +
18131 +-void rt2x00queue_pause_queue(struct data_queue *queue)
18132 ++void rt2x00queue_pause_queue_nocheck(struct data_queue *queue)
18133 + {
18134 +- if (!test_bit(DEVICE_STATE_PRESENT, &queue->rt2x00dev->flags) ||
18135 +- !test_bit(QUEUE_STARTED, &queue->flags) ||
18136 +- test_and_set_bit(QUEUE_PAUSED, &queue->flags))
18137 +- return;
18138 +-
18139 + switch (queue->qid) {
18140 + case QID_AC_VO:
18141 + case QID_AC_VI:
18142 +@@ -958,6 +953,15 @@ void rt2x00queue_pause_queue(struct data_queue *queue)
18143 + break;
18144 + }
18145 + }
18146 ++void rt2x00queue_pause_queue(struct data_queue *queue)
18147 ++{
18148 ++ if (!test_bit(DEVICE_STATE_PRESENT, &queue->rt2x00dev->flags) ||
18149 ++ !test_bit(QUEUE_STARTED, &queue->flags) ||
18150 ++ test_and_set_bit(QUEUE_PAUSED, &queue->flags))
18151 ++ return;
18152 ++
18153 ++ rt2x00queue_pause_queue_nocheck(queue);
18154 ++}
18155 + EXPORT_SYMBOL_GPL(rt2x00queue_pause_queue);
18156 +
18157 + void rt2x00queue_unpause_queue(struct data_queue *queue)
18158 +@@ -1019,7 +1023,7 @@ void rt2x00queue_stop_queue(struct data_queue *queue)
18159 + return;
18160 + }
18161 +
18162 +- rt2x00queue_pause_queue(queue);
18163 ++ rt2x00queue_pause_queue_nocheck(queue);
18164 +
18165 + queue->rt2x00dev->ops->lib->stop_queue(queue);
18166 +
18167 +diff --git a/drivers/parisc/iosapic.c b/drivers/parisc/iosapic.c
18168 +index e79e006..9ee04b4 100644
18169 +--- a/drivers/parisc/iosapic.c
18170 ++++ b/drivers/parisc/iosapic.c
18171 +@@ -811,18 +811,28 @@ int iosapic_fixup_irq(void *isi_obj, struct pci_dev *pcidev)
18172 + return pcidev->irq;
18173 + }
18174 +
18175 +-static struct iosapic_info *first_isi = NULL;
18176 ++static struct iosapic_info *iosapic_list;
18177 +
18178 + #ifdef CONFIG_64BIT
18179 +-int iosapic_serial_irq(int num)
18180 ++int iosapic_serial_irq(struct parisc_device *dev)
18181 + {
18182 +- struct iosapic_info *isi = first_isi;
18183 +- struct irt_entry *irte = NULL; /* only used if PAT PDC */
18184 ++ struct iosapic_info *isi;
18185 ++ struct irt_entry *irte;
18186 + struct vector_info *vi;
18187 +- int isi_line; /* line used by device */
18188 ++ int cnt;
18189 ++ int intin;
18190 ++
18191 ++ intin = (dev->mod_info >> 24) & 15;
18192 +
18193 + /* lookup IRT entry for isi/slot/pin set */
18194 +- irte = &irt_cell[num];
18195 ++ for (cnt = 0; cnt < irt_num_entry; cnt++) {
18196 ++ irte = &irt_cell[cnt];
18197 ++ if (COMPARE_IRTE_ADDR(irte, dev->mod0) &&
18198 ++ irte->dest_iosapic_intin == intin)
18199 ++ break;
18200 ++ }
18201 ++ if (cnt >= irt_num_entry)
18202 ++ return 0; /* no irq found, force polling */
18203 +
18204 + DBG_IRT("iosapic_serial_irq(): irte %p %x %x %x %x %x %x %x %x\n",
18205 + irte,
18206 +@@ -834,11 +844,17 @@ int iosapic_serial_irq(int num)
18207 + irte->src_seg_id,
18208 + irte->dest_iosapic_intin,
18209 + (u32) irte->dest_iosapic_addr);
18210 +- isi_line = irte->dest_iosapic_intin;
18211 ++
18212 ++ /* search for iosapic */
18213 ++ for (isi = iosapic_list; isi; isi = isi->isi_next)
18214 ++ if (isi->isi_hpa == dev->mod0)
18215 ++ break;
18216 ++ if (!isi)
18217 ++ return 0; /* no iosapic found, force polling */
18218 +
18219 + /* get vector info for this input line */
18220 +- vi = isi->isi_vector + isi_line;
18221 +- DBG_IRT("iosapic_serial_irq: line %d vi 0x%p\n", isi_line, vi);
18222 ++ vi = isi->isi_vector + intin;
18223 ++ DBG_IRT("iosapic_serial_irq: line %d vi 0x%p\n", iosapic_intin, vi);
18224 +
18225 + /* If this IRQ line has already been setup, skip it */
18226 + if (vi->irte)
18227 +@@ -941,8 +957,8 @@ void *iosapic_register(unsigned long hpa)
18228 + vip->irqline = (unsigned char) cnt;
18229 + vip->iosapic = isi;
18230 + }
18231 +- if (!first_isi)
18232 +- first_isi = isi;
18233 ++ isi->isi_next = iosapic_list;
18234 ++ iosapic_list = isi;
18235 + return isi;
18236 + }
18237 +
18238 +diff --git a/drivers/pci/hotplug/pciehp_pci.c b/drivers/pci/hotplug/pciehp_pci.c
18239 +index aac7a40..0e0d0f7 100644
18240 +--- a/drivers/pci/hotplug/pciehp_pci.c
18241 ++++ b/drivers/pci/hotplug/pciehp_pci.c
18242 +@@ -92,7 +92,14 @@ int pciehp_unconfigure_device(struct slot *p_slot)
18243 + if (ret)
18244 + presence = 0;
18245 +
18246 +- list_for_each_entry_safe(dev, temp, &parent->devices, bus_list) {
18247 ++ /*
18248 ++ * Stopping an SR-IOV PF device removes all the associated VFs,
18249 ++ * which will update the bus->devices list and confuse the
18250 ++ * iterator. Therefore, iterate in reverse so we remove the VFs
18251 ++ * first, then the PF. We do the same in pci_stop_bus_device().
18252 ++ */
18253 ++ list_for_each_entry_safe_reverse(dev, temp, &parent->devices,
18254 ++ bus_list) {
18255 + pci_dev_get(dev);
18256 + if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE && presence) {
18257 + pci_read_config_byte(dev, PCI_BRIDGE_CONTROL, &bctl);
18258 +diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
18259 +index d254e23..64a7de2 100644
18260 +--- a/drivers/pci/setup-bus.c
18261 ++++ b/drivers/pci/setup-bus.c
18262 +@@ -300,6 +300,47 @@ static void assign_requested_resources_sorted(struct list_head *head,
18263 + }
18264 + }
18265 +
18266 ++static unsigned long pci_fail_res_type_mask(struct list_head *fail_head)
18267 ++{
18268 ++ struct pci_dev_resource *fail_res;
18269 ++ unsigned long mask = 0;
18270 ++
18271 ++ /* check failed type */
18272 ++ list_for_each_entry(fail_res, fail_head, list)
18273 ++ mask |= fail_res->flags;
18274 ++
18275 ++ /*
18276 ++ * one pref failed resource will set IORESOURCE_MEM,
18277 ++ * as we can allocate pref in non-pref range.
18278 ++ * Will release all assigned non-pref sibling resources
18279 ++ * according to that bit.
18280 ++ */
18281 ++ return mask & (IORESOURCE_IO | IORESOURCE_MEM | IORESOURCE_PREFETCH);
18282 ++}
18283 ++
18284 ++static bool pci_need_to_release(unsigned long mask, struct resource *res)
18285 ++{
18286 ++ if (res->flags & IORESOURCE_IO)
18287 ++ return !!(mask & IORESOURCE_IO);
18288 ++
18289 ++ /* check pref at first */
18290 ++ if (res->flags & IORESOURCE_PREFETCH) {
18291 ++ if (mask & IORESOURCE_PREFETCH)
18292 ++ return true;
18293 ++ /* count pref if its parent is non-pref */
18294 ++ else if ((mask & IORESOURCE_MEM) &&
18295 ++ !(res->parent->flags & IORESOURCE_PREFETCH))
18296 ++ return true;
18297 ++ else
18298 ++ return false;
18299 ++ }
18300 ++
18301 ++ if (res->flags & IORESOURCE_MEM)
18302 ++ return !!(mask & IORESOURCE_MEM);
18303 ++
18304 ++ return false; /* should not get here */
18305 ++}
18306 ++
18307 + static void __assign_resources_sorted(struct list_head *head,
18308 + struct list_head *realloc_head,
18309 + struct list_head *fail_head)
18310 +@@ -312,11 +353,24 @@ static void __assign_resources_sorted(struct list_head *head,
18311 + * if could do that, could get out early.
18312 + * if could not do that, we still try to assign requested at first,
18313 + * then try to reassign add_size for some resources.
18314 ++ *
18315 ++ * Separate three resource type checking if we need to release
18316 ++ * assigned resource after requested + add_size try.
18317 ++ * 1. if there is io port assign fail, will release assigned
18318 ++ * io port.
18319 ++ * 2. if there is pref mmio assign fail, release assigned
18320 ++ * pref mmio.
18321 ++ * if assigned pref mmio's parent is non-pref mmio and there
18322 ++ * is non-pref mmio assign fail, will release that assigned
18323 ++ * pref mmio.
18324 ++ * 3. if there is non-pref mmio assign fail or pref mmio
18325 ++ * assigned fail, will release assigned non-pref mmio.
18326 + */
18327 + LIST_HEAD(save_head);
18328 + LIST_HEAD(local_fail_head);
18329 + struct pci_dev_resource *save_res;
18330 +- struct pci_dev_resource *dev_res;
18331 ++ struct pci_dev_resource *dev_res, *tmp_res;
18332 ++ unsigned long fail_type;
18333 +
18334 + /* Check if optional add_size is there */
18335 + if (!realloc_head || list_empty(realloc_head))
18336 +@@ -348,6 +402,19 @@ static void __assign_resources_sorted(struct list_head *head,
18337 + return;
18338 + }
18339 +
18340 ++ /* check failed type */
18341 ++ fail_type = pci_fail_res_type_mask(&local_fail_head);
18342 ++ /* remove not need to be released assigned res from head list etc */
18343 ++ list_for_each_entry_safe(dev_res, tmp_res, head, list)
18344 ++ if (dev_res->res->parent &&
18345 ++ !pci_need_to_release(fail_type, dev_res->res)) {
18346 ++ /* remove it from realloc_head list */
18347 ++ remove_from_list(realloc_head, dev_res->res);
18348 ++ remove_from_list(&save_head, dev_res->res);
18349 ++ list_del(&dev_res->list);
18350 ++ kfree(dev_res);
18351 ++ }
18352 ++
18353 + free_list(&local_fail_head);
18354 + /* Release assigned resource */
18355 + list_for_each_entry(dev_res, head, list)
18356 +diff --git a/drivers/spi/spi-davinci.c b/drivers/spi/spi-davinci.c
18357 +index 50b13c9..df0aacc 100644
18358 +--- a/drivers/spi/spi-davinci.c
18359 ++++ b/drivers/spi/spi-davinci.c
18360 +@@ -610,7 +610,7 @@ static int davinci_spi_bufs(struct spi_device *spi, struct spi_transfer *t)
18361 + else
18362 + buf = (void *)t->tx_buf;
18363 + t->tx_dma = dma_map_single(&spi->dev, buf,
18364 +- t->len, DMA_FROM_DEVICE);
18365 ++ t->len, DMA_TO_DEVICE);
18366 + if (!t->tx_dma) {
18367 + ret = -EFAULT;
18368 + goto err_tx_map;
18369 +diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
18370 +index e34e3fe..1742ce5 100644
18371 +--- a/drivers/staging/zram/zram_drv.c
18372 ++++ b/drivers/staging/zram/zram_drv.c
18373 +@@ -272,8 +272,6 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
18374 +
18375 + if (page_zero_filled(uncmem)) {
18376 + kunmap_atomic(user_mem);
18377 +- if (is_partial_io(bvec))
18378 +- kfree(uncmem);
18379 + zram->stats.pages_zero++;
18380 + zram_set_flag(meta, index, ZRAM_ZERO);
18381 + ret = 0;
18382 +@@ -422,13 +420,20 @@ out:
18383 + */
18384 + static inline int valid_io_request(struct zram *zram, struct bio *bio)
18385 + {
18386 +- if (unlikely(
18387 +- (bio->bi_sector >= (zram->disksize >> SECTOR_SHIFT)) ||
18388 +- (bio->bi_sector & (ZRAM_SECTOR_PER_LOGICAL_BLOCK - 1)) ||
18389 +- (bio->bi_size & (ZRAM_LOGICAL_BLOCK_SIZE - 1)))) {
18390 ++ u64 start, end, bound;
18391 +
18392 ++ /* unaligned request */
18393 ++ if (unlikely(bio->bi_sector & (ZRAM_SECTOR_PER_LOGICAL_BLOCK - 1)))
18394 ++ return 0;
18395 ++ if (unlikely(bio->bi_size & (ZRAM_LOGICAL_BLOCK_SIZE - 1)))
18396 ++ return 0;
18397 ++
18398 ++ start = bio->bi_sector;
18399 ++ end = start + (bio->bi_size >> SECTOR_SHIFT);
18400 ++ bound = zram->disksize >> SECTOR_SHIFT;
18401 ++ /* out of range range */
18402 ++ if (unlikely(start >= bound || end >= bound || start > end))
18403 + return 0;
18404 +- }
18405 +
18406 + /* I/O request is valid */
18407 + return 1;
18408 +@@ -582,7 +587,9 @@ static void zram_slot_free_notify(struct block_device *bdev,
18409 + struct zram *zram;
18410 +
18411 + zram = bdev->bd_disk->private_data;
18412 ++ down_write(&zram->lock);
18413 + zram_free_page(zram, index);
18414 ++ up_write(&zram->lock);
18415 + zram_stat64_inc(zram, &zram->stats.notify_free);
18416 + }
18417 +
18418 +@@ -593,7 +600,7 @@ static const struct block_device_operations zram_devops = {
18419 +
18420 + static int create_device(struct zram *zram, int device_id)
18421 + {
18422 +- int ret = 0;
18423 ++ int ret = -ENOMEM;
18424 +
18425 + init_rwsem(&zram->lock);
18426 + init_rwsem(&zram->init_lock);
18427 +@@ -603,7 +610,6 @@ static int create_device(struct zram *zram, int device_id)
18428 + if (!zram->queue) {
18429 + pr_err("Error allocating disk queue for device %d\n",
18430 + device_id);
18431 +- ret = -ENOMEM;
18432 + goto out;
18433 + }
18434 +
18435 +@@ -613,11 +619,9 @@ static int create_device(struct zram *zram, int device_id)
18436 + /* gendisk structure */
18437 + zram->disk = alloc_disk(1);
18438 + if (!zram->disk) {
18439 +- blk_cleanup_queue(zram->queue);
18440 + pr_warn("Error allocating disk structure for device %d\n",
18441 + device_id);
18442 +- ret = -ENOMEM;
18443 +- goto out;
18444 ++ goto out_free_queue;
18445 + }
18446 +
18447 + zram->disk->major = zram_major;
18448 +@@ -646,11 +650,17 @@ static int create_device(struct zram *zram, int device_id)
18449 + &zram_disk_attr_group);
18450 + if (ret < 0) {
18451 + pr_warn("Error creating sysfs group");
18452 +- goto out;
18453 ++ goto out_free_disk;
18454 + }
18455 +
18456 + zram->init_done = 0;
18457 ++ return 0;
18458 +
18459 ++out_free_disk:
18460 ++ del_gendisk(zram->disk);
18461 ++ put_disk(zram->disk);
18462 ++out_free_queue:
18463 ++ blk_cleanup_queue(zram->queue);
18464 + out:
18465 + return ret;
18466 + }
18467 +@@ -727,8 +737,10 @@ static void __exit zram_exit(void)
18468 + for (i = 0; i < num_devices; i++) {
18469 + zram = &zram_devices[i];
18470 +
18471 ++ get_disk(zram->disk);
18472 + destroy_device(zram);
18473 + zram_reset_device(zram);
18474 ++ put_disk(zram->disk);
18475 + }
18476 +
18477 + unregister_blkdev(zram_major, "zram");
18478 +diff --git a/drivers/staging/zram/zram_drv.h b/drivers/staging/zram/zram_drv.h
18479 +index 2d1a3f1..d542eee 100644
18480 +--- a/drivers/staging/zram/zram_drv.h
18481 ++++ b/drivers/staging/zram/zram_drv.h
18482 +@@ -93,8 +93,9 @@ struct zram_meta {
18483 + struct zram {
18484 + struct zram_meta *meta;
18485 + spinlock_t stat64_lock; /* protect 64-bit stats */
18486 +- struct rw_semaphore lock; /* protect compression buffers and table
18487 +- * against concurrent read and writes */
18488 ++ struct rw_semaphore lock; /* protect compression buffers, table,
18489 ++ * 32bit stat counters against concurrent
18490 ++ * notifications, reads and writes */
18491 + struct request_queue *queue;
18492 + struct gendisk *disk;
18493 + int init_done;
18494 +diff --git a/drivers/staging/zram/zram_sysfs.c b/drivers/staging/zram/zram_sysfs.c
18495 +index e6a929d..dc76a3d 100644
18496 +--- a/drivers/staging/zram/zram_sysfs.c
18497 ++++ b/drivers/staging/zram/zram_sysfs.c
18498 +@@ -188,8 +188,10 @@ static ssize_t mem_used_total_show(struct device *dev,
18499 + struct zram *zram = dev_to_zram(dev);
18500 + struct zram_meta *meta = zram->meta;
18501 +
18502 ++ down_read(&zram->init_lock);
18503 + if (zram->init_done)
18504 + val = zs_get_total_size_bytes(meta->mem_pool);
18505 ++ up_read(&zram->init_lock);
18506 +
18507 + return sprintf(buf, "%llu\n", val);
18508 + }
18509 +diff --git a/drivers/tty/serial/8250/8250_gsc.c b/drivers/tty/serial/8250/8250_gsc.c
18510 +index bb91b47..2e3ea1a 100644
18511 +--- a/drivers/tty/serial/8250/8250_gsc.c
18512 ++++ b/drivers/tty/serial/8250/8250_gsc.c
18513 +@@ -31,9 +31,8 @@ static int __init serial_init_chip(struct parisc_device *dev)
18514 + int err;
18515 +
18516 + #ifdef CONFIG_64BIT
18517 +- extern int iosapic_serial_irq(int cellnum);
18518 + if (!dev->irq && (dev->id.sversion == 0xad))
18519 +- dev->irq = iosapic_serial_irq(dev->mod_index-1);
18520 ++ dev->irq = iosapic_serial_irq(dev);
18521 + #endif
18522 +
18523 + if (!dev->irq) {
18524 +diff --git a/drivers/tty/serial/arc_uart.c b/drivers/tty/serial/arc_uart.c
18525 +index cbf1d15..22f280a 100644
18526 +--- a/drivers/tty/serial/arc_uart.c
18527 ++++ b/drivers/tty/serial/arc_uart.c
18528 +@@ -773,6 +773,6 @@ module_init(arc_serial_init);
18529 + module_exit(arc_serial_exit);
18530 +
18531 + MODULE_LICENSE("GPL");
18532 +-MODULE_ALIAS("plat-arcfpga/uart");
18533 ++MODULE_ALIAS("platform:" DRIVER_NAME);
18534 + MODULE_AUTHOR("Vineet Gupta");
18535 + MODULE_DESCRIPTION("ARC(Synopsys) On-Chip(fpga) serial driver");
18536 +diff --git a/drivers/tty/serial/mxs-auart.c b/drivers/tty/serial/mxs-auart.c
18537 +index 4f5f161..f85b8e6 100644
18538 +--- a/drivers/tty/serial/mxs-auart.c
18539 ++++ b/drivers/tty/serial/mxs-auart.c
18540 +@@ -678,11 +678,18 @@ static void mxs_auart_settermios(struct uart_port *u,
18541 +
18542 + static irqreturn_t mxs_auart_irq_handle(int irq, void *context)
18543 + {
18544 +- u32 istatus, istat;
18545 ++ u32 istat;
18546 + struct mxs_auart_port *s = context;
18547 + u32 stat = readl(s->port.membase + AUART_STAT);
18548 +
18549 +- istatus = istat = readl(s->port.membase + AUART_INTR);
18550 ++ istat = readl(s->port.membase + AUART_INTR);
18551 ++
18552 ++ /* ack irq */
18553 ++ writel(istat & (AUART_INTR_RTIS
18554 ++ | AUART_INTR_TXIS
18555 ++ | AUART_INTR_RXIS
18556 ++ | AUART_INTR_CTSMIS),
18557 ++ s->port.membase + AUART_INTR_CLR);
18558 +
18559 + if (istat & AUART_INTR_CTSMIS) {
18560 + uart_handle_cts_change(&s->port, stat & AUART_STAT_CTS);
18561 +@@ -702,12 +709,6 @@ static irqreturn_t mxs_auart_irq_handle(int irq, void *context)
18562 + istat &= ~AUART_INTR_TXIS;
18563 + }
18564 +
18565 +- writel(istatus & (AUART_INTR_RTIS
18566 +- | AUART_INTR_TXIS
18567 +- | AUART_INTR_RXIS
18568 +- | AUART_INTR_CTSMIS),
18569 +- s->port.membase + AUART_INTR_CLR);
18570 +-
18571 + return IRQ_HANDLED;
18572 + }
18573 +
18574 +@@ -850,7 +851,7 @@ auart_console_write(struct console *co, const char *str, unsigned int count)
18575 + struct mxs_auart_port *s;
18576 + struct uart_port *port;
18577 + unsigned int old_ctrl0, old_ctrl2;
18578 +- unsigned int to = 1000;
18579 ++ unsigned int to = 20000;
18580 +
18581 + if (co->index >= MXS_AUART_PORTS || co->index < 0)
18582 + return;
18583 +@@ -871,18 +872,23 @@ auart_console_write(struct console *co, const char *str, unsigned int count)
18584 +
18585 + uart_console_write(port, str, count, mxs_auart_console_putchar);
18586 +
18587 +- /*
18588 +- * Finally, wait for transmitter to become empty
18589 +- * and restore the TCR
18590 +- */
18591 ++ /* Finally, wait for transmitter to become empty ... */
18592 + while (readl(port->membase + AUART_STAT) & AUART_STAT_BUSY) {
18593 ++ udelay(1);
18594 + if (!to--)
18595 + break;
18596 +- udelay(1);
18597 + }
18598 +
18599 +- writel(old_ctrl0, port->membase + AUART_CTRL0);
18600 +- writel(old_ctrl2, port->membase + AUART_CTRL2);
18601 ++ /*
18602 ++ * ... and restore the TCR if we waited long enough for the transmitter
18603 ++ * to be idle. This might keep the transmitter enabled although it is
18604 ++ * unused, but that is better than to disable it while it is still
18605 ++ * transmitting.
18606 ++ */
18607 ++ if (!(readl(port->membase + AUART_STAT) & AUART_STAT_BUSY)) {
18608 ++ writel(old_ctrl0, port->membase + AUART_CTRL0);
18609 ++ writel(old_ctrl2, port->membase + AUART_CTRL2);
18610 ++ }
18611 +
18612 + clk_disable(s->clk);
18613 + }
18614 +diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
18615 +index 62b86a6..b92d333 100644
18616 +--- a/drivers/usb/serial/mos7840.c
18617 ++++ b/drivers/usb/serial/mos7840.c
18618 +@@ -183,7 +183,10 @@
18619 + #define LED_ON_MS 500
18620 + #define LED_OFF_MS 500
18621 +
18622 +-static int device_type;
18623 ++enum mos7840_flag {
18624 ++ MOS7840_FLAG_CTRL_BUSY,
18625 ++ MOS7840_FLAG_LED_BUSY,
18626 ++};
18627 +
18628 + static const struct usb_device_id id_table[] = {
18629 + {USB_DEVICE(USB_VENDOR_ID_MOSCHIP, MOSCHIP_DEVICE_ID_7840)},
18630 +@@ -238,9 +241,12 @@ struct moschip_port {
18631 +
18632 + /* For device(s) with LED indicator */
18633 + bool has_led;
18634 +- bool led_flag;
18635 + struct timer_list led_timer1; /* Timer for LED on */
18636 + struct timer_list led_timer2; /* Timer for LED off */
18637 ++ struct urb *led_urb;
18638 ++ struct usb_ctrlrequest *led_dr;
18639 ++
18640 ++ unsigned long flags;
18641 + };
18642 +
18643 + /*
18644 +@@ -467,10 +473,10 @@ static void mos7840_control_callback(struct urb *urb)
18645 + case -ESHUTDOWN:
18646 + /* this urb is terminated, clean up */
18647 + dev_dbg(dev, "%s - urb shutting down with status: %d\n", __func__, status);
18648 +- return;
18649 ++ goto out;
18650 + default:
18651 + dev_dbg(dev, "%s - nonzero urb status received: %d\n", __func__, status);
18652 +- return;
18653 ++ goto out;
18654 + }
18655 +
18656 + dev_dbg(dev, "%s urb buffer size is %d\n", __func__, urb->actual_length);
18657 +@@ -483,6 +489,8 @@ static void mos7840_control_callback(struct urb *urb)
18658 + mos7840_handle_new_msr(mos7840_port, regval);
18659 + else if (mos7840_port->MsrLsr == 1)
18660 + mos7840_handle_new_lsr(mos7840_port, regval);
18661 ++out:
18662 ++ clear_bit_unlock(MOS7840_FLAG_CTRL_BUSY, &mos7840_port->flags);
18663 + }
18664 +
18665 + static int mos7840_get_reg(struct moschip_port *mcs, __u16 Wval, __u16 reg,
18666 +@@ -493,6 +501,9 @@ static int mos7840_get_reg(struct moschip_port *mcs, __u16 Wval, __u16 reg,
18667 + unsigned char *buffer = mcs->ctrl_buf;
18668 + int ret;
18669 +
18670 ++ if (test_and_set_bit_lock(MOS7840_FLAG_CTRL_BUSY, &mcs->flags))
18671 ++ return -EBUSY;
18672 ++
18673 + dr->bRequestType = MCS_RD_RTYPE;
18674 + dr->bRequest = MCS_RDREQ;
18675 + dr->wValue = cpu_to_le16(Wval); /* 0 */
18676 +@@ -504,6 +515,9 @@ static int mos7840_get_reg(struct moschip_port *mcs, __u16 Wval, __u16 reg,
18677 + mos7840_control_callback, mcs);
18678 + mcs->control_urb->transfer_buffer_length = 2;
18679 + ret = usb_submit_urb(mcs->control_urb, GFP_ATOMIC);
18680 ++ if (ret)
18681 ++ clear_bit_unlock(MOS7840_FLAG_CTRL_BUSY, &mcs->flags);
18682 ++
18683 + return ret;
18684 + }
18685 +
18686 +@@ -530,7 +544,7 @@ static void mos7840_set_led_async(struct moschip_port *mcs, __u16 wval,
18687 + __u16 reg)
18688 + {
18689 + struct usb_device *dev = mcs->port->serial->dev;
18690 +- struct usb_ctrlrequest *dr = mcs->dr;
18691 ++ struct usb_ctrlrequest *dr = mcs->led_dr;
18692 +
18693 + dr->bRequestType = MCS_WR_RTYPE;
18694 + dr->bRequest = MCS_WRREQ;
18695 +@@ -538,10 +552,10 @@ static void mos7840_set_led_async(struct moschip_port *mcs, __u16 wval,
18696 + dr->wIndex = cpu_to_le16(reg);
18697 + dr->wLength = cpu_to_le16(0);
18698 +
18699 +- usb_fill_control_urb(mcs->control_urb, dev, usb_sndctrlpipe(dev, 0),
18700 ++ usb_fill_control_urb(mcs->led_urb, dev, usb_sndctrlpipe(dev, 0),
18701 + (unsigned char *)dr, NULL, 0, mos7840_set_led_callback, NULL);
18702 +
18703 +- usb_submit_urb(mcs->control_urb, GFP_ATOMIC);
18704 ++ usb_submit_urb(mcs->led_urb, GFP_ATOMIC);
18705 + }
18706 +
18707 + static void mos7840_set_led_sync(struct usb_serial_port *port, __u16 reg,
18708 +@@ -567,7 +581,19 @@ static void mos7840_led_flag_off(unsigned long arg)
18709 + {
18710 + struct moschip_port *mcs = (struct moschip_port *) arg;
18711 +
18712 +- mcs->led_flag = false;
18713 ++ clear_bit_unlock(MOS7840_FLAG_LED_BUSY, &mcs->flags);
18714 ++}
18715 ++
18716 ++static void mos7840_led_activity(struct usb_serial_port *port)
18717 ++{
18718 ++ struct moschip_port *mos7840_port = usb_get_serial_port_data(port);
18719 ++
18720 ++ if (test_and_set_bit_lock(MOS7840_FLAG_LED_BUSY, &mos7840_port->flags))
18721 ++ return;
18722 ++
18723 ++ mos7840_set_led_async(mos7840_port, 0x0301, MODEM_CONTROL_REGISTER);
18724 ++ mod_timer(&mos7840_port->led_timer1,
18725 ++ jiffies + msecs_to_jiffies(LED_ON_MS));
18726 + }
18727 +
18728 + /*****************************************************************************
18729 +@@ -767,14 +793,8 @@ static void mos7840_bulk_in_callback(struct urb *urb)
18730 + return;
18731 + }
18732 +
18733 +- /* Turn on LED */
18734 +- if (mos7840_port->has_led && !mos7840_port->led_flag) {
18735 +- mos7840_port->led_flag = true;
18736 +- mos7840_set_led_async(mos7840_port, 0x0301,
18737 +- MODEM_CONTROL_REGISTER);
18738 +- mod_timer(&mos7840_port->led_timer1,
18739 +- jiffies + msecs_to_jiffies(LED_ON_MS));
18740 +- }
18741 ++ if (mos7840_port->has_led)
18742 ++ mos7840_led_activity(port);
18743 +
18744 + mos7840_port->read_urb_busy = true;
18745 + retval = usb_submit_urb(mos7840_port->read_urb, GFP_ATOMIC);
18746 +@@ -825,18 +845,6 @@ static void mos7840_bulk_out_data_callback(struct urb *urb)
18747 + /************************************************************************/
18748 + /* D R I V E R T T Y I N T E R F A C E F U N C T I O N S */
18749 + /************************************************************************/
18750 +-#ifdef MCSSerialProbe
18751 +-static int mos7840_serial_probe(struct usb_serial *serial,
18752 +- const struct usb_device_id *id)
18753 +-{
18754 +-
18755 +- /*need to implement the mode_reg reading and updating\
18756 +- structures usb_serial_ device_type\
18757 +- (i.e num_ports, num_bulkin,bulkout etc) */
18758 +- /* Also we can update the changes attach */
18759 +- return 1;
18760 +-}
18761 +-#endif
18762 +
18763 + /*****************************************************************************
18764 + * mos7840_open
18765 +@@ -1467,13 +1475,8 @@ static int mos7840_write(struct tty_struct *tty, struct usb_serial_port *port,
18766 + data1 = urb->transfer_buffer;
18767 + dev_dbg(&port->dev, "bulkout endpoint is %d\n", port->bulk_out_endpointAddress);
18768 +
18769 +- /* Turn on LED */
18770 +- if (mos7840_port->has_led && !mos7840_port->led_flag) {
18771 +- mos7840_port->led_flag = true;
18772 +- mos7840_set_led_sync(port, MODEM_CONTROL_REGISTER, 0x0301);
18773 +- mod_timer(&mos7840_port->led_timer1,
18774 +- jiffies + msecs_to_jiffies(LED_ON_MS));
18775 +- }
18776 ++ if (mos7840_port->has_led)
18777 ++ mos7840_led_activity(port);
18778 +
18779 + /* send it down the pipe */
18780 + status = usb_submit_urb(urb, GFP_ATOMIC);
18781 +@@ -2202,38 +2205,48 @@ static int mos7810_check(struct usb_serial *serial)
18782 + return 0;
18783 + }
18784 +
18785 +-static int mos7840_calc_num_ports(struct usb_serial *serial)
18786 ++static int mos7840_probe(struct usb_serial *serial,
18787 ++ const struct usb_device_id *id)
18788 + {
18789 +- __u16 data = 0x00;
18790 ++ u16 product = serial->dev->descriptor.idProduct;
18791 + u8 *buf;
18792 +- int mos7840_num_ports;
18793 ++ int device_type;
18794 ++
18795 ++ if (product == MOSCHIP_DEVICE_ID_7810 ||
18796 ++ product == MOSCHIP_DEVICE_ID_7820) {
18797 ++ device_type = product;
18798 ++ goto out;
18799 ++ }
18800 +
18801 + buf = kzalloc(VENDOR_READ_LENGTH, GFP_KERNEL);
18802 +- if (buf) {
18803 +- usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0),
18804 ++ if (!buf)
18805 ++ return -ENOMEM;
18806 ++
18807 ++ usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0),
18808 + MCS_RDREQ, MCS_RD_RTYPE, 0, GPIO_REGISTER, buf,
18809 + VENDOR_READ_LENGTH, MOS_WDR_TIMEOUT);
18810 +- data = *buf;
18811 +- kfree(buf);
18812 +- }
18813 +
18814 +- if (serial->dev->descriptor.idProduct == MOSCHIP_DEVICE_ID_7810 ||
18815 +- serial->dev->descriptor.idProduct == MOSCHIP_DEVICE_ID_7820) {
18816 +- device_type = serial->dev->descriptor.idProduct;
18817 +- } else {
18818 +- /* For a MCS7840 device GPIO0 must be set to 1 */
18819 +- if ((data & 0x01) == 1)
18820 +- device_type = MOSCHIP_DEVICE_ID_7840;
18821 +- else if (mos7810_check(serial))
18822 +- device_type = MOSCHIP_DEVICE_ID_7810;
18823 +- else
18824 +- device_type = MOSCHIP_DEVICE_ID_7820;
18825 +- }
18826 ++ /* For a MCS7840 device GPIO0 must be set to 1 */
18827 ++ if (buf[0] & 0x01)
18828 ++ device_type = MOSCHIP_DEVICE_ID_7840;
18829 ++ else if (mos7810_check(serial))
18830 ++ device_type = MOSCHIP_DEVICE_ID_7810;
18831 ++ else
18832 ++ device_type = MOSCHIP_DEVICE_ID_7820;
18833 ++
18834 ++ kfree(buf);
18835 ++out:
18836 ++ usb_set_serial_data(serial, (void *)(unsigned long)device_type);
18837 ++
18838 ++ return 0;
18839 ++}
18840 ++
18841 ++static int mos7840_calc_num_ports(struct usb_serial *serial)
18842 ++{
18843 ++ int device_type = (unsigned long)usb_get_serial_data(serial);
18844 ++ int mos7840_num_ports;
18845 +
18846 + mos7840_num_ports = (device_type >> 4) & 0x000F;
18847 +- serial->num_bulk_in = mos7840_num_ports;
18848 +- serial->num_bulk_out = mos7840_num_ports;
18849 +- serial->num_ports = mos7840_num_ports;
18850 +
18851 + return mos7840_num_ports;
18852 + }
18853 +@@ -2241,6 +2254,7 @@ static int mos7840_calc_num_ports(struct usb_serial *serial)
18854 + static int mos7840_port_probe(struct usb_serial_port *port)
18855 + {
18856 + struct usb_serial *serial = port->serial;
18857 ++ int device_type = (unsigned long)usb_get_serial_data(serial);
18858 + struct moschip_port *mos7840_port;
18859 + int status;
18860 + int pnum;
18861 +@@ -2418,6 +2432,14 @@ static int mos7840_port_probe(struct usb_serial_port *port)
18862 + if (device_type == MOSCHIP_DEVICE_ID_7810) {
18863 + mos7840_port->has_led = true;
18864 +
18865 ++ mos7840_port->led_urb = usb_alloc_urb(0, GFP_KERNEL);
18866 ++ mos7840_port->led_dr = kmalloc(sizeof(*mos7840_port->led_dr),
18867 ++ GFP_KERNEL);
18868 ++ if (!mos7840_port->led_urb || !mos7840_port->led_dr) {
18869 ++ status = -ENOMEM;
18870 ++ goto error;
18871 ++ }
18872 ++
18873 + init_timer(&mos7840_port->led_timer1);
18874 + mos7840_port->led_timer1.function = mos7840_led_off;
18875 + mos7840_port->led_timer1.expires =
18876 +@@ -2430,8 +2452,6 @@ static int mos7840_port_probe(struct usb_serial_port *port)
18877 + jiffies + msecs_to_jiffies(LED_OFF_MS);
18878 + mos7840_port->led_timer2.data = (unsigned long)mos7840_port;
18879 +
18880 +- mos7840_port->led_flag = false;
18881 +-
18882 + /* Turn off LED */
18883 + mos7840_set_led_sync(port, MODEM_CONTROL_REGISTER, 0x0300);
18884 + }
18885 +@@ -2453,6 +2473,8 @@ out:
18886 + }
18887 + return 0;
18888 + error:
18889 ++ kfree(mos7840_port->led_dr);
18890 ++ usb_free_urb(mos7840_port->led_urb);
18891 + kfree(mos7840_port->dr);
18892 + kfree(mos7840_port->ctrl_buf);
18893 + usb_free_urb(mos7840_port->control_urb);
18894 +@@ -2473,6 +2495,10 @@ static int mos7840_port_remove(struct usb_serial_port *port)
18895 +
18896 + del_timer_sync(&mos7840_port->led_timer1);
18897 + del_timer_sync(&mos7840_port->led_timer2);
18898 ++
18899 ++ usb_kill_urb(mos7840_port->led_urb);
18900 ++ usb_free_urb(mos7840_port->led_urb);
18901 ++ kfree(mos7840_port->led_dr);
18902 + }
18903 + usb_kill_urb(mos7840_port->control_urb);
18904 + usb_free_urb(mos7840_port->control_urb);
18905 +@@ -2499,9 +2525,7 @@ static struct usb_serial_driver moschip7840_4port_device = {
18906 + .throttle = mos7840_throttle,
18907 + .unthrottle = mos7840_unthrottle,
18908 + .calc_num_ports = mos7840_calc_num_ports,
18909 +-#ifdef MCSSerialProbe
18910 +- .probe = mos7840_serial_probe,
18911 +-#endif
18912 ++ .probe = mos7840_probe,
18913 + .ioctl = mos7840_ioctl,
18914 + .set_termios = mos7840_set_termios,
18915 + .break_ctl = mos7840_break,
18916 +diff --git a/fs/btrfs/ulist.c b/fs/btrfs/ulist.c
18917 +index 7b417e2..b0a523b2 100644
18918 +--- a/fs/btrfs/ulist.c
18919 ++++ b/fs/btrfs/ulist.c
18920 +@@ -205,6 +205,10 @@ int ulist_add_merge(struct ulist *ulist, u64 val, u64 aux,
18921 + u64 new_alloced = ulist->nodes_alloced + 128;
18922 + struct ulist_node *new_nodes;
18923 + void *old = NULL;
18924 ++ int i;
18925 ++
18926 ++ for (i = 0; i < ulist->nnodes; i++)
18927 ++ rb_erase(&ulist->nodes[i].rb_node, &ulist->root);
18928 +
18929 + /*
18930 + * if nodes_alloced == ULIST_SIZE no memory has been allocated
18931 +@@ -224,6 +228,17 @@ int ulist_add_merge(struct ulist *ulist, u64 val, u64 aux,
18932 +
18933 + ulist->nodes = new_nodes;
18934 + ulist->nodes_alloced = new_alloced;
18935 ++
18936 ++ /*
18937 ++ * krealloc actually uses memcpy, which does not copy rb_node
18938 ++ * pointers, so we have to do it ourselves. Otherwise we may
18939 ++ * be bitten by crashes.
18940 ++ */
18941 ++ for (i = 0; i < ulist->nnodes; i++) {
18942 ++ ret = ulist_rbtree_insert(ulist, &ulist->nodes[i]);
18943 ++ if (ret < 0)
18944 ++ return ret;
18945 ++ }
18946 + }
18947 + ulist->nodes[ulist->nnodes].val = val;
18948 + ulist->nodes[ulist->nnodes].aux = aux;
18949 +diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
18950 +index 6c80083..77cc85d 100644
18951 +--- a/fs/notify/fanotify/fanotify_user.c
18952 ++++ b/fs/notify/fanotify/fanotify_user.c
18953 +@@ -122,6 +122,7 @@ static int fill_event_metadata(struct fsnotify_group *group,
18954 + metadata->event_len = FAN_EVENT_METADATA_LEN;
18955 + metadata->metadata_len = FAN_EVENT_METADATA_LEN;
18956 + metadata->vers = FANOTIFY_METADATA_VERSION;
18957 ++ metadata->reserved = 0;
18958 + metadata->mask = event->mask & FAN_ALL_OUTGOING_EVENTS;
18959 + metadata->pid = pid_vnr(event->tgid);
18960 + if (unlikely(event->mask & FAN_Q_OVERFLOW))
18961 +diff --git a/include/linux/tick.h b/include/linux/tick.h
18962 +index 9180f4b..62bd8b7 100644
18963 +--- a/include/linux/tick.h
18964 ++++ b/include/linux/tick.h
18965 +@@ -174,10 +174,4 @@ static inline void tick_nohz_task_switch(struct task_struct *tsk) { }
18966 + #endif
18967 +
18968 +
18969 +-# ifdef CONFIG_CPU_IDLE_GOV_MENU
18970 +-extern void menu_hrtimer_cancel(void);
18971 +-# else
18972 +-static inline void menu_hrtimer_cancel(void) {}
18973 +-# endif /* CONFIG_CPU_IDLE_GOV_MENU */
18974 +-
18975 + #endif
18976 +diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
18977 +index b6b215f..14105c2 100644
18978 +--- a/include/linux/user_namespace.h
18979 ++++ b/include/linux/user_namespace.h
18980 +@@ -23,6 +23,7 @@ struct user_namespace {
18981 + struct uid_gid_map projid_map;
18982 + atomic_t count;
18983 + struct user_namespace *parent;
18984 ++ int level;
18985 + kuid_t owner;
18986 + kgid_t group;
18987 + unsigned int proc_inum;
18988 +diff --git a/include/net/ndisc.h b/include/net/ndisc.h
18989 +index 745bf74..5043f8b 100644
18990 +--- a/include/net/ndisc.h
18991 ++++ b/include/net/ndisc.h
18992 +@@ -119,7 +119,7 @@ extern struct ndisc_options *ndisc_parse_options(u8 *opt, int opt_len,
18993 + * if RFC 3831 IPv6-over-Fibre Channel is ever implemented it may
18994 + * also need a pad of 2.
18995 + */
18996 +-static int ndisc_addr_option_pad(unsigned short type)
18997 ++static inline int ndisc_addr_option_pad(unsigned short type)
18998 + {
18999 + switch (type) {
19000 + case ARPHRD_INFINIBAND: return 2;
19001 +diff --git a/kernel/cgroup.c b/kernel/cgroup.c
19002 +index c6e77ef..2e9b387 100644
19003 +--- a/kernel/cgroup.c
19004 ++++ b/kernel/cgroup.c
19005 +@@ -2769,13 +2769,17 @@ static void cgroup_cfts_commit(struct cgroup_subsys *ss,
19006 + {
19007 + LIST_HEAD(pending);
19008 + struct cgroup *cgrp, *n;
19009 ++ struct super_block *sb = ss->root->sb;
19010 +
19011 + /* %NULL @cfts indicates abort and don't bother if @ss isn't attached */
19012 +- if (cfts && ss->root != &rootnode) {
19013 ++ if (cfts && ss->root != &rootnode &&
19014 ++ atomic_inc_not_zero(&sb->s_active)) {
19015 + list_for_each_entry(cgrp, &ss->root->allcg_list, allcg_node) {
19016 + dget(cgrp->dentry);
19017 + list_add_tail(&cgrp->cft_q_node, &pending);
19018 + }
19019 ++ } else {
19020 ++ sb = NULL;
19021 + }
19022 +
19023 + mutex_unlock(&cgroup_mutex);
19024 +@@ -2798,6 +2802,9 @@ static void cgroup_cfts_commit(struct cgroup_subsys *ss,
19025 + dput(cgrp->dentry);
19026 + }
19027 +
19028 ++ if (sb)
19029 ++ deactivate_super(sb);
19030 ++
19031 + mutex_unlock(&cgroup_cft_mutex);
19032 + }
19033 +
19034 +diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
19035 +index 0cf1c14..4251374 100644
19036 +--- a/kernel/time/tick-sched.c
19037 ++++ b/kernel/time/tick-sched.c
19038 +@@ -832,13 +832,10 @@ void tick_nohz_irq_exit(void)
19039 + {
19040 + struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched);
19041 +
19042 +- if (ts->inidle) {
19043 +- /* Cancel the timer because CPU already waken up from the C-states*/
19044 +- menu_hrtimer_cancel();
19045 ++ if (ts->inidle)
19046 + __tick_nohz_idle_enter(ts);
19047 +- } else {
19048 ++ else
19049 + tick_nohz_full_stop_tick(ts);
19050 +- }
19051 + }
19052 +
19053 + /**
19054 +@@ -936,8 +933,6 @@ void tick_nohz_idle_exit(void)
19055 +
19056 + ts->inidle = 0;
19057 +
19058 +- /* Cancel the timer because CPU already waken up from the C-states*/
19059 +- menu_hrtimer_cancel();
19060 + if (ts->idle_active || ts->tick_stopped)
19061 + now = ktime_get();
19062 +
19063 +diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
19064 +index d8c30db..9064b91 100644
19065 +--- a/kernel/user_namespace.c
19066 ++++ b/kernel/user_namespace.c
19067 +@@ -62,6 +62,9 @@ int create_user_ns(struct cred *new)
19068 + kgid_t group = new->egid;
19069 + int ret;
19070 +
19071 ++ if (parent_ns->level > 32)
19072 ++ return -EUSERS;
19073 ++
19074 + /*
19075 + * Verify that we can not violate the policy of which files
19076 + * may be accessed that is specified by the root directory,
19077 +@@ -92,6 +95,7 @@ int create_user_ns(struct cred *new)
19078 + atomic_set(&ns->count, 1);
19079 + /* Leave the new->user_ns reference with the new user namespace. */
19080 + ns->parent = parent_ns;
19081 ++ ns->level = parent_ns->level + 1;
19082 + ns->owner = owner;
19083 + ns->group = group;
19084 +
19085 +@@ -105,16 +109,21 @@ int create_user_ns(struct cred *new)
19086 + int unshare_userns(unsigned long unshare_flags, struct cred **new_cred)
19087 + {
19088 + struct cred *cred;
19089 ++ int err = -ENOMEM;
19090 +
19091 + if (!(unshare_flags & CLONE_NEWUSER))
19092 + return 0;
19093 +
19094 + cred = prepare_creds();
19095 +- if (!cred)
19096 +- return -ENOMEM;
19097 ++ if (cred) {
19098 ++ err = create_user_ns(cred);
19099 ++ if (err)
19100 ++ put_cred(cred);
19101 ++ else
19102 ++ *new_cred = cred;
19103 ++ }
19104 +
19105 +- *new_cred = cred;
19106 +- return create_user_ns(cred);
19107 ++ return err;
19108 + }
19109 +
19110 + void free_user_ns(struct user_namespace *ns)
19111 +diff --git a/kernel/workqueue.c b/kernel/workqueue.c
19112 +index ee8e29a..6f01921 100644
19113 +--- a/kernel/workqueue.c
19114 ++++ b/kernel/workqueue.c
19115 +@@ -3398,6 +3398,12 @@ static void copy_workqueue_attrs(struct workqueue_attrs *to,
19116 + {
19117 + to->nice = from->nice;
19118 + cpumask_copy(to->cpumask, from->cpumask);
19119 ++ /*
19120 ++ * Unlike hash and equality test, this function doesn't ignore
19121 ++ * ->no_numa as it is used for both pool and wq attrs. Instead,
19122 ++ * get_unbound_pool() explicitly clears ->no_numa after copying.
19123 ++ */
19124 ++ to->no_numa = from->no_numa;
19125 + }
19126 +
19127 + /* hash value of the content of @attr */
19128 +@@ -3565,6 +3571,12 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
19129 + lockdep_set_subclass(&pool->lock, 1); /* see put_pwq() */
19130 + copy_workqueue_attrs(pool->attrs, attrs);
19131 +
19132 ++ /*
19133 ++ * no_numa isn't a worker_pool attribute, always clear it. See
19134 ++ * 'struct workqueue_attrs' comments for detail.
19135 ++ */
19136 ++ pool->attrs->no_numa = false;
19137 ++
19138 + /* if cpumask is contained inside a NUMA node, we belong to that node */
19139 + if (wq_numa_enabled) {
19140 + for_each_node(node) {
19141 +diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
19142 +index fa2f63f..3f25e75 100644
19143 +--- a/net/ipv4/sysctl_net_ipv4.c
19144 ++++ b/net/ipv4/sysctl_net_ipv4.c
19145 +@@ -36,6 +36,8 @@ static int tcp_adv_win_scale_min = -31;
19146 + static int tcp_adv_win_scale_max = 31;
19147 + static int ip_ttl_min = 1;
19148 + static int ip_ttl_max = 255;
19149 ++static int tcp_syn_retries_min = 1;
19150 ++static int tcp_syn_retries_max = MAX_TCP_SYNCNT;
19151 + static int ip_ping_group_range_min[] = { 0, 0 };
19152 + static int ip_ping_group_range_max[] = { GID_T_MAX, GID_T_MAX };
19153 +
19154 +@@ -331,7 +333,9 @@ static struct ctl_table ipv4_table[] = {
19155 + .data = &sysctl_tcp_syn_retries,
19156 + .maxlen = sizeof(int),
19157 + .mode = 0644,
19158 +- .proc_handler = proc_dointvec
19159 ++ .proc_handler = proc_dointvec_minmax,
19160 ++ .extra1 = &tcp_syn_retries_min,
19161 ++ .extra2 = &tcp_syn_retries_max
19162 + },
19163 + {
19164 + .procname = "tcp_synack_retries",
19165 +diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
19166 +index 241fb8a..4b42124 100644
19167 +--- a/net/ipv6/ip6mr.c
19168 ++++ b/net/ipv6/ip6mr.c
19169 +@@ -259,10 +259,12 @@ static void __net_exit ip6mr_rules_exit(struct net *net)
19170 + {
19171 + struct mr6_table *mrt, *next;
19172 +
19173 ++ rtnl_lock();
19174 + list_for_each_entry_safe(mrt, next, &net->ipv6.mr6_tables, list) {
19175 + list_del(&mrt->list);
19176 + ip6mr_free_table(mrt);
19177 + }
19178 ++ rtnl_unlock();
19179 + fib_rules_unregister(net->ipv6.mr6_rules_ops);
19180 + }
19181 + #else
19182 +@@ -289,7 +291,10 @@ static int __net_init ip6mr_rules_init(struct net *net)
19183 +
19184 + static void __net_exit ip6mr_rules_exit(struct net *net)
19185 + {
19186 ++ rtnl_lock();
19187 + ip6mr_free_table(net->ipv6.mrt6);
19188 ++ net->ipv6.mrt6 = NULL;
19189 ++ rtnl_unlock();
19190 + }
19191 + #endif
19192 +
19193 +diff --git a/net/key/af_key.c b/net/key/af_key.c
19194 +index 9da8620..ab8bd2c 100644
19195 +--- a/net/key/af_key.c
19196 ++++ b/net/key/af_key.c
19197 +@@ -2081,6 +2081,7 @@ static int pfkey_xfrm_policy2msg(struct sk_buff *skb, const struct xfrm_policy *
19198 + pol->sadb_x_policy_type = IPSEC_POLICY_NONE;
19199 + }
19200 + pol->sadb_x_policy_dir = dir+1;
19201 ++ pol->sadb_x_policy_reserved = 0;
19202 + pol->sadb_x_policy_id = xp->index;
19203 + pol->sadb_x_policy_priority = xp->priority;
19204 +
19205 +@@ -3137,7 +3138,9 @@ static int pfkey_send_acquire(struct xfrm_state *x, struct xfrm_tmpl *t, struct
19206 + pol->sadb_x_policy_exttype = SADB_X_EXT_POLICY;
19207 + pol->sadb_x_policy_type = IPSEC_POLICY_IPSEC;
19208 + pol->sadb_x_policy_dir = XFRM_POLICY_OUT + 1;
19209 ++ pol->sadb_x_policy_reserved = 0;
19210 + pol->sadb_x_policy_id = xp->index;
19211 ++ pol->sadb_x_policy_priority = xp->priority;
19212 +
19213 + /* Set sadb_comb's. */
19214 + if (x->id.proto == IPPROTO_AH)
19215 +@@ -3525,6 +3528,7 @@ static int pfkey_send_migrate(const struct xfrm_selector *sel, u8 dir, u8 type,
19216 + pol->sadb_x_policy_exttype = SADB_X_EXT_POLICY;
19217 + pol->sadb_x_policy_type = IPSEC_POLICY_IPSEC;
19218 + pol->sadb_x_policy_dir = dir + 1;
19219 ++ pol->sadb_x_policy_reserved = 0;
19220 + pol->sadb_x_policy_id = 0;
19221 + pol->sadb_x_policy_priority = 0;
19222 +
19223 +diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
19224 +index 4fdb306e..ae36f8e 100644
19225 +--- a/net/mac80211/cfg.c
19226 ++++ b/net/mac80211/cfg.c
19227 +@@ -652,6 +652,8 @@ static void ieee80211_get_et_stats(struct wiphy *wiphy,
19228 + if (sta->sdata->dev != dev)
19229 + continue;
19230 +
19231 ++ sinfo.filled = 0;
19232 ++ sta_set_sinfo(sta, &sinfo);
19233 + i = 0;
19234 + ADD_STA_STATS(sta);
19235 + }
19236 +diff --git a/net/mac80211/pm.c b/net/mac80211/pm.c
19237 +index 7fc5d0d..3401262 100644
19238 +--- a/net/mac80211/pm.c
19239 ++++ b/net/mac80211/pm.c
19240 +@@ -99,10 +99,13 @@ int __ieee80211_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan)
19241 + }
19242 + mutex_unlock(&local->sta_mtx);
19243 +
19244 +- /* remove all interfaces */
19245 ++ /* remove all interfaces that were created in the driver */
19246 + list_for_each_entry(sdata, &local->interfaces, list) {
19247 +- if (!ieee80211_sdata_running(sdata))
19248 ++ if (!ieee80211_sdata_running(sdata) ||
19249 ++ sdata->vif.type == NL80211_IFTYPE_AP_VLAN ||
19250 ++ sdata->vif.type == NL80211_IFTYPE_MONITOR)
19251 + continue;
19252 ++
19253 + drv_remove_interface(local, sdata);
19254 + }
19255 +
19256 +diff --git a/net/mac80211/rc80211_minstrel.c b/net/mac80211/rc80211_minstrel.c
19257 +index ac7ef54..e6512e2 100644
19258 +--- a/net/mac80211/rc80211_minstrel.c
19259 ++++ b/net/mac80211/rc80211_minstrel.c
19260 +@@ -290,7 +290,7 @@ minstrel_get_rate(void *priv, struct ieee80211_sta *sta,
19261 + struct minstrel_rate *msr, *mr;
19262 + unsigned int ndx;
19263 + bool mrr_capable;
19264 +- bool prev_sample = mi->prev_sample;
19265 ++ bool prev_sample;
19266 + int delta;
19267 + int sampling_ratio;
19268 +
19269 +@@ -314,6 +314,7 @@ minstrel_get_rate(void *priv, struct ieee80211_sta *sta,
19270 + (mi->sample_count + mi->sample_deferred / 2);
19271 +
19272 + /* delta < 0: no sampling required */
19273 ++ prev_sample = mi->prev_sample;
19274 + mi->prev_sample = false;
19275 + if (delta < 0 || (!mrr_capable && prev_sample))
19276 + return;
19277 +diff --git a/net/mac80211/rc80211_minstrel_ht.c b/net/mac80211/rc80211_minstrel_ht.c
19278 +index 5b2d301..f5aed96 100644
19279 +--- a/net/mac80211/rc80211_minstrel_ht.c
19280 ++++ b/net/mac80211/rc80211_minstrel_ht.c
19281 +@@ -804,10 +804,18 @@ minstrel_ht_get_rate(void *priv, struct ieee80211_sta *sta, void *priv_sta,
19282 +
19283 + sample_group = &minstrel_mcs_groups[sample_idx / MCS_GROUP_RATES];
19284 + info->flags |= IEEE80211_TX_CTL_RATE_CTRL_PROBE;
19285 ++ rate->count = 1;
19286 ++
19287 ++ if (sample_idx / MCS_GROUP_RATES == MINSTREL_CCK_GROUP) {
19288 ++ int idx = sample_idx % ARRAY_SIZE(mp->cck_rates);
19289 ++ rate->idx = mp->cck_rates[idx];
19290 ++ rate->flags = 0;
19291 ++ return;
19292 ++ }
19293 ++
19294 + rate->idx = sample_idx % MCS_GROUP_RATES +
19295 + (sample_group->streams - 1) * MCS_GROUP_RATES;
19296 + rate->flags = IEEE80211_TX_RC_MCS | sample_group->flags;
19297 +- rate->count = 1;
19298 + }
19299 +
19300 + static void
19301 +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
19302 +index 8e29526..83f6d29 100644
19303 +--- a/net/mac80211/rx.c
19304 ++++ b/net/mac80211/rx.c
19305 +@@ -932,8 +932,14 @@ ieee80211_rx_h_check(struct ieee80211_rx_data *rx)
19306 + struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)rx->skb->data;
19307 + struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
19308 +
19309 +- /* Drop duplicate 802.11 retransmissions (IEEE 802.11 Chap. 9.2.9) */
19310 +- if (rx->sta && !is_multicast_ether_addr(hdr->addr1)) {
19311 ++ /*
19312 ++ * Drop duplicate 802.11 retransmissions
19313 ++ * (IEEE 802.11-2012: 9.3.2.10 "Duplicate detection and recovery")
19314 ++ */
19315 ++ if (rx->skb->len >= 24 && rx->sta &&
19316 ++ !ieee80211_is_ctl(hdr->frame_control) &&
19317 ++ !ieee80211_is_qos_nullfunc(hdr->frame_control) &&
19318 ++ !is_multicast_ether_addr(hdr->addr1)) {
19319 + if (unlikely(ieee80211_has_retry(hdr->frame_control) &&
19320 + rx->sta->last_seq_ctrl[rx->seqno_idx] ==
19321 + hdr->seq_ctrl)) {
19322 +diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
19323 +index 2fd6dbe..1076fe1 100644
19324 +--- a/net/netlink/genetlink.c
19325 ++++ b/net/netlink/genetlink.c
19326 +@@ -877,8 +877,10 @@ static int ctrl_getfamily(struct sk_buff *skb, struct genl_info *info)
19327 + #ifdef CONFIG_MODULES
19328 + if (res == NULL) {
19329 + genl_unlock();
19330 ++ up_read(&cb_lock);
19331 + request_module("net-pf-%d-proto-%d-family-%s",
19332 + PF_NETLINK, NETLINK_GENERIC, name);
19333 ++ down_read(&cb_lock);
19334 + genl_lock();
19335 + res = genl_family_find_byname(name);
19336 + }
19337 +diff --git a/net/sched/sch_atm.c b/net/sched/sch_atm.c
19338 +index ca8e0a5..1f9c314 100644
19339 +--- a/net/sched/sch_atm.c
19340 ++++ b/net/sched/sch_atm.c
19341 +@@ -605,6 +605,7 @@ static int atm_tc_dump_class(struct Qdisc *sch, unsigned long cl,
19342 + struct sockaddr_atmpvc pvc;
19343 + int state;
19344 +
19345 ++ memset(&pvc, 0, sizeof(pvc));
19346 + pvc.sap_family = AF_ATMPVC;
19347 + pvc.sap_addr.itf = flow->vcc->dev ? flow->vcc->dev->number : -1;
19348 + pvc.sap_addr.vpi = flow->vcc->vpi;
19349 +diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
19350 +index 1bc210f..8ec1598 100644
19351 +--- a/net/sched/sch_cbq.c
19352 ++++ b/net/sched/sch_cbq.c
19353 +@@ -1465,6 +1465,7 @@ static int cbq_dump_wrr(struct sk_buff *skb, struct cbq_class *cl)
19354 + unsigned char *b = skb_tail_pointer(skb);
19355 + struct tc_cbq_wrropt opt;
19356 +
19357 ++ memset(&opt, 0, sizeof(opt));
19358 + opt.flags = 0;
19359 + opt.allot = cl->allot;
19360 + opt.priority = cl->priority + 1;
19361 +diff --git a/net/sunrpc/auth_gss/gss_rpc_upcall.c b/net/sunrpc/auth_gss/gss_rpc_upcall.c
19362 +index d304f41..af7ffd4 100644
19363 +--- a/net/sunrpc/auth_gss/gss_rpc_upcall.c
19364 ++++ b/net/sunrpc/auth_gss/gss_rpc_upcall.c
19365 +@@ -120,7 +120,7 @@ static int gssp_rpc_create(struct net *net, struct rpc_clnt **_clnt)
19366 + if (IS_ERR(clnt)) {
19367 + dprintk("RPC: failed to create AF_LOCAL gssproxy "
19368 + "client (errno %ld).\n", PTR_ERR(clnt));
19369 +- result = -PTR_ERR(clnt);
19370 ++ result = PTR_ERR(clnt);
19371 + *_clnt = NULL;
19372 + goto out;
19373 + }
19374 +@@ -328,7 +328,6 @@ void gssp_free_upcall_data(struct gssp_upcall_data *data)
19375 + kfree(data->in_handle.data);
19376 + kfree(data->out_handle.data);
19377 + kfree(data->out_token.data);
19378 +- kfree(data->mech_oid.data);
19379 + free_svc_cred(&data->creds);
19380 + }
19381 +
19382 +diff --git a/net/sunrpc/auth_gss/gss_rpc_xdr.c b/net/sunrpc/auth_gss/gss_rpc_xdr.c
19383 +index 357f613..3c85d1c 100644
19384 +--- a/net/sunrpc/auth_gss/gss_rpc_xdr.c
19385 ++++ b/net/sunrpc/auth_gss/gss_rpc_xdr.c
19386 +@@ -430,7 +430,7 @@ static int dummy_enc_nameattr_array(struct xdr_stream *xdr,
19387 + static int dummy_dec_nameattr_array(struct xdr_stream *xdr,
19388 + struct gssx_name_attr_array *naa)
19389 + {
19390 +- struct gssx_name_attr dummy;
19391 ++ struct gssx_name_attr dummy = { .attr = {.len = 0} };
19392 + u32 count, i;
19393 + __be32 *p;
19394 +
19395 +@@ -493,12 +493,13 @@ static int gssx_enc_name(struct xdr_stream *xdr,
19396 + return err;
19397 + }
19398 +
19399 ++
19400 + static int gssx_dec_name(struct xdr_stream *xdr,
19401 + struct gssx_name *name)
19402 + {
19403 +- struct xdr_netobj dummy_netobj;
19404 +- struct gssx_name_attr_array dummy_name_attr_array;
19405 +- struct gssx_option_array dummy_option_array;
19406 ++ struct xdr_netobj dummy_netobj = { .len = 0 };
19407 ++ struct gssx_name_attr_array dummy_name_attr_array = { .count = 0 };
19408 ++ struct gssx_option_array dummy_option_array = { .count = 0 };
19409 + int err;
19410 +
19411 + /* name->display_name */
19412 +diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
19413 +index b14b7e3..db8ead9 100644
19414 +--- a/net/wireless/nl80211.c
19415 ++++ b/net/wireless/nl80211.c
19416 +@@ -6588,12 +6588,14 @@ EXPORT_SYMBOL(cfg80211_testmode_alloc_event_skb);
19417 +
19418 + void cfg80211_testmode_event(struct sk_buff *skb, gfp_t gfp)
19419 + {
19420 ++ struct cfg80211_registered_device *rdev = ((void **)skb->cb)[0];
19421 + void *hdr = ((void **)skb->cb)[1];
19422 + struct nlattr *data = ((void **)skb->cb)[2];
19423 +
19424 + nla_nest_end(skb, data);
19425 + genlmsg_end(skb, hdr);
19426 +- genlmsg_multicast(skb, 0, nl80211_testmode_mcgrp.id, gfp);
19427 ++ genlmsg_multicast_netns(wiphy_net(&rdev->wiphy), skb, 0,
19428 ++ nl80211_testmode_mcgrp.id, gfp);
19429 + }
19430 + EXPORT_SYMBOL(cfg80211_testmode_event);
19431 + #endif
19432 +@@ -10028,7 +10030,8 @@ void cfg80211_mgmt_tx_status(struct wireless_dev *wdev, u64 cookie,
19433 +
19434 + genlmsg_end(msg, hdr);
19435 +
19436 +- genlmsg_multicast(msg, 0, nl80211_mlme_mcgrp.id, gfp);
19437 ++ genlmsg_multicast_netns(wiphy_net(&rdev->wiphy), msg, 0,
19438 ++ nl80211_mlme_mcgrp.id, gfp);
19439 + return;
19440 +
19441 + nla_put_failure:
19442 +diff --git a/sound/core/compress_offload.c b/sound/core/compress_offload.c
19443 +index 99db892..9896954 100644
19444 +--- a/sound/core/compress_offload.c
19445 ++++ b/sound/core/compress_offload.c
19446 +@@ -743,7 +743,7 @@ static long snd_compr_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
19447 + mutex_lock(&stream->device->lock);
19448 + switch (_IOC_NR(cmd)) {
19449 + case _IOC_NR(SNDRV_COMPRESS_IOCTL_VERSION):
19450 +- put_user(SNDRV_COMPRESS_VERSION,
19451 ++ retval = put_user(SNDRV_COMPRESS_VERSION,
19452 + (int __user *)arg) ? -EFAULT : 0;
19453 + break;
19454 + case _IOC_NR(SNDRV_COMPRESS_GET_CAPS):
19455 +diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c
19456 +index 7c11d46..48a9d00 100644
19457 +--- a/sound/pci/hda/hda_auto_parser.c
19458 ++++ b/sound/pci/hda/hda_auto_parser.c
19459 +@@ -860,7 +860,7 @@ void snd_hda_pick_fixup(struct hda_codec *codec,
19460 + }
19461 + }
19462 + if (id < 0 && quirk) {
19463 +- for (q = quirk; q->subvendor; q++) {
19464 ++ for (q = quirk; q->subvendor || q->subdevice; q++) {
19465 + unsigned int vendorid =
19466 + q->subdevice | (q->subvendor << 16);
19467 + unsigned int mask = 0xffff0000 | q->subdevice_mask;
19468 +diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
19469 +index e849e1e..dc4833f 100644
19470 +--- a/sound/pci/hda/patch_sigmatel.c
19471 ++++ b/sound/pci/hda/patch_sigmatel.c
19472 +@@ -2815,6 +2815,7 @@ static const struct hda_pintbl ecs202_pin_configs[] = {
19473 +
19474 + /* codec SSIDs for Intel Mac sharing the same PCI SSID 8384:7680 */
19475 + static const struct snd_pci_quirk stac922x_intel_mac_fixup_tbl[] = {
19476 ++ SND_PCI_QUIRK(0x0000, 0x0100, "Mac Mini", STAC_INTEL_MAC_V3),
19477 + SND_PCI_QUIRK(0x106b, 0x0800, "Mac", STAC_INTEL_MAC_V1),
19478 + SND_PCI_QUIRK(0x106b, 0x0600, "Mac", STAC_INTEL_MAC_V2),
19479 + SND_PCI_QUIRK(0x106b, 0x0700, "Mac", STAC_INTEL_MAC_V2),
19480
19481 Added: genpatches-2.6/trunk/3.10.7/1006_linux-3.10.7.patch
19482 ===================================================================
19483 --- genpatches-2.6/trunk/3.10.7/1006_linux-3.10.7.patch (rev 0)
19484 +++ genpatches-2.6/trunk/3.10.7/1006_linux-3.10.7.patch 2013-08-29 12:09:12 UTC (rev 2497)
19485 @@ -0,0 +1,2737 @@
19486 +diff --git a/Makefile b/Makefile
19487 +index fd92ffb..33e36ab 100644
19488 +--- a/Makefile
19489 ++++ b/Makefile
19490 +@@ -1,6 +1,6 @@
19491 + VERSION = 3
19492 + PATCHLEVEL = 10
19493 +-SUBLEVEL = 6
19494 ++SUBLEVEL = 7
19495 + EXTRAVERSION =
19496 + NAME = TOSSUG Baby Fish
19497 +
19498 +diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
19499 +index 7a58ab9..e53e2b4 100644
19500 +--- a/arch/mips/Kconfig
19501 ++++ b/arch/mips/Kconfig
19502 +@@ -27,6 +27,7 @@ config MIPS
19503 + select HAVE_GENERIC_HARDIRQS
19504 + select GENERIC_IRQ_PROBE
19505 + select GENERIC_IRQ_SHOW
19506 ++ select GENERIC_PCI_IOMAP
19507 + select HAVE_ARCH_JUMP_LABEL
19508 + select ARCH_WANT_IPC_PARSE_VERSION
19509 + select IRQ_FORCED_THREADING
19510 +@@ -2412,7 +2413,6 @@ config PCI
19511 + bool "Support for PCI controller"
19512 + depends on HW_HAS_PCI
19513 + select PCI_DOMAINS
19514 +- select GENERIC_PCI_IOMAP
19515 + select NO_GENERIC_PCI_IOPORT_MAP
19516 + help
19517 + Find out whether you have a PCI motherboard. PCI is the name of a
19518 +diff --git a/arch/mips/include/asm/io.h b/arch/mips/include/asm/io.h
19519 +index b7e5985..b84e1fb 100644
19520 +--- a/arch/mips/include/asm/io.h
19521 ++++ b/arch/mips/include/asm/io.h
19522 +@@ -170,6 +170,11 @@ static inline void * isa_bus_to_virt(unsigned long address)
19523 + extern void __iomem * __ioremap(phys_t offset, phys_t size, unsigned long flags);
19524 + extern void __iounmap(const volatile void __iomem *addr);
19525 +
19526 ++#ifndef CONFIG_PCI
19527 ++struct pci_dev;
19528 ++static inline void pci_iounmap(struct pci_dev *dev, void __iomem *addr) {}
19529 ++#endif
19530 ++
19531 + static inline void __iomem * __ioremap_mode(phys_t offset, unsigned long size,
19532 + unsigned long flags)
19533 + {
19534 +diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
19535 +index c33e3ad..74991fe 100644
19536 +--- a/arch/powerpc/Kconfig
19537 ++++ b/arch/powerpc/Kconfig
19538 +@@ -572,7 +572,7 @@ config SCHED_SMT
19539 + config PPC_DENORMALISATION
19540 + bool "PowerPC denormalisation exception handling"
19541 + depends on PPC_BOOK3S_64
19542 +- default "n"
19543 ++ default "y" if PPC_POWERNV
19544 + ---help---
19545 + Add support for handling denormalisation of single precision
19546 + values. Useful for bare metal only. If unsure say Y here.
19547 +diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
19548 +index 14a6583..419e712 100644
19549 +--- a/arch/powerpc/include/asm/processor.h
19550 ++++ b/arch/powerpc/include/asm/processor.h
19551 +@@ -247,6 +247,10 @@ struct thread_struct {
19552 + unsigned long tm_orig_msr; /* Thread's MSR on ctx switch */
19553 + struct pt_regs ckpt_regs; /* Checkpointed registers */
19554 +
19555 ++ unsigned long tm_tar;
19556 ++ unsigned long tm_ppr;
19557 ++ unsigned long tm_dscr;
19558 ++
19559 + /*
19560 + * Transactional FP and VSX 0-31 register set.
19561 + * NOTE: the sense of these is the opposite of the integer ckpt_regs!
19562 +diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
19563 +index 362142b..e1fb161 100644
19564 +--- a/arch/powerpc/include/asm/reg.h
19565 ++++ b/arch/powerpc/include/asm/reg.h
19566 +@@ -254,19 +254,28 @@
19567 + #define SPRN_HRMOR 0x139 /* Real mode offset register */
19568 + #define SPRN_HSRR0 0x13A /* Hypervisor Save/Restore 0 */
19569 + #define SPRN_HSRR1 0x13B /* Hypervisor Save/Restore 1 */
19570 ++/* HFSCR and FSCR bit numbers are the same */
19571 ++#define FSCR_TAR_LG 8 /* Enable Target Address Register */
19572 ++#define FSCR_EBB_LG 7 /* Enable Event Based Branching */
19573 ++#define FSCR_TM_LG 5 /* Enable Transactional Memory */
19574 ++#define FSCR_PM_LG 4 /* Enable prob/priv access to PMU SPRs */
19575 ++#define FSCR_BHRB_LG 3 /* Enable Branch History Rolling Buffer*/
19576 ++#define FSCR_DSCR_LG 2 /* Enable Data Stream Control Register */
19577 ++#define FSCR_VECVSX_LG 1 /* Enable VMX/VSX */
19578 ++#define FSCR_FP_LG 0 /* Enable Floating Point */
19579 + #define SPRN_FSCR 0x099 /* Facility Status & Control Register */
19580 +-#define FSCR_TAR (1 << (63-55)) /* Enable Target Address Register */
19581 +-#define FSCR_EBB (1 << (63-56)) /* Enable Event Based Branching */
19582 +-#define FSCR_DSCR (1 << (63-61)) /* Enable Data Stream Control Register */
19583 ++#define FSCR_TAR __MASK(FSCR_TAR_LG)
19584 ++#define FSCR_EBB __MASK(FSCR_EBB_LG)
19585 ++#define FSCR_DSCR __MASK(FSCR_DSCR_LG)
19586 + #define SPRN_HFSCR 0xbe /* HV=1 Facility Status & Control Register */
19587 +-#define HFSCR_TAR (1 << (63-55)) /* Enable Target Address Register */
19588 +-#define HFSCR_EBB (1 << (63-56)) /* Enable Event Based Branching */
19589 +-#define HFSCR_TM (1 << (63-58)) /* Enable Transactional Memory */
19590 +-#define HFSCR_PM (1 << (63-60)) /* Enable prob/priv access to PMU SPRs */
19591 +-#define HFSCR_BHRB (1 << (63-59)) /* Enable Branch History Rolling Buffer*/
19592 +-#define HFSCR_DSCR (1 << (63-61)) /* Enable Data Stream Control Register */
19593 +-#define HFSCR_VECVSX (1 << (63-62)) /* Enable VMX/VSX */
19594 +-#define HFSCR_FP (1 << (63-63)) /* Enable Floating Point */
19595 ++#define HFSCR_TAR __MASK(FSCR_TAR_LG)
19596 ++#define HFSCR_EBB __MASK(FSCR_EBB_LG)
19597 ++#define HFSCR_TM __MASK(FSCR_TM_LG)
19598 ++#define HFSCR_PM __MASK(FSCR_PM_LG)
19599 ++#define HFSCR_BHRB __MASK(FSCR_BHRB_LG)
19600 ++#define HFSCR_DSCR __MASK(FSCR_DSCR_LG)
19601 ++#define HFSCR_VECVSX __MASK(FSCR_VECVSX_LG)
19602 ++#define HFSCR_FP __MASK(FSCR_FP_LG)
19603 + #define SPRN_TAR 0x32f /* Target Address Register */
19604 + #define SPRN_LPCR 0x13E /* LPAR Control Register */
19605 + #define LPCR_VPM0 (1ul << (63-0))
19606 +diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h
19607 +index 200d763..685ecc8 100644
19608 +--- a/arch/powerpc/include/asm/switch_to.h
19609 ++++ b/arch/powerpc/include/asm/switch_to.h
19610 +@@ -15,6 +15,15 @@ extern struct task_struct *__switch_to(struct task_struct *,
19611 + struct thread_struct;
19612 + extern struct task_struct *_switch(struct thread_struct *prev,
19613 + struct thread_struct *next);
19614 ++#ifdef CONFIG_PPC_BOOK3S_64
19615 ++static inline void save_tar(struct thread_struct *prev)
19616 ++{
19617 ++ if (cpu_has_feature(CPU_FTR_ARCH_207S))
19618 ++ prev->tar = mfspr(SPRN_TAR);
19619 ++}
19620 ++#else
19621 ++static inline void save_tar(struct thread_struct *prev) {}
19622 ++#endif
19623 +
19624 + extern void giveup_fpu(struct task_struct *);
19625 + extern void load_up_fpu(void);
19626 +diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
19627 +index 6f16ffa..302886b 100644
19628 +--- a/arch/powerpc/kernel/asm-offsets.c
19629 ++++ b/arch/powerpc/kernel/asm-offsets.c
19630 +@@ -139,6 +139,9 @@ int main(void)
19631 + DEFINE(THREAD_TM_TFHAR, offsetof(struct thread_struct, tm_tfhar));
19632 + DEFINE(THREAD_TM_TEXASR, offsetof(struct thread_struct, tm_texasr));
19633 + DEFINE(THREAD_TM_TFIAR, offsetof(struct thread_struct, tm_tfiar));
19634 ++ DEFINE(THREAD_TM_TAR, offsetof(struct thread_struct, tm_tar));
19635 ++ DEFINE(THREAD_TM_PPR, offsetof(struct thread_struct, tm_ppr));
19636 ++ DEFINE(THREAD_TM_DSCR, offsetof(struct thread_struct, tm_dscr));
19637 + DEFINE(PT_CKPT_REGS, offsetof(struct thread_struct, ckpt_regs));
19638 + DEFINE(THREAD_TRANSACT_VR0, offsetof(struct thread_struct,
19639 + transact_vr[0]));
19640 +diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
19641 +index 8741c85..38847767 100644
19642 +--- a/arch/powerpc/kernel/entry_64.S
19643 ++++ b/arch/powerpc/kernel/entry_64.S
19644 +@@ -449,15 +449,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_DSCR)
19645 +
19646 + #ifdef CONFIG_PPC_BOOK3S_64
19647 + BEGIN_FTR_SECTION
19648 +- /*
19649 +- * Back up the TAR across context switches. Note that the TAR is not
19650 +- * available for use in the kernel. (To provide this, the TAR should
19651 +- * be backed up/restored on exception entry/exit instead, and be in
19652 +- * pt_regs. FIXME, this should be in pt_regs anyway (for debug).)
19653 +- */
19654 +- mfspr r0,SPRN_TAR
19655 +- std r0,THREAD_TAR(r3)
19656 +-
19657 + /* Event based branch registers */
19658 + mfspr r0, SPRN_BESCR
19659 + std r0, THREAD_BESCR(r3)
19660 +@@ -584,9 +575,34 @@ BEGIN_FTR_SECTION
19661 + ld r7,DSCR_DEFAULT@toc(2)
19662 + ld r0,THREAD_DSCR(r4)
19663 + cmpwi r6,0
19664 ++ li r8, FSCR_DSCR
19665 + bne 1f
19666 + ld r0,0(r7)
19667 +-1: cmpd r0,r25
19668 ++ b 3f
19669 ++1:
19670 ++ BEGIN_FTR_SECTION_NESTED(70)
19671 ++ mfspr r6, SPRN_FSCR
19672 ++ or r6, r6, r8
19673 ++ mtspr SPRN_FSCR, r6
19674 ++ BEGIN_FTR_SECTION_NESTED(69)
19675 ++ mfspr r6, SPRN_HFSCR
19676 ++ or r6, r6, r8
19677 ++ mtspr SPRN_HFSCR, r6
19678 ++ END_FTR_SECTION_NESTED(CPU_FTR_HVMODE, CPU_FTR_HVMODE, 69)
19679 ++ b 4f
19680 ++ END_FTR_SECTION_NESTED(CPU_FTR_ARCH_207S, CPU_FTR_ARCH_207S, 70)
19681 ++3:
19682 ++ BEGIN_FTR_SECTION_NESTED(70)
19683 ++ mfspr r6, SPRN_FSCR
19684 ++ andc r6, r6, r8
19685 ++ mtspr SPRN_FSCR, r6
19686 ++ BEGIN_FTR_SECTION_NESTED(69)
19687 ++ mfspr r6, SPRN_HFSCR
19688 ++ andc r6, r6, r8
19689 ++ mtspr SPRN_HFSCR, r6
19690 ++ END_FTR_SECTION_NESTED(CPU_FTR_HVMODE, CPU_FTR_HVMODE, 69)
19691 ++ END_FTR_SECTION_NESTED(CPU_FTR_ARCH_207S, CPU_FTR_ARCH_207S, 70)
19692 ++4: cmpd r0,r25
19693 + beq 2f
19694 + mtspr SPRN_DSCR,r0
19695 + 2:
19696 +diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
19697 +index 4e00d22..902ca3c 100644
19698 +--- a/arch/powerpc/kernel/exceptions-64s.S
19699 ++++ b/arch/powerpc/kernel/exceptions-64s.S
19700 +@@ -848,7 +848,7 @@ hv_facility_unavailable_relon_trampoline:
19701 + . = 0x4f80
19702 + SET_SCRATCH0(r13)
19703 + EXCEPTION_PROLOG_0(PACA_EXGEN)
19704 +- b facility_unavailable_relon_hv
19705 ++ b hv_facility_unavailable_relon_hv
19706 +
19707 + STD_RELON_EXCEPTION_PSERIES(0x5300, 0x1300, instruction_breakpoint)
19708 + #ifdef CONFIG_PPC_DENORMALISATION
19709 +@@ -1175,6 +1175,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
19710 + b .ret_from_except
19711 +
19712 + STD_EXCEPTION_COMMON(0xf60, facility_unavailable, .facility_unavailable_exception)
19713 ++ STD_EXCEPTION_COMMON(0xf80, hv_facility_unavailable, .facility_unavailable_exception)
19714 +
19715 + .align 7
19716 + .globl __end_handlers
19717 +@@ -1188,7 +1189,7 @@ __end_handlers:
19718 + STD_RELON_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
19719 + STD_RELON_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
19720 + STD_RELON_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
19721 +- STD_RELON_EXCEPTION_HV_OOL(0xf80, facility_unavailable)
19722 ++ STD_RELON_EXCEPTION_HV_OOL(0xf80, hv_facility_unavailable)
19723 +
19724 + #if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
19725 + /*
19726 +diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
19727 +index 076d124..7baa27b 100644
19728 +--- a/arch/powerpc/kernel/process.c
19729 ++++ b/arch/powerpc/kernel/process.c
19730 +@@ -600,6 +600,16 @@ struct task_struct *__switch_to(struct task_struct *prev,
19731 + struct ppc64_tlb_batch *batch;
19732 + #endif
19733 +
19734 ++ /* Back up the TAR across context switches.
19735 ++ * Note that the TAR is not available for use in the kernel. (To
19736 ++ * provide this, the TAR should be backed up/restored on exception
19737 ++ * entry/exit instead, and be in pt_regs. FIXME, this should be in
19738 ++ * pt_regs anyway (for debug).)
19739 ++ * Save the TAR here before we do treclaim/trecheckpoint as these
19740 ++ * will change the TAR.
19741 ++ */
19742 ++ save_tar(&prev->thread);
19743 ++
19744 + __switch_to_tm(prev);
19745 +
19746 + #ifdef CONFIG_SMP
19747 +diff --git a/arch/powerpc/kernel/tm.S b/arch/powerpc/kernel/tm.S
19748 +index 2da67e7..1edd6c2 100644
19749 +--- a/arch/powerpc/kernel/tm.S
19750 ++++ b/arch/powerpc/kernel/tm.S
19751 +@@ -224,6 +224,16 @@ dont_backup_fp:
19752 + std r5, _CCR(r7)
19753 + std r6, _XER(r7)
19754 +
19755 ++
19756 ++ /* ******************** TAR, PPR, DSCR ********** */
19757 ++ mfspr r3, SPRN_TAR
19758 ++ mfspr r4, SPRN_PPR
19759 ++ mfspr r5, SPRN_DSCR
19760 ++
19761 ++ std r3, THREAD_TM_TAR(r12)
19762 ++ std r4, THREAD_TM_PPR(r12)
19763 ++ std r5, THREAD_TM_DSCR(r12)
19764 ++
19765 + /* MSR and flags: We don't change CRs, and we don't need to alter
19766 + * MSR.
19767 + */
19768 +@@ -338,6 +348,16 @@ dont_restore_fp:
19769 + mtmsr r6 /* FP/Vec off again! */
19770 +
19771 + restore_gprs:
19772 ++
19773 ++ /* ******************** TAR, PPR, DSCR ********** */
19774 ++ ld r4, THREAD_TM_TAR(r3)
19775 ++ ld r5, THREAD_TM_PPR(r3)
19776 ++ ld r6, THREAD_TM_DSCR(r3)
19777 ++
19778 ++ mtspr SPRN_TAR, r4
19779 ++ mtspr SPRN_PPR, r5
19780 ++ mtspr SPRN_DSCR, r6
19781 ++
19782 + /* ******************** CR,LR,CCR,MSR ********** */
19783 + ld r3, _CTR(r7)
19784 + ld r4, _LINK(r7)
19785 +diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
19786 +index e4f205a..88929b1 100644
19787 +--- a/arch/powerpc/kernel/traps.c
19788 ++++ b/arch/powerpc/kernel/traps.c
19789 +@@ -44,9 +44,7 @@
19790 + #include <asm/machdep.h>
19791 + #include <asm/rtas.h>
19792 + #include <asm/pmc.h>
19793 +-#ifdef CONFIG_PPC32
19794 + #include <asm/reg.h>
19795 +-#endif
19796 + #ifdef CONFIG_PMAC_BACKLIGHT
19797 + #include <asm/backlight.h>
19798 + #endif
19799 +@@ -1282,43 +1280,54 @@ void vsx_unavailable_exception(struct pt_regs *regs)
19800 + die("Unrecoverable VSX Unavailable Exception", regs, SIGABRT);
19801 + }
19802 +
19803 ++#ifdef CONFIG_PPC64
19804 + void facility_unavailable_exception(struct pt_regs *regs)
19805 + {
19806 + static char *facility_strings[] = {
19807 +- "FPU",
19808 +- "VMX/VSX",
19809 +- "DSCR",
19810 +- "PMU SPRs",
19811 +- "BHRB",
19812 +- "TM",
19813 +- "AT",
19814 +- "EBB",
19815 +- "TAR",
19816 ++ [FSCR_FP_LG] = "FPU",
19817 ++ [FSCR_VECVSX_LG] = "VMX/VSX",
19818 ++ [FSCR_DSCR_LG] = "DSCR",
19819 ++ [FSCR_PM_LG] = "PMU SPRs",
19820 ++ [FSCR_BHRB_LG] = "BHRB",
19821 ++ [FSCR_TM_LG] = "TM",
19822 ++ [FSCR_EBB_LG] = "EBB",
19823 ++ [FSCR_TAR_LG] = "TAR",
19824 + };
19825 +- char *facility, *prefix;
19826 ++ char *facility = "unknown";
19827 + u64 value;
19828 ++ u8 status;
19829 ++ bool hv;
19830 +
19831 +- if (regs->trap == 0xf60) {
19832 +- value = mfspr(SPRN_FSCR);
19833 +- prefix = "";
19834 +- } else {
19835 ++ hv = (regs->trap == 0xf80);
19836 ++ if (hv)
19837 + value = mfspr(SPRN_HFSCR);
19838 +- prefix = "Hypervisor ";
19839 ++ else
19840 ++ value = mfspr(SPRN_FSCR);
19841 ++
19842 ++ status = value >> 56;
19843 ++ if (status == FSCR_DSCR_LG) {
19844 ++ /* User is acessing the DSCR. Set the inherit bit and allow
19845 ++ * the user to set it directly in future by setting via the
19846 ++ * H/FSCR DSCR bit.
19847 ++ */
19848 ++ current->thread.dscr_inherit = 1;
19849 ++ if (hv)
19850 ++ mtspr(SPRN_HFSCR, value | HFSCR_DSCR);
19851 ++ else
19852 ++ mtspr(SPRN_FSCR, value | FSCR_DSCR);
19853 ++ return;
19854 + }
19855 +
19856 +- value = value >> 56;
19857 ++ if ((status < ARRAY_SIZE(facility_strings)) &&
19858 ++ facility_strings[status])
19859 ++ facility = facility_strings[status];
19860 +
19861 + /* We restore the interrupt state now */
19862 + if (!arch_irq_disabled_regs(regs))
19863 + local_irq_enable();
19864 +
19865 +- if (value < ARRAY_SIZE(facility_strings))
19866 +- facility = facility_strings[value];
19867 +- else
19868 +- facility = "unknown";
19869 +-
19870 + pr_err("%sFacility '%s' unavailable, exception at 0x%lx, MSR=%lx\n",
19871 +- prefix, facility, regs->nip, regs->msr);
19872 ++ hv ? "Hypervisor " : "", facility, regs->nip, regs->msr);
19873 +
19874 + if (user_mode(regs)) {
19875 + _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
19876 +@@ -1327,6 +1336,7 @@ void facility_unavailable_exception(struct pt_regs *regs)
19877 +
19878 + die("Unexpected facility unavailable exception", regs, SIGABRT);
19879 + }
19880 ++#endif
19881 +
19882 + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
19883 +
19884 +diff --git a/drivers/acpi/proc.c b/drivers/acpi/proc.c
19885 +index aa1227a..04a1378 100644
19886 +--- a/drivers/acpi/proc.c
19887 ++++ b/drivers/acpi/proc.c
19888 +@@ -311,6 +311,8 @@ acpi_system_wakeup_device_seq_show(struct seq_file *seq, void *offset)
19889 + dev->pnp.bus_id,
19890 + (u32) dev->wakeup.sleep_state);
19891 +
19892 ++ mutex_lock(&dev->physical_node_lock);
19893 ++
19894 + if (!dev->physical_node_count) {
19895 + seq_printf(seq, "%c%-8s\n",
19896 + dev->wakeup.flags.run_wake ? '*' : ' ',
19897 +@@ -338,6 +340,8 @@ acpi_system_wakeup_device_seq_show(struct seq_file *seq, void *offset)
19898 + put_device(ldev);
19899 + }
19900 + }
19901 ++
19902 ++ mutex_unlock(&dev->physical_node_lock);
19903 + }
19904 + mutex_unlock(&acpi_device_lock);
19905 + return 0;
19906 +@@ -347,12 +351,16 @@ static void physical_device_enable_wakeup(struct acpi_device *adev)
19907 + {
19908 + struct acpi_device_physical_node *entry;
19909 +
19910 ++ mutex_lock(&adev->physical_node_lock);
19911 ++
19912 + list_for_each_entry(entry,
19913 + &adev->physical_node_list, node)
19914 + if (entry->dev && device_can_wakeup(entry->dev)) {
19915 + bool enable = !device_may_wakeup(entry->dev);
19916 + device_set_wakeup_enable(entry->dev, enable);
19917 + }
19918 ++
19919 ++ mutex_unlock(&adev->physical_node_lock);
19920 + }
19921 +
19922 + static ssize_t
19923 +diff --git a/drivers/base/regmap/regcache.c b/drivers/base/regmap/regcache.c
19924 +index 507ee2d..46283fd 100644
19925 +--- a/drivers/base/regmap/regcache.c
19926 ++++ b/drivers/base/regmap/regcache.c
19927 +@@ -644,7 +644,8 @@ static int regcache_sync_block_raw(struct regmap *map, void *block,
19928 + }
19929 + }
19930 +
19931 +- return regcache_sync_block_raw_flush(map, &data, base, regtmp);
19932 ++ return regcache_sync_block_raw_flush(map, &data, base, regtmp +
19933 ++ map->reg_stride);
19934 + }
19935 +
19936 + int regcache_sync_block(struct regmap *map, void *block,
19937 +diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
19938 +index 1b456fe..fc45567 100644
19939 +--- a/drivers/char/virtio_console.c
19940 ++++ b/drivers/char/virtio_console.c
19941 +@@ -272,9 +272,12 @@ static struct port *find_port_by_devt_in_portdev(struct ports_device *portdev,
19942 + unsigned long flags;
19943 +
19944 + spin_lock_irqsave(&portdev->ports_lock, flags);
19945 +- list_for_each_entry(port, &portdev->ports, list)
19946 +- if (port->cdev->dev == dev)
19947 ++ list_for_each_entry(port, &portdev->ports, list) {
19948 ++ if (port->cdev->dev == dev) {
19949 ++ kref_get(&port->kref);
19950 + goto out;
19951 ++ }
19952 ++ }
19953 + port = NULL;
19954 + out:
19955 + spin_unlock_irqrestore(&portdev->ports_lock, flags);
19956 +@@ -746,6 +749,10 @@ static ssize_t port_fops_read(struct file *filp, char __user *ubuf,
19957 +
19958 + port = filp->private_data;
19959 +
19960 ++ /* Port is hot-unplugged. */
19961 ++ if (!port->guest_connected)
19962 ++ return -ENODEV;
19963 ++
19964 + if (!port_has_data(port)) {
19965 + /*
19966 + * If nothing's connected on the host just return 0 in
19967 +@@ -762,7 +769,7 @@ static ssize_t port_fops_read(struct file *filp, char __user *ubuf,
19968 + if (ret < 0)
19969 + return ret;
19970 + }
19971 +- /* Port got hot-unplugged. */
19972 ++ /* Port got hot-unplugged while we were waiting above. */
19973 + if (!port->guest_connected)
19974 + return -ENODEV;
19975 + /*
19976 +@@ -932,13 +939,25 @@ static ssize_t port_fops_splice_write(struct pipe_inode_info *pipe,
19977 + if (is_rproc_serial(port->out_vq->vdev))
19978 + return -EINVAL;
19979 +
19980 ++ /*
19981 ++ * pipe->nrbufs == 0 means there are no data to transfer,
19982 ++ * so this returns just 0 for no data.
19983 ++ */
19984 ++ pipe_lock(pipe);
19985 ++ if (!pipe->nrbufs) {
19986 ++ ret = 0;
19987 ++ goto error_out;
19988 ++ }
19989 ++
19990 + ret = wait_port_writable(port, filp->f_flags & O_NONBLOCK);
19991 + if (ret < 0)
19992 +- return ret;
19993 ++ goto error_out;
19994 +
19995 + buf = alloc_buf(port->out_vq, 0, pipe->nrbufs);
19996 +- if (!buf)
19997 +- return -ENOMEM;
19998 ++ if (!buf) {
19999 ++ ret = -ENOMEM;
20000 ++ goto error_out;
20001 ++ }
20002 +
20003 + sgl.n = 0;
20004 + sgl.len = 0;
20005 +@@ -946,12 +965,17 @@ static ssize_t port_fops_splice_write(struct pipe_inode_info *pipe,
20006 + sgl.sg = buf->sg;
20007 + sg_init_table(sgl.sg, sgl.size);
20008 + ret = __splice_from_pipe(pipe, &sd, pipe_to_sg);
20009 ++ pipe_unlock(pipe);
20010 + if (likely(ret > 0))
20011 + ret = __send_to_port(port, buf->sg, sgl.n, sgl.len, buf, true);
20012 +
20013 + if (unlikely(ret <= 0))
20014 + free_buf(buf, true);
20015 + return ret;
20016 ++
20017 ++error_out:
20018 ++ pipe_unlock(pipe);
20019 ++ return ret;
20020 + }
20021 +
20022 + static unsigned int port_fops_poll(struct file *filp, poll_table *wait)
20023 +@@ -1019,14 +1043,14 @@ static int port_fops_open(struct inode *inode, struct file *filp)
20024 + struct port *port;
20025 + int ret;
20026 +
20027 ++ /* We get the port with a kref here */
20028 + port = find_port_by_devt(cdev->dev);
20029 ++ if (!port) {
20030 ++ /* Port was unplugged before we could proceed */
20031 ++ return -ENXIO;
20032 ++ }
20033 + filp->private_data = port;
20034 +
20035 +- /* Prevent against a port getting hot-unplugged at the same time */
20036 +- spin_lock_irq(&port->portdev->ports_lock);
20037 +- kref_get(&port->kref);
20038 +- spin_unlock_irq(&port->portdev->ports_lock);
20039 +-
20040 + /*
20041 + * Don't allow opening of console port devices -- that's done
20042 + * via /dev/hvc
20043 +@@ -1498,14 +1522,6 @@ static void remove_port(struct kref *kref)
20044 +
20045 + port = container_of(kref, struct port, kref);
20046 +
20047 +- sysfs_remove_group(&port->dev->kobj, &port_attribute_group);
20048 +- device_destroy(pdrvdata.class, port->dev->devt);
20049 +- cdev_del(port->cdev);
20050 +-
20051 +- kfree(port->name);
20052 +-
20053 +- debugfs_remove(port->debugfs_file);
20054 +-
20055 + kfree(port);
20056 + }
20057 +
20058 +@@ -1539,12 +1555,14 @@ static void unplug_port(struct port *port)
20059 + spin_unlock_irq(&port->portdev->ports_lock);
20060 +
20061 + if (port->guest_connected) {
20062 ++ /* Let the app know the port is going down. */
20063 ++ send_sigio_to_port(port);
20064 ++
20065 ++ /* Do this after sigio is actually sent */
20066 + port->guest_connected = false;
20067 + port->host_connected = false;
20068 +- wake_up_interruptible(&port->waitqueue);
20069 +
20070 +- /* Let the app know the port is going down. */
20071 +- send_sigio_to_port(port);
20072 ++ wake_up_interruptible(&port->waitqueue);
20073 + }
20074 +
20075 + if (is_console_port(port)) {
20076 +@@ -1563,6 +1581,14 @@ static void unplug_port(struct port *port)
20077 + */
20078 + port->portdev = NULL;
20079 +
20080 ++ sysfs_remove_group(&port->dev->kobj, &port_attribute_group);
20081 ++ device_destroy(pdrvdata.class, port->dev->devt);
20082 ++ cdev_del(port->cdev);
20083 ++
20084 ++ kfree(port->name);
20085 ++
20086 ++ debugfs_remove(port->debugfs_file);
20087 ++
20088 + /*
20089 + * Locks around here are not necessary - a port can't be
20090 + * opened after we removed the port struct from ports_list
20091 +diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
20092 +index 0ceb2ef..f97cb3d 100644
20093 +--- a/drivers/cpufreq/cpufreq_conservative.c
20094 ++++ b/drivers/cpufreq/cpufreq_conservative.c
20095 +@@ -221,8 +221,8 @@ static ssize_t store_down_threshold(struct dbs_data *dbs_data, const char *buf,
20096 + return count;
20097 + }
20098 +
20099 +-static ssize_t store_ignore_nice(struct dbs_data *dbs_data, const char *buf,
20100 +- size_t count)
20101 ++static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
20102 ++ const char *buf, size_t count)
20103 + {
20104 + struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
20105 + unsigned int input, j;
20106 +@@ -235,10 +235,10 @@ static ssize_t store_ignore_nice(struct dbs_data *dbs_data, const char *buf,
20107 + if (input > 1)
20108 + input = 1;
20109 +
20110 +- if (input == cs_tuners->ignore_nice) /* nothing to do */
20111 ++ if (input == cs_tuners->ignore_nice_load) /* nothing to do */
20112 + return count;
20113 +
20114 +- cs_tuners->ignore_nice = input;
20115 ++ cs_tuners->ignore_nice_load = input;
20116 +
20117 + /* we need to re-evaluate prev_cpu_idle */
20118 + for_each_online_cpu(j) {
20119 +@@ -246,7 +246,7 @@ static ssize_t store_ignore_nice(struct dbs_data *dbs_data, const char *buf,
20120 + dbs_info = &per_cpu(cs_cpu_dbs_info, j);
20121 + dbs_info->cdbs.prev_cpu_idle = get_cpu_idle_time(j,
20122 + &dbs_info->cdbs.prev_cpu_wall, 0);
20123 +- if (cs_tuners->ignore_nice)
20124 ++ if (cs_tuners->ignore_nice_load)
20125 + dbs_info->cdbs.prev_cpu_nice =
20126 + kcpustat_cpu(j).cpustat[CPUTIME_NICE];
20127 + }
20128 +@@ -279,7 +279,7 @@ show_store_one(cs, sampling_rate);
20129 + show_store_one(cs, sampling_down_factor);
20130 + show_store_one(cs, up_threshold);
20131 + show_store_one(cs, down_threshold);
20132 +-show_store_one(cs, ignore_nice);
20133 ++show_store_one(cs, ignore_nice_load);
20134 + show_store_one(cs, freq_step);
20135 + declare_show_sampling_rate_min(cs);
20136 +
20137 +@@ -287,7 +287,7 @@ gov_sys_pol_attr_rw(sampling_rate);
20138 + gov_sys_pol_attr_rw(sampling_down_factor);
20139 + gov_sys_pol_attr_rw(up_threshold);
20140 + gov_sys_pol_attr_rw(down_threshold);
20141 +-gov_sys_pol_attr_rw(ignore_nice);
20142 ++gov_sys_pol_attr_rw(ignore_nice_load);
20143 + gov_sys_pol_attr_rw(freq_step);
20144 + gov_sys_pol_attr_ro(sampling_rate_min);
20145 +
20146 +@@ -297,7 +297,7 @@ static struct attribute *dbs_attributes_gov_sys[] = {
20147 + &sampling_down_factor_gov_sys.attr,
20148 + &up_threshold_gov_sys.attr,
20149 + &down_threshold_gov_sys.attr,
20150 +- &ignore_nice_gov_sys.attr,
20151 ++ &ignore_nice_load_gov_sys.attr,
20152 + &freq_step_gov_sys.attr,
20153 + NULL
20154 + };
20155 +@@ -313,7 +313,7 @@ static struct attribute *dbs_attributes_gov_pol[] = {
20156 + &sampling_down_factor_gov_pol.attr,
20157 + &up_threshold_gov_pol.attr,
20158 + &down_threshold_gov_pol.attr,
20159 +- &ignore_nice_gov_pol.attr,
20160 ++ &ignore_nice_load_gov_pol.attr,
20161 + &freq_step_gov_pol.attr,
20162 + NULL
20163 + };
20164 +@@ -338,7 +338,7 @@ static int cs_init(struct dbs_data *dbs_data)
20165 + tuners->up_threshold = DEF_FREQUENCY_UP_THRESHOLD;
20166 + tuners->down_threshold = DEF_FREQUENCY_DOWN_THRESHOLD;
20167 + tuners->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR;
20168 +- tuners->ignore_nice = 0;
20169 ++ tuners->ignore_nice_load = 0;
20170 + tuners->freq_step = DEF_FREQUENCY_STEP;
20171 +
20172 + dbs_data->tuners = tuners;
20173 +diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
20174 +index 5af40ad..a86ff72 100644
20175 +--- a/drivers/cpufreq/cpufreq_governor.c
20176 ++++ b/drivers/cpufreq/cpufreq_governor.c
20177 +@@ -91,9 +91,9 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu)
20178 + unsigned int j;
20179 +
20180 + if (dbs_data->cdata->governor == GOV_ONDEMAND)
20181 +- ignore_nice = od_tuners->ignore_nice;
20182 ++ ignore_nice = od_tuners->ignore_nice_load;
20183 + else
20184 +- ignore_nice = cs_tuners->ignore_nice;
20185 ++ ignore_nice = cs_tuners->ignore_nice_load;
20186 +
20187 + policy = cdbs->cur_policy;
20188 +
20189 +@@ -336,12 +336,12 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
20190 + cs_tuners = dbs_data->tuners;
20191 + cs_dbs_info = dbs_data->cdata->get_cpu_dbs_info_s(cpu);
20192 + sampling_rate = cs_tuners->sampling_rate;
20193 +- ignore_nice = cs_tuners->ignore_nice;
20194 ++ ignore_nice = cs_tuners->ignore_nice_load;
20195 + } else {
20196 + od_tuners = dbs_data->tuners;
20197 + od_dbs_info = dbs_data->cdata->get_cpu_dbs_info_s(cpu);
20198 + sampling_rate = od_tuners->sampling_rate;
20199 +- ignore_nice = od_tuners->ignore_nice;
20200 ++ ignore_nice = od_tuners->ignore_nice_load;
20201 + od_ops = dbs_data->cdata->gov_ops;
20202 + io_busy = od_tuners->io_is_busy;
20203 + }
20204 +diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
20205 +index e16a961..0d9e6be 100644
20206 +--- a/drivers/cpufreq/cpufreq_governor.h
20207 ++++ b/drivers/cpufreq/cpufreq_governor.h
20208 +@@ -165,7 +165,7 @@ struct cs_cpu_dbs_info_s {
20209 +
20210 + /* Per policy Governers sysfs tunables */
20211 + struct od_dbs_tuners {
20212 +- unsigned int ignore_nice;
20213 ++ unsigned int ignore_nice_load;
20214 + unsigned int sampling_rate;
20215 + unsigned int sampling_down_factor;
20216 + unsigned int up_threshold;
20217 +@@ -175,7 +175,7 @@ struct od_dbs_tuners {
20218 + };
20219 +
20220 + struct cs_dbs_tuners {
20221 +- unsigned int ignore_nice;
20222 ++ unsigned int ignore_nice_load;
20223 + unsigned int sampling_rate;
20224 + unsigned int sampling_down_factor;
20225 + unsigned int up_threshold;
20226 +diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
20227 +index 93eb5cb..c087347 100644
20228 +--- a/drivers/cpufreq/cpufreq_ondemand.c
20229 ++++ b/drivers/cpufreq/cpufreq_ondemand.c
20230 +@@ -403,8 +403,8 @@ static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
20231 + return count;
20232 + }
20233 +
20234 +-static ssize_t store_ignore_nice(struct dbs_data *dbs_data, const char *buf,
20235 +- size_t count)
20236 ++static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
20237 ++ const char *buf, size_t count)
20238 + {
20239 + struct od_dbs_tuners *od_tuners = dbs_data->tuners;
20240 + unsigned int input;
20241 +@@ -419,10 +419,10 @@ static ssize_t store_ignore_nice(struct dbs_data *dbs_data, const char *buf,
20242 + if (input > 1)
20243 + input = 1;
20244 +
20245 +- if (input == od_tuners->ignore_nice) { /* nothing to do */
20246 ++ if (input == od_tuners->ignore_nice_load) { /* nothing to do */
20247 + return count;
20248 + }
20249 +- od_tuners->ignore_nice = input;
20250 ++ od_tuners->ignore_nice_load = input;
20251 +
20252 + /* we need to re-evaluate prev_cpu_idle */
20253 + for_each_online_cpu(j) {
20254 +@@ -430,7 +430,7 @@ static ssize_t store_ignore_nice(struct dbs_data *dbs_data, const char *buf,
20255 + dbs_info = &per_cpu(od_cpu_dbs_info, j);
20256 + dbs_info->cdbs.prev_cpu_idle = get_cpu_idle_time(j,
20257 + &dbs_info->cdbs.prev_cpu_wall, od_tuners->io_is_busy);
20258 +- if (od_tuners->ignore_nice)
20259 ++ if (od_tuners->ignore_nice_load)
20260 + dbs_info->cdbs.prev_cpu_nice =
20261 + kcpustat_cpu(j).cpustat[CPUTIME_NICE];
20262 +
20263 +@@ -461,7 +461,7 @@ show_store_one(od, sampling_rate);
20264 + show_store_one(od, io_is_busy);
20265 + show_store_one(od, up_threshold);
20266 + show_store_one(od, sampling_down_factor);
20267 +-show_store_one(od, ignore_nice);
20268 ++show_store_one(od, ignore_nice_load);
20269 + show_store_one(od, powersave_bias);
20270 + declare_show_sampling_rate_min(od);
20271 +
20272 +@@ -469,7 +469,7 @@ gov_sys_pol_attr_rw(sampling_rate);
20273 + gov_sys_pol_attr_rw(io_is_busy);
20274 + gov_sys_pol_attr_rw(up_threshold);
20275 + gov_sys_pol_attr_rw(sampling_down_factor);
20276 +-gov_sys_pol_attr_rw(ignore_nice);
20277 ++gov_sys_pol_attr_rw(ignore_nice_load);
20278 + gov_sys_pol_attr_rw(powersave_bias);
20279 + gov_sys_pol_attr_ro(sampling_rate_min);
20280 +
20281 +@@ -478,7 +478,7 @@ static struct attribute *dbs_attributes_gov_sys[] = {
20282 + &sampling_rate_gov_sys.attr,
20283 + &up_threshold_gov_sys.attr,
20284 + &sampling_down_factor_gov_sys.attr,
20285 +- &ignore_nice_gov_sys.attr,
20286 ++ &ignore_nice_load_gov_sys.attr,
20287 + &powersave_bias_gov_sys.attr,
20288 + &io_is_busy_gov_sys.attr,
20289 + NULL
20290 +@@ -494,7 +494,7 @@ static struct attribute *dbs_attributes_gov_pol[] = {
20291 + &sampling_rate_gov_pol.attr,
20292 + &up_threshold_gov_pol.attr,
20293 + &sampling_down_factor_gov_pol.attr,
20294 +- &ignore_nice_gov_pol.attr,
20295 ++ &ignore_nice_load_gov_pol.attr,
20296 + &powersave_bias_gov_pol.attr,
20297 + &io_is_busy_gov_pol.attr,
20298 + NULL
20299 +@@ -544,7 +544,7 @@ static int od_init(struct dbs_data *dbs_data)
20300 + }
20301 +
20302 + tuners->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR;
20303 +- tuners->ignore_nice = 0;
20304 ++ tuners->ignore_nice_load = 0;
20305 + tuners->powersave_bias = default_powersave_bias;
20306 + tuners->io_is_busy = should_io_be_busy();
20307 +
20308 +diff --git a/drivers/cpufreq/loongson2_cpufreq.c b/drivers/cpufreq/loongson2_cpufreq.c
20309 +index d539127..f92b02a 100644
20310 +--- a/drivers/cpufreq/loongson2_cpufreq.c
20311 ++++ b/drivers/cpufreq/loongson2_cpufreq.c
20312 +@@ -118,11 +118,6 @@ static int loongson2_cpufreq_cpu_init(struct cpufreq_policy *policy)
20313 + clk_put(cpuclk);
20314 + return -EINVAL;
20315 + }
20316 +- ret = clk_set_rate(cpuclk, rate);
20317 +- if (ret) {
20318 +- clk_put(cpuclk);
20319 +- return ret;
20320 +- }
20321 +
20322 + /* clock table init */
20323 + for (i = 2;
20324 +@@ -130,6 +125,12 @@ static int loongson2_cpufreq_cpu_init(struct cpufreq_policy *policy)
20325 + i++)
20326 + loongson2_clockmod_table[i].frequency = (rate * i) / 8;
20327 +
20328 ++ ret = clk_set_rate(cpuclk, rate);
20329 ++ if (ret) {
20330 ++ clk_put(cpuclk);
20331 ++ return ret;
20332 ++ }
20333 ++
20334 + policy->cur = loongson2_cpufreq_get(policy->cpu);
20335 +
20336 + cpufreq_frequency_table_get_attr(&loongson2_clockmod_table[0],
20337 +diff --git a/drivers/gpu/drm/ast/ast_ttm.c b/drivers/gpu/drm/ast/ast_ttm.c
20338 +index 09da339..d5902e2 100644
20339 +--- a/drivers/gpu/drm/ast/ast_ttm.c
20340 ++++ b/drivers/gpu/drm/ast/ast_ttm.c
20341 +@@ -348,6 +348,7 @@ int ast_bo_create(struct drm_device *dev, int size, int align,
20342 +
20343 + astbo->gem.driver_private = NULL;
20344 + astbo->bo.bdev = &ast->ttm.bdev;
20345 ++ astbo->bo.bdev->dev_mapping = dev->dev_mapping;
20346 +
20347 + ast_ttm_placement(astbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
20348 +
20349 +diff --git a/drivers/gpu/drm/cirrus/cirrus_ttm.c b/drivers/gpu/drm/cirrus/cirrus_ttm.c
20350 +index 2ed8cfc..c18faff 100644
20351 +--- a/drivers/gpu/drm/cirrus/cirrus_ttm.c
20352 ++++ b/drivers/gpu/drm/cirrus/cirrus_ttm.c
20353 +@@ -353,6 +353,7 @@ int cirrus_bo_create(struct drm_device *dev, int size, int align,
20354 +
20355 + cirrusbo->gem.driver_private = NULL;
20356 + cirrusbo->bo.bdev = &cirrus->ttm.bdev;
20357 ++ cirrusbo->bo.bdev->dev_mapping = dev->dev_mapping;
20358 +
20359 + cirrus_ttm_placement(cirrusbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
20360 +
20361 +diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
20362 +index 8bcce78..f92da0a 100644
20363 +--- a/drivers/gpu/drm/drm_irq.c
20364 ++++ b/drivers/gpu/drm/drm_irq.c
20365 +@@ -708,7 +708,10 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
20366 + /* Subtract time delta from raw timestamp to get final
20367 + * vblank_time timestamp for end of vblank.
20368 + */
20369 +- etime = ktime_sub_ns(etime, delta_ns);
20370 ++ if (delta_ns < 0)
20371 ++ etime = ktime_add_ns(etime, -delta_ns);
20372 ++ else
20373 ++ etime = ktime_sub_ns(etime, delta_ns);
20374 + *vblank_time = ktime_to_timeval(etime);
20375 +
20376 + DRM_DEBUG("crtc %d : v %d p(%d,%d)@ %ld.%ld -> %ld.%ld [e %d us, %d rep]\n",
20377 +diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
20378 +index f968590..17d9b0b 100644
20379 +--- a/drivers/gpu/drm/i915/i915_dma.c
20380 ++++ b/drivers/gpu/drm/i915/i915_dma.c
20381 +@@ -1514,6 +1514,7 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
20382 + spin_lock_init(&dev_priv->irq_lock);
20383 + spin_lock_init(&dev_priv->gpu_error.lock);
20384 + spin_lock_init(&dev_priv->rps.lock);
20385 ++ spin_lock_init(&dev_priv->gt_lock);
20386 + mutex_init(&dev_priv->dpio_lock);
20387 + mutex_init(&dev_priv->rps.hw_lock);
20388 + mutex_init(&dev_priv->modeset_restore_lock);
20389 +diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
20390 +index 2cfe9f6..94ad6bc 100644
20391 +--- a/drivers/gpu/drm/i915/intel_pm.c
20392 ++++ b/drivers/gpu/drm/i915/intel_pm.c
20393 +@@ -4507,8 +4507,6 @@ void intel_gt_init(struct drm_device *dev)
20394 + {
20395 + struct drm_i915_private *dev_priv = dev->dev_private;
20396 +
20397 +- spin_lock_init(&dev_priv->gt_lock);
20398 +-
20399 + if (IS_VALLEYVIEW(dev)) {
20400 + dev_priv->gt.force_wake_get = vlv_force_wake_get;
20401 + dev_priv->gt.force_wake_put = vlv_force_wake_put;
20402 +diff --git a/drivers/gpu/drm/mgag200/mgag200_ttm.c b/drivers/gpu/drm/mgag200/mgag200_ttm.c
20403 +index 401c989..d2cb32f 100644
20404 +--- a/drivers/gpu/drm/mgag200/mgag200_ttm.c
20405 ++++ b/drivers/gpu/drm/mgag200/mgag200_ttm.c
20406 +@@ -347,6 +347,7 @@ int mgag200_bo_create(struct drm_device *dev, int size, int align,
20407 +
20408 + mgabo->gem.driver_private = NULL;
20409 + mgabo->bo.bdev = &mdev->ttm.bdev;
20410 ++ mgabo->bo.bdev->dev_mapping = dev->dev_mapping;
20411 +
20412 + mgag200_ttm_placement(mgabo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
20413 +
20414 +diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
20415 +index 0f89ce3..687b421 100644
20416 +--- a/drivers/gpu/drm/radeon/evergreen.c
20417 ++++ b/drivers/gpu/drm/radeon/evergreen.c
20418 +@@ -4681,6 +4681,8 @@ static int evergreen_startup(struct radeon_device *rdev)
20419 + /* enable pcie gen2 link */
20420 + evergreen_pcie_gen2_enable(rdev);
20421 +
20422 ++ evergreen_mc_program(rdev);
20423 ++
20424 + if (ASIC_IS_DCE5(rdev)) {
20425 + if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw || !rdev->mc_fw) {
20426 + r = ni_init_microcode(rdev);
20427 +@@ -4708,7 +4710,6 @@ static int evergreen_startup(struct radeon_device *rdev)
20428 + if (r)
20429 + return r;
20430 +
20431 +- evergreen_mc_program(rdev);
20432 + if (rdev->flags & RADEON_IS_AGP) {
20433 + evergreen_agp_enable(rdev);
20434 + } else {
20435 +@@ -4854,10 +4855,10 @@ int evergreen_resume(struct radeon_device *rdev)
20436 + int evergreen_suspend(struct radeon_device *rdev)
20437 + {
20438 + r600_audio_fini(rdev);
20439 ++ r600_uvd_stop(rdev);
20440 + radeon_uvd_suspend(rdev);
20441 + r700_cp_stop(rdev);
20442 + r600_dma_stop(rdev);
20443 +- r600_uvd_rbc_stop(rdev);
20444 + evergreen_irq_suspend(rdev);
20445 + radeon_wb_disable(rdev);
20446 + evergreen_pcie_gart_disable(rdev);
20447 +@@ -4988,6 +4989,7 @@ void evergreen_fini(struct radeon_device *rdev)
20448 + radeon_ib_pool_fini(rdev);
20449 + radeon_irq_kms_fini(rdev);
20450 + evergreen_pcie_gart_fini(rdev);
20451 ++ r600_uvd_stop(rdev);
20452 + radeon_uvd_fini(rdev);
20453 + r600_vram_scratch_fini(rdev);
20454 + radeon_gem_fini(rdev);
20455 +diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
20456 +index 8458330..3bf43a1 100644
20457 +--- a/drivers/gpu/drm/radeon/ni.c
20458 ++++ b/drivers/gpu/drm/radeon/ni.c
20459 +@@ -1929,6 +1929,8 @@ static int cayman_startup(struct radeon_device *rdev)
20460 + /* enable pcie gen2 link */
20461 + evergreen_pcie_gen2_enable(rdev);
20462 +
20463 ++ evergreen_mc_program(rdev);
20464 ++
20465 + if (rdev->flags & RADEON_IS_IGP) {
20466 + if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw) {
20467 + r = ni_init_microcode(rdev);
20468 +@@ -1957,7 +1959,6 @@ static int cayman_startup(struct radeon_device *rdev)
20469 + if (r)
20470 + return r;
20471 +
20472 +- evergreen_mc_program(rdev);
20473 + r = cayman_pcie_gart_enable(rdev);
20474 + if (r)
20475 + return r;
20476 +@@ -2133,7 +2134,7 @@ int cayman_suspend(struct radeon_device *rdev)
20477 + radeon_vm_manager_fini(rdev);
20478 + cayman_cp_enable(rdev, false);
20479 + cayman_dma_stop(rdev);
20480 +- r600_uvd_rbc_stop(rdev);
20481 ++ r600_uvd_stop(rdev);
20482 + radeon_uvd_suspend(rdev);
20483 + evergreen_irq_suspend(rdev);
20484 + radeon_wb_disable(rdev);
20485 +@@ -2265,6 +2266,7 @@ void cayman_fini(struct radeon_device *rdev)
20486 + radeon_vm_manager_fini(rdev);
20487 + radeon_ib_pool_fini(rdev);
20488 + radeon_irq_kms_fini(rdev);
20489 ++ r600_uvd_stop(rdev);
20490 + radeon_uvd_fini(rdev);
20491 + cayman_pcie_gart_fini(rdev);
20492 + r600_vram_scratch_fini(rdev);
20493 +diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
20494 +index b60004e..f19620b 100644
20495 +--- a/drivers/gpu/drm/radeon/r600.c
20496 ++++ b/drivers/gpu/drm/radeon/r600.c
20497 +@@ -2675,12 +2675,29 @@ int r600_uvd_rbc_start(struct radeon_device *rdev)
20498 + return 0;
20499 + }
20500 +
20501 +-void r600_uvd_rbc_stop(struct radeon_device *rdev)
20502 ++void r600_uvd_stop(struct radeon_device *rdev)
20503 + {
20504 + struct radeon_ring *ring = &rdev->ring[R600_RING_TYPE_UVD_INDEX];
20505 +
20506 + /* force RBC into idle state */
20507 + WREG32(UVD_RBC_RB_CNTL, 0x11010101);
20508 ++
20509 ++ /* Stall UMC and register bus before resetting VCPU */
20510 ++ WREG32_P(UVD_LMI_CTRL2, 1 << 8, ~(1 << 8));
20511 ++ WREG32_P(UVD_RB_ARB_CTRL, 1 << 3, ~(1 << 3));
20512 ++ mdelay(1);
20513 ++
20514 ++ /* put VCPU into reset */
20515 ++ WREG32(UVD_SOFT_RESET, VCPU_SOFT_RESET);
20516 ++ mdelay(5);
20517 ++
20518 ++ /* disable VCPU clock */
20519 ++ WREG32(UVD_VCPU_CNTL, 0x0);
20520 ++
20521 ++ /* Unstall UMC and register bus */
20522 ++ WREG32_P(UVD_LMI_CTRL2, 0, ~(1 << 8));
20523 ++ WREG32_P(UVD_RB_ARB_CTRL, 0, ~(1 << 3));
20524 ++
20525 + ring->ready = false;
20526 + }
20527 +
20528 +@@ -2700,6 +2717,11 @@ int r600_uvd_init(struct radeon_device *rdev)
20529 + /* disable interupt */
20530 + WREG32_P(UVD_MASTINT_EN, 0, ~(1 << 1));
20531 +
20532 ++ /* Stall UMC and register bus before resetting VCPU */
20533 ++ WREG32_P(UVD_LMI_CTRL2, 1 << 8, ~(1 << 8));
20534 ++ WREG32_P(UVD_RB_ARB_CTRL, 1 << 3, ~(1 << 3));
20535 ++ mdelay(1);
20536 ++
20537 + /* put LMI, VCPU, RBC etc... into reset */
20538 + WREG32(UVD_SOFT_RESET, LMI_SOFT_RESET | VCPU_SOFT_RESET |
20539 + LBSI_SOFT_RESET | RBC_SOFT_RESET | CSM_SOFT_RESET |
20540 +@@ -2729,10 +2751,6 @@ int r600_uvd_init(struct radeon_device *rdev)
20541 + WREG32(UVD_MPC_SET_ALU, 0);
20542 + WREG32(UVD_MPC_SET_MUX, 0x88);
20543 +
20544 +- /* Stall UMC */
20545 +- WREG32_P(UVD_LMI_CTRL2, 1 << 8, ~(1 << 8));
20546 +- WREG32_P(UVD_RB_ARB_CTRL, 1 << 3, ~(1 << 3));
20547 +-
20548 + /* take all subblocks out of reset, except VCPU */
20549 + WREG32(UVD_SOFT_RESET, VCPU_SOFT_RESET);
20550 + mdelay(5);
20551 +@@ -3206,6 +3224,8 @@ static int r600_startup(struct radeon_device *rdev)
20552 + /* enable pcie gen2 link */
20553 + r600_pcie_gen2_enable(rdev);
20554 +
20555 ++ r600_mc_program(rdev);
20556 ++
20557 + if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw) {
20558 + r = r600_init_microcode(rdev);
20559 + if (r) {
20560 +@@ -3218,7 +3238,6 @@ static int r600_startup(struct radeon_device *rdev)
20561 + if (r)
20562 + return r;
20563 +
20564 +- r600_mc_program(rdev);
20565 + if (rdev->flags & RADEON_IS_AGP) {
20566 + r600_agp_enable(rdev);
20567 + } else {
20568 +diff --git a/drivers/gpu/drm/radeon/r600_hdmi.c b/drivers/gpu/drm/radeon/r600_hdmi.c
20569 +index f48240b..b9b1139 100644
20570 +--- a/drivers/gpu/drm/radeon/r600_hdmi.c
20571 ++++ b/drivers/gpu/drm/radeon/r600_hdmi.c
20572 +@@ -242,9 +242,15 @@ void r600_audio_set_dto(struct drm_encoder *encoder, u32 clock)
20573 + /* according to the reg specs, this should DCE3.2 only, but in
20574 + * practice it seems to cover DCE3.0 as well.
20575 + */
20576 +- WREG32(DCCG_AUDIO_DTO0_PHASE, base_rate * 100);
20577 +- WREG32(DCCG_AUDIO_DTO0_MODULE, clock * 100);
20578 +- WREG32(DCCG_AUDIO_DTO_SELECT, 0); /* select DTO0 */
20579 ++ if (dig->dig_encoder == 0) {
20580 ++ WREG32(DCCG_AUDIO_DTO0_PHASE, base_rate * 100);
20581 ++ WREG32(DCCG_AUDIO_DTO0_MODULE, clock * 100);
20582 ++ WREG32(DCCG_AUDIO_DTO_SELECT, 0); /* select DTO0 */
20583 ++ } else {
20584 ++ WREG32(DCCG_AUDIO_DTO1_PHASE, base_rate * 100);
20585 ++ WREG32(DCCG_AUDIO_DTO1_MODULE, clock * 100);
20586 ++ WREG32(DCCG_AUDIO_DTO_SELECT, 1); /* select DTO1 */
20587 ++ }
20588 + } else {
20589 + /* according to the reg specs, this should be DCE2.0 and DCE3.0 */
20590 + WREG32(AUDIO_DTO, AUDIO_DTO_PHASE(base_rate / 10) |
20591 +diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
20592 +index aad18e6..bdd9d56 100644
20593 +--- a/drivers/gpu/drm/radeon/radeon.h
20594 ++++ b/drivers/gpu/drm/radeon/radeon.h
20595 +@@ -1146,7 +1146,6 @@ struct radeon_uvd {
20596 + void *cpu_addr;
20597 + uint64_t gpu_addr;
20598 + void *saved_bo;
20599 +- unsigned fw_size;
20600 + atomic_t handles[RADEON_MAX_UVD_HANDLES];
20601 + struct drm_file *filp[RADEON_MAX_UVD_HANDLES];
20602 + struct delayed_work idle_work;
20603 +@@ -1686,6 +1685,7 @@ struct radeon_device {
20604 + const struct firmware *rlc_fw; /* r6/700 RLC firmware */
20605 + const struct firmware *mc_fw; /* NI MC firmware */
20606 + const struct firmware *ce_fw; /* SI CE firmware */
20607 ++ const struct firmware *uvd_fw; /* UVD firmware */
20608 + struct r600_blit r600_blit;
20609 + struct r600_vram_scratch vram_scratch;
20610 + int msi_enabled; /* msi enabled */
20611 +diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h
20612 +index a72759e..34223fc 100644
20613 +--- a/drivers/gpu/drm/radeon/radeon_asic.h
20614 ++++ b/drivers/gpu/drm/radeon/radeon_asic.h
20615 +@@ -399,7 +399,7 @@ uint64_t r600_get_gpu_clock_counter(struct radeon_device *rdev);
20616 + /* uvd */
20617 + int r600_uvd_init(struct radeon_device *rdev);
20618 + int r600_uvd_rbc_start(struct radeon_device *rdev);
20619 +-void r600_uvd_rbc_stop(struct radeon_device *rdev);
20620 ++void r600_uvd_stop(struct radeon_device *rdev);
20621 + int r600_uvd_ib_test(struct radeon_device *rdev, struct radeon_ring *ring);
20622 + void r600_uvd_fence_emit(struct radeon_device *rdev,
20623 + struct radeon_fence *fence);
20624 +diff --git a/drivers/gpu/drm/radeon/radeon_fence.c b/drivers/gpu/drm/radeon/radeon_fence.c
20625 +index 7ddb0ef..ddb8f8e 100644
20626 +--- a/drivers/gpu/drm/radeon/radeon_fence.c
20627 ++++ b/drivers/gpu/drm/radeon/radeon_fence.c
20628 +@@ -782,7 +782,7 @@ int radeon_fence_driver_start_ring(struct radeon_device *rdev, int ring)
20629 +
20630 + } else {
20631 + /* put fence directly behind firmware */
20632 +- index = ALIGN(rdev->uvd.fw_size, 8);
20633 ++ index = ALIGN(rdev->uvd_fw->size, 8);
20634 + rdev->fence_drv[ring].cpu_addr = rdev->uvd.cpu_addr + index;
20635 + rdev->fence_drv[ring].gpu_addr = rdev->uvd.gpu_addr + index;
20636 + }
20637 +diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c
20638 +index 1b3a91b..97002a0 100644
20639 +--- a/drivers/gpu/drm/radeon/radeon_uvd.c
20640 ++++ b/drivers/gpu/drm/radeon/radeon_uvd.c
20641 +@@ -55,7 +55,6 @@ static void radeon_uvd_idle_work_handler(struct work_struct *work);
20642 + int radeon_uvd_init(struct radeon_device *rdev)
20643 + {
20644 + struct platform_device *pdev;
20645 +- const struct firmware *fw;
20646 + unsigned long bo_size;
20647 + const char *fw_name;
20648 + int i, r;
20649 +@@ -105,7 +104,7 @@ int radeon_uvd_init(struct radeon_device *rdev)
20650 + return -EINVAL;
20651 + }
20652 +
20653 +- r = request_firmware(&fw, fw_name, &pdev->dev);
20654 ++ r = request_firmware(&rdev->uvd_fw, fw_name, &pdev->dev);
20655 + if (r) {
20656 + dev_err(rdev->dev, "radeon_uvd: Can't load firmware \"%s\"\n",
20657 + fw_name);
20658 +@@ -115,7 +114,7 @@ int radeon_uvd_init(struct radeon_device *rdev)
20659 +
20660 + platform_device_unregister(pdev);
20661 +
20662 +- bo_size = RADEON_GPU_PAGE_ALIGN(fw->size + 8) +
20663 ++ bo_size = RADEON_GPU_PAGE_ALIGN(rdev->uvd_fw->size + 8) +
20664 + RADEON_UVD_STACK_SIZE + RADEON_UVD_HEAP_SIZE;
20665 + r = radeon_bo_create(rdev, bo_size, PAGE_SIZE, true,
20666 + RADEON_GEM_DOMAIN_VRAM, NULL, &rdev->uvd.vcpu_bo);
20667 +@@ -148,12 +147,6 @@ int radeon_uvd_init(struct radeon_device *rdev)
20668 +
20669 + radeon_bo_unreserve(rdev->uvd.vcpu_bo);
20670 +
20671 +- rdev->uvd.fw_size = fw->size;
20672 +- memset(rdev->uvd.cpu_addr, 0, bo_size);
20673 +- memcpy(rdev->uvd.cpu_addr, fw->data, fw->size);
20674 +-
20675 +- release_firmware(fw);
20676 +-
20677 + for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) {
20678 + atomic_set(&rdev->uvd.handles[i], 0);
20679 + rdev->uvd.filp[i] = NULL;
20680 +@@ -177,33 +170,60 @@ void radeon_uvd_fini(struct radeon_device *rdev)
20681 + }
20682 +
20683 + radeon_bo_unref(&rdev->uvd.vcpu_bo);
20684 ++
20685 ++ release_firmware(rdev->uvd_fw);
20686 + }
20687 +
20688 + int radeon_uvd_suspend(struct radeon_device *rdev)
20689 + {
20690 + unsigned size;
20691 ++ void *ptr;
20692 ++ int i;
20693 +
20694 + if (rdev->uvd.vcpu_bo == NULL)
20695 + return 0;
20696 +
20697 ++ for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i)
20698 ++ if (atomic_read(&rdev->uvd.handles[i]))
20699 ++ break;
20700 ++
20701 ++ if (i == RADEON_MAX_UVD_HANDLES)
20702 ++ return 0;
20703 ++
20704 + size = radeon_bo_size(rdev->uvd.vcpu_bo);
20705 ++ size -= rdev->uvd_fw->size;
20706 ++
20707 ++ ptr = rdev->uvd.cpu_addr;
20708 ++ ptr += rdev->uvd_fw->size;
20709 ++
20710 + rdev->uvd.saved_bo = kmalloc(size, GFP_KERNEL);
20711 +- memcpy(rdev->uvd.saved_bo, rdev->uvd.cpu_addr, size);
20712 ++ memcpy(rdev->uvd.saved_bo, ptr, size);
20713 +
20714 + return 0;
20715 + }
20716 +
20717 + int radeon_uvd_resume(struct radeon_device *rdev)
20718 + {
20719 ++ unsigned size;
20720 ++ void *ptr;
20721 ++
20722 + if (rdev->uvd.vcpu_bo == NULL)
20723 + return -EINVAL;
20724 +
20725 ++ memcpy(rdev->uvd.cpu_addr, rdev->uvd_fw->data, rdev->uvd_fw->size);
20726 ++
20727 ++ size = radeon_bo_size(rdev->uvd.vcpu_bo);
20728 ++ size -= rdev->uvd_fw->size;
20729 ++
20730 ++ ptr = rdev->uvd.cpu_addr;
20731 ++ ptr += rdev->uvd_fw->size;
20732 ++
20733 + if (rdev->uvd.saved_bo != NULL) {
20734 +- unsigned size = radeon_bo_size(rdev->uvd.vcpu_bo);
20735 +- memcpy(rdev->uvd.cpu_addr, rdev->uvd.saved_bo, size);
20736 ++ memcpy(ptr, rdev->uvd.saved_bo, size);
20737 + kfree(rdev->uvd.saved_bo);
20738 + rdev->uvd.saved_bo = NULL;
20739 +- }
20740 ++ } else
20741 ++ memset(ptr, 0, size);
20742 +
20743 + return 0;
20744 + }
20745 +@@ -218,8 +238,8 @@ void radeon_uvd_free_handles(struct radeon_device *rdev, struct drm_file *filp)
20746 + {
20747 + int i, r;
20748 + for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) {
20749 +- if (rdev->uvd.filp[i] == filp) {
20750 +- uint32_t handle = atomic_read(&rdev->uvd.handles[i]);
20751 ++ uint32_t handle = atomic_read(&rdev->uvd.handles[i]);
20752 ++ if (handle != 0 && rdev->uvd.filp[i] == filp) {
20753 + struct radeon_fence *fence;
20754 +
20755 + r = radeon_uvd_get_destroy_msg(rdev,
20756 +diff --git a/drivers/gpu/drm/radeon/rv770.c b/drivers/gpu/drm/radeon/rv770.c
20757 +index 30ea14e..bcc68ec 100644
20758 +--- a/drivers/gpu/drm/radeon/rv770.c
20759 ++++ b/drivers/gpu/drm/radeon/rv770.c
20760 +@@ -813,7 +813,7 @@ int rv770_uvd_resume(struct radeon_device *rdev)
20761 +
20762 + /* programm the VCPU memory controller bits 0-27 */
20763 + addr = rdev->uvd.gpu_addr >> 3;
20764 +- size = RADEON_GPU_PAGE_ALIGN(rdev->uvd.fw_size + 4) >> 3;
20765 ++ size = RADEON_GPU_PAGE_ALIGN(rdev->uvd_fw->size + 4) >> 3;
20766 + WREG32(UVD_VCPU_CACHE_OFFSET0, addr);
20767 + WREG32(UVD_VCPU_CACHE_SIZE0, size);
20768 +
20769 +@@ -1829,6 +1829,8 @@ static int rv770_startup(struct radeon_device *rdev)
20770 + /* enable pcie gen2 link */
20771 + rv770_pcie_gen2_enable(rdev);
20772 +
20773 ++ rv770_mc_program(rdev);
20774 ++
20775 + if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw) {
20776 + r = r600_init_microcode(rdev);
20777 + if (r) {
20778 +@@ -1841,7 +1843,6 @@ static int rv770_startup(struct radeon_device *rdev)
20779 + if (r)
20780 + return r;
20781 +
20782 +- rv770_mc_program(rdev);
20783 + if (rdev->flags & RADEON_IS_AGP) {
20784 + rv770_agp_enable(rdev);
20785 + } else {
20786 +@@ -1983,6 +1984,7 @@ int rv770_resume(struct radeon_device *rdev)
20787 + int rv770_suspend(struct radeon_device *rdev)
20788 + {
20789 + r600_audio_fini(rdev);
20790 ++ r600_uvd_stop(rdev);
20791 + radeon_uvd_suspend(rdev);
20792 + r700_cp_stop(rdev);
20793 + r600_dma_stop(rdev);
20794 +@@ -2098,6 +2100,7 @@ void rv770_fini(struct radeon_device *rdev)
20795 + radeon_ib_pool_fini(rdev);
20796 + radeon_irq_kms_fini(rdev);
20797 + rv770_pcie_gart_fini(rdev);
20798 ++ r600_uvd_stop(rdev);
20799 + radeon_uvd_fini(rdev);
20800 + r600_vram_scratch_fini(rdev);
20801 + radeon_gem_fini(rdev);
20802 +diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
20803 +index a1b0da6..1a96a16 100644
20804 +--- a/drivers/gpu/drm/radeon/si.c
20805 ++++ b/drivers/gpu/drm/radeon/si.c
20806 +@@ -5270,6 +5270,8 @@ static int si_startup(struct radeon_device *rdev)
20807 + struct radeon_ring *ring;
20808 + int r;
20809 +
20810 ++ si_mc_program(rdev);
20811 ++
20812 + if (!rdev->me_fw || !rdev->pfp_fw || !rdev->ce_fw ||
20813 + !rdev->rlc_fw || !rdev->mc_fw) {
20814 + r = si_init_microcode(rdev);
20815 +@@ -5289,7 +5291,6 @@ static int si_startup(struct radeon_device *rdev)
20816 + if (r)
20817 + return r;
20818 +
20819 +- si_mc_program(rdev);
20820 + r = si_pcie_gart_enable(rdev);
20821 + if (r)
20822 + return r;
20823 +@@ -5473,7 +5474,7 @@ int si_suspend(struct radeon_device *rdev)
20824 + si_cp_enable(rdev, false);
20825 + cayman_dma_stop(rdev);
20826 + if (rdev->has_uvd) {
20827 +- r600_uvd_rbc_stop(rdev);
20828 ++ r600_uvd_stop(rdev);
20829 + radeon_uvd_suspend(rdev);
20830 + }
20831 + si_irq_suspend(rdev);
20832 +@@ -5613,8 +5614,10 @@ void si_fini(struct radeon_device *rdev)
20833 + radeon_vm_manager_fini(rdev);
20834 + radeon_ib_pool_fini(rdev);
20835 + radeon_irq_kms_fini(rdev);
20836 +- if (rdev->has_uvd)
20837 ++ if (rdev->has_uvd) {
20838 ++ r600_uvd_stop(rdev);
20839 + radeon_uvd_fini(rdev);
20840 ++ }
20841 + si_pcie_gart_fini(rdev);
20842 + r600_vram_scratch_fini(rdev);
20843 + radeon_gem_fini(rdev);
20844 +diff --git a/drivers/hwmon/adt7470.c b/drivers/hwmon/adt7470.c
20845 +index b83bf4b..5863735 100644
20846 +--- a/drivers/hwmon/adt7470.c
20847 ++++ b/drivers/hwmon/adt7470.c
20848 +@@ -215,7 +215,7 @@ static inline int adt7470_write_word_data(struct i2c_client *client, u8 reg,
20849 + u16 value)
20850 + {
20851 + return i2c_smbus_write_byte_data(client, reg, value & 0xFF)
20852 +- && i2c_smbus_write_byte_data(client, reg + 1, value >> 8);
20853 ++ || i2c_smbus_write_byte_data(client, reg + 1, value >> 8);
20854 + }
20855 +
20856 + static void adt7470_init_client(struct i2c_client *client)
20857 +diff --git a/drivers/i2c/busses/i2c-mxs.c b/drivers/i2c/busses/i2c-mxs.c
20858 +index 2039f23..6d8094d 100644
20859 +--- a/drivers/i2c/busses/i2c-mxs.c
20860 ++++ b/drivers/i2c/busses/i2c-mxs.c
20861 +@@ -494,7 +494,7 @@ static int mxs_i2c_xfer_msg(struct i2c_adapter *adap, struct i2c_msg *msg,
20862 + * based on this empirical measurement and a lot of previous frobbing.
20863 + */
20864 + i2c->cmd_err = 0;
20865 +- if (msg->len < 8) {
20866 ++ if (0) { /* disable PIO mode until a proper fix is made */
20867 + ret = mxs_i2c_pio_setup_xfer(adap, msg, flags);
20868 + if (ret)
20869 + mxs_i2c_reset(i2c);
20870 +diff --git a/drivers/media/usb/em28xx/em28xx-i2c.c b/drivers/media/usb/em28xx/em28xx-i2c.c
20871 +index 4851cc2..c4ff973 100644
20872 +--- a/drivers/media/usb/em28xx/em28xx-i2c.c
20873 ++++ b/drivers/media/usb/em28xx/em28xx-i2c.c
20874 +@@ -726,7 +726,7 @@ static int em28xx_i2c_eeprom(struct em28xx *dev, unsigned bus,
20875 +
20876 + *eedata = data;
20877 + *eedata_len = len;
20878 +- dev_config = (void *)eedata;
20879 ++ dev_config = (void *)*eedata;
20880 +
20881 + switch (le16_to_cpu(dev_config->chip_conf) >> 4 & 0x3) {
20882 + case 0:
20883 +diff --git a/drivers/mtd/nand/Kconfig b/drivers/mtd/nand/Kconfig
20884 +index a60f6c1..50543f1 100644
20885 +--- a/drivers/mtd/nand/Kconfig
20886 ++++ b/drivers/mtd/nand/Kconfig
20887 +@@ -95,7 +95,7 @@ config MTD_NAND_OMAP2
20888 +
20889 + config MTD_NAND_OMAP_BCH
20890 + depends on MTD_NAND && MTD_NAND_OMAP2 && ARCH_OMAP3
20891 +- bool "Enable support for hardware BCH error correction"
20892 ++ tristate "Enable support for hardware BCH error correction"
20893 + default n
20894 + select BCH
20895 + select BCH_CONST_PARAMS
20896 +diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
20897 +index 89178b8..9b60dc1 100644
20898 +--- a/drivers/scsi/megaraid/megaraid_sas_base.c
20899 ++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
20900 +@@ -3508,11 +3508,21 @@ static int megasas_init_fw(struct megasas_instance *instance)
20901 + break;
20902 + }
20903 +
20904 +- /*
20905 +- * We expect the FW state to be READY
20906 +- */
20907 +- if (megasas_transition_to_ready(instance, 0))
20908 +- goto fail_ready_state;
20909 ++ if (megasas_transition_to_ready(instance, 0)) {
20910 ++ atomic_set(&instance->fw_reset_no_pci_access, 1);
20911 ++ instance->instancet->adp_reset
20912 ++ (instance, instance->reg_set);
20913 ++ atomic_set(&instance->fw_reset_no_pci_access, 0);
20914 ++ dev_info(&instance->pdev->dev,
20915 ++ "megasas: FW restarted successfully from %s!\n",
20916 ++ __func__);
20917 ++
20918 ++ /*waitting for about 30 second before retry*/
20919 ++ ssleep(30);
20920 ++
20921 ++ if (megasas_transition_to_ready(instance, 0))
20922 ++ goto fail_ready_state;
20923 ++ }
20924 +
20925 + /* Check if MSI-X is supported while in ready state */
20926 + msix_enable = (instance->instancet->read_fw_status_reg(reg_set) &
20927 +diff --git a/drivers/scsi/nsp32.c b/drivers/scsi/nsp32.c
20928 +index 1e3879d..0665f9c 100644
20929 +--- a/drivers/scsi/nsp32.c
20930 ++++ b/drivers/scsi/nsp32.c
20931 +@@ -2899,7 +2899,7 @@ static void nsp32_do_bus_reset(nsp32_hw_data *data)
20932 + * reset SCSI bus
20933 + */
20934 + nsp32_write1(base, SCSI_BUS_CONTROL, BUSCTL_RST);
20935 +- udelay(RESET_HOLD_TIME);
20936 ++ mdelay(RESET_HOLD_TIME / 1000);
20937 + nsp32_write1(base, SCSI_BUS_CONTROL, 0);
20938 + for(i = 0; i < 5; i++) {
20939 + intrdat = nsp32_read2(base, IRQ_STATUS); /* dummy read */
20940 +diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
20941 +index 3b1ea34..eaa808e 100644
20942 +--- a/drivers/scsi/scsi.c
20943 ++++ b/drivers/scsi/scsi.c
20944 +@@ -1031,6 +1031,9 @@ int scsi_get_vpd_page(struct scsi_device *sdev, u8 page, unsigned char *buf,
20945 + {
20946 + int i, result;
20947 +
20948 ++ if (sdev->skip_vpd_pages)
20949 ++ goto fail;
20950 ++
20951 + /* Ask for all the pages supported by this device */
20952 + result = scsi_vpd_inquiry(sdev, buf, 0, buf_len);
20953 + if (result)
20954 +diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
20955 +index 2168258..74b88ef 100644
20956 +--- a/drivers/scsi/virtio_scsi.c
20957 ++++ b/drivers/scsi/virtio_scsi.c
20958 +@@ -751,7 +751,7 @@ static void __virtscsi_set_affinity(struct virtio_scsi *vscsi, bool affinity)
20959 +
20960 + vscsi->affinity_hint_set = true;
20961 + } else {
20962 +- for (i = 0; i < vscsi->num_queues - VIRTIO_SCSI_VQ_BASE; i++)
20963 ++ for (i = 0; i < vscsi->num_queues; i++)
20964 + virtqueue_set_affinity(vscsi->req_vqs[i].vq, -1);
20965 +
20966 + vscsi->affinity_hint_set = false;
20967 +diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
20968 +index dcceed2..81972fa 100644
20969 +--- a/drivers/staging/zcache/zcache-main.c
20970 ++++ b/drivers/staging/zcache/zcache-main.c
20971 +@@ -1811,10 +1811,12 @@ static int zcache_comp_init(void)
20972 + #else
20973 + if (*zcache_comp_name != '\0') {
20974 + ret = crypto_has_comp(zcache_comp_name, 0, 0);
20975 +- if (!ret)
20976 ++ if (!ret) {
20977 + pr_info("zcache: %s not supported\n",
20978 + zcache_comp_name);
20979 +- goto out;
20980 ++ ret = 1;
20981 ++ goto out;
20982 ++ }
20983 + }
20984 + if (!ret)
20985 + strcpy(zcache_comp_name, "lzo");
20986 +diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
20987 +index 1742ce5..a333d44 100644
20988 +--- a/drivers/staging/zram/zram_drv.c
20989 ++++ b/drivers/staging/zram/zram_drv.c
20990 +@@ -432,7 +432,7 @@ static inline int valid_io_request(struct zram *zram, struct bio *bio)
20991 + end = start + (bio->bi_size >> SECTOR_SHIFT);
20992 + bound = zram->disksize >> SECTOR_SHIFT;
20993 + /* out of range range */
20994 +- if (unlikely(start >= bound || end >= bound || start > end))
20995 ++ if (unlikely(start >= bound || end > bound || start > end))
20996 + return 0;
20997 +
20998 + /* I/O request is valid */
20999 +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
21000 +index b93fc88..da2905a 100644
21001 +--- a/drivers/usb/core/hub.c
21002 ++++ b/drivers/usb/core/hub.c
21003 +@@ -4796,7 +4796,8 @@ static void hub_events(void)
21004 + hub->ports[i - 1]->child;
21005 +
21006 + dev_dbg(hub_dev, "warm reset port %d\n", i);
21007 +- if (!udev) {
21008 ++ if (!udev || !(portstatus &
21009 ++ USB_PORT_STAT_CONNECTION)) {
21010 + status = hub_port_reset(hub, i,
21011 + NULL, HUB_BH_RESET_TIME,
21012 + true);
21013 +@@ -4806,8 +4807,8 @@ static void hub_events(void)
21014 + usb_lock_device(udev);
21015 + status = usb_reset_device(udev);
21016 + usb_unlock_device(udev);
21017 ++ connect_change = 0;
21018 + }
21019 +- connect_change = 0;
21020 + }
21021 +
21022 + if (connect_change)
21023 +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
21024 +index c276ac9..cf68596 100644
21025 +--- a/fs/btrfs/tree-log.c
21026 ++++ b/fs/btrfs/tree-log.c
21027 +@@ -3728,8 +3728,9 @@ next_slot:
21028 + }
21029 +
21030 + log_extents:
21031 ++ btrfs_release_path(path);
21032 ++ btrfs_release_path(dst_path);
21033 + if (fast_search) {
21034 +- btrfs_release_path(dst_path);
21035 + ret = btrfs_log_changed_extents(trans, root, inode, dst_path);
21036 + if (ret) {
21037 + err = ret;
21038 +@@ -3746,8 +3747,6 @@ log_extents:
21039 + }
21040 +
21041 + if (inode_only == LOG_INODE_ALL && S_ISDIR(inode->i_mode)) {
21042 +- btrfs_release_path(path);
21043 +- btrfs_release_path(dst_path);
21044 + ret = log_directory_changes(trans, root, inode, path, dst_path);
21045 + if (ret) {
21046 + err = ret;
21047 +diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c
21048 +index f59d0d5..5c807b2 100644
21049 +--- a/fs/cifs/cifsencrypt.c
21050 ++++ b/fs/cifs/cifsencrypt.c
21051 +@@ -389,7 +389,7 @@ find_domain_name(struct cifs_ses *ses, const struct nls_table *nls_cp)
21052 + if (blobptr + attrsize > blobend)
21053 + break;
21054 + if (type == NTLMSSP_AV_NB_DOMAIN_NAME) {
21055 +- if (!attrsize)
21056 ++ if (!attrsize || attrsize >= CIFS_MAX_DOMAINNAME_LEN)
21057 + break;
21058 + if (!ses->domainName) {
21059 + ses->domainName =
21060 +diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
21061 +index 4f07f6f..ea3a0b3 100644
21062 +--- a/fs/cifs/cifsglob.h
21063 ++++ b/fs/cifs/cifsglob.h
21064 +@@ -44,6 +44,7 @@
21065 + #define MAX_TREE_SIZE (2 + MAX_SERVER_SIZE + 1 + MAX_SHARE_SIZE + 1)
21066 + #define MAX_SERVER_SIZE 15
21067 + #define MAX_SHARE_SIZE 80
21068 ++#define CIFS_MAX_DOMAINNAME_LEN 256 /* max domain name length */
21069 + #define MAX_USERNAME_SIZE 256 /* reasonable maximum for current servers */
21070 + #define MAX_PASSWORD_SIZE 512 /* max for windows seems to be 256 wide chars */
21071 +
21072 +diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
21073 +index e3bc39b..d6a5c5a 100644
21074 +--- a/fs/cifs/connect.c
21075 ++++ b/fs/cifs/connect.c
21076 +@@ -1662,7 +1662,8 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
21077 + if (string == NULL)
21078 + goto out_nomem;
21079 +
21080 +- if (strnlen(string, 256) == 256) {
21081 ++ if (strnlen(string, CIFS_MAX_DOMAINNAME_LEN)
21082 ++ == CIFS_MAX_DOMAINNAME_LEN) {
21083 + printk(KERN_WARNING "CIFS: domain name too"
21084 + " long\n");
21085 + goto cifs_parse_mount_err;
21086 +@@ -2288,8 +2289,8 @@ cifs_put_smb_ses(struct cifs_ses *ses)
21087 +
21088 + #ifdef CONFIG_KEYS
21089 +
21090 +-/* strlen("cifs:a:") + INET6_ADDRSTRLEN + 1 */
21091 +-#define CIFSCREDS_DESC_SIZE (7 + INET6_ADDRSTRLEN + 1)
21092 ++/* strlen("cifs:a:") + CIFS_MAX_DOMAINNAME_LEN + 1 */
21093 ++#define CIFSCREDS_DESC_SIZE (7 + CIFS_MAX_DOMAINNAME_LEN + 1)
21094 +
21095 + /* Populate username and pw fields from keyring if possible */
21096 + static int
21097 +diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c
21098 +index 770d5a9..036279c 100644
21099 +--- a/fs/cifs/readdir.c
21100 ++++ b/fs/cifs/readdir.c
21101 +@@ -111,6 +111,14 @@ cifs_prime_dcache(struct dentry *parent, struct qstr *name,
21102 + return;
21103 + }
21104 +
21105 ++ /*
21106 ++ * If we know that the inode will need to be revalidated immediately,
21107 ++ * then don't create a new dentry for it. We'll end up doing an on
21108 ++ * the wire call either way and this spares us an invalidation.
21109 ++ */
21110 ++ if (fattr->cf_flags & CIFS_FATTR_NEED_REVAL)
21111 ++ return;
21112 ++
21113 + dentry = d_alloc(parent, name);
21114 + if (!dentry)
21115 + return;
21116 +diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
21117 +index f230571..8edc9eb 100644
21118 +--- a/fs/cifs/sess.c
21119 ++++ b/fs/cifs/sess.c
21120 +@@ -198,7 +198,7 @@ static void unicode_domain_string(char **pbcc_area, struct cifs_ses *ses,
21121 + bytes_ret = 0;
21122 + } else
21123 + bytes_ret = cifs_strtoUTF16((__le16 *) bcc_ptr, ses->domainName,
21124 +- 256, nls_cp);
21125 ++ CIFS_MAX_DOMAINNAME_LEN, nls_cp);
21126 + bcc_ptr += 2 * bytes_ret;
21127 + bcc_ptr += 2; /* account for null terminator */
21128 +
21129 +@@ -256,8 +256,8 @@ static void ascii_ssetup_strings(char **pbcc_area, struct cifs_ses *ses,
21130 +
21131 + /* copy domain */
21132 + if (ses->domainName != NULL) {
21133 +- strncpy(bcc_ptr, ses->domainName, 256);
21134 +- bcc_ptr += strnlen(ses->domainName, 256);
21135 ++ strncpy(bcc_ptr, ses->domainName, CIFS_MAX_DOMAINNAME_LEN);
21136 ++ bcc_ptr += strnlen(ses->domainName, CIFS_MAX_DOMAINNAME_LEN);
21137 + } /* else we will send a null domain name
21138 + so the server will default to its own domain */
21139 + *bcc_ptr = 0;
21140 +diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
21141 +index 4888cb3..c7c83ff 100644
21142 +--- a/fs/debugfs/inode.c
21143 ++++ b/fs/debugfs/inode.c
21144 +@@ -533,8 +533,7 @@ EXPORT_SYMBOL_GPL(debugfs_remove);
21145 + */
21146 + void debugfs_remove_recursive(struct dentry *dentry)
21147 + {
21148 +- struct dentry *child;
21149 +- struct dentry *parent;
21150 ++ struct dentry *child, *next, *parent;
21151 +
21152 + if (IS_ERR_OR_NULL(dentry))
21153 + return;
21154 +@@ -544,61 +543,37 @@ void debugfs_remove_recursive(struct dentry *dentry)
21155 + return;
21156 +
21157 + parent = dentry;
21158 ++ down:
21159 + mutex_lock(&parent->d_inode->i_mutex);
21160 ++ list_for_each_entry_safe(child, next, &parent->d_subdirs, d_u.d_child) {
21161 ++ if (!debugfs_positive(child))
21162 ++ continue;
21163 +
21164 +- while (1) {
21165 +- /*
21166 +- * When all dentries under "parent" has been removed,
21167 +- * walk up the tree until we reach our starting point.
21168 +- */
21169 +- if (list_empty(&parent->d_subdirs)) {
21170 +- mutex_unlock(&parent->d_inode->i_mutex);
21171 +- if (parent == dentry)
21172 +- break;
21173 +- parent = parent->d_parent;
21174 +- mutex_lock(&parent->d_inode->i_mutex);
21175 +- }
21176 +- child = list_entry(parent->d_subdirs.next, struct dentry,
21177 +- d_u.d_child);
21178 +- next_sibling:
21179 +-
21180 +- /*
21181 +- * If "child" isn't empty, walk down the tree and
21182 +- * remove all its descendants first.
21183 +- */
21184 ++ /* perhaps simple_empty(child) makes more sense */
21185 + if (!list_empty(&child->d_subdirs)) {
21186 + mutex_unlock(&parent->d_inode->i_mutex);
21187 + parent = child;
21188 +- mutex_lock(&parent->d_inode->i_mutex);
21189 +- continue;
21190 ++ goto down;
21191 + }
21192 +- __debugfs_remove(child, parent);
21193 +- if (parent->d_subdirs.next == &child->d_u.d_child) {
21194 +- /*
21195 +- * Try the next sibling.
21196 +- */
21197 +- if (child->d_u.d_child.next != &parent->d_subdirs) {
21198 +- child = list_entry(child->d_u.d_child.next,
21199 +- struct dentry,
21200 +- d_u.d_child);
21201 +- goto next_sibling;
21202 +- }
21203 +-
21204 +- /*
21205 +- * Avoid infinite loop if we fail to remove
21206 +- * one dentry.
21207 +- */
21208 +- mutex_unlock(&parent->d_inode->i_mutex);
21209 +- break;
21210 +- }
21211 +- simple_release_fs(&debugfs_mount, &debugfs_mount_count);
21212 ++ up:
21213 ++ if (!__debugfs_remove(child, parent))
21214 ++ simple_release_fs(&debugfs_mount, &debugfs_mount_count);
21215 + }
21216 +
21217 +- parent = dentry->d_parent;
21218 ++ mutex_unlock(&parent->d_inode->i_mutex);
21219 ++ child = parent;
21220 ++ parent = parent->d_parent;
21221 + mutex_lock(&parent->d_inode->i_mutex);
21222 +- __debugfs_remove(dentry, parent);
21223 ++
21224 ++ if (child != dentry) {
21225 ++ next = list_entry(child->d_u.d_child.next, struct dentry,
21226 ++ d_u.d_child);
21227 ++ goto up;
21228 ++ }
21229 ++
21230 ++ if (!__debugfs_remove(child, parent))
21231 ++ simple_release_fs(&debugfs_mount, &debugfs_mount_count);
21232 + mutex_unlock(&parent->d_inode->i_mutex);
21233 +- simple_release_fs(&debugfs_mount, &debugfs_mount_count);
21234 + }
21235 + EXPORT_SYMBOL_GPL(debugfs_remove_recursive);
21236 +
21237 +diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
21238 +index fddf3d9..dc1e030 100644
21239 +--- a/fs/ext4/extents.c
21240 ++++ b/fs/ext4/extents.c
21241 +@@ -4389,7 +4389,7 @@ void ext4_ext_truncate(handle_t *handle, struct inode *inode)
21242 + retry:
21243 + err = ext4_es_remove_extent(inode, last_block,
21244 + EXT_MAX_BLOCKS - last_block);
21245 +- if (err == ENOMEM) {
21246 ++ if (err == -ENOMEM) {
21247 + cond_resched();
21248 + congestion_wait(BLK_RW_ASYNC, HZ/50);
21249 + goto retry;
21250 +diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
21251 +index 00a818d..3da3bf1 100644
21252 +--- a/fs/ext4/ialloc.c
21253 ++++ b/fs/ext4/ialloc.c
21254 +@@ -734,11 +734,8 @@ repeat_in_this_group:
21255 + ino = ext4_find_next_zero_bit((unsigned long *)
21256 + inode_bitmap_bh->b_data,
21257 + EXT4_INODES_PER_GROUP(sb), ino);
21258 +- if (ino >= EXT4_INODES_PER_GROUP(sb)) {
21259 +- if (++group == ngroups)
21260 +- group = 0;
21261 +- continue;
21262 +- }
21263 ++ if (ino >= EXT4_INODES_PER_GROUP(sb))
21264 ++ goto next_group;
21265 + if (group == 0 && (ino+1) < EXT4_FIRST_INO(sb)) {
21266 + ext4_error(sb, "reserved inode found cleared - "
21267 + "inode=%lu", ino + 1);
21268 +@@ -768,6 +765,9 @@ repeat_in_this_group:
21269 + goto got; /* we grabbed the inode! */
21270 + if (ino < EXT4_INODES_PER_GROUP(sb))
21271 + goto repeat_in_this_group;
21272 ++next_group:
21273 ++ if (++group == ngroups)
21274 ++ group = 0;
21275 + }
21276 + err = -ENOSPC;
21277 + goto out;
21278 +diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
21279 +index 9491ac0..c0427e2 100644
21280 +--- a/fs/ext4/ioctl.c
21281 ++++ b/fs/ext4/ioctl.c
21282 +@@ -77,8 +77,10 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
21283 + memswap(ei1->i_data, ei2->i_data, sizeof(ei1->i_data));
21284 + memswap(&ei1->i_flags, &ei2->i_flags, sizeof(ei1->i_flags));
21285 + memswap(&ei1->i_disksize, &ei2->i_disksize, sizeof(ei1->i_disksize));
21286 +- memswap(&ei1->i_es_tree, &ei2->i_es_tree, sizeof(ei1->i_es_tree));
21287 +- memswap(&ei1->i_es_lru_nr, &ei2->i_es_lru_nr, sizeof(ei1->i_es_lru_nr));
21288 ++ ext4_es_remove_extent(inode1, 0, EXT_MAX_BLOCKS);
21289 ++ ext4_es_remove_extent(inode2, 0, EXT_MAX_BLOCKS);
21290 ++ ext4_es_lru_del(inode1);
21291 ++ ext4_es_lru_del(inode2);
21292 +
21293 + isize = i_size_read(inode1);
21294 + i_size_write(inode1, i_size_read(inode2));
21295 +diff --git a/fs/ext4/super.c b/fs/ext4/super.c
21296 +index 6681c03..3f7c39e 100644
21297 +--- a/fs/ext4/super.c
21298 ++++ b/fs/ext4/super.c
21299 +@@ -1341,7 +1341,7 @@ static const struct mount_opts {
21300 + {Opt_delalloc, EXT4_MOUNT_DELALLOC,
21301 + MOPT_EXT4_ONLY | MOPT_SET | MOPT_EXPLICIT},
21302 + {Opt_nodelalloc, EXT4_MOUNT_DELALLOC,
21303 +- MOPT_EXT4_ONLY | MOPT_CLEAR | MOPT_EXPLICIT},
21304 ++ MOPT_EXT4_ONLY | MOPT_CLEAR},
21305 + {Opt_journal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM,
21306 + MOPT_EXT4_ONLY | MOPT_SET},
21307 + {Opt_journal_async_commit, (EXT4_MOUNT_JOURNAL_ASYNC_COMMIT |
21308 +@@ -3445,7 +3445,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
21309 + }
21310 + if (test_opt(sb, DIOREAD_NOLOCK)) {
21311 + ext4_msg(sb, KERN_ERR, "can't mount with "
21312 +- "both data=journal and delalloc");
21313 ++ "both data=journal and dioread_nolock");
21314 + goto failed_mount;
21315 + }
21316 + if (test_opt(sb, DELALLOC))
21317 +@@ -4646,6 +4646,21 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
21318 + goto restore_opts;
21319 + }
21320 +
21321 ++ if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA) {
21322 ++ if (test_opt2(sb, EXPLICIT_DELALLOC)) {
21323 ++ ext4_msg(sb, KERN_ERR, "can't mount with "
21324 ++ "both data=journal and delalloc");
21325 ++ err = -EINVAL;
21326 ++ goto restore_opts;
21327 ++ }
21328 ++ if (test_opt(sb, DIOREAD_NOLOCK)) {
21329 ++ ext4_msg(sb, KERN_ERR, "can't mount with "
21330 ++ "both data=journal and dioread_nolock");
21331 ++ err = -EINVAL;
21332 ++ goto restore_opts;
21333 ++ }
21334 ++ }
21335 ++
21336 + if (sbi->s_mount_flags & EXT4_MF_FS_ABORTED)
21337 + ext4_abort(sb, "Abort forced by user");
21338 +
21339 +@@ -5400,6 +5415,7 @@ static void __exit ext4_exit_fs(void)
21340 + kset_unregister(ext4_kset);
21341 + ext4_exit_system_zone();
21342 + ext4_exit_pageio();
21343 ++ ext4_exit_es();
21344 + }
21345 +
21346 + MODULE_AUTHOR("Remy Card, Stephen Tweedie, Andrew Morton, Andreas Dilger, Theodore Ts'o and others");
21347 +diff --git a/fs/lockd/clntlock.c b/fs/lockd/clntlock.c
21348 +index 01bfe76..41e491b 100644
21349 +--- a/fs/lockd/clntlock.c
21350 ++++ b/fs/lockd/clntlock.c
21351 +@@ -64,12 +64,17 @@ struct nlm_host *nlmclnt_init(const struct nlmclnt_initdata *nlm_init)
21352 + nlm_init->protocol, nlm_version,
21353 + nlm_init->hostname, nlm_init->noresvport,
21354 + nlm_init->net);
21355 +- if (host == NULL) {
21356 +- lockd_down(nlm_init->net);
21357 +- return ERR_PTR(-ENOLCK);
21358 +- }
21359 ++ if (host == NULL)
21360 ++ goto out_nohost;
21361 ++ if (host->h_rpcclnt == NULL && nlm_bind_host(host) == NULL)
21362 ++ goto out_nobind;
21363 +
21364 + return host;
21365 ++out_nobind:
21366 ++ nlmclnt_release_host(host);
21367 ++out_nohost:
21368 ++ lockd_down(nlm_init->net);
21369 ++ return ERR_PTR(-ENOLCK);
21370 + }
21371 + EXPORT_SYMBOL_GPL(nlmclnt_init);
21372 +
21373 +diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
21374 +index 9760ecb..acd3947 100644
21375 +--- a/fs/lockd/clntproc.c
21376 ++++ b/fs/lockd/clntproc.c
21377 +@@ -125,14 +125,15 @@ static void nlmclnt_setlockargs(struct nlm_rqst *req, struct file_lock *fl)
21378 + {
21379 + struct nlm_args *argp = &req->a_args;
21380 + struct nlm_lock *lock = &argp->lock;
21381 ++ char *nodename = req->a_host->h_rpcclnt->cl_nodename;
21382 +
21383 + nlmclnt_next_cookie(&argp->cookie);
21384 + memcpy(&lock->fh, NFS_FH(file_inode(fl->fl_file)), sizeof(struct nfs_fh));
21385 +- lock->caller = utsname()->nodename;
21386 ++ lock->caller = nodename;
21387 + lock->oh.data = req->a_owner;
21388 + lock->oh.len = snprintf(req->a_owner, sizeof(req->a_owner), "%u@%s",
21389 + (unsigned int)fl->fl_u.nfs_fl.owner->pid,
21390 +- utsname()->nodename);
21391 ++ nodename);
21392 + lock->svid = fl->fl_u.nfs_fl.owner->pid;
21393 + lock->fl.fl_start = fl->fl_start;
21394 + lock->fl.fl_end = fl->fl_end;
21395 +diff --git a/fs/reiserfs/procfs.c b/fs/reiserfs/procfs.c
21396 +index 33532f7..1d48974 100644
21397 +--- a/fs/reiserfs/procfs.c
21398 ++++ b/fs/reiserfs/procfs.c
21399 +@@ -19,12 +19,13 @@
21400 + /*
21401 + * LOCKING:
21402 + *
21403 +- * We rely on new Alexander Viro's super-block locking.
21404 ++ * These guys are evicted from procfs as the very first step in ->kill_sb().
21405 + *
21406 + */
21407 +
21408 +-static int show_version(struct seq_file *m, struct super_block *sb)
21409 ++static int show_version(struct seq_file *m, void *unused)
21410 + {
21411 ++ struct super_block *sb = m->private;
21412 + char *format;
21413 +
21414 + if (REISERFS_SB(sb)->s_properties & (1 << REISERFS_3_6)) {
21415 +@@ -66,8 +67,9 @@ static int show_version(struct seq_file *m, struct super_block *sb)
21416 + #define DJP( x ) le32_to_cpu( jp -> x )
21417 + #define JF( x ) ( r -> s_journal -> x )
21418 +
21419 +-static int show_super(struct seq_file *m, struct super_block *sb)
21420 ++static int show_super(struct seq_file *m, void *unused)
21421 + {
21422 ++ struct super_block *sb = m->private;
21423 + struct reiserfs_sb_info *r = REISERFS_SB(sb);
21424 +
21425 + seq_printf(m, "state: \t%s\n"
21426 +@@ -128,8 +130,9 @@ static int show_super(struct seq_file *m, struct super_block *sb)
21427 + return 0;
21428 + }
21429 +
21430 +-static int show_per_level(struct seq_file *m, struct super_block *sb)
21431 ++static int show_per_level(struct seq_file *m, void *unused)
21432 + {
21433 ++ struct super_block *sb = m->private;
21434 + struct reiserfs_sb_info *r = REISERFS_SB(sb);
21435 + int level;
21436 +
21437 +@@ -186,8 +189,9 @@ static int show_per_level(struct seq_file *m, struct super_block *sb)
21438 + return 0;
21439 + }
21440 +
21441 +-static int show_bitmap(struct seq_file *m, struct super_block *sb)
21442 ++static int show_bitmap(struct seq_file *m, void *unused)
21443 + {
21444 ++ struct super_block *sb = m->private;
21445 + struct reiserfs_sb_info *r = REISERFS_SB(sb);
21446 +
21447 + seq_printf(m, "free_block: %lu\n"
21448 +@@ -218,8 +222,9 @@ static int show_bitmap(struct seq_file *m, struct super_block *sb)
21449 + return 0;
21450 + }
21451 +
21452 +-static int show_on_disk_super(struct seq_file *m, struct super_block *sb)
21453 ++static int show_on_disk_super(struct seq_file *m, void *unused)
21454 + {
21455 ++ struct super_block *sb = m->private;
21456 + struct reiserfs_sb_info *sb_info = REISERFS_SB(sb);
21457 + struct reiserfs_super_block *rs = sb_info->s_rs;
21458 + int hash_code = DFL(s_hash_function_code);
21459 +@@ -261,8 +266,9 @@ static int show_on_disk_super(struct seq_file *m, struct super_block *sb)
21460 + return 0;
21461 + }
21462 +
21463 +-static int show_oidmap(struct seq_file *m, struct super_block *sb)
21464 ++static int show_oidmap(struct seq_file *m, void *unused)
21465 + {
21466 ++ struct super_block *sb = m->private;
21467 + struct reiserfs_sb_info *sb_info = REISERFS_SB(sb);
21468 + struct reiserfs_super_block *rs = sb_info->s_rs;
21469 + unsigned int mapsize = le16_to_cpu(rs->s_v1.s_oid_cursize);
21470 +@@ -291,8 +297,9 @@ static int show_oidmap(struct seq_file *m, struct super_block *sb)
21471 + return 0;
21472 + }
21473 +
21474 +-static int show_journal(struct seq_file *m, struct super_block *sb)
21475 ++static int show_journal(struct seq_file *m, void *unused)
21476 + {
21477 ++ struct super_block *sb = m->private;
21478 + struct reiserfs_sb_info *r = REISERFS_SB(sb);
21479 + struct reiserfs_super_block *rs = r->s_rs;
21480 + struct journal_params *jp = &rs->s_v1.s_journal;
21481 +@@ -383,92 +390,24 @@ static int show_journal(struct seq_file *m, struct super_block *sb)
21482 + return 0;
21483 + }
21484 +
21485 +-/* iterator */
21486 +-static int test_sb(struct super_block *sb, void *data)
21487 +-{
21488 +- return data == sb;
21489 +-}
21490 +-
21491 +-static int set_sb(struct super_block *sb, void *data)
21492 +-{
21493 +- return -ENOENT;
21494 +-}
21495 +-
21496 +-struct reiserfs_seq_private {
21497 +- struct super_block *sb;
21498 +- int (*show) (struct seq_file *, struct super_block *);
21499 +-};
21500 +-
21501 +-static void *r_start(struct seq_file *m, loff_t * pos)
21502 +-{
21503 +- struct reiserfs_seq_private *priv = m->private;
21504 +- loff_t l = *pos;
21505 +-
21506 +- if (l)
21507 +- return NULL;
21508 +-
21509 +- if (IS_ERR(sget(&reiserfs_fs_type, test_sb, set_sb, 0, priv->sb)))
21510 +- return NULL;
21511 +-
21512 +- up_write(&priv->sb->s_umount);
21513 +- return priv->sb;
21514 +-}
21515 +-
21516 +-static void *r_next(struct seq_file *m, void *v, loff_t * pos)
21517 +-{
21518 +- ++*pos;
21519 +- if (v)
21520 +- deactivate_super(v);
21521 +- return NULL;
21522 +-}
21523 +-
21524 +-static void r_stop(struct seq_file *m, void *v)
21525 +-{
21526 +- if (v)
21527 +- deactivate_super(v);
21528 +-}
21529 +-
21530 +-static int r_show(struct seq_file *m, void *v)
21531 +-{
21532 +- struct reiserfs_seq_private *priv = m->private;
21533 +- return priv->show(m, v);
21534 +-}
21535 +-
21536 +-static const struct seq_operations r_ops = {
21537 +- .start = r_start,
21538 +- .next = r_next,
21539 +- .stop = r_stop,
21540 +- .show = r_show,
21541 +-};
21542 +-
21543 + static int r_open(struct inode *inode, struct file *file)
21544 + {
21545 +- struct reiserfs_seq_private *priv;
21546 +- int ret = seq_open_private(file, &r_ops,
21547 +- sizeof(struct reiserfs_seq_private));
21548 +-
21549 +- if (!ret) {
21550 +- struct seq_file *m = file->private_data;
21551 +- priv = m->private;
21552 +- priv->sb = proc_get_parent_data(inode);
21553 +- priv->show = PDE_DATA(inode);
21554 +- }
21555 +- return ret;
21556 ++ return single_open(file, PDE_DATA(inode),
21557 ++ proc_get_parent_data(inode));
21558 + }
21559 +
21560 + static const struct file_operations r_file_operations = {
21561 + .open = r_open,
21562 + .read = seq_read,
21563 + .llseek = seq_lseek,
21564 +- .release = seq_release_private,
21565 +- .owner = THIS_MODULE,
21566 ++ .release = single_release,
21567 + };
21568 +
21569 + static struct proc_dir_entry *proc_info_root = NULL;
21570 + static const char proc_info_root_name[] = "fs/reiserfs";
21571 +
21572 + static void add_file(struct super_block *sb, char *name,
21573 +- int (*func) (struct seq_file *, struct super_block *))
21574 ++ int (*func) (struct seq_file *, void *))
21575 + {
21576 + proc_create_data(name, 0, REISERFS_SB(sb)->procdir,
21577 + &r_file_operations, func);
21578 +diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
21579 +index f8a23c3..e2e202a 100644
21580 +--- a/fs/reiserfs/super.c
21581 ++++ b/fs/reiserfs/super.c
21582 +@@ -499,6 +499,7 @@ int remove_save_link(struct inode *inode, int truncate)
21583 + static void reiserfs_kill_sb(struct super_block *s)
21584 + {
21585 + if (REISERFS_SB(s)) {
21586 ++ reiserfs_proc_info_done(s);
21587 + /*
21588 + * Force any pending inode evictions to occur now. Any
21589 + * inodes to be removed that have extended attributes
21590 +@@ -554,8 +555,6 @@ static void reiserfs_put_super(struct super_block *s)
21591 + REISERFS_SB(s)->reserved_blocks);
21592 + }
21593 +
21594 +- reiserfs_proc_info_done(s);
21595 +-
21596 + reiserfs_write_unlock(s);
21597 + mutex_destroy(&REISERFS_SB(s)->lock);
21598 + kfree(s->s_fs_info);
21599 +diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
21600 +index 4372658..44cdc11 100644
21601 +--- a/include/linux/ftrace_event.h
21602 ++++ b/include/linux/ftrace_event.h
21603 +@@ -78,6 +78,11 @@ struct trace_iterator {
21604 + /* trace_seq for __print_flags() and __print_symbolic() etc. */
21605 + struct trace_seq tmp_seq;
21606 +
21607 ++ cpumask_var_t started;
21608 ++
21609 ++ /* it's true when current open file is snapshot */
21610 ++ bool snapshot;
21611 ++
21612 + /* The below is zeroed out in pipe_read */
21613 + struct trace_seq seq;
21614 + struct trace_entry *ent;
21615 +@@ -90,10 +95,7 @@ struct trace_iterator {
21616 + loff_t pos;
21617 + long idx;
21618 +
21619 +- cpumask_var_t started;
21620 +-
21621 +- /* it's true when current open file is snapshot */
21622 +- bool snapshot;
21623 ++ /* All new field here will be zeroed out in pipe_read */
21624 + };
21625 +
21626 + enum trace_iter_flags {
21627 +diff --git a/include/linux/regmap.h b/include/linux/regmap.h
21628 +index 02d84e2..f91bb41 100644
21629 +--- a/include/linux/regmap.h
21630 ++++ b/include/linux/regmap.h
21631 +@@ -15,6 +15,7 @@
21632 +
21633 + #include <linux/list.h>
21634 + #include <linux/rbtree.h>
21635 ++#include <linux/err.h>
21636 +
21637 + struct module;
21638 + struct device;
21639 +diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h
21640 +index 84ca436..9faf0f4 100644
21641 +--- a/include/linux/sunrpc/sched.h
21642 ++++ b/include/linux/sunrpc/sched.h
21643 +@@ -130,6 +130,7 @@ struct rpc_task_setup {
21644 + #define RPC_TASK_SOFTCONN 0x0400 /* Fail if can't connect */
21645 + #define RPC_TASK_SENT 0x0800 /* message was sent */
21646 + #define RPC_TASK_TIMEOUT 0x1000 /* fail with ETIMEDOUT on timeout */
21647 ++#define RPC_TASK_NOCONNECT 0x2000 /* return ENOTCONN if not connected */
21648 +
21649 + #define RPC_IS_ASYNC(t) ((t)->tk_flags & RPC_TASK_ASYNC)
21650 + #define RPC_IS_SWAPPER(t) ((t)->tk_flags & RPC_TASK_SWAPPER)
21651 +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
21652 +index f7bc3ce..06a5bce 100644
21653 +--- a/kernel/trace/trace.c
21654 ++++ b/kernel/trace/trace.c
21655 +@@ -232,23 +232,43 @@ int filter_current_check_discard(struct ring_buffer *buffer,
21656 + }
21657 + EXPORT_SYMBOL_GPL(filter_current_check_discard);
21658 +
21659 +-cycle_t ftrace_now(int cpu)
21660 ++cycle_t buffer_ftrace_now(struct trace_buffer *buf, int cpu)
21661 + {
21662 + u64 ts;
21663 +
21664 + /* Early boot up does not have a buffer yet */
21665 +- if (!global_trace.trace_buffer.buffer)
21666 ++ if (!buf->buffer)
21667 + return trace_clock_local();
21668 +
21669 +- ts = ring_buffer_time_stamp(global_trace.trace_buffer.buffer, cpu);
21670 +- ring_buffer_normalize_time_stamp(global_trace.trace_buffer.buffer, cpu, &ts);
21671 ++ ts = ring_buffer_time_stamp(buf->buffer, cpu);
21672 ++ ring_buffer_normalize_time_stamp(buf->buffer, cpu, &ts);
21673 +
21674 + return ts;
21675 + }
21676 +
21677 ++cycle_t ftrace_now(int cpu)
21678 ++{
21679 ++ return buffer_ftrace_now(&global_trace.trace_buffer, cpu);
21680 ++}
21681 ++
21682 ++/**
21683 ++ * tracing_is_enabled - Show if global_trace has been disabled
21684 ++ *
21685 ++ * Shows if the global trace has been enabled or not. It uses the
21686 ++ * mirror flag "buffer_disabled" to be used in fast paths such as for
21687 ++ * the irqsoff tracer. But it may be inaccurate due to races. If you
21688 ++ * need to know the accurate state, use tracing_is_on() which is a little
21689 ++ * slower, but accurate.
21690 ++ */
21691 + int tracing_is_enabled(void)
21692 + {
21693 +- return tracing_is_on();
21694 ++ /*
21695 ++ * For quick access (irqsoff uses this in fast path), just
21696 ++ * return the mirror variable of the state of the ring buffer.
21697 ++ * It's a little racy, but we don't really care.
21698 ++ */
21699 ++ smp_rmb();
21700 ++ return !global_trace.buffer_disabled;
21701 + }
21702 +
21703 + /*
21704 +@@ -361,6 +381,23 @@ unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
21705 + TRACE_ITER_GRAPH_TIME | TRACE_ITER_RECORD_CMD | TRACE_ITER_OVERWRITE |
21706 + TRACE_ITER_IRQ_INFO | TRACE_ITER_MARKERS | TRACE_ITER_FUNCTION;
21707 +
21708 ++void tracer_tracing_on(struct trace_array *tr)
21709 ++{
21710 ++ if (tr->trace_buffer.buffer)
21711 ++ ring_buffer_record_on(tr->trace_buffer.buffer);
21712 ++ /*
21713 ++ * This flag is looked at when buffers haven't been allocated
21714 ++ * yet, or by some tracers (like irqsoff), that just want to
21715 ++ * know if the ring buffer has been disabled, but it can handle
21716 ++ * races of where it gets disabled but we still do a record.
21717 ++ * As the check is in the fast path of the tracers, it is more
21718 ++ * important to be fast than accurate.
21719 ++ */
21720 ++ tr->buffer_disabled = 0;
21721 ++ /* Make the flag seen by readers */
21722 ++ smp_wmb();
21723 ++}
21724 ++
21725 + /**
21726 + * tracing_on - enable tracing buffers
21727 + *
21728 +@@ -369,15 +406,7 @@ unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
21729 + */
21730 + void tracing_on(void)
21731 + {
21732 +- if (global_trace.trace_buffer.buffer)
21733 +- ring_buffer_record_on(global_trace.trace_buffer.buffer);
21734 +- /*
21735 +- * This flag is only looked at when buffers haven't been
21736 +- * allocated yet. We don't really care about the race
21737 +- * between setting this flag and actually turning
21738 +- * on the buffer.
21739 +- */
21740 +- global_trace.buffer_disabled = 0;
21741 ++ tracer_tracing_on(&global_trace);
21742 + }
21743 + EXPORT_SYMBOL_GPL(tracing_on);
21744 +
21745 +@@ -571,6 +600,23 @@ void tracing_snapshot_alloc(void)
21746 + EXPORT_SYMBOL_GPL(tracing_snapshot_alloc);
21747 + #endif /* CONFIG_TRACER_SNAPSHOT */
21748 +
21749 ++void tracer_tracing_off(struct trace_array *tr)
21750 ++{
21751 ++ if (tr->trace_buffer.buffer)
21752 ++ ring_buffer_record_off(tr->trace_buffer.buffer);
21753 ++ /*
21754 ++ * This flag is looked at when buffers haven't been allocated
21755 ++ * yet, or by some tracers (like irqsoff), that just want to
21756 ++ * know if the ring buffer has been disabled, but it can handle
21757 ++ * races of where it gets disabled but we still do a record.
21758 ++ * As the check is in the fast path of the tracers, it is more
21759 ++ * important to be fast than accurate.
21760 ++ */
21761 ++ tr->buffer_disabled = 1;
21762 ++ /* Make the flag seen by readers */
21763 ++ smp_wmb();
21764 ++}
21765 ++
21766 + /**
21767 + * tracing_off - turn off tracing buffers
21768 + *
21769 +@@ -581,26 +627,29 @@ EXPORT_SYMBOL_GPL(tracing_snapshot_alloc);
21770 + */
21771 + void tracing_off(void)
21772 + {
21773 +- if (global_trace.trace_buffer.buffer)
21774 +- ring_buffer_record_off(global_trace.trace_buffer.buffer);
21775 +- /*
21776 +- * This flag is only looked at when buffers haven't been
21777 +- * allocated yet. We don't really care about the race
21778 +- * between setting this flag and actually turning
21779 +- * on the buffer.
21780 +- */
21781 +- global_trace.buffer_disabled = 1;
21782 ++ tracer_tracing_off(&global_trace);
21783 + }
21784 + EXPORT_SYMBOL_GPL(tracing_off);
21785 +
21786 + /**
21787 ++ * tracer_tracing_is_on - show real state of ring buffer enabled
21788 ++ * @tr : the trace array to know if ring buffer is enabled
21789 ++ *
21790 ++ * Shows real state of the ring buffer if it is enabled or not.
21791 ++ */
21792 ++int tracer_tracing_is_on(struct trace_array *tr)
21793 ++{
21794 ++ if (tr->trace_buffer.buffer)
21795 ++ return ring_buffer_record_is_on(tr->trace_buffer.buffer);
21796 ++ return !tr->buffer_disabled;
21797 ++}
21798 ++
21799 ++/**
21800 + * tracing_is_on - show state of ring buffers enabled
21801 + */
21802 + int tracing_is_on(void)
21803 + {
21804 +- if (global_trace.trace_buffer.buffer)
21805 +- return ring_buffer_record_is_on(global_trace.trace_buffer.buffer);
21806 +- return !global_trace.buffer_disabled;
21807 ++ return tracer_tracing_is_on(&global_trace);
21808 + }
21809 + EXPORT_SYMBOL_GPL(tracing_is_on);
21810 +
21811 +@@ -1150,7 +1199,7 @@ void tracing_reset_online_cpus(struct trace_buffer *buf)
21812 + /* Make sure all commits have finished */
21813 + synchronize_sched();
21814 +
21815 +- buf->time_start = ftrace_now(buf->cpu);
21816 ++ buf->time_start = buffer_ftrace_now(buf, buf->cpu);
21817 +
21818 + for_each_online_cpu(cpu)
21819 + ring_buffer_reset_cpu(buffer, cpu);
21820 +@@ -1158,11 +1207,6 @@ void tracing_reset_online_cpus(struct trace_buffer *buf)
21821 + ring_buffer_record_enable(buffer);
21822 + }
21823 +
21824 +-void tracing_reset_current(int cpu)
21825 +-{
21826 +- tracing_reset(&global_trace.trace_buffer, cpu);
21827 +-}
21828 +-
21829 + /* Must have trace_types_lock held */
21830 + void tracing_reset_all_online_cpus(void)
21831 + {
21832 +@@ -4060,7 +4104,7 @@ static int tracing_wait_pipe(struct file *filp)
21833 + *
21834 + * iter->pos will be 0 if we haven't read anything.
21835 + */
21836 +- if (!tracing_is_enabled() && iter->pos)
21837 ++ if (!tracing_is_on() && iter->pos)
21838 + break;
21839 + }
21840 +
21841 +@@ -4121,6 +4165,7 @@ waitagain:
21842 + memset(&iter->seq, 0,
21843 + sizeof(struct trace_iterator) -
21844 + offsetof(struct trace_iterator, seq));
21845 ++ cpumask_clear(iter->started);
21846 + iter->pos = -1;
21847 +
21848 + trace_event_read_lock();
21849 +@@ -4437,7 +4482,7 @@ tracing_free_buffer_release(struct inode *inode, struct file *filp)
21850 +
21851 + /* disable tracing ? */
21852 + if (trace_flags & TRACE_ITER_STOP_ON_FREE)
21853 +- tracing_off();
21854 ++ tracer_tracing_off(tr);
21855 + /* resize the ring buffer to 0 */
21856 + tracing_resize_ring_buffer(tr, 0, RING_BUFFER_ALL_CPUS);
21857 +
21858 +@@ -4602,12 +4647,12 @@ static ssize_t tracing_clock_write(struct file *filp, const char __user *ubuf,
21859 + * New clock may not be consistent with the previous clock.
21860 + * Reset the buffer so that it doesn't have incomparable timestamps.
21861 + */
21862 +- tracing_reset_online_cpus(&global_trace.trace_buffer);
21863 ++ tracing_reset_online_cpus(&tr->trace_buffer);
21864 +
21865 + #ifdef CONFIG_TRACER_MAX_TRACE
21866 + if (tr->flags & TRACE_ARRAY_FL_GLOBAL && tr->max_buffer.buffer)
21867 + ring_buffer_set_clock(tr->max_buffer.buffer, trace_clocks[i].func);
21868 +- tracing_reset_online_cpus(&global_trace.max_buffer);
21869 ++ tracing_reset_online_cpus(&tr->max_buffer);
21870 + #endif
21871 +
21872 + mutex_unlock(&trace_types_lock);
21873 +@@ -5771,15 +5816,10 @@ rb_simple_read(struct file *filp, char __user *ubuf,
21874 + size_t cnt, loff_t *ppos)
21875 + {
21876 + struct trace_array *tr = filp->private_data;
21877 +- struct ring_buffer *buffer = tr->trace_buffer.buffer;
21878 + char buf[64];
21879 + int r;
21880 +
21881 +- if (buffer)
21882 +- r = ring_buffer_record_is_on(buffer);
21883 +- else
21884 +- r = 0;
21885 +-
21886 ++ r = tracer_tracing_is_on(tr);
21887 + r = sprintf(buf, "%d\n", r);
21888 +
21889 + return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
21890 +@@ -5801,11 +5841,11 @@ rb_simple_write(struct file *filp, const char __user *ubuf,
21891 + if (buffer) {
21892 + mutex_lock(&trace_types_lock);
21893 + if (val) {
21894 +- ring_buffer_record_on(buffer);
21895 ++ tracer_tracing_on(tr);
21896 + if (tr->current_trace->start)
21897 + tr->current_trace->start(tr);
21898 + } else {
21899 +- ring_buffer_record_off(buffer);
21900 ++ tracer_tracing_off(tr);
21901 + if (tr->current_trace->stop)
21902 + tr->current_trace->stop(tr);
21903 + }
21904 +diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
21905 +index b19d065..2aefbee 100644
21906 +--- a/kernel/trace/trace_irqsoff.c
21907 ++++ b/kernel/trace/trace_irqsoff.c
21908 +@@ -373,7 +373,7 @@ start_critical_timing(unsigned long ip, unsigned long parent_ip)
21909 + struct trace_array_cpu *data;
21910 + unsigned long flags;
21911 +
21912 +- if (likely(!tracer_enabled))
21913 ++ if (!tracer_enabled || !tracing_is_enabled())
21914 + return;
21915 +
21916 + cpu = raw_smp_processor_id();
21917 +@@ -416,7 +416,7 @@ stop_critical_timing(unsigned long ip, unsigned long parent_ip)
21918 + else
21919 + return;
21920 +
21921 +- if (!tracer_enabled)
21922 ++ if (!tracer_enabled || !tracing_is_enabled())
21923 + return;
21924 +
21925 + data = per_cpu_ptr(tr->trace_buffer.data, cpu);
21926 +diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
21927 +index 5a750b9..426f8fc 100644
21928 +--- a/net/sunrpc/clnt.c
21929 ++++ b/net/sunrpc/clnt.c
21930 +@@ -1644,6 +1644,10 @@ call_connect(struct rpc_task *task)
21931 + task->tk_action = call_connect_status;
21932 + if (task->tk_status < 0)
21933 + return;
21934 ++ if (task->tk_flags & RPC_TASK_NOCONNECT) {
21935 ++ rpc_exit(task, -ENOTCONN);
21936 ++ return;
21937 ++ }
21938 + xprt_connect(task);
21939 + }
21940 + }
21941 +diff --git a/net/sunrpc/netns.h b/net/sunrpc/netns.h
21942 +index 74d948f..779742c 100644
21943 +--- a/net/sunrpc/netns.h
21944 ++++ b/net/sunrpc/netns.h
21945 +@@ -23,6 +23,7 @@ struct sunrpc_net {
21946 + struct rpc_clnt *rpcb_local_clnt4;
21947 + spinlock_t rpcb_clnt_lock;
21948 + unsigned int rpcb_users;
21949 ++ unsigned int rpcb_is_af_local : 1;
21950 +
21951 + struct mutex gssp_lock;
21952 + wait_queue_head_t gssp_wq;
21953 +diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
21954 +index 3df764d..1891a10 100644
21955 +--- a/net/sunrpc/rpcb_clnt.c
21956 ++++ b/net/sunrpc/rpcb_clnt.c
21957 +@@ -204,13 +204,15 @@ void rpcb_put_local(struct net *net)
21958 + }
21959 +
21960 + static void rpcb_set_local(struct net *net, struct rpc_clnt *clnt,
21961 +- struct rpc_clnt *clnt4)
21962 ++ struct rpc_clnt *clnt4,
21963 ++ bool is_af_local)
21964 + {
21965 + struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
21966 +
21967 + /* Protected by rpcb_create_local_mutex */
21968 + sn->rpcb_local_clnt = clnt;
21969 + sn->rpcb_local_clnt4 = clnt4;
21970 ++ sn->rpcb_is_af_local = is_af_local ? 1 : 0;
21971 + smp_wmb();
21972 + sn->rpcb_users = 1;
21973 + dprintk("RPC: created new rpcb local clients (rpcb_local_clnt: "
21974 +@@ -238,6 +240,14 @@ static int rpcb_create_local_unix(struct net *net)
21975 + .program = &rpcb_program,
21976 + .version = RPCBVERS_2,
21977 + .authflavor = RPC_AUTH_NULL,
21978 ++ /*
21979 ++ * We turn off the idle timeout to prevent the kernel
21980 ++ * from automatically disconnecting the socket.
21981 ++ * Otherwise, we'd have to cache the mount namespace
21982 ++ * of the caller and somehow pass that to the socket
21983 ++ * reconnect code.
21984 ++ */
21985 ++ .flags = RPC_CLNT_CREATE_NO_IDLE_TIMEOUT,
21986 + };
21987 + struct rpc_clnt *clnt, *clnt4;
21988 + int result = 0;
21989 +@@ -263,7 +273,7 @@ static int rpcb_create_local_unix(struct net *net)
21990 + clnt4 = NULL;
21991 + }
21992 +
21993 +- rpcb_set_local(net, clnt, clnt4);
21994 ++ rpcb_set_local(net, clnt, clnt4, true);
21995 +
21996 + out:
21997 + return result;
21998 +@@ -315,7 +325,7 @@ static int rpcb_create_local_net(struct net *net)
21999 + clnt4 = NULL;
22000 + }
22001 +
22002 +- rpcb_set_local(net, clnt, clnt4);
22003 ++ rpcb_set_local(net, clnt, clnt4, false);
22004 +
22005 + out:
22006 + return result;
22007 +@@ -376,13 +386,16 @@ static struct rpc_clnt *rpcb_create(struct net *net, const char *hostname,
22008 + return rpc_create(&args);
22009 + }
22010 +
22011 +-static int rpcb_register_call(struct rpc_clnt *clnt, struct rpc_message *msg)
22012 ++static int rpcb_register_call(struct sunrpc_net *sn, struct rpc_clnt *clnt, struct rpc_message *msg, bool is_set)
22013 + {
22014 +- int result, error = 0;
22015 ++ int flags = RPC_TASK_NOCONNECT;
22016 ++ int error, result = 0;
22017 +
22018 ++ if (is_set || !sn->rpcb_is_af_local)
22019 ++ flags = RPC_TASK_SOFTCONN;
22020 + msg->rpc_resp = &result;
22021 +
22022 +- error = rpc_call_sync(clnt, msg, RPC_TASK_SOFTCONN);
22023 ++ error = rpc_call_sync(clnt, msg, flags);
22024 + if (error < 0) {
22025 + dprintk("RPC: failed to contact local rpcbind "
22026 + "server (errno %d).\n", -error);
22027 +@@ -439,16 +452,19 @@ int rpcb_register(struct net *net, u32 prog, u32 vers, int prot, unsigned short
22028 + .rpc_argp = &map,
22029 + };
22030 + struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
22031 ++ bool is_set = false;
22032 +
22033 + dprintk("RPC: %sregistering (%u, %u, %d, %u) with local "
22034 + "rpcbind\n", (port ? "" : "un"),
22035 + prog, vers, prot, port);
22036 +
22037 + msg.rpc_proc = &rpcb_procedures2[RPCBPROC_UNSET];
22038 +- if (port)
22039 ++ if (port != 0) {
22040 + msg.rpc_proc = &rpcb_procedures2[RPCBPROC_SET];
22041 ++ is_set = true;
22042 ++ }
22043 +
22044 +- return rpcb_register_call(sn->rpcb_local_clnt, &msg);
22045 ++ return rpcb_register_call(sn, sn->rpcb_local_clnt, &msg, is_set);
22046 + }
22047 +
22048 + /*
22049 +@@ -461,6 +477,7 @@ static int rpcb_register_inet4(struct sunrpc_net *sn,
22050 + const struct sockaddr_in *sin = (const struct sockaddr_in *)sap;
22051 + struct rpcbind_args *map = msg->rpc_argp;
22052 + unsigned short port = ntohs(sin->sin_port);
22053 ++ bool is_set = false;
22054 + int result;
22055 +
22056 + map->r_addr = rpc_sockaddr2uaddr(sap, GFP_KERNEL);
22057 +@@ -471,10 +488,12 @@ static int rpcb_register_inet4(struct sunrpc_net *sn,
22058 + map->r_addr, map->r_netid);
22059 +
22060 + msg->rpc_proc = &rpcb_procedures4[RPCBPROC_UNSET];
22061 +- if (port)
22062 ++ if (port != 0) {
22063 + msg->rpc_proc = &rpcb_procedures4[RPCBPROC_SET];
22064 ++ is_set = true;
22065 ++ }
22066 +
22067 +- result = rpcb_register_call(sn->rpcb_local_clnt4, msg);
22068 ++ result = rpcb_register_call(sn, sn->rpcb_local_clnt4, msg, is_set);
22069 + kfree(map->r_addr);
22070 + return result;
22071 + }
22072 +@@ -489,6 +508,7 @@ static int rpcb_register_inet6(struct sunrpc_net *sn,
22073 + const struct sockaddr_in6 *sin6 = (const struct sockaddr_in6 *)sap;
22074 + struct rpcbind_args *map = msg->rpc_argp;
22075 + unsigned short port = ntohs(sin6->sin6_port);
22076 ++ bool is_set = false;
22077 + int result;
22078 +
22079 + map->r_addr = rpc_sockaddr2uaddr(sap, GFP_KERNEL);
22080 +@@ -499,10 +519,12 @@ static int rpcb_register_inet6(struct sunrpc_net *sn,
22081 + map->r_addr, map->r_netid);
22082 +
22083 + msg->rpc_proc = &rpcb_procedures4[RPCBPROC_UNSET];
22084 +- if (port)
22085 ++ if (port != 0) {
22086 + msg->rpc_proc = &rpcb_procedures4[RPCBPROC_SET];
22087 ++ is_set = true;
22088 ++ }
22089 +
22090 +- result = rpcb_register_call(sn->rpcb_local_clnt4, msg);
22091 ++ result = rpcb_register_call(sn, sn->rpcb_local_clnt4, msg, is_set);
22092 + kfree(map->r_addr);
22093 + return result;
22094 + }
22095 +@@ -519,7 +541,7 @@ static int rpcb_unregister_all_protofamilies(struct sunrpc_net *sn,
22096 + map->r_addr = "";
22097 + msg->rpc_proc = &rpcb_procedures4[RPCBPROC_UNSET];
22098 +
22099 +- return rpcb_register_call(sn->rpcb_local_clnt4, msg);
22100 ++ return rpcb_register_call(sn, sn->rpcb_local_clnt4, msg, false);
22101 + }
22102 +
22103 + /**
22104 +diff --git a/sound/usb/6fire/comm.c b/sound/usb/6fire/comm.c
22105 +index 9e6e3ff..23452ee 100644
22106 +--- a/sound/usb/6fire/comm.c
22107 ++++ b/sound/usb/6fire/comm.c
22108 +@@ -110,19 +110,37 @@ static int usb6fire_comm_send_buffer(u8 *buffer, struct usb_device *dev)
22109 + static int usb6fire_comm_write8(struct comm_runtime *rt, u8 request,
22110 + u8 reg, u8 value)
22111 + {
22112 +- u8 buffer[13]; /* 13: maximum length of message */
22113 ++ u8 *buffer;
22114 ++ int ret;
22115 ++
22116 ++ /* 13: maximum length of message */
22117 ++ buffer = kmalloc(13, GFP_KERNEL);
22118 ++ if (!buffer)
22119 ++ return -ENOMEM;
22120 +
22121 + usb6fire_comm_init_buffer(buffer, 0x00, request, reg, value, 0x00);
22122 +- return usb6fire_comm_send_buffer(buffer, rt->chip->dev);
22123 ++ ret = usb6fire_comm_send_buffer(buffer, rt->chip->dev);
22124 ++
22125 ++ kfree(buffer);
22126 ++ return ret;
22127 + }
22128 +
22129 + static int usb6fire_comm_write16(struct comm_runtime *rt, u8 request,
22130 + u8 reg, u8 vl, u8 vh)
22131 + {
22132 +- u8 buffer[13]; /* 13: maximum length of message */
22133 ++ u8 *buffer;
22134 ++ int ret;
22135 ++
22136 ++ /* 13: maximum length of message */
22137 ++ buffer = kmalloc(13, GFP_KERNEL);
22138 ++ if (!buffer)
22139 ++ return -ENOMEM;
22140 +
22141 + usb6fire_comm_init_buffer(buffer, 0x00, request, reg, vl, vh);
22142 +- return usb6fire_comm_send_buffer(buffer, rt->chip->dev);
22143 ++ ret = usb6fire_comm_send_buffer(buffer, rt->chip->dev);
22144 ++
22145 ++ kfree(buffer);
22146 ++ return ret;
22147 + }
22148 +
22149 + int usb6fire_comm_init(struct sfire_chip *chip)
22150 +@@ -135,6 +153,12 @@ int usb6fire_comm_init(struct sfire_chip *chip)
22151 + if (!rt)
22152 + return -ENOMEM;
22153 +
22154 ++ rt->receiver_buffer = kzalloc(COMM_RECEIVER_BUFSIZE, GFP_KERNEL);
22155 ++ if (!rt->receiver_buffer) {
22156 ++ kfree(rt);
22157 ++ return -ENOMEM;
22158 ++ }
22159 ++
22160 + urb = &rt->receiver;
22161 + rt->serial = 1;
22162 + rt->chip = chip;
22163 +@@ -153,6 +177,7 @@ int usb6fire_comm_init(struct sfire_chip *chip)
22164 + urb->interval = 1;
22165 + ret = usb_submit_urb(urb, GFP_KERNEL);
22166 + if (ret < 0) {
22167 ++ kfree(rt->receiver_buffer);
22168 + kfree(rt);
22169 + snd_printk(KERN_ERR PREFIX "cannot create comm data receiver.");
22170 + return ret;
22171 +@@ -171,6 +196,9 @@ void usb6fire_comm_abort(struct sfire_chip *chip)
22172 +
22173 + void usb6fire_comm_destroy(struct sfire_chip *chip)
22174 + {
22175 +- kfree(chip->comm);
22176 ++ struct comm_runtime *rt = chip->comm;
22177 ++
22178 ++ kfree(rt->receiver_buffer);
22179 ++ kfree(rt);
22180 + chip->comm = NULL;
22181 + }
22182 +diff --git a/sound/usb/6fire/comm.h b/sound/usb/6fire/comm.h
22183 +index 6a0840b..780d5ed 100644
22184 +--- a/sound/usb/6fire/comm.h
22185 ++++ b/sound/usb/6fire/comm.h
22186 +@@ -24,7 +24,7 @@ struct comm_runtime {
22187 + struct sfire_chip *chip;
22188 +
22189 + struct urb receiver;
22190 +- u8 receiver_buffer[COMM_RECEIVER_BUFSIZE];
22191 ++ u8 *receiver_buffer;
22192 +
22193 + u8 serial; /* urb serial */
22194 +
22195 +diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
22196 +index 7a444b5..659950e 100644
22197 +--- a/sound/usb/endpoint.c
22198 ++++ b/sound/usb/endpoint.c
22199 +@@ -591,17 +591,16 @@ static int data_ep_set_params(struct snd_usb_endpoint *ep,
22200 + ep->stride = frame_bits >> 3;
22201 + ep->silence_value = pcm_format == SNDRV_PCM_FORMAT_U8 ? 0x80 : 0;
22202 +
22203 +- /* calculate max. frequency */
22204 +- if (ep->maxpacksize) {
22205 ++ /* assume max. frequency is 25% higher than nominal */
22206 ++ ep->freqmax = ep->freqn + (ep->freqn >> 2);
22207 ++ maxsize = ((ep->freqmax + 0xffff) * (frame_bits >> 3))
22208 ++ >> (16 - ep->datainterval);
22209 ++ /* but wMaxPacketSize might reduce this */
22210 ++ if (ep->maxpacksize && ep->maxpacksize < maxsize) {
22211 + /* whatever fits into a max. size packet */
22212 + maxsize = ep->maxpacksize;
22213 + ep->freqmax = (maxsize / (frame_bits >> 3))
22214 + << (16 - ep->datainterval);
22215 +- } else {
22216 +- /* no max. packet size: just take 25% higher than nominal */
22217 +- ep->freqmax = ep->freqn + (ep->freqn >> 2);
22218 +- maxsize = ((ep->freqmax + 0xffff) * (frame_bits >> 3))
22219 +- >> (16 - ep->datainterval);
22220 + }
22221 +
22222 + if (ep->fill_max)
22223
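As an aside on the last hunk above: data_ep_set_params() now always starts from the 25%-above-nominal estimate and only afterwards lets wMaxPacketSize shrink maxsize and freqmax, so an over-sized wMaxPacketSize can no longer inflate freqmax beyond that margin. A self-contained toy version of the arithmetic (all values here are hypothetical; the real driver derives freqn from the endpoint's nominal rate in 16.16 fixed point):

#include <stdio.h>

int main(void)
{
	/* hypothetical endpoint parameters, 16.16 fixed point for rates */
	unsigned int freqn = 6 << 16;		/* nominal frames per packet */
	unsigned int frame_bits = 32;		/* 16-bit stereo frame */
	unsigned int datainterval = 0;
	unsigned int maxpacksize = 28;		/* wMaxPacketSize from the descriptor */

	/* new order: assume max. frequency is 25% higher than nominal... */
	unsigned int freqmax = freqn + (freqn >> 2);
	unsigned int maxsize = ((freqmax + 0xffff) * (frame_bits >> 3))
					>> (16 - datainterval);

	/* ...and only then let wMaxPacketSize reduce it */
	if (maxpacksize && maxpacksize < maxsize) {
		maxsize = maxpacksize;
		freqmax = (maxsize / (frame_bits >> 3))
					<< (16 - datainterval);
	}

	printf("maxsize=%u bytes, freqmax=%u.%02u frames/packet\n",
	       maxsize, freqmax >> 16, ((freqmax & 0xffff) * 100) >> 16);
	return 0;
}
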
22224 Added: genpatches-2.6/trunk/3.10.7/1500_XATTR_USER_PREFIX.patch
22225 ===================================================================
22226 --- genpatches-2.6/trunk/3.10.7/1500_XATTR_USER_PREFIX.patch (rev 0)
22227 +++ genpatches-2.6/trunk/3.10.7/1500_XATTR_USER_PREFIX.patch 2013-08-29 12:09:12 UTC (rev 2497)
22228 @@ -0,0 +1,54 @@
22229 +From: Anthony G. Basile <blueness@g.o>
22230 +
22231 +This patch adds support for a restricted user-controlled namespace on the
22232 +tmpfs filesystem, used to house PaX flags. The namespace must be of the
22233 +form user.pax.* and its value cannot exceed a size of 8 bytes.
22234 +
22235 +This is needed even on all Gentoo systems so that XATTR_PAX flags
22236 +are preserved for users who might build packages using portage on
22237 +a tmpfs system with a non-hardened kernel and then switch to a
22238 +hardened kernel with XATTR_PAX enabled.
22239 +
22240 +The namespace is added to any user with Extended Attribute support
22241 +enabled for tmpfs. Users who do not enable xattrs will not have
22242 +the XATTR_PAX flags preserved.
22243 +
22244 +diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
22245 +index e4629b9..6958086 100644
22246 +--- a/include/uapi/linux/xattr.h
22247 ++++ b/include/uapi/linux/xattr.h
22248 +@@ -63,5 +63,9 @@
22249 + #define XATTR_POSIX_ACL_DEFAULT "posix_acl_default"
22250 + #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
22251 +
22252 ++/* User namespace */
22253 ++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
22254 ++#define XATTR_PAX_FLAGS_SUFFIX "flags"
22255 ++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
22256 +
22257 + #endif /* _UAPI_LINUX_XATTR_H */
22258 +diff --git a/mm/shmem.c b/mm/shmem.c
22259 +index 1c44af7..f23bb1b 100644
22260 +--- a/mm/shmem.c
22261 ++++ b/mm/shmem.c
22262 +@@ -2201,6 +2201,7 @@ static const struct xattr_handler *shmem_xattr_handlers[] = {
22263 + static int shmem_xattr_validate(const char *name)
22264 + {
22265 + struct { const char *prefix; size_t len; } arr[] = {
22266 ++ { XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN},
22267 + { XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN },
22268 + { XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN }
22269 + };
22270 +@@ -2256,6 +2257,12 @@ static int shmem_setxattr(struct dentry *dentry, const char *name,
22271 + if (err)
22272 + return err;
22273 +
22274 ++ if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
22275 ++ if (strcmp(name, XATTR_NAME_PAX_FLAGS))
22276 ++ return -EOPNOTSUPP;
22277 ++ if (size > 8)
22278 ++ return -EINVAL;
22279 ++ }
22280 + return simple_xattr_set(&info->xattrs, name, value, size, flags);
22281 + }
22282 +
22283
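The practical effect of the new checks in shmem_setxattr() above can be sketched from user space. The tmpfs path and PaX flag string below are hypothetical (and CONFIG_TMPFS_XATTR must be enabled, as the description notes); the errno values follow directly from the added code, EOPNOTSUPP for any user.* name other than user.pax.flags and EINVAL for values longer than 8 bytes:

#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

int main(void)
{
	const char *path = "/tmp/testfile";	/* a file on a tmpfs mount */

	/* user.pax.flags with a value of at most 8 bytes: accepted */
	if (setxattr(path, "user.pax.flags", "pemrs", strlen("pemrs"), 0))
		perror("user.pax.flags");

	/* any other user.* name: rejected with EOPNOTSUPP */
	if (setxattr(path, "user.comment", "hi", 2, 0))
		perror("user.comment");

	/* user.pax.flags with a value longer than 8 bytes: rejected with EINVAL */
	if (setxattr(path, "user.pax.flags", "123456789", 9, 0))
		perror("long user.pax.flags");

	return 0;
}
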
22284 Added: genpatches-2.6/trunk/3.10.7/1700_enable-thinkpad-micled.patch
22285 ===================================================================
22286 --- genpatches-2.6/trunk/3.10.7/1700_enable-thinkpad-micled.patch (rev 0)
22287 +++ genpatches-2.6/trunk/3.10.7/1700_enable-thinkpad-micled.patch 2013-08-29 12:09:12 UTC (rev 2497)
22288 @@ -0,0 +1,23 @@
22289 +--- a/drivers/platform/x86/thinkpad_acpi.c 2013-02-06 08:46:53.546168051 -0500
22290 ++++ b/drivers/platform/x86/thinkpad_acpi.c 2013-02-06 08:52:16.846933455 -0500
22291 +@@ -5056,8 +5056,10 @@ static const char * const tpacpi_led_nam
22292 + "tpacpi::unknown_led2",
22293 + "tpacpi::unknown_led3",
22294 + "tpacpi::thinkvantage",
22295 ++ "tpacpi::unknown_led4",
22296 ++ "tpacpi::micmute",
22297 + };
22298 +-#define TPACPI_SAFE_LEDS 0x1081U
22299 ++#define TPACPI_SAFE_LEDS 0x5081U
22300 +
22301 + static inline bool tpacpi_is_led_restricted(const unsigned int led)
22302 + {
22303 +@@ -5280,7 +5282,7 @@ static const struct tpacpi_quirk led_use
22304 + { /* Lenovo */
22305 + .vendor = PCI_VENDOR_ID_LENOVO,
22306 + .bios = TPACPI_MATCH_ANY, .ec = TPACPI_MATCH_ANY,
22307 +- .quirks = 0x1fffU,
22308 ++ .quirks = 0x5fffU,
22309 + },
22310 + { /* IBM ThinkPads with no EC version string */
22311 + .vendor = PCI_VENDOR_ID_IBM,
22312
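For the two masks touched above: 0x1081U whitelists LED indices 0, 7 and 12, while 0x5081U additionally whitelists index 14, which the extended names table calls tpacpi::micmute. A stand-alone check of that bit arithmetic (the restriction test only approximates the driver's tpacpi_is_led_restricted() and is shown purely for illustration):

#include <stdio.h>

#define TPACPI_SAFE_LEDS_OLD	0x1081U
#define TPACPI_SAFE_LEDS_NEW	0x5081U

/* an LED is restricted when its bit is not set in the safe-LED mask */
static int restricted(unsigned int mask, unsigned int led)
{
	return !((1U << led) & mask);
}

int main(void)
{
	unsigned int micmute_led = 14;	/* index of "tpacpi::micmute" */

	printf("old mask: micmute restricted = %d\n",
	       restricted(TPACPI_SAFE_LEDS_OLD, micmute_led));
	printf("new mask: micmute restricted = %d\n",
	       restricted(TPACPI_SAFE_LEDS_NEW, micmute_led));
	return 0;
}
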
22313 Added: genpatches-2.6/trunk/3.10.7/1801_block-cgroups-kconfig-build-bits-for-BFQ-v6r2-3.10.patch
22314 ===================================================================
22315 --- genpatches-2.6/trunk/3.10.7/1801_block-cgroups-kconfig-build-bits-for-BFQ-v6r2-3.10.patch (rev 0)
22316 +++ genpatches-2.6/trunk/3.10.7/1801_block-cgroups-kconfig-build-bits-for-BFQ-v6r2-3.10.patch 2013-08-29 12:09:12 UTC (rev 2497)
22317 @@ -0,0 +1,97 @@
22318 +From 13fa5ddac2963e304e90c5beb4bc996e3557479d Mon Sep 17 00:00:00 2001
22319 +From: Matteo Bernardini <matteo.bernardini@×××××.com>
22320 +Date: Thu, 9 May 2013 18:58:50 +0200
22321 +Subject: [PATCH 1/3] block: cgroups, kconfig, build bits for BFQ-v6r2-3.10
22322 +
22323 +Update Kconfig.iosched and do the related Makefile changes to include
22324 +kernel configuration options for BFQ. Also add the bfqio controller
22325 +to the cgroups subsystem.
22326 +
22327 +Signed-off-by: Paolo Valente <paolo.valente@×××××××.it>
22328 +Signed-off-by: Arianna Avanzini <avanzini.arianna@×××××.com>
22329 +Signed-off-by: Matteo Bernardini <matteo.bernardini@×××××.com>
22330 +---
22331 + block/Kconfig.iosched | 25 +++++++++++++++++++++++++
22332 + block/Makefile | 1 +
22333 + include/linux/cgroup_subsys.h | 6 ++++++
22334 + 3 files changed, 32 insertions(+)
22335 +
22336 +diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
22337 +index 421bef9..695e064 100644
22338 +--- a/block/Kconfig.iosched
22339 ++++ b/block/Kconfig.iosched
22340 +@@ -39,6 +39,27 @@ config CFQ_GROUP_IOSCHED
22341 + ---help---
22342 + Enable group IO scheduling in CFQ.
22343 +
22344 ++config IOSCHED_BFQ
22345 ++ tristate "BFQ I/O scheduler"
22346 ++ default n
22347 ++ ---help---
22348 ++ The BFQ I/O scheduler tries to distribute bandwidth among
22349 ++ all processes according to their weights.
22350 ++ It aims at distributing the bandwidth as desired, independently of
22351 ++ the disk parameters and with any workload. It also tries to
22352 ++ guarantee low latency to interactive and soft real-time
22353 ++ applications. If compiled built-in (saying Y here), BFQ can
22354 ++ be configured to support hierarchical scheduling.
22355 ++
22356 ++config CGROUP_BFQIO
22357 ++ bool "BFQ hierarchical scheduling support"
22358 ++ depends on CGROUPS && IOSCHED_BFQ=y
22359 ++ default n
22360 ++ ---help---
22361 ++ Enable hierarchical scheduling in BFQ, using the cgroups
22362 ++ filesystem interface. The name of the subsystem will be
22363 ++ bfqio.
22364 ++
22365 + choice
22366 + prompt "Default I/O scheduler"
22367 + default DEFAULT_CFQ
22368 +@@ -52,6 +73,9 @@ choice
22369 + config DEFAULT_CFQ
22370 + bool "CFQ" if IOSCHED_CFQ=y
22371 +
22372 ++ config DEFAULT_BFQ
22373 ++ bool "BFQ" if IOSCHED_BFQ=y
22374 ++
22375 + config DEFAULT_NOOP
22376 + bool "No-op"
22377 +
22378 +@@ -61,6 +85,7 @@ config DEFAULT_IOSCHED
22379 + string
22380 + default "deadline" if DEFAULT_DEADLINE
22381 + default "cfq" if DEFAULT_CFQ
22382 ++ default "bfq" if DEFAULT_BFQ
22383 + default "noop" if DEFAULT_NOOP
22384 +
22385 + endmenu
22386 +diff --git a/block/Makefile b/block/Makefile
22387 +index 39b76ba..c0d20fa 100644
22388 +--- a/block/Makefile
22389 ++++ b/block/Makefile
22390 +@@ -15,6 +15,7 @@ obj-$(CONFIG_BLK_DEV_THROTTLING) += blk-throttle.o
22391 + obj-$(CONFIG_IOSCHED_NOOP) += noop-iosched.o
22392 + obj-$(CONFIG_IOSCHED_DEADLINE) += deadline-iosched.o
22393 + obj-$(CONFIG_IOSCHED_CFQ) += cfq-iosched.o
22394 ++obj-$(CONFIG_IOSCHED_BFQ) += bfq-iosched.o
22395 +
22396 + obj-$(CONFIG_BLOCK_COMPAT) += compat_ioctl.o
22397 + obj-$(CONFIG_BLK_DEV_INTEGRITY) += blk-integrity.o
22398 +diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
22399 +index 6e7ec64..ffa1d1f 100644
22400 +--- a/include/linux/cgroup_subsys.h
22401 ++++ b/include/linux/cgroup_subsys.h
22402 +@@ -84,3 +84,9 @@ SUBSYS(bcache)
22403 + #endif
22404 +
22405 + /* */
22406 ++
22407 ++#ifdef CONFIG_CGROUP_BFQIO
22408 ++SUBSYS(bfqio)
22409 ++#endif
22410 ++
22411 ++/* */
22412 +--
22413 +1.8.1.4
22414 +
22415
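Once the options added above are enabled (CONFIG_IOSCHED_BFQ, plus CONFIG_CGROUP_BFQIO and CONFIG_DEFAULT_BFQ if desired), BFQ can also be selected per device at run time through the usual elevator sysfs attribute. A minimal sketch, assuming a block device named sda and that BFQ was built into the running kernel:

#include <stdio.h>

int main(void)
{
	/* equivalent to: echo bfq > /sys/block/sda/queue/scheduler */
	FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");

	if (!f) {
		perror("scheduler");
		return 1;
	}
	fputs("bfq\n", f);
	fclose(f);
	return 0;
}
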
22416 Added: genpatches-2.6/trunk/3.10.7/1802_block-introduce-the-BFQ-v6r2-I-O-sched-for-3.10.patch1
22417 ===================================================================
22418 --- genpatches-2.6/trunk/3.10.7/1802_block-introduce-the-BFQ-v6r2-I-O-sched-for-3.10.patch1 (rev 0)
22419 +++ genpatches-2.6/trunk/3.10.7/1802_block-introduce-the-BFQ-v6r2-I-O-sched-for-3.10.patch1 2013-08-29 12:09:12 UTC (rev 2497)
22420 @@ -0,0 +1,5775 @@
22421 +From 2e949c3d4d8ba2af46dcedc80707ebba277d759f Mon Sep 17 00:00:00 2001
22422 +From: Arianna Avanzini <avanzini.arianna@×××××.com>
22423 +Date: Thu, 9 May 2013 19:10:02 +0200
22424 +Subject: [PATCH 2/3] block: introduce the BFQ-v6r2 I/O sched for 3.10
22425 +
22426 +Add the BFQ-v6r2 I/O scheduler to 3.10.
22427 +The general structure is borrowed from CFQ, as is much of the code. A (bfq_)queue
22428 +is associated to each task doing I/O on a device, and each time a
22429 +scheduling decision has to be made a queue is selected and served until
22430 +it expires.
22431 +
22432 + - Slices are given in the service domain: tasks are assigned
22433 +   budgets, measured in number of sectors. Once granted the disk, a task
22434 + must however consume its assigned budget within a configurable
22435 + maximum time (by default, the maximum possible value of the
22436 + budgets is automatically computed to comply with this timeout).
22437 + This allows the desired latency vs "throughput boosting" tradeoff
22438 + to be set.
22439 +
22440 + - Budgets are scheduled according to a variant of WF2Q+, implemented
22441 + using an augmented rb-tree to take eligibility into account while
22442 + preserving an O(log N) overall complexity.
22443 +
22444 + - A low-latency tunable is provided; if enabled, both interactive
22445 + and soft real-time applications are guaranteed very low latency.
22446 +
22447 + - Latency guarantees are preserved also in presence of NCQ.
22448 +
22449 + - Also with flash-based devices, a high throughput is achieved while
22450 + still preserving latency guarantees.
22451 +
22452 + - Useful features borrowed from CFQ: cooperating-queues merging (with
22453 + some additional optimizations with respect to the original CFQ version),
22454 + static fallback queue for OOM.
22455 +
22456 + - BFQ supports full hierarchical scheduling, exporting a cgroups
22457 + interface. Each node has a full scheduler, so each group can
22458 + be assigned its own ioprio (mapped to a weight, see next point)
22459 + and an ioprio_class.
22460 +
22461 + - If the cgroups interface is used, weights can be explicitly
22462 + assigned, otherwise ioprio values are mapped to weights using the
22463 + relation weight = IOPRIO_BE_NR - ioprio.
22464 +
22465 + - ioprio classes are served in strict priority order, i.e., lower
22466 + priority queues are not served as long as there are higher
22467 + priority queues. Among queues in the same class the bandwidth is
22468 + distributed in proportion to the weight of each queue. A very
22469 + thin extra bandwidth is however guaranteed to the Idle class, to
22470 + prevent it from starving.
22471 +
22472 +Signed-off-by: Paolo Valente <paolo.valente@×××××××.it>
22473 +Signed-off-by: Arianna Avanzini <avanzini.arianna@×××××.com>
22474 +---
22475 + block/bfq-cgroup.c | 881 ++++++++++++
22476 + block/bfq-ioc.c | 36 +
22477 + block/bfq-iosched.c | 3070 +++++++++++++++++++++++++++++++++++++++++
22478 + block/bfq-sched.c | 1072 ++++++++++++++
22479 + block/bfq.h | 603 ++++++++
22480 + include/linux/cgroup_subsys.h | 2 +-
22481 + 6 files changed, 5663 insertions(+), 1 deletion(-)
22482 + create mode 100644 block/bfq-cgroup.c
22483 + create mode 100644 block/bfq-ioc.c
22484 + create mode 100644 block/bfq-iosched.c
22485 + create mode 100644 block/bfq-sched.c
22486 + create mode 100644 block/bfq.h
22487 +
22488 +diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
22489 +new file mode 100644
22490 +index 0000000..6d57239
22491 +--- /dev/null
22492 ++++ b/block/bfq-cgroup.c
22493 +@@ -0,0 +1,881 @@
22494 ++/*
22495 ++ * BFQ: CGROUPS support.
22496 ++ *
22497 ++ * Based on ideas and code from CFQ:
22498 ++ * Copyright (C) 2003 Jens Axboe <axboe@××××××.dk>
22499 ++ *
22500 ++ * Copyright (C) 2008 Fabio Checconi <fabio@×××××××××××××.it>
22501 ++ * Paolo Valente <paolo.valente@×××××××.it>
22502 ++ *
22503 ++ * Copyright (C) 2010 Paolo Valente <paolo.valente@×××××××.it>
22504 ++ *
22505 ++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ file.
22506 ++ */
22507 ++
22508 ++#ifdef CONFIG_CGROUP_BFQIO
22509 ++
22510 ++static DEFINE_MUTEX(bfqio_mutex);
22511 ++
22512 ++static bool bfqio_is_removed(struct cgroup *cgroup)
22513 ++{
22514 ++ return test_bit(CGRP_REMOVED, &cgroup->flags);
22515 ++}
22516 ++
22517 ++static struct bfqio_cgroup bfqio_root_cgroup = {
22518 ++ .weight = BFQ_DEFAULT_GRP_WEIGHT,
22519 ++ .ioprio = BFQ_DEFAULT_GRP_IOPRIO,
22520 ++ .ioprio_class = BFQ_DEFAULT_GRP_CLASS,
22521 ++};
22522 ++
22523 ++static inline void bfq_init_entity(struct bfq_entity *entity,
22524 ++ struct bfq_group *bfqg)
22525 ++{
22526 ++ entity->weight = entity->new_weight;
22527 ++ entity->orig_weight = entity->new_weight;
22528 ++ entity->ioprio = entity->new_ioprio;
22529 ++ entity->ioprio_class = entity->new_ioprio_class;
22530 ++ entity->parent = bfqg->my_entity;
22531 ++ entity->sched_data = &bfqg->sched_data;
22532 ++}
22533 ++
22534 ++static struct bfqio_cgroup *cgroup_to_bfqio(struct cgroup *cgroup)
22535 ++{
22536 ++ return container_of(cgroup_subsys_state(cgroup, bfqio_subsys_id),
22537 ++ struct bfqio_cgroup, css);
22538 ++}
22539 ++
22540 ++/*
22541 ++ * Search the bfq_group for bfqd into the hash table (by now only a list)
22542 ++ * of bgrp. Must be called under rcu_read_lock().
22543 ++ */
22544 ++static struct bfq_group *bfqio_lookup_group(struct bfqio_cgroup *bgrp,
22545 ++ struct bfq_data *bfqd)
22546 ++{
22547 ++ struct bfq_group *bfqg;
22548 ++ void *key;
22549 ++
22550 ++ hlist_for_each_entry_rcu(bfqg, &bgrp->group_data, group_node) {
22551 ++ key = rcu_dereference(bfqg->bfqd);
22552 ++ if (key == bfqd)
22553 ++ return bfqg;
22554 ++ }
22555 ++
22556 ++ return NULL;
22557 ++}
22558 ++
22559 ++static inline void bfq_group_init_entity(struct bfqio_cgroup *bgrp,
22560 ++ struct bfq_group *bfqg)
22561 ++{
22562 ++ struct bfq_entity *entity = &bfqg->entity;
22563 ++
22564 ++ /*
22565 ++ * If the weight of the entity has never been set via the sysfs
22566 ++ * interface, then bgrp->weight == 0. In this case we initialize
22567 ++ * the weight from the current ioprio value. Otherwise, the group
22568 ++ * weight, if set, has priority over the ioprio value.
22569 ++ */
22570 ++ if (bgrp->weight == 0) {
22571 ++ entity->new_weight = bfq_ioprio_to_weight(bgrp->ioprio);
22572 ++ entity->new_ioprio = bgrp->ioprio;
22573 ++ } else {
22574 ++ entity->new_weight = bgrp->weight;
22575 ++ entity->new_ioprio = bfq_weight_to_ioprio(bgrp->weight);
22576 ++ }
22577 ++ entity->orig_weight = entity->weight = entity->new_weight;
22578 ++ entity->ioprio = entity->new_ioprio;
22579 ++ entity->ioprio_class = entity->new_ioprio_class = bgrp->ioprio_class;
22580 ++ entity->my_sched_data = &bfqg->sched_data;
22581 ++}
22582 ++
22583 ++static inline void bfq_group_set_parent(struct bfq_group *bfqg,
22584 ++ struct bfq_group *parent)
22585 ++{
22586 ++ struct bfq_entity *entity;
22587 ++
22588 ++ BUG_ON(parent == NULL);
22589 ++ BUG_ON(bfqg == NULL);
22590 ++
22591 ++ entity = &bfqg->entity;
22592 ++ entity->parent = parent->my_entity;
22593 ++ entity->sched_data = &parent->sched_data;
22594 ++}
22595 ++
22596 ++/**
22597 ++ * bfq_group_chain_alloc - allocate a chain of groups.
22598 ++ * @bfqd: queue descriptor.
22599 ++ * @cgroup: the leaf cgroup this chain starts from.
22600 ++ *
22601 ++ * Allocate a chain of groups starting from the one belonging to
22602 ++ * @cgroup up to the root cgroup. Stop if a cgroup on the chain
22603 ++ * to the root already has an allocated group on @bfqd.
22604 ++ */
22605 ++static struct bfq_group *bfq_group_chain_alloc(struct bfq_data *bfqd,
22606 ++ struct cgroup *cgroup)
22607 ++{
22608 ++ struct bfqio_cgroup *bgrp;
22609 ++ struct bfq_group *bfqg, *prev = NULL, *leaf = NULL;
22610 ++
22611 ++ for (; cgroup != NULL; cgroup = cgroup->parent) {
22612 ++ bgrp = cgroup_to_bfqio(cgroup);
22613 ++
22614 ++ bfqg = bfqio_lookup_group(bgrp, bfqd);
22615 ++ if (bfqg != NULL) {
22616 ++ /*
22617 ++ * All the cgroups in the path from there to the
22618 ++ * root must have a bfq_group for bfqd, so we don't
22619 ++ * need any more allocations.
22620 ++ */
22621 ++ break;
22622 ++ }
22623 ++
22624 ++ bfqg = kzalloc(sizeof(*bfqg), GFP_ATOMIC);
22625 ++ if (bfqg == NULL)
22626 ++ goto cleanup;
22627 ++
22628 ++ bfq_group_init_entity(bgrp, bfqg);
22629 ++ bfqg->my_entity = &bfqg->entity;
22630 ++
22631 ++ if (leaf == NULL) {
22632 ++ leaf = bfqg;
22633 ++ prev = leaf;
22634 ++ } else {
22635 ++ bfq_group_set_parent(prev, bfqg);
22636 ++ /*
22637 ++ * Build a list of allocated nodes using the bfqd
22638 ++ * filed, that is still unused and will be initialized
22639 ++ * only after the node will be connected.
22640 ++ */
22641 ++ prev->bfqd = bfqg;
22642 ++ prev = bfqg;
22643 ++ }
22644 ++ }
22645 ++
22646 ++ return leaf;
22647 ++
22648 ++cleanup:
22649 ++ while (leaf != NULL) {
22650 ++ prev = leaf;
22651 ++ leaf = leaf->bfqd;
22652 ++ kfree(prev);
22653 ++ }
22654 ++
22655 ++ return NULL;
22656 ++}
22657 ++
22658 ++/**
22659 ++ * bfq_group_chain_link - link an allocated group chain to a cgroup hierarchy.
22660 ++ * @bfqd: the queue descriptor.
22661 ++ * @cgroup: the leaf cgroup to start from.
22662 ++ * @leaf: the leaf group (to be associated to @cgroup).
22663 ++ *
22664 ++ * Try to link a chain of groups to a cgroup hierarchy, connecting the
22665 ++ * nodes bottom-up, so we can be sure that when we find a cgroup in the
22666 ++ * hierarchy that already has a group associated to @bfqd all the nodes
22667 ++ * in the path to the root cgroup have one too.
22668 ++ *
22669 ++ * On locking: the queue lock protects the hierarchy (there is a hierarchy
22670 ++ * per device) while the bfqio_cgroup lock protects the list of groups
22671 ++ * belonging to the same cgroup.
22672 ++ */
22673 ++static void bfq_group_chain_link(struct bfq_data *bfqd, struct cgroup *cgroup,
22674 ++ struct bfq_group *leaf)
22675 ++{
22676 ++ struct bfqio_cgroup *bgrp;
22677 ++ struct bfq_group *bfqg, *next, *prev = NULL;
22678 ++ unsigned long flags;
22679 ++
22680 ++ assert_spin_locked(bfqd->queue->queue_lock);
22681 ++
22682 ++ for (; cgroup != NULL && leaf != NULL; cgroup = cgroup->parent) {
22683 ++ bgrp = cgroup_to_bfqio(cgroup);
22684 ++ next = leaf->bfqd;
22685 ++
22686 ++ bfqg = bfqio_lookup_group(bgrp, bfqd);
22687 ++ BUG_ON(bfqg != NULL);
22688 ++
22689 ++ spin_lock_irqsave(&bgrp->lock, flags);
22690 ++
22691 ++ rcu_assign_pointer(leaf->bfqd, bfqd);
22692 ++ hlist_add_head_rcu(&leaf->group_node, &bgrp->group_data);
22693 ++ hlist_add_head(&leaf->bfqd_node, &bfqd->group_list);
22694 ++
22695 ++ spin_unlock_irqrestore(&bgrp->lock, flags);
22696 ++
22697 ++ prev = leaf;
22698 ++ leaf = next;
22699 ++ }
22700 ++
22701 ++ BUG_ON(cgroup == NULL && leaf != NULL);
22702 ++ if (cgroup != NULL && prev != NULL) {
22703 ++ bgrp = cgroup_to_bfqio(cgroup);
22704 ++ bfqg = bfqio_lookup_group(bgrp, bfqd);
22705 ++ bfq_group_set_parent(prev, bfqg);
22706 ++ }
22707 ++}
22708 ++
22709 ++/**
22710 ++ * bfq_find_alloc_group - return the group associated to @bfqd in @cgroup.
22711 ++ * @bfqd: queue descriptor.
22712 ++ * @cgroup: cgroup being searched for.
22713 ++ *
22714 ++ * Return a group associated to @bfqd in @cgroup, allocating one if
22715 ++ * necessary. When a group is returned all the cgroups in the path
22716 ++ * to the root have a group associated to @bfqd.
22717 ++ *
22718 ++ * If the allocation fails, return the root group: this breaks guarantees
22719 ++ * but is a safe fallback. If this loss becomes a problem it can be
22720 ++ * mitigated using the equivalent weight (given by the product of the
22721 ++ * weights of the groups in the path from @group to the root) in the
22722 ++ * root scheduler.
22723 ++ *
22724 ++ * We allocate all the missing nodes in the path from the leaf cgroup
22725 ++ * to the root and we connect the nodes only after all the allocations
22726 ++ * have been successful.
22727 ++ */
22728 ++static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
22729 ++ struct cgroup *cgroup)
22730 ++{
22731 ++ struct bfqio_cgroup *bgrp = cgroup_to_bfqio(cgroup);
22732 ++ struct bfq_group *bfqg;
22733 ++
22734 ++ bfqg = bfqio_lookup_group(bgrp, bfqd);
22735 ++ if (bfqg != NULL)
22736 ++ return bfqg;
22737 ++
22738 ++ bfqg = bfq_group_chain_alloc(bfqd, cgroup);
22739 ++ if (bfqg != NULL)
22740 ++ bfq_group_chain_link(bfqd, cgroup, bfqg);
22741 ++ else
22742 ++ bfqg = bfqd->root_group;
22743 ++
22744 ++ return bfqg;
22745 ++}
22746 ++
22747 ++/**
22748 ++ * bfq_bfqq_move - migrate @bfqq to @bfqg.
22749 ++ * @bfqd: queue descriptor.
22750 ++ * @bfqq: the queue to move.
22751 ++ * @entity: @bfqq's entity.
22752 ++ * @bfqg: the group to move to.
22753 ++ *
22754 ++ * Move @bfqq to @bfqg, deactivating it from its old group and reactivating
22755 ++ * it on the new one. Avoid putting the entity on the old group idle tree.
22756 ++ *
22757 ++ * Must be called under the queue lock; the cgroup owning @bfqg must
22758 ++ * not disappear (by now this just means that we are called under
22759 ++ * rcu_read_lock()).
22760 ++ */
22761 ++static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
22762 ++ struct bfq_entity *entity, struct bfq_group *bfqg)
22763 ++{
22764 ++ int busy, resume;
22765 ++
22766 ++ busy = bfq_bfqq_busy(bfqq);
22767 ++ resume = !RB_EMPTY_ROOT(&bfqq->sort_list);
22768 ++
22769 ++ BUG_ON(resume && !entity->on_st);
22770 ++ BUG_ON(busy && !resume && entity->on_st && bfqq != bfqd->active_queue);
22771 ++
22772 ++ if (busy) {
22773 ++ BUG_ON(atomic_read(&bfqq->ref) < 2);
22774 ++
22775 ++ if (!resume)
22776 ++ bfq_del_bfqq_busy(bfqd, bfqq, 0);
22777 ++ else
22778 ++ bfq_deactivate_bfqq(bfqd, bfqq, 0);
22779 ++ } else if (entity->on_st)
22780 ++ bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
22781 ++
22782 ++ /*
22783 ++ * Here we use a reference to bfqg. We don't need a refcounter
22784 ++ * as the cgroup reference will not be dropped, so that its
22785 ++ * destroy() callback will not be invoked.
22786 ++ */
22787 ++ entity->parent = bfqg->my_entity;
22788 ++ entity->sched_data = &bfqg->sched_data;
22789 ++
22790 ++ if (busy && resume)
22791 ++ bfq_activate_bfqq(bfqd, bfqq);
22792 ++
22793 ++ if (bfqd->active_queue == NULL && !bfqd->rq_in_driver)
22794 ++ bfq_schedule_dispatch(bfqd);
22795 ++}
22796 ++
22797 ++/**
22798 ++ * __bfq_bic_change_cgroup - move @bic to @cgroup.
22799 ++ * @bfqd: the queue descriptor.
22800 ++ * @bic: the bic to move.
22801 ++ * @cgroup: the cgroup to move to.
22802 ++ *
22803 ++ * Move bic to cgroup, assuming that bfqd->queue is locked; the caller
22804 ++ * has to make sure that the reference to cgroup is valid across the call.
22805 ++ *
22806 ++ * NOTE: an alternative approach might have been to store the current
22807 ++ * cgroup in bfqq and getting a reference to it, reducing the lookup
22808 ++ * time here, at the price of slightly more complex code.
22809 ++ */
22810 ++static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
22811 ++ struct bfq_io_cq *bic,
22812 ++ struct cgroup *cgroup)
22813 ++{
22814 ++ struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
22815 ++ struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
22816 ++ struct bfq_entity *entity;
22817 ++ struct bfq_group *bfqg;
22818 ++ struct bfqio_cgroup *bgrp;
22819 ++
22820 ++ bgrp = cgroup_to_bfqio(cgroup);
22821 ++
22822 ++ bfqg = bfq_find_alloc_group(bfqd, cgroup);
22823 ++ if (async_bfqq != NULL) {
22824 ++ entity = &async_bfqq->entity;
22825 ++
22826 ++ if (entity->sched_data != &bfqg->sched_data) {
22827 ++ bic_set_bfqq(bic, NULL, 0);
22828 ++ bfq_log_bfqq(bfqd, async_bfqq,
22829 ++ "bic_change_group: %p %d",
22830 ++ async_bfqq, atomic_read(&async_bfqq->ref));
22831 ++ bfq_put_queue(async_bfqq);
22832 ++ }
22833 ++ }
22834 ++
22835 ++ if (sync_bfqq != NULL) {
22836 ++ entity = &sync_bfqq->entity;
22837 ++ if (entity->sched_data != &bfqg->sched_data)
22838 ++ bfq_bfqq_move(bfqd, sync_bfqq, entity, bfqg);
22839 ++ }
22840 ++
22841 ++ return bfqg;
22842 ++}
22843 ++
22844 ++/**
22845 ++ * bfq_bic_change_cgroup - move @bic to @cgroup.
22846 ++ * @bic: the bic being migrated.
22847 ++ * @cgroup: the destination cgroup.
22848 ++ *
22849 ++ * When the task owning @bic is moved to @cgroup, @bic is immediately
22850 ++ * moved into its new parent group.
22851 ++ */
22852 ++static void bfq_bic_change_cgroup(struct bfq_io_cq *bic,
22853 ++ struct cgroup *cgroup)
22854 ++{
22855 ++ struct bfq_data *bfqd;
22856 ++ unsigned long uninitialized_var(flags);
22857 ++
22858 ++ bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data), &flags);
22859 ++ if (bfqd != NULL) {
22860 ++ __bfq_bic_change_cgroup(bfqd, bic, cgroup);
22861 ++ bfq_put_bfqd_unlock(bfqd, &flags);
22862 ++ }
22863 ++}
22864 ++
22865 ++/**
22866 ++ * bfq_bic_update_cgroup - update the cgroup of @bic.
22867 ++ * @bic: the @bic to update.
22868 ++ *
22869 ++ * Make sure that @bic is enqueued in the cgroup of the current task.
22870 ++ * We need this in addition to moving bics during the cgroup attach
22871 ++ * phase because the task owning @bic could be at its first disk
22872 ++ * access or we may end up in the root cgroup as the result of a
22873 ++ * memory allocation failure and here we try to move to the right
22874 ++ * group.
22875 ++ *
22876 ++ * Must be called under the queue lock. It is safe to use the returned
22877 ++ * value even after the rcu_read_unlock() as the migration/destruction
22878 ++ * paths act under the queue lock too. IOW it is impossible to race with
22879 ++ * group migration/destruction and end up with an invalid group as:
22880 ++ * a) here cgroup has not yet been destroyed, nor its destroy callback
22881 ++ * has started execution, as current holds a reference to it,
22882 ++ * b) if it is destroyed after rcu_read_unlock() [after current is
22883 ++ * migrated to a different cgroup] its attach() callback will have
22884 ++ * taken care of removing all the references to the old cgroup data.
22885 ++ */
22886 ++static struct bfq_group *bfq_bic_update_cgroup(struct bfq_io_cq *bic)
22887 ++{
22888 ++ struct bfq_data *bfqd = bic_to_bfqd(bic);
22889 ++ struct bfq_group *bfqg;
22890 ++ struct cgroup *cgroup;
22891 ++
22892 ++ BUG_ON(bfqd == NULL);
22893 ++
22894 ++ rcu_read_lock();
22895 ++ cgroup = task_cgroup(current, bfqio_subsys_id);
22896 ++ bfqg = __bfq_bic_change_cgroup(bfqd, bic, cgroup);
22897 ++ rcu_read_unlock();
22898 ++
22899 ++ return bfqg;
22900 ++}
22901 ++
22902 ++/**
22903 ++ * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
22904 ++ * @st: the service tree being flushed.
22905 ++ */
22906 ++static inline void bfq_flush_idle_tree(struct bfq_service_tree *st)
22907 ++{
22908 ++ struct bfq_entity *entity = st->first_idle;
22909 ++
22910 ++ for (; entity != NULL; entity = st->first_idle)
22911 ++ __bfq_deactivate_entity(entity, 0);
22912 ++}
22913 ++
22914 ++/**
22915 ++ * bfq_reparent_leaf_entity - move leaf entity to the root_group.
22916 ++ * @bfqd: the device data structure with the root group.
22917 ++ * @entity: the entity to move.
22918 ++ */
22919 ++static inline void bfq_reparent_leaf_entity(struct bfq_data *bfqd,
22920 ++ struct bfq_entity *entity)
22921 ++{
22922 ++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
22923 ++
22924 ++ BUG_ON(bfqq == NULL);
22925 ++ bfq_bfqq_move(bfqd, bfqq, entity, bfqd->root_group);
22926 ++ return;
22927 ++}
22928 ++
22929 ++/**
22930 ++ * bfq_reparent_active_entities - move to the root group all active entities.
22931 ++ * @bfqd: the device data structure with the root group.
22932 ++ * @bfqg: the group to move from.
22933 ++ * @st: the service tree with the entities.
22934 ++ *
22935 ++ * Needs queue_lock to be taken and reference to be valid over the call.
22936 ++ */
22937 ++static inline void bfq_reparent_active_entities(struct bfq_data *bfqd,
22938 ++ struct bfq_group *bfqg,
22939 ++ struct bfq_service_tree *st)
22940 ++{
22941 ++ struct rb_root *active = &st->active;
22942 ++ struct bfq_entity *entity = NULL;
22943 ++
22944 ++ if (!RB_EMPTY_ROOT(&st->active))
22945 ++ entity = bfq_entity_of(rb_first(active));
22946 ++
22947 ++ for (; entity != NULL ; entity = bfq_entity_of(rb_first(active)))
22948 ++ bfq_reparent_leaf_entity(bfqd, entity);
22949 ++
22950 ++ if (bfqg->sched_data.active_entity != NULL)
22951 ++ bfq_reparent_leaf_entity(bfqd, bfqg->sched_data.active_entity);
22952 ++
22953 ++ return;
22954 ++}
22955 ++
22956 ++/**
22957 ++ * bfq_destroy_group - destroy @bfqg.
22958 ++ * @bgrp: the bfqio_cgroup containing @bfqg.
22959 ++ * @bfqg: the group being destroyed.
22960 ++ *
22961 ++ * Destroy @bfqg, making sure that it is not referenced from its parent.
22962 ++ */
22963 ++static void bfq_destroy_group(struct bfqio_cgroup *bgrp, struct bfq_group *bfqg)
22964 ++{
22965 ++ struct bfq_data *bfqd;
22966 ++ struct bfq_service_tree *st;
22967 ++ struct bfq_entity *entity = bfqg->my_entity;
22968 ++ unsigned long uninitialized_var(flags);
22969 ++ int i;
22970 ++
22971 ++ hlist_del(&bfqg->group_node);
22972 ++
22973 ++ /*
22974 ++ * Empty all service_trees belonging to this group before deactivating
22975 ++ * the group itself.
22976 ++ */
22977 ++ for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) {
22978 ++ st = bfqg->sched_data.service_tree + i;
22979 ++
22980 ++ /*
22981 ++ * The idle tree may still contain bfq_queues belonging
22982 ++ * to exited task because they never migrated to a different
22983 ++ * cgroup from the one being destroyed now. Noone else
22984 ++ * can access them so it's safe to act without any lock.
22985 ++ */
22986 ++ bfq_flush_idle_tree(st);
22987 ++
22988 ++ /*
22989 ++ * It may happen that some queues are still active
22990 ++ * (busy) upon group destruction (if the corresponding
22991 ++ * processes have been forced to terminate). We move
22992 ++ * all the leaf entities corresponding to these queues
22993 ++ * to the root_group.
22994 ++ * Also, it may happen that the group has an entity
22995 ++ * under service, which is disconnected from the active
22996 ++ * tree: it must be moved, too.
22997 ++ * There is no need to put the sync queues, as the
22998 ++ * scheduler has taken no reference.
22999 ++ */
23000 ++ bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);
23001 ++ if (bfqd != NULL) {
23002 ++ bfq_reparent_active_entities(bfqd, bfqg, st);
23003 ++ bfq_put_bfqd_unlock(bfqd, &flags);
23004 ++ }
23005 ++ BUG_ON(!RB_EMPTY_ROOT(&st->active));
23006 ++ BUG_ON(!RB_EMPTY_ROOT(&st->idle));
23007 ++ }
23008 ++ BUG_ON(bfqg->sched_data.next_active != NULL);
23009 ++ BUG_ON(bfqg->sched_data.active_entity != NULL);
23010 ++
23011 ++ /*
23012 ++ * We may race with device destruction, take extra care when
23013 ++ * dereferencing bfqg->bfqd.
23014 ++ */
23015 ++ bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);
23016 ++ if (bfqd != NULL) {
23017 ++ hlist_del(&bfqg->bfqd_node);
23018 ++ __bfq_deactivate_entity(entity, 0);
23019 ++ bfq_put_async_queues(bfqd, bfqg);
23020 ++ bfq_put_bfqd_unlock(bfqd, &flags);
23021 ++ }
23022 ++ BUG_ON(entity->tree != NULL);
23023 ++
23024 ++ /*
23025 ++ * No need to defer the kfree() to the end of the RCU grace
23026 ++ * period: we are called from the destroy() callback of our
23027 ++ * cgroup, so we can be sure that no one is a) still using
23028 ++ * this cgroup or b) doing lookups in it.
23029 ++ */
23030 ++ kfree(bfqg);
23031 ++}
23032 ++
23033 ++static void bfq_end_raising_async(struct bfq_data *bfqd)
23034 ++{
23035 ++ struct hlist_node *tmp;
23036 ++ struct bfq_group *bfqg;
23037 ++
23038 ++ hlist_for_each_entry_safe(bfqg, tmp, &bfqd->group_list, bfqd_node)
23039 ++ bfq_end_raising_async_queues(bfqd, bfqg);
23040 ++}
23041 ++
23042 ++/**
23043 ++ * bfq_disconnect_groups - disconnect @bfqd from all its groups.
23044 ++ * @bfqd: the device descriptor being exited.
23045 ++ *
23046 ++ * When the device exits we just make sure that no lookup can return
23047 ++ * the now unused group structures. They will be deallocated on cgroup
23048 ++ * destruction.
23049 ++ */
23050 ++static void bfq_disconnect_groups(struct bfq_data *bfqd)
23051 ++{
23052 ++ struct hlist_node *tmp;
23053 ++ struct bfq_group *bfqg;
23054 ++
23055 ++ bfq_log(bfqd, "disconnect_groups beginning") ;
23056 ++ hlist_for_each_entry_safe(bfqg, tmp, &bfqd->group_list, bfqd_node) {
23057 ++ hlist_del(&bfqg->bfqd_node);
23058 ++
23059 ++ __bfq_deactivate_entity(bfqg->my_entity, 0);
23060 ++
23061 ++ /*
23062 ++ * Don't remove from the group hash, just set an
23063 ++ * invalid key. No lookups can race with the
23064 ++ * assignment as bfqd is being destroyed; this
23065 ++ * implies also that new elements cannot be added
23066 ++ * to the list.
23067 ++ */
23068 ++ rcu_assign_pointer(bfqg->bfqd, NULL);
23069 ++
23070 ++ bfq_log(bfqd, "disconnect_groups: put async for group %p",
23071 ++ bfqg) ;
23072 ++ bfq_put_async_queues(bfqd, bfqg);
23073 ++ }
23074 ++}
23075 ++
23076 ++static inline void bfq_free_root_group(struct bfq_data *bfqd)
23077 ++{
23078 ++ struct bfqio_cgroup *bgrp = &bfqio_root_cgroup;
23079 ++ struct bfq_group *bfqg = bfqd->root_group;
23080 ++
23081 ++ bfq_put_async_queues(bfqd, bfqg);
23082 ++
23083 ++ spin_lock_irq(&bgrp->lock);
23084 ++ hlist_del_rcu(&bfqg->group_node);
23085 ++ spin_unlock_irq(&bgrp->lock);
23086 ++
23087 ++ /*
23088 ++ * No need to synchronize_rcu() here: since the device is gone
23089 ++ * there cannot be any read-side access to its root_group.
23090 ++ */
23091 ++ kfree(bfqg);
23092 ++}
23093 ++
23094 ++static struct bfq_group *bfq_alloc_root_group(struct bfq_data *bfqd, int node)
23095 ++{
23096 ++ struct bfq_group *bfqg;
23097 ++ struct bfqio_cgroup *bgrp;
23098 ++ int i;
23099 ++
23100 ++ bfqg = kmalloc_node(sizeof(*bfqg), GFP_KERNEL | __GFP_ZERO, node);
23101 ++ if (bfqg == NULL)
23102 ++ return NULL;
23103 ++
23104 ++ bfqg->entity.parent = NULL;
23105 ++ for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
23106 ++ bfqg->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
23107 ++
23108 ++ bgrp = &bfqio_root_cgroup;
23109 ++ spin_lock_irq(&bgrp->lock);
23110 ++ rcu_assign_pointer(bfqg->bfqd, bfqd);
23111 ++ hlist_add_head_rcu(&bfqg->group_node, &bgrp->group_data);
23112 ++ spin_unlock_irq(&bgrp->lock);
23113 ++
23114 ++ return bfqg;
23115 ++}
23116 ++
23117 ++#define SHOW_FUNCTION(__VAR) \
23118 ++static u64 bfqio_cgroup_##__VAR##_read(struct cgroup *cgroup, \
23119 ++ struct cftype *cftype) \
23120 ++{ \
23121 ++ struct bfqio_cgroup *bgrp; \
23122 ++ u64 ret = -ENODEV; \
23123 ++ \
23124 ++ mutex_lock(&bfqio_mutex); \
23125 ++ if (bfqio_is_removed(cgroup)) \
23126 ++ goto out_unlock; \
23127 ++ \
23128 ++ bgrp = cgroup_to_bfqio(cgroup); \
23129 ++ spin_lock_irq(&bgrp->lock); \
23130 ++ ret = bgrp->__VAR; \
23131 ++ spin_unlock_irq(&bgrp->lock); \
23132 ++ \
23133 ++out_unlock: \
23134 ++ mutex_unlock(&bfqio_mutex); \
23135 ++ return ret; \
23136 ++}
23137 ++
23138 ++SHOW_FUNCTION(weight);
23139 ++SHOW_FUNCTION(ioprio);
23140 ++SHOW_FUNCTION(ioprio_class);
23141 ++#undef SHOW_FUNCTION
23142 ++
23143 ++#define STORE_FUNCTION(__VAR, __MIN, __MAX) \
23144 ++static int bfqio_cgroup_##__VAR##_write(struct cgroup *cgroup, \
23145 ++ struct cftype *cftype, \
23146 ++ u64 val) \
23147 ++{ \
23148 ++ struct bfqio_cgroup *bgrp; \
23149 ++ struct bfq_group *bfqg; \
23150 ++ int ret = -EINVAL; \
23151 ++ \
23152 ++ if (val < (__MIN) || val > (__MAX)) \
23153 ++ return ret; \
23154 ++ \
23155 ++ ret = -ENODEV; \
23156 ++ mutex_lock(&bfqio_mutex); \
23157 ++ if (bfqio_is_removed(cgroup)) \
23158 ++ goto out_unlock; \
23159 ++ ret = 0; \
23160 ++ \
23161 ++ bgrp = cgroup_to_bfqio(cgroup); \
23162 ++ \
23163 ++ spin_lock_irq(&bgrp->lock); \
23164 ++ bgrp->__VAR = (unsigned short)val; \
23165 ++ hlist_for_each_entry(bfqg, &bgrp->group_data, group_node) { \
23166 ++ /* \
23167 ++ * Setting the ioprio_changed flag of the entity \
23168 ++ * to 1 with new_##__VAR == ##__VAR would re-set \
23169 ++ * the value of the weight to its ioprio mapping. \
23170 ++ * Set the flag only if necessary. \
23171 ++ */ \
23172 ++ if ((unsigned short)val != bfqg->entity.new_##__VAR) { \
23173 ++ bfqg->entity.new_##__VAR = (unsigned short)val; \
23174 ++ smp_wmb(); \
23175 ++ bfqg->entity.ioprio_changed = 1; \
23176 ++ } \
23177 ++ } \
23178 ++ spin_unlock_irq(&bgrp->lock); \
23179 ++ \
23180 ++out_unlock: \
23181 ++ mutex_unlock(&bfqio_mutex); \
23182 ++ return ret; \
23183 ++}
23184 ++
23185 ++STORE_FUNCTION(weight, BFQ_MIN_WEIGHT, BFQ_MAX_WEIGHT);
23186 ++STORE_FUNCTION(ioprio, 0, IOPRIO_BE_NR - 1);
23187 ++STORE_FUNCTION(ioprio_class, IOPRIO_CLASS_RT, IOPRIO_CLASS_IDLE);
23188 ++#undef STORE_FUNCTION
23189 ++
23190 ++static struct cftype bfqio_files[] = {
23191 ++ {
23192 ++ .name = "weight",
23193 ++ .read_u64 = bfqio_cgroup_weight_read,
23194 ++ .write_u64 = bfqio_cgroup_weight_write,
23195 ++ },
23196 ++ {
23197 ++ .name = "ioprio",
23198 ++ .read_u64 = bfqio_cgroup_ioprio_read,
23199 ++ .write_u64 = bfqio_cgroup_ioprio_write,
23200 ++ },
23201 ++ {
23202 ++ .name = "ioprio_class",
23203 ++ .read_u64 = bfqio_cgroup_ioprio_class_read,
23204 ++ .write_u64 = bfqio_cgroup_ioprio_class_write,
23205 ++ },
23206 ++ { }, /* terminate */
23207 ++};
23208 ++
23209 ++static struct cgroup_subsys_state *bfqio_create(struct cgroup *cgroup)
23210 ++{
23211 ++ struct bfqio_cgroup *bgrp;
23212 ++
23213 ++ if (cgroup->parent != NULL) {
23214 ++ bgrp = kzalloc(sizeof(*bgrp), GFP_KERNEL);
23215 ++ if (bgrp == NULL)
23216 ++ return ERR_PTR(-ENOMEM);
23217 ++ } else
23218 ++ bgrp = &bfqio_root_cgroup;
23219 ++
23220 ++ spin_lock_init(&bgrp->lock);
23221 ++ INIT_HLIST_HEAD(&bgrp->group_data);
23222 ++ bgrp->ioprio = BFQ_DEFAULT_GRP_IOPRIO;
23223 ++ bgrp->ioprio_class = BFQ_DEFAULT_GRP_CLASS;
23224 ++
23225 ++ return &bgrp->css;
23226 ++}
23227 ++
23228 ++/*
23229 ++ * We cannot support shared io contexts, as we have no means to support
23230 ++ * two tasks with the same ioc in two different groups without major rework
23231 ++ * of the main bic/bfqq data structures. By now we allow a task to change
23232 ++ * its cgroup only if it's the only owner of its ioc; the drawback of this
23233 ++ * behavior is that a group containing a task that forked using CLONE_IO
23234 ++ * will not be destroyed until the tasks sharing the ioc die.
23235 ++ */
23236 ++static int bfqio_can_attach(struct cgroup *cgroup, struct cgroup_taskset *tset)
23237 ++{
23238 ++ struct task_struct *task;
23239 ++ struct io_context *ioc;
23240 ++ int ret = 0;
23241 ++
23242 ++ cgroup_taskset_for_each(task, cgroup, tset) {
23243 ++ /* task_lock() is needed to avoid races with exit_io_context() */
23244 ++ task_lock(task);
23245 ++ ioc = task->io_context;
23246 ++ if (ioc != NULL && atomic_read(&ioc->nr_tasks) > 1)
23247 ++ /*
23248 ++ * ioc == NULL means that the task is either too young or
23249 ++ * exiting: if it still has no ioc, the ioc can't be shared,
23250 ++ * if the task is exiting the attach will fail anyway, no
23251 ++ * matter what we return here.
23252 ++ */
23253 ++ ret = -EINVAL;
23254 ++ task_unlock(task);
23255 ++ if (ret)
23256 ++ break;
23257 ++ }
23258 ++
23259 ++ return ret;
23260 ++}
23261 ++
23262 ++static void bfqio_attach(struct cgroup *cgroup, struct cgroup_taskset *tset)
23263 ++{
23264 ++ struct task_struct *task;
23265 ++ struct io_context *ioc;
23266 ++ struct io_cq *icq;
23267 ++
23268 ++ /*
23269 ++ * IMPORTANT NOTE: The move of more than one process at a time to a
23270 ++ * new group has not yet been tested.
23271 ++ */
23272 ++ cgroup_taskset_for_each(task, cgroup, tset) {
23273 ++ ioc = get_task_io_context(task, GFP_ATOMIC, NUMA_NO_NODE);
23274 ++ if (ioc) {
23275 ++ /*
23276 ++ * Handle cgroup change here.
23277 ++ */
23278 ++ rcu_read_lock();
23279 ++ hlist_for_each_entry_rcu(icq, &ioc->icq_list, ioc_node)
23280 ++ if (!strncmp(icq->q->elevator->type->elevator_name,
23281 ++ "bfq", ELV_NAME_MAX))
23282 ++ bfq_bic_change_cgroup(icq_to_bic(icq),
23283 ++ cgroup);
23284 ++ rcu_read_unlock();
23285 ++ put_io_context(ioc);
23286 ++ }
23287 ++ }
23288 ++}
23289 ++
23290 ++static void bfqio_destroy(struct cgroup *cgroup)
23291 ++{
23292 ++ struct bfqio_cgroup *bgrp = cgroup_to_bfqio(cgroup);
23293 ++ struct hlist_node *tmp;
23294 ++ struct bfq_group *bfqg;
23295 ++
23296 ++ /*
23297 ++ * Since we are destroying the cgroup, there are no more tasks
23298 ++ * referencing it, and all the RCU grace periods that may have
23299 ++ * referenced it are ended (as the destruction of the parent
23300 ++ * cgroup is RCU-safe); bgrp->group_data will not be accessed by
23301 ++ * anything else and we don't need any synchronization.
23302 ++ */
23303 ++ hlist_for_each_entry_safe(bfqg, tmp, &bgrp->group_data, group_node)
23304 ++ bfq_destroy_group(bgrp, bfqg);
23305 ++
23306 ++ BUG_ON(!hlist_empty(&bgrp->group_data));
23307 ++
23308 ++ kfree(bgrp);
23309 ++}
23310 ++
23311 ++struct cgroup_subsys bfqio_subsys = {
23312 ++ .name = "bfqio",
23313 ++ .css_alloc = bfqio_create,
23314 ++ .can_attach = bfqio_can_attach,
23315 ++ .attach = bfqio_attach,
23316 ++ .css_free = bfqio_destroy,
23317 ++ .subsys_id = bfqio_subsys_id,
23318 ++ .base_cftypes = bfqio_files,
23319 ++};
23320 ++#else
23321 ++static inline void bfq_init_entity(struct bfq_entity *entity,
23322 ++ struct bfq_group *bfqg)
23323 ++{
23324 ++ entity->weight = entity->new_weight;
23325 ++ entity->orig_weight = entity->new_weight;
23326 ++ entity->ioprio = entity->new_ioprio;
23327 ++ entity->ioprio_class = entity->new_ioprio_class;
23328 ++ entity->sched_data = &bfqg->sched_data;
23329 ++}
23330 ++
23331 ++static inline struct bfq_group *
23332 ++bfq_bic_update_cgroup(struct bfq_io_cq *bic)
23333 ++{
23334 ++ struct bfq_data *bfqd = bic_to_bfqd(bic);
23335 ++ return bfqd->root_group;
23336 ++}
23337 ++
23338 ++static inline void bfq_bfqq_move(struct bfq_data *bfqd,
23339 ++ struct bfq_queue *bfqq,
23340 ++ struct bfq_entity *entity,
23341 ++ struct bfq_group *bfqg)
23342 ++{
23343 ++}
23344 ++
23345 ++static void bfq_end_raising_async(struct bfq_data *bfqd)
23346 ++{
23347 ++ bfq_end_raising_async_queues(bfqd, bfqd->root_group);
23348 ++}
23349 ++
23350 ++static inline void bfq_disconnect_groups(struct bfq_data *bfqd)
23351 ++{
23352 ++ bfq_put_async_queues(bfqd, bfqd->root_group);
23353 ++}
23354 ++
23355 ++static inline void bfq_free_root_group(struct bfq_data *bfqd)
23356 ++{
23357 ++ kfree(bfqd->root_group);
23358 ++}
23359 ++
23360 ++static struct bfq_group *bfq_alloc_root_group(struct bfq_data *bfqd, int node)
23361 ++{
23362 ++ struct bfq_group *bfqg;
23363 ++ int i;
23364 ++
23365 ++ bfqg = kmalloc_node(sizeof(*bfqg), GFP_KERNEL | __GFP_ZERO, node);
23366 ++ if (bfqg == NULL)
23367 ++ return NULL;
23368 ++
23369 ++ for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
23370 ++ bfqg->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
23371 ++
23372 ++ return bfqg;
23373 ++}
23374 ++#endif
23375 +diff --git a/block/bfq-ioc.c b/block/bfq-ioc.c
23376 +new file mode 100644
23377 +index 0000000..326e3ec
23378 +--- /dev/null
23379 ++++ b/block/bfq-ioc.c
23380 +@@ -0,0 +1,36 @@
23381 ++/*
23382 ++ * BFQ: I/O context handling.
23383 ++ *
23384 ++ * Based on ideas and code from CFQ:
23385 ++ * Copyright (C) 2003 Jens Axboe <axboe@××××××.dk>
23386 ++ *
23387 ++ * Copyright (C) 2008 Fabio Checconi <fabio@×××××××××××××.it>
23388 ++ * Paolo Valente <paolo.valente@×××××××.it>
23389 ++ *
23390 ++ * Copyright (C) 2010 Paolo Valente <paolo.valente@×××××××.it>
23391 ++ */
23392 ++
23393 ++/**
23394 ++ * icq_to_bic - convert iocontext queue structure to bfq_io_cq.
23395 ++ * @icq: the iocontext queue.
23396 ++ */
23397 ++static inline struct bfq_io_cq *icq_to_bic(struct io_cq *icq)
23398 ++{
23399 ++ /* bic->icq is the first member, %NULL will convert to %NULL */
23400 ++ return container_of(icq, struct bfq_io_cq, icq);
23401 ++}
23402 ++
23403 ++/**
23404 ++ * bfq_bic_lookup - search into @ioc a bic associated to @bfqd.
23405 ++ * @bfqd: the lookup key.
23406 ++ * @ioc: the io_context of the process doing I/O.
23407 ++ *
23408 ++ * Queue lock must be held.
23409 ++ */
23410 ++static inline struct bfq_io_cq *bfq_bic_lookup(struct bfq_data *bfqd,
23411 ++ struct io_context *ioc)
23412 ++{
23413 ++ if(ioc)
23414 ++ return icq_to_bic(ioc_lookup_icq(ioc, bfqd->queue));
23415 ++ return NULL;
23416 ++}
23417 +diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
23418 +new file mode 100644
23419 +index 0000000..b230927
23420 +--- /dev/null
23421 ++++ b/block/bfq-iosched.c
23422 +@@ -0,0 +1,3070 @@
23423 ++/*
23424 ++ * BFQ, or Budget Fair Queueing, disk scheduler.
23425 ++ *
23426 ++ * Based on ideas and code from CFQ:
23427 ++ * Copyright (C) 2003 Jens Axboe <axboe@××××××.dk>
23428 ++ *
23429 ++ * Copyright (C) 2008 Fabio Checconi <fabio@×××××××××××××.it>
23430 ++ * Paolo Valente <paolo.valente@×××××××.it>
23431 ++ *
23432 ++ * Copyright (C) 2010 Paolo Valente <paolo.valente@×××××××.it>
23433 ++ *
23434 ++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ file.
23435 ++ *
23436 ++ * BFQ is a proportional share disk scheduling algorithm based on the
23437 ++ * slice-by-slice service scheme of CFQ. But BFQ assigns budgets,
23438 ++ * measured in number of sectors, to tasks instead of time slices.
23439 ++ * The disk is not granted to the active task for a given time slice,
23440 ++ * but until it has exhausted its assigned budget. This change from
23441 ++ * the time to the service domain allows BFQ to distribute the disk
23442 ++ * bandwidth among tasks as desired, without any distortion due to
23443 ++ * ZBR, workload fluctuations or other factors. BFQ uses an ad hoc
23444 ++ * internal scheduler, called B-WF2Q+, to schedule tasks according to
23445 ++ * their budgets. Thanks to this accurate scheduler, BFQ can afford
23446 ++ * to assign high budgets to disk-bound non-seeky tasks (to boost the
23447 ++ * throughput), and yet guarantee low latencies to interactive and
23448 ++ * soft real-time applications.
23449 ++ *
23450 ++ * BFQ has been introduced in [1], where the interested reader can
23451 ++ * find an accurate description of the algorithm, the bandwidth
23452 ++ * distribution and latency guarantees it provides, plus formal proofs
23453 ++ * of all the properties. With respect to the algorithm presented in
23454 ++ * the paper, this implementation adds several little heuristics, and
23455 ++ * a hierarchical extension, based on H-WF2Q+.
23456 ++ *
23457 ++ * B-WF2Q+ is based on WF2Q+, that is described in [2], together with
23458 ++ * H-WF2Q+, while the augmented tree used to implement B-WF2Q+ with O(log N)
23459 ++ * complexity derives from the one introduced with EEVDF in [3].
23460 ++ *
23461 ++ * [1] P. Valente and F. Checconi, ``High Throughput Disk Scheduling
23462 ++ * with Deterministic Guarantees on Bandwidth Distribution,'',
23463 ++ * IEEE Transactions on Computers, May 2010.
23464 ++ *
23465 ++ * http://algo.ing.unimo.it/people/paolo/disk_sched/bfq-techreport.pdf
23466 ++ *
23467 ++ * [2] Jon C.R. Bennett and H. Zhang, ``Hierarchical Packet Fair Queueing
23468 ++ * Algorithms,'' IEEE/ACM Transactions on Networking, 5(5):675-689,
23469 ++ * Oct 1997.
23470 ++ *
23471 ++ * http://www.cs.cmu.edu/~hzhang/papers/TON-97-Oct.ps.gz
23472 ++ *
23473 ++ * [3] I. Stoica and H. Abdel-Wahab, ``Earliest Eligible Virtual Deadline
23474 ++ * First: A Flexible and Accurate Mechanism for Proportional Share
23475 ++ * Resource Allocation,'' technical report.
23476 ++ *
23477 ++ * http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf
23478 ++ */
23479 ++#include <linux/module.h>
23480 ++#include <linux/slab.h>
23481 ++#include <linux/blkdev.h>
23482 ++#include <linux/cgroup.h>
23483 ++#include <linux/elevator.h>
23484 ++#include <linux/jiffies.h>
23485 ++#include <linux/rbtree.h>
23486 ++#include <linux/ioprio.h>
23487 ++#include "bfq.h"
23488 ++#include "blk.h"
23489 ++
23490 ++/* Max number of dispatches in one round of service. */
23491 ++static const int bfq_quantum = 4;
23492 ++
23493 ++/* Expiration time of sync (0) and async (1) requests, in jiffies. */
23494 ++static const int bfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
23495 ++
23496 ++/* Maximum backwards seek, in KiB. */
23497 ++static const int bfq_back_max = 16 * 1024;
23498 ++
23499 ++/* Penalty of a backwards seek, in number of sectors. */
23500 ++static const int bfq_back_penalty = 2;
23501 ++
23502 ++/* Idling period duration, in jiffies. */
23503 ++static int bfq_slice_idle = HZ / 125;
23504 ++
23505 ++/* Default maximum budget values, in sectors and number of requests. */
23506 ++static const int bfq_default_max_budget = 16 * 1024;
23507 ++static const int bfq_max_budget_async_rq = 4;
23508 ++
23509 ++/*
23510 ++ * Async to sync throughput distribution is controlled as follows:
23511 ++ * when an async request is served, the entity is charged the number
23512 ++ * of sectors of the request, multiplied by the factor below
23513 ++ */
23514 ++static const int bfq_async_charge_factor = 10;
23515 ++
23516 ++/* Default timeout values, in jiffies, approximating CFQ defaults. */
23517 ++static const int bfq_timeout_sync = HZ / 8;
23518 ++static int bfq_timeout_async = HZ / 25;
23519 ++
23520 ++struct kmem_cache *bfq_pool;
23521 ++
23522 ++/* Below this threshold (in ms), we consider thinktime immediate. */
23523 ++#define BFQ_MIN_TT 2
23524 ++
23525 ++/* hw_tag detection: parallel requests threshold and min samples needed. */
23526 ++#define BFQ_HW_QUEUE_THRESHOLD 4
23527 ++#define BFQ_HW_QUEUE_SAMPLES 32
23528 ++
23529 ++#define BFQQ_SEEK_THR (sector_t)(8 * 1024)
23530 ++#define BFQQ_SEEKY(bfqq) ((bfqq)->seek_mean > BFQQ_SEEK_THR)
23531 ++
23532 ++/* Min samples used for peak rate estimation (for autotuning). */
23533 ++#define BFQ_PEAK_RATE_SAMPLES 32
23534 ++
23535 ++/* Shift used for peak rate fixed precision calculations. */
23536 ++#define BFQ_RATE_SHIFT 16
23537 ++
23538 ++/*
23539 ++ * The duration of the weight raising for interactive applications is
23540 ++ * computed automatically (as default behaviour), using the following
23541 ++ * formula: duration = (R / r) * T, where r is the peak rate of the
23542 ++ * disk, and R and T are two reference parameters. In particular, R is
23543 ++ * the peak rate of a reference disk, and T is about the maximum time
23544 ++ * for starting popular large applications on that disk, under BFQ and
23545 ++ * while reading two files in parallel. Finally, BFQ uses two
23546 ++ * different pairs (R, T) depending on whether the disk is rotational
23547 ++ * or non-rotational.
23548 ++ */
23549 ++#define T_rot (msecs_to_jiffies(5500))
23550 ++#define T_nonrot (msecs_to_jiffies(2000))
23551 ++/* Next two quantities are in sectors/usec, left-shifted by BFQ_RATE_SHIFT */
23552 ++#define R_rot 17415
23553 ++#define R_nonrot 34791
23554 ++
23555 ++#define BFQ_SERVICE_TREE_INIT ((struct bfq_service_tree) \
23556 ++ { RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
23557 ++
23558 ++#define RQ_BIC(rq) ((struct bfq_io_cq *) (rq)->elv.priv[0])
23559 ++#define RQ_BFQQ(rq) ((rq)->elv.priv[1])
23560 ++
23561 ++static inline void bfq_schedule_dispatch(struct bfq_data *bfqd);
23562 ++
23563 ++#include "bfq-ioc.c"
23564 ++#include "bfq-sched.c"
23565 ++#include "bfq-cgroup.c"
23566 ++
23567 ++#define bfq_class_idle(bfqq) ((bfqq)->entity.ioprio_class ==\
23568 ++ IOPRIO_CLASS_IDLE)
23569 ++#define bfq_class_rt(bfqq) ((bfqq)->entity.ioprio_class ==\
23570 ++ IOPRIO_CLASS_RT)
23571 ++
23572 ++#define bfq_sample_valid(samples) ((samples) > 80)
23573 ++
23574 ++/*
23575 ++ * We regard a request as SYNC if it is either a read or has the SYNC bit
23576 ++ * set (in which case it could also be a direct WRITE).
23577 ++ */
23578 ++static inline int bfq_bio_sync(struct bio *bio)
23579 ++{
23580 ++ if (bio_data_dir(bio) == READ || (bio->bi_rw & REQ_SYNC))
23581 ++ return 1;
23582 ++
23583 ++ return 0;
23584 ++}
23585 ++
23586 ++/*
23587 ++ * Schedule a run of the queue if there are requests pending and there is
23588 ++ * nothing already in the driver that will restart queueing.
23589 ++ */
23590 ++static inline void bfq_schedule_dispatch(struct bfq_data *bfqd)
23591 ++{
23592 ++ if (bfqd->queued != 0) {
23593 ++ bfq_log(bfqd, "schedule dispatch");
23594 ++ kblockd_schedule_work(bfqd->queue, &bfqd->unplug_work);
23595 ++ }
23596 ++}
23597 ++
23598 ++/*
23599 ++ * Lifted from AS - choose which of rq1 and rq2 is best served now.
23600 ++ * We choose the request that is closest to the head right now. Distance
23601 ++ * behind the head is penalized and only allowed to a certain extent.
23602 ++ */
23603 ++static struct request *bfq_choose_req(struct bfq_data *bfqd,
23604 ++ struct request *rq1,
23605 ++ struct request *rq2,
23606 ++ sector_t last)
23607 ++{
23608 ++ sector_t s1, s2, d1 = 0, d2 = 0;
23609 ++ unsigned long back_max;
23610 ++#define BFQ_RQ1_WRAP 0x01 /* request 1 wraps */
23611 ++#define BFQ_RQ2_WRAP 0x02 /* request 2 wraps */
23612 ++ unsigned wrap = 0; /* bit mask: requests behind the disk head? */
23613 ++
23614 ++ if (rq1 == NULL || rq1 == rq2)
23615 ++ return rq2;
23616 ++ if (rq2 == NULL)
23617 ++ return rq1;
23618 ++
23619 ++ if (rq_is_sync(rq1) && !rq_is_sync(rq2))
23620 ++ return rq1;
23621 ++ else if (rq_is_sync(rq2) && !rq_is_sync(rq1))
23622 ++ return rq2;
23623 ++ if ((rq1->cmd_flags & REQ_META) && !(rq2->cmd_flags & REQ_META))
23624 ++ return rq1;
23625 ++ else if ((rq2->cmd_flags & REQ_META) && !(rq1->cmd_flags & REQ_META))
23626 ++ return rq2;
23627 ++
23628 ++ s1 = blk_rq_pos(rq1);
23629 ++ s2 = blk_rq_pos(rq2);
23630 ++
23631 ++ /*
23632 ++ * By definition, 1KiB is 2 sectors.
23633 ++ */
23634 ++ back_max = bfqd->bfq_back_max * 2;
23635 ++
23636 ++ /*
23637 ++ * Strict one way elevator _except_ in the case where we allow
23638 ++ * short backward seeks which are biased as twice the cost of a
23639 ++ * similar forward seek.
23640 ++ */
23641 ++ if (s1 >= last)
23642 ++ d1 = s1 - last;
23643 ++ else if (s1 + back_max >= last)
23644 ++ d1 = (last - s1) * bfqd->bfq_back_penalty;
23645 ++ else
23646 ++ wrap |= BFQ_RQ1_WRAP;
23647 ++
23648 ++ if (s2 >= last)
23649 ++ d2 = s2 - last;
23650 ++ else if (s2 + back_max >= last)
23651 ++ d2 = (last - s2) * bfqd->bfq_back_penalty;
23652 ++ else
23653 ++ wrap |= BFQ_RQ2_WRAP;
23654 ++
23655 ++ /* Found required data */
23656 ++
23657 ++ /*
23658 ++ * By doing switch() on the bit mask "wrap" we avoid having to
23659 ++ * check two variables for all permutations: --> faster!
23660 ++ */
23661 ++ switch (wrap) {
23662 ++ case 0: /* common case for CFQ: rq1 and rq2 not wrapped */
23663 ++ if (d1 < d2)
23664 ++ return rq1;
23665 ++ else if (d2 < d1)
23666 ++ return rq2;
23667 ++ else {
23668 ++ if (s1 >= s2)
23669 ++ return rq1;
23670 ++ else
23671 ++ return rq2;
23672 ++ }
23673 ++
23674 ++ case BFQ_RQ2_WRAP:
23675 ++ return rq1;
23676 ++ case BFQ_RQ1_WRAP:
23677 ++ return rq2;
23678 ++ case (BFQ_RQ1_WRAP|BFQ_RQ2_WRAP): /* both rqs wrapped */
23679 ++ default:
23680 ++ /*
23681 ++ * Since both rqs are wrapped,
23682 ++ * start with the one that's further behind head
23683 ++ * (--> only *one* back seek required),
23684 ++ * since back seek takes more time than forward.
23685 ++ */
23686 ++ if (s1 <= s2)
23687 ++ return rq1;
23688 ++ else
23689 ++ return rq2;
23690 ++ }
23691 ++}
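++
++/*
++ * Illustrative sketch of the distances computed above, assuming the
++ * defaults defined earlier in this file (bfqd->bfq_back_max set to
++ * bfq_back_max = 16 * 1024 KiB, hence back_max = 32768 sectors, and
++ * bfq_back_penalty = 2): a request 1000 sectors behind the head gets
++ * d = 2000, a request 1000 sectors ahead gets d = 1000, and a request
++ * more than 32768 sectors behind the head is marked as wrapped, so it
++ * is only chosen if the other candidate wraps as well.
++ */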
23692 ++
23693 ++static struct bfq_queue *
23694 ++bfq_rq_pos_tree_lookup(struct bfq_data *bfqd, struct rb_root *root,
23695 ++ sector_t sector, struct rb_node **ret_parent,
23696 ++ struct rb_node ***rb_link)
23697 ++{
23698 ++ struct rb_node **p, *parent;
23699 ++ struct bfq_queue *bfqq = NULL;
23700 ++
23701 ++ parent = NULL;
23702 ++ p = &root->rb_node;
23703 ++ while (*p) {
23704 ++ struct rb_node **n;
23705 ++
23706 ++ parent = *p;
23707 ++ bfqq = rb_entry(parent, struct bfq_queue, pos_node);
23708 ++
23709 ++ /*
23710 ++ * Sort strictly based on sector. Smallest to the left,
23711 ++ * largest to the right.
23712 ++ */
23713 ++ if (sector > blk_rq_pos(bfqq->next_rq))
23714 ++ n = &(*p)->rb_right;
23715 ++ else if (sector < blk_rq_pos(bfqq->next_rq))
23716 ++ n = &(*p)->rb_left;
23717 ++ else
23718 ++ break;
23719 ++ p = n;
23720 ++ bfqq = NULL;
23721 ++ }
23722 ++
23723 ++ *ret_parent = parent;
23724 ++ if (rb_link)
23725 ++ *rb_link = p;
23726 ++
23727 ++ bfq_log(bfqd, "rq_pos_tree_lookup %llu: returning %d",
23728 ++ (long long unsigned)sector,
23729 ++ bfqq != NULL ? bfqq->pid : 0);
23730 ++
23731 ++ return bfqq;
23732 ++}
23733 ++
23734 ++static void bfq_rq_pos_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq)
23735 ++{
23736 ++ struct rb_node **p, *parent;
23737 ++ struct bfq_queue *__bfqq;
23738 ++
23739 ++ if (bfqq->pos_root != NULL) {
23740 ++ rb_erase(&bfqq->pos_node, bfqq->pos_root);
23741 ++ bfqq->pos_root = NULL;
23742 ++ }
23743 ++
23744 ++ if (bfq_class_idle(bfqq))
23745 ++ return;
23746 ++ if (!bfqq->next_rq)
23747 ++ return;
23748 ++
23749 ++ bfqq->pos_root = &bfqd->rq_pos_tree;
23750 ++ __bfqq = bfq_rq_pos_tree_lookup(bfqd, bfqq->pos_root,
23751 ++ blk_rq_pos(bfqq->next_rq), &parent, &p);
23752 ++ if (__bfqq == NULL) {
23753 ++ rb_link_node(&bfqq->pos_node, parent, p);
23754 ++ rb_insert_color(&bfqq->pos_node, bfqq->pos_root);
23755 ++ } else
23756 ++ bfqq->pos_root = NULL;
23757 ++}
23758 ++
23759 ++static struct request *bfq_find_next_rq(struct bfq_data *bfqd,
23760 ++ struct bfq_queue *bfqq,
23761 ++ struct request *last)
23762 ++{
23763 ++ struct rb_node *rbnext = rb_next(&last->rb_node);
23764 ++ struct rb_node *rbprev = rb_prev(&last->rb_node);
23765 ++ struct request *next = NULL, *prev = NULL;
23766 ++
23767 ++ BUG_ON(RB_EMPTY_NODE(&last->rb_node));
23768 ++
23769 ++ if (rbprev != NULL)
23770 ++ prev = rb_entry_rq(rbprev);
23771 ++
23772 ++ if (rbnext != NULL)
23773 ++ next = rb_entry_rq(rbnext);
23774 ++ else {
23775 ++ rbnext = rb_first(&bfqq->sort_list);
23776 ++ if (rbnext && rbnext != &last->rb_node)
23777 ++ next = rb_entry_rq(rbnext);
23778 ++ }
23779 ++
23780 ++ return bfq_choose_req(bfqd, next, prev, blk_rq_pos(last));
23781 ++}
23782 ++
23783 ++static void bfq_del_rq_rb(struct request *rq)
23784 ++{
23785 ++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
23786 ++ struct bfq_data *bfqd = bfqq->bfqd;
23787 ++ const int sync = rq_is_sync(rq);
23788 ++
23789 ++ BUG_ON(bfqq->queued[sync] == 0);
23790 ++ bfqq->queued[sync]--;
23791 ++ bfqd->queued--;
23792 ++
23793 ++ elv_rb_del(&bfqq->sort_list, rq);
23794 ++
23795 ++ if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
23796 ++ if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->active_queue)
23797 ++ bfq_del_bfqq_busy(bfqd, bfqq, 1);
23798 ++ /*
23799 ++ * Remove queue from request-position tree as it is empty.
23800 ++ */
23801 ++ if (bfqq->pos_root != NULL) {
23802 ++ rb_erase(&bfqq->pos_node, bfqq->pos_root);
23803 ++ bfqq->pos_root = NULL;
23804 ++ }
23805 ++ }
23806 ++}
23807 ++
23808 ++/* see the definition of bfq_async_charge_factor for details */
23809 ++static inline unsigned long bfq_serv_to_charge(struct request *rq,
23810 ++ struct bfq_queue *bfqq)
23811 ++{
23812 ++ return blk_rq_sectors(rq) *
23813 ++ (1 + ((!bfq_bfqq_sync(bfqq)) * (bfqq->raising_coeff == 1) *
23814 ++ bfq_async_charge_factor));
23815 ++}
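++
++/*
++ * Worked example for the charge above, using bfq_async_charge_factor
++ * as defined earlier: a sync request, or any request of a queue whose
++ * weight is currently raised (raising_coeff > 1), is charged exactly
++ * blk_rq_sectors(rq); an async request of a non-raised queue is
++ * charged blk_rq_sectors(rq) * (1 + 10), so an 8-sector write costs
++ * 88 sectors of budget, biasing the service towards sync I/O.
++ */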
23816 ++
23817 ++/**
23818 ++ * bfq_updated_next_req - update the queue after a new next_rq selection.
23819 ++ * @bfqd: the device data the queue belongs to.
23820 ++ * @bfqq: the queue to update.
23821 ++ *
23822 ++ * If the first request of a queue changes we make sure that the queue
23823 ++ * has enough budget to serve at least its first request (if the
23824 ++ * request has grown). We do this because if the queue does not have enough
23825 ++ * budget for its first request, it has to go through two dispatch
23826 ++ * rounds to actually get it dispatched.
23827 ++ */
23828 ++static void bfq_updated_next_req(struct bfq_data *bfqd,
23829 ++ struct bfq_queue *bfqq)
23830 ++{
23831 ++ struct bfq_entity *entity = &bfqq->entity;
23832 ++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
23833 ++ struct request *next_rq = bfqq->next_rq;
23834 ++ unsigned long new_budget;
23835 ++
23836 ++ if (next_rq == NULL)
23837 ++ return;
23838 ++
23839 ++ if (bfqq == bfqd->active_queue)
23840 ++ /*
23841 ++ * In order not to break guarantees, budgets cannot be
23842 ++ * changed after an entity has been selected.
23843 ++ */
23844 ++ return;
23845 ++
23846 ++ BUG_ON(entity->tree != &st->active);
23847 ++ BUG_ON(entity == entity->sched_data->active_entity);
23848 ++
23849 ++ new_budget = max_t(unsigned long, bfqq->max_budget,
23850 ++ bfq_serv_to_charge(next_rq, bfqq));
23851 ++ entity->budget = new_budget;
23852 ++ bfq_log_bfqq(bfqd, bfqq, "updated next rq: new budget %lu", new_budget);
23853 ++ bfq_activate_bfqq(bfqd, bfqq);
23854 ++}
23855 ++
23856 ++static inline unsigned int bfq_wrais_duration(struct bfq_data *bfqd)
23857 ++{
23858 ++ u64 dur;
23859 ++
23860 ++ if (bfqd->bfq_raising_max_time > 0)
23861 ++ return bfqd->bfq_raising_max_time;
23862 ++
23863 ++ dur = bfqd->RT_prod;
23864 ++ do_div(dur, bfqd->peak_rate);
23865 ++
23866 ++ return dur;
23867 ++}
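++
++/*
++ * Example of the duration = (R / r) * T rule implemented above,
++ * assuming bfqd->RT_prod = R_nonrot * T_nonrot and HZ = 250, so that
++ * T_nonrot = msecs_to_jiffies(2000) = 500 jiffies: for a device whose
++ * measured peak_rate is twice the reference R_nonrot, the duration is
++ * (34791 * 500) / (2 * 34791) = 250 jiffies, i.e., about one second;
++ * faster devices get proportionally shorter weight-raising periods.
++ */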
23868 ++
23869 ++static void bfq_add_rq_rb(struct request *rq)
23870 ++{
23871 ++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
23872 ++ struct bfq_entity *entity = &bfqq->entity;
23873 ++ struct bfq_data *bfqd = bfqq->bfqd;
23874 ++ struct request *next_rq, *prev;
23875 ++ unsigned long old_raising_coeff = bfqq->raising_coeff;
23876 ++ int idle_for_long_time = bfqq->budget_timeout +
23877 ++ bfqd->bfq_raising_min_idle_time < jiffies;
23878 ++
23879 ++ bfq_log_bfqq(bfqd, bfqq, "add_rq_rb %d", rq_is_sync(rq));
23880 ++ bfqq->queued[rq_is_sync(rq)]++;
23881 ++ bfqd->queued++;
23882 ++
23883 ++ elv_rb_add(&bfqq->sort_list, rq);
23884 ++
23885 ++ /*
23886 ++ * Check if this request is a better next-serve candidate.
23887 ++ */
23888 ++ prev = bfqq->next_rq;
23889 ++ next_rq = bfq_choose_req(bfqd, bfqq->next_rq, rq, bfqd->last_position);
23890 ++ BUG_ON(next_rq == NULL);
23891 ++ bfqq->next_rq = next_rq;
23892 ++
23893 ++ /*
23894 ++ * Adjust priority tree position, if next_rq changes.
23895 ++ */
23896 ++ if (prev != bfqq->next_rq)
23897 ++ bfq_rq_pos_tree_add(bfqd, bfqq);
23898 ++
23899 ++ if (!bfq_bfqq_busy(bfqq)) {
23900 ++ int soft_rt = bfqd->bfq_raising_max_softrt_rate > 0 &&
23901 ++ bfqq->soft_rt_next_start < jiffies;
23902 ++ entity->budget = max_t(unsigned long, bfqq->max_budget,
23903 ++ bfq_serv_to_charge(next_rq, bfqq));
23904 ++
23905 ++   if (!bfqd->low_latency)
23906 ++ goto add_bfqq_busy;
23907 ++
23908 ++ /*
23909 ++ * If the queue is not being boosted and has been idle
23910 ++ * for enough time, start a weight-raising period
23911 ++ */
23912 ++   if (old_raising_coeff == 1 && (idle_for_long_time || soft_rt)) {
23913 ++ bfqq->raising_coeff = bfqd->bfq_raising_coeff;
23914 ++ if (idle_for_long_time)
23915 ++ bfqq->raising_cur_max_time =
23916 ++ bfq_wrais_duration(bfqd);
23917 ++ else
23918 ++ bfqq->raising_cur_max_time =
23919 ++ bfqd->bfq_raising_rt_max_time;
23920 ++ bfq_log_bfqq(bfqd, bfqq,
23921 ++   "wrais starting at %llu msec, "
23922 ++ "rais_max_time %u",
23923 ++ bfqq->last_rais_start_finish,
23924 ++ jiffies_to_msecs(bfqq->
23925 ++ raising_cur_max_time));
23926 ++ } else if (old_raising_coeff > 1) {
23927 ++ if (idle_for_long_time)
23928 ++ bfqq->raising_cur_max_time =
23929 ++ bfq_wrais_duration(bfqd);
23930 ++ else if (bfqq->raising_cur_max_time ==
23931 ++ bfqd->bfq_raising_rt_max_time &&
23932 ++ !soft_rt) {
23933 ++ bfqq->raising_coeff = 1;
23934 ++ bfq_log_bfqq(bfqd, bfqq,
23935 ++   "wrais ending at %llu msec, "
23936 ++ "rais_max_time %u",
23937 ++ bfqq->last_rais_start_finish,
23938 ++ jiffies_to_msecs(bfqq->
23939 ++ raising_cur_max_time));
23940 ++ }
23941 ++ }
23942 ++ if (old_raising_coeff != bfqq->raising_coeff)
23943 ++ entity->ioprio_changed = 1;
23944 ++add_bfqq_busy:
23945 ++ bfq_add_bfqq_busy(bfqd, bfqq);
23946 ++ } else {
23947 ++   if (bfqd->low_latency && old_raising_coeff == 1 &&
23948 ++ !rq_is_sync(rq) &&
23949 ++ bfqq->last_rais_start_finish +
23950 ++ bfqd->bfq_raising_min_inter_arr_async < jiffies) {
23951 ++ bfqq->raising_coeff = bfqd->bfq_raising_coeff;
23952 ++ bfqq->raising_cur_max_time = bfq_wrais_duration(bfqd);
23953 ++
23954 ++ entity->ioprio_changed = 1;
23955 ++ bfq_log_bfqq(bfqd, bfqq,
23956 ++   "non-idle wrais starting at %llu msec, "
23957 ++ "rais_max_time %u",
23958 ++ bfqq->last_rais_start_finish,
23959 ++ jiffies_to_msecs(bfqq->
23960 ++ raising_cur_max_time));
23961 ++ }
23962 ++ bfq_updated_next_req(bfqd, bfqq);
23963 ++ }
23964 ++
23965 ++  if (bfqd->low_latency &&
23966 ++ (old_raising_coeff == 1 || bfqq->raising_coeff == 1 ||
23967 ++ idle_for_long_time))
23968 ++ bfqq->last_rais_start_finish = jiffies;
23969 ++}
23970 ++
23971 ++static void bfq_reposition_rq_rb(struct bfq_queue *bfqq, struct request *rq)
23972 ++{
23973 ++ elv_rb_del(&bfqq->sort_list, rq);
23974 ++ bfqq->queued[rq_is_sync(rq)]--;
23975 ++ bfqq->bfqd->queued--;
23976 ++ bfq_add_rq_rb(rq);
23977 ++}
23978 ++
23979 ++static struct request *bfq_find_rq_fmerge(struct bfq_data *bfqd,
23980 ++ struct bio *bio)
23981 ++{
23982 ++ struct task_struct *tsk = current;
23983 ++ struct bfq_io_cq *bic;
23984 ++ struct bfq_queue *bfqq;
23985 ++
23986 ++ bic = bfq_bic_lookup(bfqd, tsk->io_context);
23987 ++ if (bic == NULL)
23988 ++ return NULL;
23989 ++
23990 ++ bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
23991 ++ if (bfqq != NULL) {
23992 ++ sector_t sector = bio->bi_sector + bio_sectors(bio);
23993 ++
23994 ++ return elv_rb_find(&bfqq->sort_list, sector);
23995 ++ }
23996 ++
23997 ++ return NULL;
23998 ++}
23999 ++
24000 ++static void bfq_activate_request(struct request_queue *q, struct request *rq)
24001 ++{
24002 ++ struct bfq_data *bfqd = q->elevator->elevator_data;
24003 ++
24004 ++ bfqd->rq_in_driver++;
24005 ++ bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
24006 ++ bfq_log(bfqd, "activate_request: new bfqd->last_position %llu",
24007 ++ (long long unsigned)bfqd->last_position);
24008 ++}
24009 ++
24010 ++static void bfq_deactivate_request(struct request_queue *q, struct request *rq)
24011 ++{
24012 ++ struct bfq_data *bfqd = q->elevator->elevator_data;
24013 ++
24014 ++ WARN_ON(bfqd->rq_in_driver == 0);
24015 ++ bfqd->rq_in_driver--;
24016 ++}
24017 ++
24018 ++static void bfq_remove_request(struct request *rq)
24019 ++{
24020 ++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
24021 ++ struct bfq_data *bfqd = bfqq->bfqd;
24022 ++
24023 ++ if (bfqq->next_rq == rq) {
24024 ++ bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq);
24025 ++ bfq_updated_next_req(bfqd, bfqq);
24026 ++ }
24027 ++
24028 ++ list_del_init(&rq->queuelist);
24029 ++ bfq_del_rq_rb(rq);
24030 ++
24031 ++ if (rq->cmd_flags & REQ_META) {
24032 ++ WARN_ON(bfqq->meta_pending == 0);
24033 ++ bfqq->meta_pending--;
24034 ++ }
24035 ++}
24036 ++
24037 ++static int bfq_merge(struct request_queue *q, struct request **req,
24038 ++ struct bio *bio)
24039 ++{
24040 ++ struct bfq_data *bfqd = q->elevator->elevator_data;
24041 ++ struct request *__rq;
24042 ++
24043 ++ __rq = bfq_find_rq_fmerge(bfqd, bio);
24044 ++ if (__rq != NULL && elv_rq_merge_ok(__rq, bio)) {
24045 ++ *req = __rq;
24046 ++ return ELEVATOR_FRONT_MERGE;
24047 ++ }
24048 ++
24049 ++ return ELEVATOR_NO_MERGE;
24050 ++}
24051 ++
24052 ++static void bfq_merged_request(struct request_queue *q, struct request *req,
24053 ++ int type)
24054 ++{
24055 ++ if (type == ELEVATOR_FRONT_MERGE) {
24056 ++ struct bfq_queue *bfqq = RQ_BFQQ(req);
24057 ++
24058 ++ bfq_reposition_rq_rb(bfqq, req);
24059 ++ }
24060 ++}
24061 ++
24062 ++static void bfq_merged_requests(struct request_queue *q, struct request *rq,
24063 ++ struct request *next)
24064 ++{
24065 ++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
24066 ++
24067 ++ /*
24068 ++ * Reposition in fifo if next is older than rq.
24069 ++ */
24070 ++ if (!list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
24071 ++ time_before(rq_fifo_time(next), rq_fifo_time(rq))) {
24072 ++ list_move(&rq->queuelist, &next->queuelist);
24073 ++ rq_set_fifo_time(rq, rq_fifo_time(next));
24074 ++ }
24075 ++
24076 ++ if (bfqq->next_rq == next)
24077 ++ bfqq->next_rq = rq;
24078 ++
24079 ++ bfq_remove_request(next);
24080 ++}
24081 ++
24082 ++/* Must be called with bfqq != NULL */
24083 ++static inline void bfq_bfqq_end_raising(struct bfq_queue *bfqq)
24084 ++{
24085 ++ BUG_ON(bfqq == NULL);
24086 ++ bfqq->raising_coeff = 1;
24087 ++ bfqq->raising_cur_max_time = 0;
24088 ++ /* Trigger a weight change on the next activation of the queue */
24089 ++ bfqq->entity.ioprio_changed = 1;
24090 ++}
24091 ++
24092 ++static void bfq_end_raising_async_queues(struct bfq_data *bfqd,
24093 ++ struct bfq_group *bfqg)
24094 ++{
24095 ++ int i, j;
24096 ++
24097 ++ for (i = 0; i < 2; i++)
24098 ++ for (j = 0; j < IOPRIO_BE_NR; j++)
24099 ++ if (bfqg->async_bfqq[i][j] != NULL)
24100 ++ bfq_bfqq_end_raising(bfqg->async_bfqq[i][j]);
24101 ++ if (bfqg->async_idle_bfqq != NULL)
24102 ++ bfq_bfqq_end_raising(bfqg->async_idle_bfqq);
24103 ++}
24104 ++
24105 ++static void bfq_end_raising(struct bfq_data *bfqd)
24106 ++{
24107 ++ struct bfq_queue *bfqq;
24108 ++
24109 ++ spin_lock_irq(bfqd->queue->queue_lock);
24110 ++
24111 ++ list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list)
24112 ++ bfq_bfqq_end_raising(bfqq);
24113 ++ list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list)
24114 ++ bfq_bfqq_end_raising(bfqq);
24115 ++ bfq_end_raising_async(bfqd);
24116 ++
24117 ++ spin_unlock_irq(bfqd->queue->queue_lock);
24118 ++}
24119 ++
24120 ++static int bfq_allow_merge(struct request_queue *q, struct request *rq,
24121 ++ struct bio *bio)
24122 ++{
24123 ++ struct bfq_data *bfqd = q->elevator->elevator_data;
24124 ++ struct bfq_io_cq *bic;
24125 ++ struct bfq_queue *bfqq;
24126 ++
24127 ++ /*
24128 ++ * Disallow merge of a sync bio into an async request.
24129 ++ */
24130 ++ if (bfq_bio_sync(bio) && !rq_is_sync(rq))
24131 ++ return 0;
24132 ++
24133 ++ /*
24134 ++ * Lookup the bfqq that this bio will be queued with. Allow
24135 ++ * merge only if rq is queued there.
24136 ++ * Queue lock is held here.
24137 ++ */
24138 ++ bic = bfq_bic_lookup(bfqd, current->io_context);
24139 ++ if (bic == NULL)
24140 ++ return 0;
24141 ++
24142 ++ bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
24143 ++ return bfqq == RQ_BFQQ(rq);
24144 ++}
24145 ++
24146 ++static void __bfq_set_active_queue(struct bfq_data *bfqd,
24147 ++ struct bfq_queue *bfqq)
24148 ++{
24149 ++ if (bfqq != NULL) {
24150 ++ bfq_mark_bfqq_must_alloc(bfqq);
24151 ++ bfq_mark_bfqq_budget_new(bfqq);
24152 ++ bfq_clear_bfqq_fifo_expire(bfqq);
24153 ++
24154 ++ bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
24155 ++
24156 ++ bfq_log_bfqq(bfqd, bfqq, "set_active_queue, cur-budget = %lu",
24157 ++ bfqq->entity.budget);
24158 ++ }
24159 ++
24160 ++ bfqd->active_queue = bfqq;
24161 ++}
24162 ++
24163 ++/*
24164 ++ * Get and set a new active queue for service.
24165 ++ */
24166 ++static struct bfq_queue *bfq_set_active_queue(struct bfq_data *bfqd,
24167 ++ struct bfq_queue *bfqq)
24168 ++{
24169 ++ if (!bfqq)
24170 ++ bfqq = bfq_get_next_queue(bfqd);
24171 ++ else
24172 ++ bfq_get_next_queue_forced(bfqd, bfqq);
24173 ++
24174 ++ __bfq_set_active_queue(bfqd, bfqq);
24175 ++ return bfqq;
24176 ++}
24177 ++
24178 ++static inline sector_t bfq_dist_from_last(struct bfq_data *bfqd,
24179 ++ struct request *rq)
24180 ++{
24181 ++ if (blk_rq_pos(rq) >= bfqd->last_position)
24182 ++ return blk_rq_pos(rq) - bfqd->last_position;
24183 ++ else
24184 ++ return bfqd->last_position - blk_rq_pos(rq);
24185 ++}
24186 ++
24187 ++/*
24188 ++ * Return true if bfqq has no request pending and rq is close enough to
24189 ++ * bfqd->last_position, or if rq is closer to bfqd->last_position than
24190 ++ * bfqq->next_rq
24191 ++ */
24192 ++static inline int bfq_rq_close(struct bfq_data *bfqd, struct request *rq)
24193 ++{
24194 ++ return bfq_dist_from_last(bfqd, rq) <= BFQQ_SEEK_THR;
24195 ++}
24196 ++
24197 ++static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
24198 ++{
24199 ++ struct rb_root *root = &bfqd->rq_pos_tree;
24200 ++ struct rb_node *parent, *node;
24201 ++ struct bfq_queue *__bfqq;
24202 ++ sector_t sector = bfqd->last_position;
24203 ++
24204 ++ if (RB_EMPTY_ROOT(root))
24205 ++ return NULL;
24206 ++
24207 ++ /*
24208 ++ * First, if we find a request starting at the end of the last
24209 ++ * request, choose it.
24210 ++ */
24211 ++ __bfqq = bfq_rq_pos_tree_lookup(bfqd, root, sector, &parent, NULL);
24212 ++ if (__bfqq != NULL)
24213 ++ return __bfqq;
24214 ++
24215 ++ /*
24216 ++ * If the exact sector wasn't found, the parent of the NULL leaf
24217 ++ * will contain the closest sector (rq_pos_tree sorted by next_request
24218 ++ * position).
24219 ++ */
24220 ++ __bfqq = rb_entry(parent, struct bfq_queue, pos_node);
24221 ++ if (bfq_rq_close(bfqd, __bfqq->next_rq))
24222 ++ return __bfqq;
24223 ++
24224 ++ if (blk_rq_pos(__bfqq->next_rq) < sector)
24225 ++ node = rb_next(&__bfqq->pos_node);
24226 ++ else
24227 ++ node = rb_prev(&__bfqq->pos_node);
24228 ++ if (node == NULL)
24229 ++ return NULL;
24230 ++
24231 ++ __bfqq = rb_entry(node, struct bfq_queue, pos_node);
24232 ++ if (bfq_rq_close(bfqd, __bfqq->next_rq))
24233 ++ return __bfqq;
24234 ++
24235 ++ return NULL;
24236 ++}
24237 ++
24238 ++/*
24239 ++ * bfqd - obvious
24240 ++ * cur_bfqq - passed in so that we don't decide that the current queue
24241 ++ * is closely cooperating with itself.
24242 ++ *
24243 ++ * We are assuming that cur_bfqq has dispatched at least one request,
24244 ++ * and that bfqd->last_position reflects a position on the disk associated
24245 ++ * with the I/O issued by cur_bfqq.
24246 ++ */
24247 ++static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
24248 ++ struct bfq_queue *cur_bfqq)
24249 ++{
24250 ++ struct bfq_queue *bfqq;
24251 ++
24252 ++ if (bfq_class_idle(cur_bfqq))
24253 ++ return NULL;
24254 ++ if (!bfq_bfqq_sync(cur_bfqq))
24255 ++ return NULL;
24256 ++ if (BFQQ_SEEKY(cur_bfqq))
24257 ++ return NULL;
24258 ++
24259 ++ /* If device has only one backlogged bfq_queue, don't search. */
24260 ++ if (bfqd->busy_queues == 1)
24261 ++ return NULL;
24262 ++
24263 ++ /*
24264 ++ * We should notice if some of the queues are cooperating, e.g.
24265 ++ * working closely on the same area of the disk. In that case,
24266 ++ * we can group them together so that we do not waste time idling.
24267 ++ */
24268 ++ bfqq = bfqq_close(bfqd);
24269 ++ if (bfqq == NULL || bfqq == cur_bfqq)
24270 ++ return NULL;
24271 ++
24272 ++ /*
24273 ++ * Do not merge queues from different bfq_groups.
24274 ++ */
24275 ++ if (bfqq->entity.parent != cur_bfqq->entity.parent)
24276 ++ return NULL;
24277 ++
24278 ++ /*
24279 ++ * It only makes sense to merge sync queues.
24280 ++ */
24281 ++ if (!bfq_bfqq_sync(bfqq))
24282 ++ return NULL;
24283 ++ if (BFQQ_SEEKY(bfqq))
24284 ++ return NULL;
24285 ++
24286 ++ /*
24287 ++ * Do not merge queues of different priority classes.
24288 ++ */
24289 ++ if (bfq_class_rt(bfqq) != bfq_class_rt(cur_bfqq))
24290 ++ return NULL;
24291 ++
24292 ++ return bfqq;
24293 ++}
24294 ++
24295 ++/*
24296 ++ * If enough samples have been computed, return the current max budget
24297 ++ * stored in bfqd, which is dynamically updated according to the
24298 ++ * estimated disk peak rate; otherwise return the default max budget
24299 ++ */
24300 ++static inline unsigned long bfq_max_budget(struct bfq_data *bfqd)
24301 ++{
24302 ++ if (bfqd->budgets_assigned < 194)
24303 ++ return bfq_default_max_budget;
24304 ++ else
24305 ++ return bfqd->bfq_max_budget;
24306 ++}
24307 ++
24308 ++/*
24309 ++ * Return min budget, which is a fraction of the current or default
24310 ++ * max budget (trying with 1/32)
24311 ++ */
24312 ++static inline unsigned long bfq_min_budget(struct bfq_data *bfqd)
24313 ++{
24314 ++ if (bfqd->budgets_assigned < 194)
24315 ++ return bfq_default_max_budget / 32;
24316 ++ else
24317 ++ return bfqd->bfq_max_budget / 32;
24318 ++}
24319 ++
24320 ++/*
24321 ++ * Decides whether idling should be done for given device and
24322 ++ * given active queue.
24323 ++ */
24324 ++static inline bool bfq_queue_nonrot_noidle(struct bfq_data *bfqd,
24325 ++ struct bfq_queue *active_bfqq)
24326 ++{
24327 ++ if (active_bfqq == NULL)
24328 ++ return false;
24329 ++ /*
24330 ++ * If device is SSD it has no seek penalty, disable idling; but
24331 ++ * do so only if:
24332 ++ * - the device supports queuing (hw_tag), otherwise we still have
24333 ++ * a problem with sync vs async workloads;
24334 ++ * - the queue is not weight-raised, to preserve guarantees.
24335 ++ */
24336 ++ return (blk_queue_nonrot(bfqd->queue) && bfqd->hw_tag &&
24337 ++ active_bfqq->raising_coeff == 1);
24338 ++}
24339 ++
24340 ++static void bfq_arm_slice_timer(struct bfq_data *bfqd)
24341 ++{
24342 ++ struct bfq_queue *bfqq = bfqd->active_queue;
24343 ++ struct bfq_io_cq *bic;
24344 ++ unsigned long sl;
24345 ++
24346 ++ WARN_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
24347 ++
24348 ++ /* Tasks have exited, don't wait. */
24349 ++ bic = bfqd->active_bic;
24350 ++ if (bic == NULL || atomic_read(&bic->icq.ioc->active_ref) == 0)
24351 ++ return;
24352 ++
24353 ++ bfq_mark_bfqq_wait_request(bfqq);
24354 ++
24355 ++ /*
24356 ++ * We don't want to idle for seeks, but we do want to allow
24357 ++ * fair distribution of slice time for a process doing back-to-back
24358 ++ * seeks. So allow a little bit of time for it to submit a new rq.
24359 ++ *
24360 ++ * To prevent processes with (partly) seeky workloads from
24361 ++ * being too ill-treated, grant them a small fraction of the
24362 ++ * assigned budget before reducing the waiting time to
24363 ++ * BFQ_MIN_TT. This happened to help reduce latency.
24364 ++ */
24365 ++ sl = bfqd->bfq_slice_idle;
24366 ++ if (bfq_sample_valid(bfqq->seek_samples) && BFQQ_SEEKY(bfqq) &&
24367 ++ bfqq->entity.service > bfq_max_budget(bfqd) / 8 &&
24368 ++ bfqq->raising_coeff == 1)
24369 ++ sl = min(sl, msecs_to_jiffies(BFQ_MIN_TT));
24370 ++ else if (bfqq->raising_coeff > 1)
24371 ++ sl = sl * 3;
24372 ++ bfqd->last_idling_start = ktime_get();
24373 ++ mod_timer(&bfqd->idle_slice_timer, jiffies + sl);
24374 ++ bfq_log(bfqd, "arm idle: %u/%u ms",
24375 ++ jiffies_to_msecs(sl), jiffies_to_msecs(bfqd->bfq_slice_idle));
24376 ++}
24377 ++
24378 ++/*
24379 ++ * Set the maximum time for the active queue to consume its
24380 ++ * budget. This prevents seeky processes from lowering the disk
24381 ++ * throughput (always guaranteed with a time slice scheme as in CFQ).
24382 ++ */
24383 ++static void bfq_set_budget_timeout(struct bfq_data *bfqd)
24384 ++{
24385 ++ struct bfq_queue *bfqq = bfqd->active_queue;
24386 ++ unsigned int timeout_coeff;
24387 ++ if (bfqq->raising_cur_max_time == bfqd->bfq_raising_rt_max_time)
24388 ++ timeout_coeff = 1;
24389 ++ else
24390 ++ timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight;
24391 ++
24392 ++ bfqd->last_budget_start = ktime_get();
24393 ++
24394 ++ bfq_clear_bfqq_budget_new(bfqq);
24395 ++ bfqq->budget_timeout = jiffies +
24396 ++ bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] * timeout_coeff;
24397 ++
24398 ++ bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u",
24399 ++ jiffies_to_msecs(bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] *
24400 ++ timeout_coeff));
24401 ++}
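++
++/*
++ * Note on timeout_coeff above: while a queue is weight-raised for
++ * interactivity, entity.weight equals orig_weight * raising_coeff, so
++ * the budget timeout is stretched by that same factor (e.g., a
++ * hypothetical raising coefficient of 20 would grant 20 base
++ * timeouts); during the short soft real-time raising period the plain
++ * timeout is used instead, keeping such queues on a tight schedule.
++ */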
24402 ++
24403 ++/*
24404 ++ * Move request from internal lists to the request queue dispatch list.
24405 ++ */
24406 ++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq)
24407 ++{
24408 ++ struct bfq_data *bfqd = q->elevator->elevator_data;
24409 ++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
24410 ++
24411 ++ bfq_remove_request(rq);
24412 ++ bfqq->dispatched++;
24413 ++ elv_dispatch_sort(q, rq);
24414 ++
24415 ++ if (bfq_bfqq_sync(bfqq))
24416 ++ bfqd->sync_flight++;
24417 ++}
24418 ++
24419 ++/*
24420 ++ * Return expired entry, or NULL to just start from scratch in rbtree.
24421 ++ */
24422 ++static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
24423 ++{
24424 ++ struct request *rq = NULL;
24425 ++
24426 ++ if (bfq_bfqq_fifo_expire(bfqq))
24427 ++ return NULL;
24428 ++
24429 ++ bfq_mark_bfqq_fifo_expire(bfqq);
24430 ++
24431 ++ if (list_empty(&bfqq->fifo))
24432 ++ return NULL;
24433 ++
24434 ++ rq = rq_entry_fifo(bfqq->fifo.next);
24435 ++
24436 ++ if (time_before(jiffies, rq_fifo_time(rq)))
24437 ++ return NULL;
24438 ++
24439 ++ return rq;
24440 ++}
24441 ++
24442 ++/*
24443 ++ * Must be called with the queue_lock held.
24444 ++ */
24445 ++static int bfqq_process_refs(struct bfq_queue *bfqq)
24446 ++{
24447 ++ int process_refs, io_refs;
24448 ++
24449 ++ io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
24450 ++ process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
24451 ++ BUG_ON(process_refs < 0);
24452 ++ return process_refs;
24453 ++}
24454 ++
24455 ++static void bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
24456 ++{
24457 ++ int process_refs, new_process_refs;
24458 ++ struct bfq_queue *__bfqq;
24459 ++
24460 ++ /*
24461 ++ * If there are no process references on the new_bfqq, then it is
24462 ++ * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
24463 ++ * may have dropped their last reference (not just their last process
24464 ++ * reference).
24465 ++ */
24466 ++ if (!bfqq_process_refs(new_bfqq))
24467 ++ return;
24468 ++
24469 ++ /* Avoid a circular list and skip interim queue merges. */
24470 ++ while ((__bfqq = new_bfqq->new_bfqq)) {
24471 ++ if (__bfqq == bfqq)
24472 ++ return;
24473 ++ new_bfqq = __bfqq;
24474 ++ }
24475 ++
24476 ++ process_refs = bfqq_process_refs(bfqq);
24477 ++ new_process_refs = bfqq_process_refs(new_bfqq);
24478 ++ /*
24479 ++ * If the process for the bfqq has gone away, there is no
24480 ++ * sense in merging the queues.
24481 ++ */
24482 ++ if (process_refs == 0 || new_process_refs == 0)
24483 ++ return;
24484 ++
24485 ++ /*
24486 ++ * Merge in the direction of the lesser amount of work.
24487 ++ */
24488 ++ if (new_process_refs >= process_refs) {
24489 ++ bfqq->new_bfqq = new_bfqq;
24490 ++ atomic_add(process_refs, &new_bfqq->ref);
24491 ++ } else {
24492 ++ new_bfqq->new_bfqq = bfqq;
24493 ++ atomic_add(new_process_refs, &bfqq->ref);
24494 ++ }
24495 ++ bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
24496 ++ new_bfqq->pid);
24497 ++}
24498 ++
24499 ++static inline unsigned long bfq_bfqq_budget_left(struct bfq_queue *bfqq)
24500 ++{
24501 ++ struct bfq_entity *entity = &bfqq->entity;
24502 ++ return entity->budget - entity->service;
24503 ++}
24504 ++
24505 ++static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
24506 ++{
24507 ++ BUG_ON(bfqq != bfqd->active_queue);
24508 ++
24509 ++ __bfq_bfqd_reset_active(bfqd);
24510 ++
24511 ++ /*
24512 ++ * If this bfqq is shared between multiple processes, check
24513 ++ * to make sure that those processes are still issuing I/Os
24514 ++ * within the mean seek distance. If not, it may be time to
24515 ++ * break the queues apart again.
24516 ++ */
24517 ++ if (bfq_bfqq_coop(bfqq) && BFQQ_SEEKY(bfqq))
24518 ++ bfq_mark_bfqq_split_coop(bfqq);
24519 ++
24520 ++ if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
24521 ++ /*
24522 ++ * Overload the budget_timeout field to store the time at
24523 ++ * which the queue was left with no backlog; it is used by
24524 ++ * the weight-raising mechanism.
24525 ++ */
24526 ++ bfqq->budget_timeout = jiffies;
24527 ++ bfq_del_bfqq_busy(bfqd, bfqq, 1);
24528 ++ } else {
24529 ++ bfq_activate_bfqq(bfqd, bfqq);
24530 ++ /*
24531 ++ * Resort priority tree of potential close cooperators.
24532 ++ */
24533 ++ bfq_rq_pos_tree_add(bfqd, bfqq);
24534 ++ }
24535 ++}
24536 ++
24537 ++/**
24538 ++ * __bfq_bfqq_recalc_budget - try to adapt the budget to the @bfqq behavior.
24539 ++ * @bfqd: device data.
24540 ++ * @bfqq: queue to update.
24541 ++ * @reason: reason for expiration.
24542 ++ *
24543 ++ * Handle the feedback on @bfqq budget. See the body for detailed
24544 ++ * comments.
24545 ++ */
24546 ++static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
24547 ++ struct bfq_queue *bfqq,
24548 ++ enum bfqq_expiration reason)
24549 ++{
24550 ++ struct request *next_rq;
24551 ++ unsigned long budget, min_budget;
24552 ++
24553 ++ budget = bfqq->max_budget;
24554 ++ min_budget = bfq_min_budget(bfqd);
24555 ++
24556 ++ BUG_ON(bfqq != bfqd->active_queue);
24557 ++
24558 ++ bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last budg %lu, budg left %lu",
24559 ++ bfqq->entity.budget, bfq_bfqq_budget_left(bfqq));
24560 ++ bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last max_budg %lu, min budg %lu",
24561 ++ budget, bfq_min_budget(bfqd));
24562 ++ bfq_log_bfqq(bfqd, bfqq, "recalc_budg: sync %d, seeky %d",
24563 ++ bfq_bfqq_sync(bfqq), BFQQ_SEEKY(bfqd->active_queue));
24564 ++
24565 ++ if (bfq_bfqq_sync(bfqq)) {
24566 ++ switch (reason) {
24567 ++ /*
24568 ++ * Caveat: in all the following cases we trade latency
24569 ++ * for throughput.
24570 ++ */
24571 ++ case BFQ_BFQQ_TOO_IDLE:
24572 ++ /*
24573 ++ * This is the only case where we may reduce
24574 ++ * the budget: if there are no requests of the
24575 ++ * process still waiting for completion, then
24576 ++ * we assume (tentatively) that the timer has
24577 ++ * expired because the batch of requests of
24578 ++ * the process could have been served with a
24579 ++ * smaller budget. Hence, betting that
24580 ++ * process will behave in the same way when it
24581 ++ * becomes backlogged again, we reduce its
24582 ++ * next budget. As long as we guess right,
24583 ++ * this budget cut reduces the latency
24584 ++ * experienced by the process.
24585 ++ *
24586 ++ * However, if there are still outstanding
24587 ++ * requests, then the process may have not yet
24588 ++ * issued its next request just because it is
24589 ++ * still waiting for the completion of some of
24590 ++ * the still outstanding ones. So in this
24591 ++ * subcase we do not reduce its budget, on the
24592 ++ * contrary we increase it to possibly boost
24593 ++ * the throughput, as discussed in the
24594 ++ * comments to the BUDGET_TIMEOUT case.
24595 ++ */
24596 ++   if (bfqq->dispatched > 0) /* still outstanding reqs */
24597 ++ budget = min(budget * 2, bfqd->bfq_max_budget);
24598 ++ else {
24599 ++ if (budget > 5 * min_budget)
24600 ++ budget -= 4 * min_budget;
24601 ++ else
24602 ++ budget = min_budget;
24603 ++ }
24604 ++ break;
24605 ++ case BFQ_BFQQ_BUDGET_TIMEOUT:
24606 ++ /*
24607 ++ * We double the budget here because: 1) it
24608 ++ * gives the chance to boost the throughput if
24609 ++ * this is not a seeky process (which may have
24610 ++ * bumped into this timeout because of, e.g.,
24611 ++ * ZBR), 2) together with charge_full_budget
24612 ++ * it helps give seeky processes higher
24613 ++ * timestamps, and hence be served less
24614 ++ * frequently.
24615 ++ */
24616 ++ budget = min(budget * 2, bfqd->bfq_max_budget);
24617 ++ break;
24618 ++ case BFQ_BFQQ_BUDGET_EXHAUSTED:
24619 ++ /*
24620 ++ * The process still has backlog, and did not
24621 ++ * let either the budget timeout or the disk
24622 ++ * idling timeout expire. Hence it is not
24623 ++ * seeky, has a short thinktime and may be
24624 ++ * happy with a higher budget too. So
24625 ++ * definitely increase the budget of this good
24626 ++ * candidate to boost the disk throughput.
24627 ++ */
24628 ++ budget = min(budget * 4, bfqd->bfq_max_budget);
24629 ++ break;
24630 ++ case BFQ_BFQQ_NO_MORE_REQUESTS:
24631 ++ /*
24632 ++ * Leave the budget unchanged.
24633 ++ */
24634 ++ default:
24635 ++ return;
24636 ++ }
24637 ++ } else /* async queue */
24638 ++ /* async queues always get the maximum possible budget
24639 ++ * (their ability to dispatch is limited by
24640 ++ * @bfqd->bfq_max_budget_async_rq).
24641 ++ */
24642 ++ budget = bfqd->bfq_max_budget;
24643 ++
24644 ++ bfqq->max_budget = budget;
24645 ++
24646 ++ if (bfqd->budgets_assigned >= 194 && bfqd->bfq_user_max_budget == 0 &&
24647 ++ bfqq->max_budget > bfqd->bfq_max_budget)
24648 ++ bfqq->max_budget = bfqd->bfq_max_budget;
24649 ++
24650 ++ /*
24651 ++ * Make sure that we have enough budget for the next request.
24652 ++ * Since the finish time of the bfqq must be kept in sync with
24653 ++ * the budget, be sure to call __bfq_bfqq_expire() after the
24654 ++ * update.
24655 ++ */
24656 ++ next_rq = bfqq->next_rq;
24657 ++ if (next_rq != NULL)
24658 ++ bfqq->entity.budget = max_t(unsigned long, bfqq->max_budget,
24659 ++ bfq_serv_to_charge(next_rq, bfqq));
24660 ++ else
24661 ++ bfqq->entity.budget = bfqq->max_budget;
24662 ++
24663 ++ bfq_log_bfqq(bfqd, bfqq, "head sect: %u, new budget %lu",
24664 ++ next_rq != NULL ? blk_rq_sectors(next_rq) : 0,
24665 ++ bfqq->entity.budget);
24666 ++}
24667 ++
24668 ++static unsigned long bfq_calc_max_budget(u64 peak_rate, u64 timeout)
24669 ++{
24670 ++ unsigned long max_budget;
24671 ++
24672 ++ /*
24673 ++ * The max_budget calculated when autotuning is equal to the
24674 ++ * number of sectors transferred in timeout_sync at the
24675 ++ * estimated peak rate.
24676 ++ */
24677 ++ max_budget = (unsigned long)(peak_rate * 1000 *
24678 ++ timeout >> BFQ_RATE_SHIFT);
24679 ++
24680 ++ return max_budget;
24681 ++}
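++
++/*
++ * Rough numbers for the formula above: peak_rate is in sectors/usec,
++ * left-shifted by BFQ_RATE_SHIFT, and timeout is in ms, so the product
++ * peak_rate * 1000 * timeout >> BFQ_RATE_SHIFT is a number of sectors.
++ * For instance, with peak_rate = R_nonrot = 34791 (about 0.53
++ * sectors/usec, i.e., roughly 270 MB/s) and a sync timeout of about
++ * 125 ms, the autotuned budget is about 66000 sectors (around 32 MiB).
++ */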
24682 ++
24683 ++/*
24684 ++ * In addition to updating the peak rate, checks whether the process
24685 ++ * is "slow", and returns 1 if so. This slow flag is used, in addition
24686 ++ * to the budget timeout, to reduce the amount of service provided to
24687 ++ * seeky processes, and hence reduce their chances to lower the
24688 ++ * throughput. See the code for more details.
24689 ++ */
24690 ++static int bfq_update_peak_rate(struct bfq_data *bfqd, struct bfq_queue *bfqq,
24691 ++ int compensate, enum bfqq_expiration reason)
24692 ++{
24693 ++ u64 bw, usecs, expected, timeout;
24694 ++ ktime_t delta;
24695 ++ int update = 0;
24696 ++
24697 ++ if (!bfq_bfqq_sync(bfqq) || bfq_bfqq_budget_new(bfqq))
24698 ++ return 0;
24699 ++
24700 ++ if (compensate)
24701 ++ delta = bfqd->last_idling_start;
24702 ++ else
24703 ++ delta = ktime_get();
24704 ++ delta = ktime_sub(delta, bfqd->last_budget_start);
24705 ++ usecs = ktime_to_us(delta);
24706 ++
24707 ++ /* Don't trust short/unrealistic values. */
24708 ++ if (usecs < 100 || usecs >= LONG_MAX)
24709 ++ return 0;
24710 ++
24711 ++ /*
24712 ++ * Calculate the bandwidth for the last slice. We use a 64 bit
24713 ++ * value to store the peak rate, in sectors per usec in fixed
24714 ++ * point math. We do so to have enough precision in the estimate
24715 ++ * and to avoid overflows.
24716 ++ */
24717 ++ bw = (u64)bfqq->entity.service << BFQ_RATE_SHIFT;
24718 ++ do_div(bw, (unsigned long)usecs);
24719 ++
24720 ++ timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
24721 ++
24722 ++ /*
24723 ++ * Use only long (> 20ms) intervals to filter out spikes for
24724 ++ * the peak rate estimation.
24725 ++ */
24726 ++ if (usecs > 20000) {
24727 ++ if (bw > bfqd->peak_rate ||
24728 ++ (!BFQQ_SEEKY(bfqq) &&
24729 ++ reason == BFQ_BFQQ_BUDGET_TIMEOUT)) {
24730 ++ bfq_log(bfqd, "measured bw =%llu", bw);
24731 ++ /*
24732 ++ * To smooth oscillations use a low-pass filter with
24733 ++ * alpha=7/8, i.e.,
24734 ++ * new_rate = (7/8) * old_rate + (1/8) * bw
24735 ++ */
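++  /*
++   * Numeric illustration of the filter (values chosen only as an
++   * example): with an old peak_rate of 40000 and a new sample bw of
++   * 48000, the code below yields 40000 * 7 / 8 + 48000 / 8 = 41000,
++   * so a single high sample only nudges the estimate upwards.
++   */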
24736 ++ do_div(bw, 8);
24737 ++ if (bw == 0)
24738 ++ return 0;
24739 ++ bfqd->peak_rate *= 7;
24740 ++ do_div(bfqd->peak_rate, 8);
24741 ++ bfqd->peak_rate += bw;
24742 ++ update = 1;
24743 ++ bfq_log(bfqd, "new peak_rate=%llu", bfqd->peak_rate);
24744 ++ }
24745 ++
24746 ++ update |= bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES - 1;
24747 ++
24748 ++ if (bfqd->peak_rate_samples < BFQ_PEAK_RATE_SAMPLES)
24749 ++ bfqd->peak_rate_samples++;
24750 ++
24751 ++ if (bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES &&
24752 ++ update && bfqd->bfq_user_max_budget == 0) {
24753 ++ bfqd->bfq_max_budget =
24754 ++ bfq_calc_max_budget(bfqd->peak_rate, timeout);
24755 ++ bfq_log(bfqd, "new max_budget=%lu",
24756 ++ bfqd->bfq_max_budget);
24757 ++ }
24758 ++ }
24759 ++
24760 ++ /*
24761 ++ * If the process has been served for too short a time
24762 ++ * interval to let its possible sequential accesses prevail over
24763 ++ * the initial seek time needed to move the disk head to the
24764 ++ * first sector it requested, then give the process a chance
24765 ++ * and, for the moment, return false.
24766 ++ */
24767 ++ if (bfqq->entity.budget <= bfq_max_budget(bfqd) / 8)
24768 ++ return 0;
24769 ++
24770 ++ /*
24771 ++ * A process is considered ``slow'' (i.e., seeky, so that we
24772 ++ * cannot treat it fairly in the service domain, as it would
24773 ++ * slow down too much the other processes) if, when a slice
24774 ++ * ends for whatever reason, it has received service at a
24775 ++ * rate that would not be high enough to complete the budget
24776 ++ * before the budget timeout expiration.
24777 ++ */
24778 ++ expected = bw * 1000 * timeout >> BFQ_RATE_SHIFT;
24779 ++
24780 ++ /*
24781 ++ * Caveat: processes doing IO in the slower disk zones will
24782 ++ * tend to be slow(er) even if not seeky. And the estimated
24783 ++ * peak rate will actually be an average over the disk
24784 ++ * surface. Hence, to not be too harsh with unlucky processes,
24785 ++ * we keep a budget/3 margin of safety before declaring a
24786 ++ * process slow.
24787 ++ */
24788 ++ return expected > (4 * bfqq->entity.budget) / 3;
24789 ++}
24790 ++
24791 ++/**
24792 ++ * bfq_bfqq_expire - expire a queue.
24793 ++ * @bfqd: device owning the queue.
24794 ++ * @bfqq: the queue to expire.
24795 ++ * @compensate: if true, compensate for the time spent idling.
24796 ++ * @reason: the reason causing the expiration.
24797 ++ *
24798 ++ *
24799 ++ * If the process associated with the queue is slow (i.e., seeky), or in
24800 ++ * case of budget timeout, or, finally, if it is async, we
24801 ++ * artificially charge it an entire budget (independently of the
24802 ++ * actual service it received). As a consequence, the queue will get
24803 ++ * higher timestamps than the correct ones upon reactivation, and
24804 ++ * hence it will be rescheduled as if it had received more service
24805 ++ * than what it actually received. In the end, this class of processes
24806 ++ * will receive less service in proportion to how slowly they consume
24807 ++ * their budgets (and hence how seriously they tend to lower the
24808 ++ * throughput).
24809 ++ *
24810 ++ * In contrast, when a queue expires because it has been idling for
24811 ++ * too long or because it exhausted its budget, we do not touch the
24812 ++ * amount of service it has received. Hence when the queue will be
24813 ++ * reactivated and its timestamps updated, the latter will be in sync
24814 ++ * with the actual service received by the queue until expiration.
24815 ++ *
24816 ++ * Charging a full budget to the first type of queues and the exact
24817 ++ * service to the others has the effect of using the WF2Q+ policy to
24818 ++ * schedule the former on a timeslice basis, without violating the
24819 ++ * service domain guarantees of the latter.
24820 ++ */
24821 ++static void bfq_bfqq_expire(struct bfq_data *bfqd,
24822 ++ struct bfq_queue *bfqq,
24823 ++ int compensate,
24824 ++ enum bfqq_expiration reason)
24825 ++{
24826 ++ int slow;
24827 ++ BUG_ON(bfqq != bfqd->active_queue);
24828 ++
24829 ++ /* Update disk peak rate for autotuning and check whether the
24830 ++ * process is slow (see bfq_update_peak_rate).
24831 ++ */
24832 ++ slow = bfq_update_peak_rate(bfqd, bfqq, compensate, reason);
24833 ++
24834 ++ /*
24835 ++ * As explained above, 'punish' slow (i.e., seeky), timed-out
24836 ++ * and async queues, to favor sequential sync workloads.
24837 ++ *
24838 ++ * Processes doing IO in the slower disk zones will tend to be
24839 ++ * slow(er) even if not seeky. Hence, since the estimated peak
24840 ++ * rate is actually an average over the disk surface, these
24841 ++ * processes may timeout just for bad luck. To avoid punishing
24842 ++ * them we do not charge a full budget to a process that
24843 ++ * succeeded in consuming at least 2/3 of its budget.
24844 ++ */
24845 ++ if (slow || (reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
24846 ++ bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3))
24847 ++ bfq_bfqq_charge_full_budget(bfqq);
24848 ++
24849 ++ if (bfqd->low_latency && bfqq->raising_coeff == 1)
24850 ++ bfqq->last_rais_start_finish = jiffies;
24851 ++
24852 ++ if (bfqd->low_latency && bfqd->bfq_raising_max_softrt_rate > 0) {
24853 ++  if (reason != BFQ_BFQQ_BUDGET_TIMEOUT)
24854 ++ bfqq->soft_rt_next_start =
24855 ++ jiffies +
24856 ++ HZ * bfqq->entity.service /
24857 ++ bfqd->bfq_raising_max_softrt_rate;
24858 ++ else
24859 ++ bfqq->soft_rt_next_start = -1; /* infinity */
24860 ++ }
24861 ++ bfq_log_bfqq(bfqd, bfqq,
24862 ++ "expire (%d, slow %d, num_disp %d, idle_win %d)", reason, slow,
24863 ++ bfqq->dispatched, bfq_bfqq_idle_window(bfqq));
24864 ++
24865 ++ /* Increase, decrease or leave budget unchanged according to reason */
24866 ++ __bfq_bfqq_recalc_budget(bfqd, bfqq, reason);
24867 ++ __bfq_bfqq_expire(bfqd, bfqq);
24868 ++}
24869 ++
24870 ++/*
24871 ++ * Budget timeout is not implemented through a dedicated timer, but
24872 ++ * just checked on request arrivals and completions, as well as on
24873 ++ * idle timer expirations.
24874 ++ */
24875 ++static int bfq_bfqq_budget_timeout(struct bfq_queue *bfqq)
24876 ++{
24877 ++ if (bfq_bfqq_budget_new(bfqq))
24878 ++ return 0;
24879 ++
24880 ++ if (time_before(jiffies, bfqq->budget_timeout))
24881 ++ return 0;
24882 ++
24883 ++ return 1;
24884 ++}
24885 ++
24886 ++/*
24887 ++ * If we expire a queue that is waiting for the arrival of a new
24888 ++ * request, we may prevent the fictitious timestamp backshifting that
24889 ++ * allows the guarantees of the queue to be preserved (see [1] for
24890 ++ * this tricky aspect). Hence we return true only if this condition
24891 ++ * does not hold, or if the queue is slow enough to deserve only to be
24892 ++ * kicked off for preserving a high throughput.
24893 ++ */
24894 ++static inline int bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq)
24895 ++{
24896 ++ bfq_log_bfqq(bfqq->bfqd, bfqq,
24897 ++ "may_budget_timeout: wr %d left %d timeout %d",
24898 ++ bfq_bfqq_wait_request(bfqq),
24899 ++ bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3,
24900 ++ bfq_bfqq_budget_timeout(bfqq));
24901 ++
24902 ++ return (!bfq_bfqq_wait_request(bfqq) ||
24903 ++ bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3)
24904 ++ &&
24905 ++ bfq_bfqq_budget_timeout(bfqq);
24906 ++}
24907 ++
24908 ++/*
24909 ++ * If the active queue is empty, but it is sync and either of the following
24910 ++ * conditions holds, then: 1) the queue must remain active and cannot be
24911 ++ * expired, and 2) the disk must be idled to wait for the possible arrival
24912 ++ * of a new request for the queue. The conditions are:
24913 ++ * - the device is rotational and not performing NCQ, and the queue has its
24914 ++ * idle window set (in this case, waiting for a new request for the queue
24915 ++ * is likely to boost the disk throughput);
24916 ++ * - the queue is weight-raised (waiting for the request is necessary for
24917 ++ * providing the queue with fairness and latency guarantees).
24918 ++ */
24919 ++static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq,
24920 ++ int budg_timeout)
24921 ++{
24922 ++ struct bfq_data *bfqd = bfqq->bfqd;
24923 ++
24924 ++ return (bfq_bfqq_sync(bfqq) && RB_EMPTY_ROOT(&bfqq->sort_list) &&
24925 ++ bfqd->bfq_slice_idle != 0 &&
24926 ++ ((bfq_bfqq_idle_window(bfqq) && !bfqd->hw_tag &&
24927 ++ !blk_queue_nonrot(bfqd->queue))
24928 ++ || bfqq->raising_coeff > 1) &&
24929 ++ (bfqd->rq_in_driver == 0 ||
24930 ++ budg_timeout ||
24931 ++ bfqq->raising_coeff > 1) &&
24932 ++ !bfq_close_cooperator(bfqd, bfqq) &&
24933 ++ (!bfq_bfqq_coop(bfqq) ||
24934 ++ !bfq_bfqq_some_coop_idle(bfqq)) &&
24935 ++ !bfq_queue_nonrot_noidle(bfqd, bfqq));
24936 ++}
24937 ++
24938 ++/*
24939 ++ * Select a queue for service. If we have a current active queue,
24940 ++ * check whether to continue servicing it, or retrieve and set a new one.
24941 ++ */
24942 ++static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
24943 ++{
24944 ++ struct bfq_queue *bfqq, *new_bfqq = NULL;
24945 ++ struct request *next_rq;
24946 ++ enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT;
24947 ++ int budg_timeout;
24948 ++
24949 ++ bfqq = bfqd->active_queue;
24950 ++ if (bfqq == NULL)
24951 ++ goto new_queue;
24952 ++
24953 ++ bfq_log_bfqq(bfqd, bfqq, "select_queue: already active queue");
24954 ++
24955 ++ /*
24956 ++ * If another queue has a request waiting within our mean seek
24957 ++ * distance, let it run. The expire code will check for close
24958 ++ * cooperators and put the close queue at the front of the
24959 ++ * service tree. If possible, merge the expiring queue with the
24960 ++ * new bfqq.
24961 ++ */
24962 ++ new_bfqq = bfq_close_cooperator(bfqd, bfqq);
24963 ++ if (new_bfqq != NULL && bfqq->new_bfqq == NULL)
24964 ++ bfq_setup_merge(bfqq, new_bfqq);
24965 ++
24966 ++ budg_timeout = bfq_may_expire_for_budg_timeout(bfqq);
24967 ++ if (budg_timeout &&
24968 ++ !bfq_bfqq_must_idle(bfqq, budg_timeout))
24969 ++ goto expire;
24970 ++
24971 ++ next_rq = bfqq->next_rq;
24972 ++ /*
24973 ++ * If bfqq has requests queued and it has enough budget left to
24974 ++ * serve them, keep the queue, otherwise expire it.
24975 ++ */
24976 ++ if (next_rq != NULL) {
24977 ++ if (bfq_serv_to_charge(next_rq, bfqq) >
24978 ++ bfq_bfqq_budget_left(bfqq)) {
24979 ++ reason = BFQ_BFQQ_BUDGET_EXHAUSTED;
24980 ++ goto expire;
24981 ++ } else {
24982 ++ /*
24983 ++ * The idle timer may be pending because we may not
24984 ++ * disable disk idling even when a new request arrives
24985 ++ */
24986 ++ if (timer_pending(&bfqd->idle_slice_timer)) {
24987 ++ /*
24988 ++ * If we get here: 1) at least a new request
24989 ++ * has arrived but we have not disabled the
24990 ++ * timer because the request was too small,
24991 ++ * 2) then the block layer has unplugged the
24992 ++ * device, causing the dispatch to be invoked.
24993 ++ *
24994 ++ * Since the device is unplugged, now the
24995 ++ * requests are probably large enough to
24996 ++ * provide a reasonable throughput.
24997 ++ * So we disable idling.
24998 ++ */
24999 ++ bfq_clear_bfqq_wait_request(bfqq);
25000 ++ del_timer(&bfqd->idle_slice_timer);
25001 ++ }
25002 ++ if (new_bfqq == NULL)
25003 ++ goto keep_queue;
25004 ++ else
25005 ++ goto expire;
25006 ++ }
25007 ++ }
25008 ++
25009 ++ /*
25010 ++ * No requests pending. If there is no cooperator, and the active
25011 ++ * queue still has requests in flight or is idling for a new request,
25012 ++ * then keep it.
25013 ++ */
25014 ++ if (new_bfqq == NULL && (timer_pending(&bfqd->idle_slice_timer) ||
25015 ++ (bfqq->dispatched != 0 &&
25016 ++ (bfq_bfqq_idle_window(bfqq) || bfqq->raising_coeff > 1) &&
25017 ++ !bfq_queue_nonrot_noidle(bfqd, bfqq)))) {
25018 ++ bfqq = NULL;
25019 ++ goto keep_queue;
25020 ++ } else if (new_bfqq != NULL && timer_pending(&bfqd->idle_slice_timer)) {
25021 ++ /*
25022 ++ * The queue is being expired because there is a close cooperator;
25023 ++ * cancel the timer.
25024 ++ */
25025 ++ bfq_clear_bfqq_wait_request(bfqq);
25026 ++ del_timer(&bfqd->idle_slice_timer);
25027 ++ }
25028 ++
25029 ++ reason = BFQ_BFQQ_NO_MORE_REQUESTS;
25030 ++expire:
25031 ++ bfq_bfqq_expire(bfqd, bfqq, 0, reason);
25032 ++new_queue:
25033 ++ bfqq = bfq_set_active_queue(bfqd, new_bfqq);
25034 ++ bfq_log(bfqd, "select_queue: new queue %d returned",
25035 ++ bfqq != NULL ? bfqq->pid : 0);
25036 ++keep_queue:
25037 ++ return bfqq;
25038 ++}
25039 ++
25040 ++static void update_raising_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
25041 ++{
25042 ++ if (bfqq->raising_coeff > 1) { /* queue is being boosted */
25043 ++ struct bfq_entity *entity = &bfqq->entity;
25044 ++
25045 ++ bfq_log_bfqq(bfqd, bfqq,
25046 ++ "raising period dur %u/%u msec, "
25047 ++ "old raising coeff %u, w %d(%d)",
25048 ++ jiffies_to_msecs(jiffies -
25049 ++ bfqq->last_rais_start_finish),
25050 ++ jiffies_to_msecs(bfqq->raising_cur_max_time),
25051 ++ bfqq->raising_coeff,
25052 ++ bfqq->entity.weight, bfqq->entity.orig_weight);
25053 ++
25054 ++ BUG_ON(bfqq != bfqd->active_queue && entity->weight !=
25055 ++ entity->orig_weight * bfqq->raising_coeff);
25056 ++  if (entity->ioprio_changed)
25057 ++ bfq_log_bfqq(bfqd, bfqq,
25058 ++ "WARN: pending prio change");
25059 ++ /*
25060 ++ * If too much time has elapsed since the beginning
25061 ++ * of this weight-raising period and the process is not soft
25062 ++ * real-time, stop it.
25063 ++ */
25064 ++ if (jiffies - bfqq->last_rais_start_finish >
25065 ++ bfqq->raising_cur_max_time) {
25066 ++ int soft_rt = bfqd->bfq_raising_max_softrt_rate > 0 &&
25067 ++ bfqq->soft_rt_next_start < jiffies;
25068 ++
25069 ++ bfqq->last_rais_start_finish = jiffies;
25070 ++ if (soft_rt)
25071 ++ bfqq->raising_cur_max_time =
25072 ++ bfqd->bfq_raising_rt_max_time;
25073 ++ else {
25074 ++ bfq_log_bfqq(bfqd, bfqq,
25075 ++   "wrais ending at %llu msec, "
25076 ++ "rais_max_time %u",
25077 ++ bfqq->last_rais_start_finish,
25078 ++ jiffies_to_msecs(bfqq->
25079 ++ raising_cur_max_time));
25080 ++ bfq_bfqq_end_raising(bfqq);
25081 ++ __bfq_entity_update_weight_prio(
25082 ++ bfq_entity_service_tree(entity),
25083 ++ entity);
25084 ++ }
25085 ++ }
25086 ++ }
25087 ++}
25088 ++
25089 ++/*
25090 ++ * Dispatch one request from bfqq, moving it to the request queue
25091 ++ * dispatch list.
25092 ++ */
25093 ++static int bfq_dispatch_request(struct bfq_data *bfqd,
25094 ++ struct bfq_queue *bfqq)
25095 ++{
25096 ++ int dispatched = 0;
25097 ++ struct request *rq;
25098 ++ unsigned long service_to_charge;
25099 ++
25100 ++ BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
25101 ++
25102 ++ /* Follow expired path, else get first next available. */
25103 ++ rq = bfq_check_fifo(bfqq);
25104 ++ if (rq == NULL)
25105 ++ rq = bfqq->next_rq;
25106 ++ service_to_charge = bfq_serv_to_charge(rq, bfqq);
25107 ++
25108 ++ if (service_to_charge > bfq_bfqq_budget_left(bfqq)) {
25109 ++ /*
25110 ++ * This may happen if the next rq is chosen
25111 ++ * in fifo order instead of sector order.
25112 ++ * The budget is properly dimensioned
25113 ++ * to be always sufficient to serve the next request
25114 ++ * only if it is chosen in sector order. The reason is
25115 ++ * that it would be quite inefficient and of little use
25116 ++ * to always make sure that the budget is large enough
25117 ++ * to serve even the possible next rq in fifo order.
25118 ++ * In fact, requests are seldom served in fifo order.
25119 ++ *
25120 ++ * Expire the queue for budget exhaustion, and
25121 ++ * make sure that the next act_budget is enough
25122 ++ * to serve the next request, even if it comes
25123 ++ * from the fifo expired path.
25124 ++ */
25125 ++ bfqq->next_rq = rq;
25126 ++ /*
25127 ++ * Since this dispatch failed, make sure that
25128 ++ * a new one will be performed
25129 ++ */
25130 ++ if (!bfqd->rq_in_driver)
25131 ++ bfq_schedule_dispatch(bfqd);
25132 ++ goto expire;
25133 ++ }
25134 ++
25135 ++ /* Finally, insert request into driver dispatch list. */
25136 ++ bfq_bfqq_served(bfqq, service_to_charge);
25137 ++ bfq_dispatch_insert(bfqd->queue, rq);
25138 ++
25139 ++ update_raising_data(bfqd, bfqq);
25140 ++
25141 ++ bfq_log_bfqq(bfqd, bfqq, "dispatched %u sec req (%llu), "
25142 ++ "budg left %lu",
25143 ++ blk_rq_sectors(rq),
25144 ++ (long long unsigned)blk_rq_pos(rq),
25145 ++ bfq_bfqq_budget_left(bfqq));
25146 ++
25147 ++ dispatched++;
25148 ++
25149 ++ if (bfqd->active_bic == NULL) {
25150 ++ atomic_long_inc(&RQ_BIC(rq)->icq.ioc->refcount);
25151 ++ bfqd->active_bic = RQ_BIC(rq);
25152 ++ }
25153 ++
25154 ++ if (bfqd->busy_queues > 1 && ((!bfq_bfqq_sync(bfqq) &&
25155 ++ dispatched >= bfqd->bfq_max_budget_async_rq) ||
25156 ++ bfq_class_idle(bfqq)))
25157 ++ goto expire;
25158 ++
25159 ++ return dispatched;
25160 ++
25161 ++expire:
25162 ++ bfq_bfqq_expire(bfqd, bfqq, 0, BFQ_BFQQ_BUDGET_EXHAUSTED);
25163 ++ return dispatched;
25164 ++}
25165 ++
25166 ++static int __bfq_forced_dispatch_bfqq(struct bfq_queue *bfqq)
25167 ++{
25168 ++ int dispatched = 0;
25169 ++
25170 ++ while (bfqq->next_rq != NULL) {
25171 ++ bfq_dispatch_insert(bfqq->bfqd->queue, bfqq->next_rq);
25172 ++ dispatched++;
25173 ++ }
25174 ++
25175 ++ BUG_ON(!list_empty(&bfqq->fifo));
25176 ++ return dispatched;
25177 ++}
25178 ++
25179 ++/*
25180 ++ * Drain our current requests. Used for barriers and when switching
25181 ++ * io schedulers on-the-fly.
25182 ++ */
25183 ++static int bfq_forced_dispatch(struct bfq_data *bfqd)
25184 ++{
25185 ++ struct bfq_queue *bfqq, *n;
25186 ++ struct bfq_service_tree *st;
25187 ++ int dispatched = 0;
25188 ++
25189 ++ bfqq = bfqd->active_queue;
25190 ++ if (bfqq != NULL)
25191 ++ __bfq_bfqq_expire(bfqd, bfqq);
25192 ++
25193 ++ /*
25194 ++ * Loop through classes, and be careful to leave the scheduler
25195 ++ * in a consistent state, as feedback mechanisms and vtime
25196 ++ * updates cannot be disabled during the process.
25197 ++ */
25198 ++ list_for_each_entry_safe(bfqq, n, &bfqd->active_list, bfqq_list) {
25199 ++ st = bfq_entity_service_tree(&bfqq->entity);
25200 ++
25201 ++ dispatched += __bfq_forced_dispatch_bfqq(bfqq);
25202 ++ bfqq->max_budget = bfq_max_budget(bfqd);
25203 ++
25204 ++ bfq_forget_idle(st);
25205 ++ }
25206 ++
25207 ++ BUG_ON(bfqd->busy_queues != 0);
25208 ++
25209 ++ return dispatched;
25210 ++}
25211 ++
25212 ++static int bfq_dispatch_requests(struct request_queue *q, int force)
25213 ++{
25214 ++ struct bfq_data *bfqd = q->elevator->elevator_data;
25215 ++ struct bfq_queue *bfqq;
25216 ++ int max_dispatch;
25217 ++
25218 ++ bfq_log(bfqd, "dispatch requests: %d busy queues", bfqd->busy_queues);
25219 ++ if (bfqd->busy_queues == 0)
25220 ++ return 0;
25221 ++
25222 ++ if (unlikely(force))
25223 ++ return bfq_forced_dispatch(bfqd);
25224 ++
25225 ++ if ((bfqq = bfq_select_queue(bfqd)) == NULL)
25226 ++ return 0;
25227 ++
25228 ++ max_dispatch = bfqd->bfq_quantum;
25229 ++ if (bfq_class_idle(bfqq))
25230 ++ max_dispatch = 1;
25231 ++
25232 ++ if (!bfq_bfqq_sync(bfqq))
25233 ++ max_dispatch = bfqd->bfq_max_budget_async_rq;
25234 ++
25235 ++ if (bfqq->dispatched >= max_dispatch) {
25236 ++ if (bfqd->busy_queues > 1)
25237 ++ return 0;
25238 ++ if (bfqq->dispatched >= 4 * max_dispatch)
25239 ++ return 0;
25240 ++ }
25241 ++
25242 ++ if (bfqd->sync_flight != 0 && !bfq_bfqq_sync(bfqq))
25243 ++ return 0;
25244 ++
25245 ++ bfq_clear_bfqq_wait_request(bfqq);
25246 ++ BUG_ON(timer_pending(&bfqd->idle_slice_timer));
25247 ++
25248 ++ if (!bfq_dispatch_request(bfqd, bfqq))
25249 ++ return 0;
25250 ++
25251 ++ bfq_log_bfqq(bfqd, bfqq, "dispatched one request of %d"
25252 ++ "(max_disp %d)", bfqq->pid, max_dispatch);
25253 ++
25254 ++ return 1;
25255 ++}
25256 ++
25257 ++/*
25258 ++ * Task holds one reference to the queue, dropped when task exits. Each rq
25259 ++ * in-flight on this queue also holds a reference, dropped when rq is freed.
25260 ++ *
25261 ++ * Queue lock must be held here.
25262 ++ */
25263 ++static void bfq_put_queue(struct bfq_queue *bfqq)
25264 ++{
25265 ++ struct bfq_data *bfqd = bfqq->bfqd;
25266 ++
25267 ++ BUG_ON(atomic_read(&bfqq->ref) <= 0);
25268 ++
25269 ++ bfq_log_bfqq(bfqd, bfqq, "put_queue: %p %d", bfqq,
25270 ++ atomic_read(&bfqq->ref));
25271 ++ if (!atomic_dec_and_test(&bfqq->ref))
25272 ++ return;
25273 ++
25274 ++ BUG_ON(rb_first(&bfqq->sort_list) != NULL);
25275 ++ BUG_ON(bfqq->allocated[READ] + bfqq->allocated[WRITE] != 0);
25276 ++ BUG_ON(bfqq->entity.tree != NULL);
25277 ++ BUG_ON(bfq_bfqq_busy(bfqq));
25278 ++ BUG_ON(bfqd->active_queue == bfqq);
25279 ++
25280 ++ bfq_log_bfqq(bfqd, bfqq, "put_queue: %p freed", bfqq);
25281 ++
25282 ++ kmem_cache_free(bfq_pool, bfqq);
25283 ++}
25284 ++
25285 ++static void bfq_put_cooperator(struct bfq_queue *bfqq)
25286 ++{
25287 ++ struct bfq_queue *__bfqq, *next;
25288 ++
25289 ++ /*
25290 ++ * If this queue was scheduled to merge with another queue, be
25291 ++ * sure to drop the reference taken on that queue (and others in
25292 ++ * the merge chain). See bfq_setup_merge and bfq_merge_bfqqs.
25293 ++ */
25294 ++ __bfqq = bfqq->new_bfqq;
25295 ++ while (__bfqq) {
25296 ++ if (__bfqq == bfqq) {
25297 ++ WARN(1, "bfqq->new_bfqq loop detected.\n");
25298 ++ break;
25299 ++ }
25300 ++ next = __bfqq->new_bfqq;
25301 ++ bfq_put_queue(__bfqq);
25302 ++ __bfqq = next;
25303 ++ }
25304 ++}
25305 ++
25306 ++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
25307 ++{
25308 ++ if (bfqq == bfqd->active_queue) {
25309 ++ __bfq_bfqq_expire(bfqd, bfqq);
25310 ++ bfq_schedule_dispatch(bfqd);
25311 ++ }
25312 ++
25313 ++ bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq,
25314 ++ atomic_read(&bfqq->ref));
25315 ++
25316 ++ bfq_put_cooperator(bfqq);
25317 ++
25318 ++ bfq_put_queue(bfqq);
25319 ++}
25320 ++
25321 ++static void bfq_init_icq(struct io_cq *icq)
25322 ++{
25323 ++ struct bfq_io_cq *bic = icq_to_bic(icq);
25324 ++
25325 ++ bic->ttime.last_end_request = jiffies;
25326 ++}
25327 ++
25328 ++static void bfq_exit_icq(struct io_cq *icq)
25329 ++{
25330 ++ struct bfq_io_cq *bic = icq_to_bic(icq);
25331 ++ struct bfq_data *bfqd = bic_to_bfqd(bic);
25332 ++
25333 ++ if (bic->bfqq[BLK_RW_ASYNC]) {
25334 ++ bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_ASYNC]);
25335 ++ bic->bfqq[BLK_RW_ASYNC] = NULL;
25336 ++ }
25337 ++
25338 ++ if (bic->bfqq[BLK_RW_SYNC]) {
25339 ++ bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
25340 ++ bic->bfqq[BLK_RW_SYNC] = NULL;
25341 ++ }
25342 ++}
25343 ++
25344 ++/*
25345 ++ * Update the entity prio values; note that the new values will not
25346 ++ * be used until the next (re)activation.
25347 ++ */
25348 ++static void bfq_init_prio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
25349 ++{
25350 ++ struct task_struct *tsk = current;
25351 ++ int ioprio_class;
25352 ++
25353 ++ if (!bfq_bfqq_prio_changed(bfqq))
25354 ++ return;
25355 ++
25356 ++ ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
25357 ++ switch (ioprio_class) {
25358 ++ default:
25359 ++ printk(KERN_ERR "bfq: bad prio %x\n", ioprio_class);
25360 ++ case IOPRIO_CLASS_NONE:
25361 ++ /*
25362 ++ * No prio set, inherit CPU scheduling settings.
25363 ++ */
25364 ++ bfqq->entity.new_ioprio = task_nice_ioprio(tsk);
25365 ++ bfqq->entity.new_ioprio_class = task_nice_ioclass(tsk);
25366 ++ break;
25367 ++ case IOPRIO_CLASS_RT:
25368 ++ bfqq->entity.new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
25369 ++ bfqq->entity.new_ioprio_class = IOPRIO_CLASS_RT;
25370 ++ break;
25371 ++ case IOPRIO_CLASS_BE:
25372 ++ bfqq->entity.new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
25373 ++ bfqq->entity.new_ioprio_class = IOPRIO_CLASS_BE;
25374 ++ break;
25375 ++ case IOPRIO_CLASS_IDLE:
25376 ++ bfqq->entity.new_ioprio_class = IOPRIO_CLASS_IDLE;
25377 ++ bfqq->entity.new_ioprio = 7;
25378 ++ bfq_clear_bfqq_idle_window(bfqq);
25379 ++ break;
25380 ++ }
25381 ++
25382 ++ bfqq->entity.ioprio_changed = 1;
25383 ++
25384 ++ /*
25385 ++ * Keep track of original prio settings in case we have to temporarily
25386 ++ * elevate the priority of this queue.
25387 ++ */
25388 ++ bfqq->org_ioprio = bfqq->entity.new_ioprio;
25389 ++ bfq_clear_bfqq_prio_changed(bfqq);
25390 ++}
25391 ++
25392 ++static void bfq_changed_ioprio(struct bfq_io_cq *bic)
25393 ++{
25394 ++ struct bfq_data *bfqd;
25395 ++ struct bfq_queue *bfqq, *new_bfqq;
25396 ++ struct bfq_group *bfqg;
25397 ++ unsigned long uninitialized_var(flags);
25398 ++ int ioprio = bic->icq.ioc->ioprio;
25399 ++
25400 ++ bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data), &flags);
25401 ++ /*
25402 ++ * This condition may trigger on a newly created bic; be sure to drop
25403 ++ * the lock before returning.
25404 ++ */
25405 ++ if (unlikely(bfqd == NULL) || likely(bic->ioprio == ioprio))
25406 ++ goto out;
25407 ++
25408 ++ bfqq = bic->bfqq[BLK_RW_ASYNC];
25409 ++ if (bfqq != NULL) {
25410 ++ bfqg = container_of(bfqq->entity.sched_data, struct bfq_group,
25411 ++ sched_data);
25412 ++ new_bfqq = bfq_get_queue(bfqd, bfqg, BLK_RW_ASYNC, bic,
25413 ++ GFP_ATOMIC);
25414 ++ if (new_bfqq != NULL) {
25415 ++ bic->bfqq[BLK_RW_ASYNC] = new_bfqq;
25416 ++ bfq_log_bfqq(bfqd, bfqq,
25417 ++ "changed_ioprio: bfqq %p %d",
25418 ++ bfqq, atomic_read(&bfqq->ref));
25419 ++ bfq_put_queue(bfqq);
25420 ++ }
25421 ++ }
25422 ++
25423 ++ bfqq = bic->bfqq[BLK_RW_SYNC];
25424 ++ if (bfqq != NULL)
25425 ++ bfq_mark_bfqq_prio_changed(bfqq);
25426 ++
25427 ++ bic->ioprio = ioprio;
25428 ++
25429 ++out:
25430 ++ bfq_put_bfqd_unlock(bfqd, &flags);
25431 ++}
25432 ++
25433 ++static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
25434 ++ pid_t pid, int is_sync)
25435 ++{
25436 ++ RB_CLEAR_NODE(&bfqq->entity.rb_node);
25437 ++ INIT_LIST_HEAD(&bfqq->fifo);
25438 ++
25439 ++ atomic_set(&bfqq->ref, 0);
25440 ++ bfqq->bfqd = bfqd;
25441 ++
25442 ++ bfq_mark_bfqq_prio_changed(bfqq);
25443 ++
25444 ++ if (is_sync) {
25445 ++ if (!bfq_class_idle(bfqq))
25446 ++ bfq_mark_bfqq_idle_window(bfqq);
25447 ++ bfq_mark_bfqq_sync(bfqq);
25448 ++ }
25449 ++
25450 ++ /* Tentative initial value to trade off between throughput and latency */
25451 ++ bfqq->max_budget = (2 * bfq_max_budget(bfqd)) / 3;
25452 ++ bfqq->pid = pid;
25453 ++
25454 ++ bfqq->raising_coeff = 1;
25455 ++ bfqq->last_rais_start_finish = 0;
25456 ++ bfqq->soft_rt_next_start = -1;
25457 ++}
25458 ++
25459 ++static struct bfq_queue *bfq_find_alloc_queue(struct bfq_data *bfqd,
25460 ++ struct bfq_group *bfqg,
25461 ++ int is_sync,
25462 ++ struct bfq_io_cq *bic,
25463 ++ gfp_t gfp_mask)
25464 ++{
25465 ++ struct bfq_queue *bfqq, *new_bfqq = NULL;
25466 ++
25467 ++retry:
25468 ++ /* bic always exists here */
25469 ++ bfqq = bic_to_bfqq(bic, is_sync);
25470 ++
25471 ++ /*
25472 ++ * Always try a new alloc if we fall back to the OOM bfqq
25473 ++ * originally, since it should just be a temporary situation.
25474 ++ */
25475 ++ if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
25476 ++ bfqq = NULL;
25477 ++ if (new_bfqq != NULL) {
25478 ++ bfqq = new_bfqq;
25479 ++ new_bfqq = NULL;
25480 ++ } else if (gfp_mask & __GFP_WAIT) {
25481 ++ spin_unlock_irq(bfqd->queue->queue_lock);
25482 ++ new_bfqq = kmem_cache_alloc_node(bfq_pool,
25483 ++ gfp_mask | __GFP_ZERO,
25484 ++ bfqd->queue->node);
25485 ++ spin_lock_irq(bfqd->queue->queue_lock);
25486 ++ if (new_bfqq != NULL)
25487 ++ goto retry;
25488 ++ } else {
25489 ++ bfqq = kmem_cache_alloc_node(bfq_pool,
25490 ++ gfp_mask | __GFP_ZERO,
25491 ++ bfqd->queue->node);
25492 ++ }
25493 ++
25494 ++ if (bfqq != NULL) {
25495 ++ bfq_init_bfqq(bfqd, bfqq, current->pid, is_sync);
25496 ++ bfq_log_bfqq(bfqd, bfqq, "allocated");
25497 ++ } else {
25498 ++ bfqq = &bfqd->oom_bfqq;
25499 ++ bfq_log_bfqq(bfqd, bfqq, "using oom bfqq");
25500 ++ }
25501 ++
25502 ++ bfq_init_prio_data(bfqq, bic);
25503 ++ bfq_init_entity(&bfqq->entity, bfqg);
25504 ++ }
25505 ++
25506 ++ if (new_bfqq != NULL)
25507 ++ kmem_cache_free(bfq_pool, new_bfqq);
25508 ++
25509 ++ return bfqq;
25510 ++}
25511 ++
25512 ++static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
25513 ++ struct bfq_group *bfqg,
25514 ++ int ioprio_class, int ioprio)
25515 ++{
25516 ++ switch (ioprio_class) {
25517 ++ case IOPRIO_CLASS_RT:
25518 ++ return &bfqg->async_bfqq[0][ioprio];
25519 ++ case IOPRIO_CLASS_NONE:
25520 ++ ioprio = IOPRIO_NORM;
25521 ++ /* fall through */
25522 ++ case IOPRIO_CLASS_BE:
25523 ++ return &bfqg->async_bfqq[1][ioprio];
25524 ++ case IOPRIO_CLASS_IDLE:
25525 ++ return &bfqg->async_idle_bfqq;
25526 ++ default:
25527 ++ BUG();
25528 ++ }
25529 ++}
25530 ++
25531 ++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
25532 ++ struct bfq_group *bfqg, int is_sync,
25533 ++ struct bfq_io_cq *bic, gfp_t gfp_mask)
25534 ++{
25535 ++ const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
25536 ++ const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
25537 ++ struct bfq_queue **async_bfqq = NULL;
25538 ++ struct bfq_queue *bfqq = NULL;
25539 ++
25540 ++ if (!is_sync) {
25541 ++ async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
25542 ++ ioprio);
25543 ++ bfqq = *async_bfqq;
25544 ++ }
25545 ++
25546 ++ if (bfqq == NULL)
25547 ++ bfqq = bfq_find_alloc_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
25548 ++
25549 ++ /*
25550 ++ * Pin the queue now that it's allocated; scheduler exit will prune it.
25551 ++ */
25552 ++ if (!is_sync && *async_bfqq == NULL) {
25553 ++ atomic_inc(&bfqq->ref);
25554 ++ bfq_log_bfqq(bfqd, bfqq, "get_queue, bfqq not in async: %p, %d",
25555 ++ bfqq, atomic_read(&bfqq->ref));
25556 ++ *async_bfqq = bfqq;
25557 ++ }
25558 ++
25559 ++ atomic_inc(&bfqq->ref);
25560 ++ bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq,
25561 ++ atomic_read(&bfqq->ref));
25562 ++ return bfqq;
25563 ++}
25564 ++
25565 ++static void bfq_update_io_thinktime(struct bfq_data *bfqd,
25566 ++ struct bfq_io_cq *bic)
25567 ++{
25568 ++ unsigned long elapsed = jiffies - bic->ttime.last_end_request;
25569 ++ unsigned long ttime = min(elapsed, 2UL * bfqd->bfq_slice_idle);
25570 ++
25571 ++ bic->ttime.ttime_samples = (7*bic->ttime.ttime_samples + 256) / 8;
25572 ++ bic->ttime.ttime_total = (7*bic->ttime.ttime_total + 256*ttime) / 8;
25573 ++ bic->ttime.ttime_mean = (bic->ttime.ttime_total + 128) / bic->ttime.ttime_samples;
25574 ++}
25575 ++
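The two updates above are a 7/8 exponentially weighted moving average kept in fixed point, where one full sample counts as 256. A small standalone sketch (not part of the patch; names and think-time values are made up) showing how the mean reacts to a late spike:

/* Standalone sketch only: the 7/8 moving average used by
 * bfq_update_io_thinktime(). */
#include <stdio.h>

int main(void)
{
	unsigned long samples = 0, total = 0, mean;
	unsigned long ttimes[] = { 4, 4, 4, 40 };  /* made-up think times */

	for (int i = 0; i < 4; i++) {
		samples = (7 * samples + 256) / 8;
		total = (7 * total + 256 * ttimes[i]) / 8;
		mean = (total + 128) / samples;
		printf("after sample %d: mean %lu\n", i + 1, mean);
	}
	return 0;
}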
25576 ++static void bfq_update_io_seektime(struct bfq_data *bfqd,
25577 ++ struct bfq_queue *bfqq,
25578 ++ struct request *rq)
25579 ++{
25580 ++ sector_t sdist;
25581 ++ u64 total;
25582 ++
25583 ++ if (bfqq->last_request_pos < blk_rq_pos(rq))
25584 ++ sdist = blk_rq_pos(rq) - bfqq->last_request_pos;
25585 ++ else
25586 ++ sdist = bfqq->last_request_pos - blk_rq_pos(rq);
25587 ++
25588 ++ /*
25589 ++ * Don't allow the seek distance to get too large from the
25590 ++ * odd fragment, pagein, etc.
25591 ++ */
25592 ++ if (bfqq->seek_samples == 0) /* first request, not really a seek */
25593 ++ sdist = 0;
25594 ++ else if (bfqq->seek_samples <= 60) /* second & third seek */
25595 ++ sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*1024);
25596 ++ else
25597 ++ sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*64);
25598 ++
25599 ++ bfqq->seek_samples = (7*bfqq->seek_samples + 256) / 8;
25600 ++ bfqq->seek_total = (7*bfqq->seek_total + (u64)256*sdist) / 8;
25601 ++ total = bfqq->seek_total + (bfqq->seek_samples/2);
25602 ++ do_div(total, bfqq->seek_samples);
25603 ++ if (bfq_bfqq_coop(bfqq)) {
25604 ++ /*
25605 ++ * If the mean seektime increases for a (non-seeky) shared
25606 ++ * queue, some cooperator is likely to be idling too much.
25607 ++ * On the contrary, if it decreases, some cooperator has
25608 ++ * probably woken up.
25609 ++ *
25610 ++ */
25611 ++ if ((sector_t)total < bfqq->seek_mean)
25612 ++ bfq_mark_bfqq_some_coop_idle(bfqq);
25613 ++ else if ((sector_t)total > bfqq->seek_mean)
25614 ++ bfq_clear_bfqq_some_coop_idle(bfqq);
25615 ++ }
25616 ++ bfqq->seek_mean = (sector_t)total;
25617 ++
25618 ++ bfq_log_bfqq(bfqd, bfqq, "dist=%llu mean=%llu", (u64)sdist,
25619 ++ (u64)bfqq->seek_mean);
25620 ++}
25621 ++
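The seek mean is maintained with the same 7/8 average, except that each new distance is first capped relative to the current mean, so one pathological seek cannot dominate the estimate. A standalone sketch of that capping (not part of the patch; names and distances are made up):

/* Standalone sketch only: the capped moving average kept by
 * bfq_update_io_seektime(). */
#include <stdio.h>

static unsigned long toy_min(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned long samples = 0, total = 0, mean = 0;
	unsigned long dists[] = { 8, 8, 1UL << 30 };  /* one huge made-up seek */

	for (int i = 0; i < 3; i++) {
		unsigned long sdist = dists[i];

		if (samples == 0)                      /* first request */
			sdist = 0;
		else if (samples <= 60)                /* early samples */
			sdist = toy_min(sdist, mean * 4 + 2 * 1024 * 1024);
		else                                   /* steady state */
			sdist = toy_min(sdist, mean * 4 + 2 * 1024 * 64);

		samples = (7 * samples + 256) / 8;
		total = (7 * total + 256 * sdist) / 8;
		mean = (total + samples / 2) / samples;
		printf("after request %d: capped dist %lu, mean seek %lu\n",
		       i + 1, sdist, mean);
	}
	return 0;
}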
25622 ++/*
25623 ++ * Disable idle window if the process thinks too long or seeks so much that
25624 ++ * it doesn't matter.
25625 ++ */
25626 ++static void bfq_update_idle_window(struct bfq_data *bfqd,
25627 ++ struct bfq_queue *bfqq,
25628 ++ struct bfq_io_cq *bic)
25629 ++{
25630 ++ int enable_idle;
25631 ++
25632 ++ /* Don't idle for async or idle io prio class. */
25633 ++ if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
25634 ++ return;
25635 ++
25636 ++ enable_idle = bfq_bfqq_idle_window(bfqq);
25637 ++
25638 ++ if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
25639 ++ bfqd->bfq_slice_idle == 0 ||
25640 ++ (bfqd->hw_tag && BFQQ_SEEKY(bfqq) &&
25641 ++ bfqq->raising_coeff == 1))
25642 ++ enable_idle = 0;
25643 ++ else if (bfq_sample_valid(bic->ttime.ttime_samples)) {
25644 ++ if (bic->ttime.ttime_mean > bfqd->bfq_slice_idle &&
25645 ++ bfqq->raising_coeff == 1)
25646 ++ enable_idle = 0;
25647 ++ else
25648 ++ enable_idle = 1;
25649 ++ }
25650 ++ bfq_log_bfqq(bfqd, bfqq, "update_idle_window: enable_idle %d",
25651 ++ enable_idle);
25652 ++
25653 ++ if (enable_idle)
25654 ++ bfq_mark_bfqq_idle_window(bfqq);
25655 ++ else
25656 ++ bfq_clear_bfqq_idle_window(bfqq);
25657 ++}
25658 ++
25659 ++/*
25660 ++ * Called when a new fs request (rq) is added to bfqq. Check if there's
25661 ++ * something we should do about it.
25662 ++ */
25663 ++static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
25664 ++ struct request *rq)
25665 ++{
25666 ++ struct bfq_io_cq *bic = RQ_BIC(rq);
25667 ++
25668 ++ if (rq->cmd_flags & REQ_META)
25669 ++ bfqq->meta_pending++;
25670 ++
25671 ++ bfq_update_io_thinktime(bfqd, bic);
25672 ++ bfq_update_io_seektime(bfqd, bfqq, rq);
25673 ++ if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
25674 ++ !BFQQ_SEEKY(bfqq))
25675 ++ bfq_update_idle_window(bfqd, bfqq, bic);
25676 ++
25677 ++ bfq_log_bfqq(bfqd, bfqq,
25678 ++ "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
25679 ++ bfq_bfqq_idle_window(bfqq), BFQQ_SEEKY(bfqq),
25680 ++ (long long unsigned)bfqq->seek_mean);
25681 ++
25682 ++ bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq);
25683 ++
25684 ++ if (bfqq == bfqd->active_queue) {
25685 ++ /*
25686 ++ * If there is just this request queued and the request
25687 ++ * is small, just exit.
25688 ++ * In this way, if the disk is being idled to wait for a new
25689 ++ * request from the active queue, we avoid unplugging the
25690 ++ * device now.
25691 ++ *
25692 ++ * By doing so, we avoid committing the disk to serving
25693 ++ * just a small request. Instead, we wait for
25694 ++ * the block layer to decide when to unplug the device:
25695 ++ * hopefully, new requests will be merged with this
25696 ++ * one quickly, then the device will be unplugged
25697 ++ * and larger requests will be dispatched.
25698 ++ */
25699 ++ if (bfqq->queued[rq_is_sync(rq)] == 1 &&
25700 ++ blk_rq_sectors(rq) < 32) {
25701 ++ return;
25702 ++ }
25703 ++ if (bfq_bfqq_wait_request(bfqq)) {
25704 ++ /*
25705 ++ * If we are waiting for a request for this queue, let
25706 ++ * it rip immediately and flag that we must not expire
25707 ++ * this queue just now.
25708 ++ */
25709 ++ bfq_clear_bfqq_wait_request(bfqq);
25710 ++ del_timer(&bfqd->idle_slice_timer);
25711 ++ /*
25712 ++ * Here we can safely expire the queue, in
25713 ++ * case of budget timeout, without wasting
25714 ++ * guarantees
25715 ++ */
25716 ++ if (bfq_bfqq_budget_timeout(bfqq))
25717 ++ bfq_bfqq_expire(bfqd, bfqq, 0,
25718 ++ BFQ_BFQQ_BUDGET_TIMEOUT);
25719 ++ __blk_run_queue(bfqd->queue);
25720 ++ }
25721 ++ }
25722 ++}
25723 ++
25724 ++static void bfq_insert_request(struct request_queue *q, struct request *rq)
25725 ++{
25726 ++ struct bfq_data *bfqd = q->elevator->elevator_data;
25727 ++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
25728 ++
25729 ++ assert_spin_locked(bfqd->queue->queue_lock);
25730 ++ bfq_init_prio_data(bfqq, RQ_BIC(rq));
25731 ++
25732 ++ bfq_add_rq_rb(rq);
25733 ++
25734 ++ rq_set_fifo_time(rq, jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)]);
25735 ++ list_add_tail(&rq->queuelist, &bfqq->fifo);
25736 ++
25737 ++ bfq_rq_enqueued(bfqd, bfqq, rq);
25738 ++}
25739 ++
25740 ++static void bfq_update_hw_tag(struct bfq_data *bfqd)
25741 ++{
25742 ++ bfqd->max_rq_in_driver = max(bfqd->max_rq_in_driver,
25743 ++ bfqd->rq_in_driver);
25744 ++
25745 ++ if (bfqd->hw_tag == 1)
25746 ++ return;
25747 ++
25748 ++ /*
25749 ++ * This sample is valid if the number of outstanding requests
25750 ++ * is large enough to allow a queueing behavior. Note that the
25751 ++ * sum is not exact, as it's not taking into account deactivated
25752 ++ * requests.
25753 ++ */
25754 ++ if (bfqd->rq_in_driver + bfqd->queued < BFQ_HW_QUEUE_THRESHOLD)
25755 ++ return;
25756 ++
25757 ++ if (bfqd->hw_tag_samples++ < BFQ_HW_QUEUE_SAMPLES)
25758 ++ return;
25759 ++
25760 ++ bfqd->hw_tag = bfqd->max_rq_in_driver > BFQ_HW_QUEUE_THRESHOLD;
25761 ++ bfqd->max_rq_in_driver = 0;
25762 ++ bfqd->hw_tag_samples = 0;
25763 ++}
25764 ++
25765 ++static void bfq_completed_request(struct request_queue *q, struct request *rq)
25766 ++{
25767 ++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
25768 ++ struct bfq_data *bfqd = bfqq->bfqd;
25769 ++ const int sync = rq_is_sync(rq);
25770 ++
25771 ++ bfq_log_bfqq(bfqd, bfqq, "completed %u sects req (%d)",
25772 ++ blk_rq_sectors(rq), sync);
25773 ++
25774 ++ bfq_update_hw_tag(bfqd);
25775 ++
25776 ++ WARN_ON(!bfqd->rq_in_driver);
25777 ++ WARN_ON(!bfqq->dispatched);
25778 ++ bfqd->rq_in_driver--;
25779 ++ bfqq->dispatched--;
25780 ++
25781 ++ if (bfq_bfqq_sync(bfqq))
25782 ++ bfqd->sync_flight--;
25783 ++
25784 ++ if (sync)
25785 ++ RQ_BIC(rq)->ttime.last_end_request = jiffies;
25786 ++
25787 ++ /*
25788 ++ * If this is the active queue, check if it needs to be expired,
25789 ++ * or if we want to idle in case it has no pending requests.
25790 ++ */
25791 ++ if (bfqd->active_queue == bfqq) {
25792 ++ int budg_timeout = bfq_may_expire_for_budg_timeout(bfqq);
25793 ++ if (bfq_bfqq_budget_new(bfqq))
25794 ++ bfq_set_budget_timeout(bfqd);
25795 ++
25796 ++ /* Idling is also disabled for cooperation reasons:
25797 ++ * 1) there is a close cooperator for the queue, or
25798 ++ * 2) the queue is shared and some cooperator is likely
25799 ++ * to be idle (in this case, by not arming the idle timer,
25800 ++ * we try to slow down the queue, to prevent the zones
25801 ++ * of the disk accessed by the active cooperators from
25802 ++ * drifting too far from the zone that will be accessed by
25803 ++ * the currently idle cooperators)
25804 ++ */
25805 ++ if (bfq_bfqq_must_idle(bfqq, budg_timeout))
25806 ++ bfq_arm_slice_timer(bfqd);
25807 ++ else if (budg_timeout)
25808 ++ bfq_bfqq_expire(bfqd, bfqq, 0, BFQ_BFQQ_BUDGET_TIMEOUT);
25809 ++ }
25810 ++
25811 ++ if (!bfqd->rq_in_driver)
25812 ++ bfq_schedule_dispatch(bfqd);
25813 ++}
25814 ++
25815 ++static inline int __bfq_may_queue(struct bfq_queue *bfqq)
25816 ++{
25817 ++ if (bfq_bfqq_wait_request(bfqq) && bfq_bfqq_must_alloc(bfqq)) {
25818 ++ bfq_clear_bfqq_must_alloc(bfqq);
25819 ++ return ELV_MQUEUE_MUST;
25820 ++ }
25821 ++
25822 ++ return ELV_MQUEUE_MAY;
25823 ++}
25824 ++
25825 ++static int bfq_may_queue(struct request_queue *q, int rw)
25826 ++{
25827 ++ struct bfq_data *bfqd = q->elevator->elevator_data;
25828 ++ struct task_struct *tsk = current;
25829 ++ struct bfq_io_cq *bic;
25830 ++ struct bfq_queue *bfqq;
25831 ++
25832 ++ /*
25833 ++ * Don't force setup of a queue from here, as a call to may_queue
25834 ++ * does not necessarily imply that a request actually will be queued.
25835 ++ * So just lookup a possibly existing queue, or return 'may queue'
25836 ++ * if that fails.
25837 ++ */
25838 ++ bic = bfq_bic_lookup(bfqd, tsk->io_context);
25839 ++ if (bic == NULL)
25840 ++ return ELV_MQUEUE_MAY;
25841 ++
25842 ++ bfqq = bic_to_bfqq(bic, rw_is_sync(rw));
25843 ++ if (bfqq != NULL) {
25844 ++ bfq_init_prio_data(bfqq, bic);
25845 ++
25846 ++ return __bfq_may_queue(bfqq);
25847 ++ }
25848 ++
25849 ++ return ELV_MQUEUE_MAY;
25850 ++}
25851 ++
25852 ++/*
25853 ++ * Queue lock held here.
25854 ++ */
25855 ++static void bfq_put_request(struct request *rq)
25856 ++{
25857 ++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
25858 ++
25859 ++ if (bfqq != NULL) {
25860 ++ const int rw = rq_data_dir(rq);
25861 ++
25862 ++ BUG_ON(!bfqq->allocated[rw]);
25863 ++ bfqq->allocated[rw]--;
25864 ++
25865 ++ rq->elv.priv[0] = NULL;
25866 ++ rq->elv.priv[1] = NULL;
25867 ++
25868 ++ bfq_log_bfqq(bfqq->bfqd, bfqq, "put_request %p, %d",
25869 ++ bfqq, atomic_read(&bfqq->ref));
25870 ++ bfq_put_queue(bfqq);
25871 ++ }
25872 ++}
25873 ++
25874 ++static struct bfq_queue *
25875 ++bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
25876 ++ struct bfq_queue *bfqq)
25877 ++{
25878 ++ bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
25879 ++ (long unsigned)bfqq->new_bfqq->pid);
25880 ++ bic_set_bfqq(bic, bfqq->new_bfqq, 1);
25881 ++ bfq_mark_bfqq_coop(bfqq->new_bfqq);
25882 ++ bfq_put_queue(bfqq);
25883 ++ return bic_to_bfqq(bic, 1);
25884 ++}
25885 ++
25886 ++/*
25887 ++ * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
25888 ++ * was the last process referring to said bfqq.
25889 ++ */
25890 ++static struct bfq_queue *
25891 ++bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
25892 ++{
25893 ++ bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
25894 ++ if (bfqq_process_refs(bfqq) == 1) {
25895 ++ bfqq->pid = current->pid;
25896 ++ bfq_clear_bfqq_some_coop_idle(bfqq);
25897 ++ bfq_clear_bfqq_coop(bfqq);
25898 ++ bfq_clear_bfqq_split_coop(bfqq);
25899 ++ return bfqq;
25900 ++ }
25901 ++
25902 ++ bic_set_bfqq(bic, NULL, 1);
25903 ++
25904 ++ bfq_put_cooperator(bfqq);
25905 ++
25906 ++ bfq_put_queue(bfqq);
25907 ++ return NULL;
25908 ++}
25909 ++
25910 ++/*
25911 ++ * Allocate bfq data structures associated with this request.
25912 ++ */
25913 ++static int bfq_set_request(struct request_queue *q, struct request *rq,
25914 ++ struct bio *bio, gfp_t gfp_mask)
25915 ++{
25916 ++ struct bfq_data *bfqd = q->elevator->elevator_data;
25917 ++ struct bfq_io_cq *bic = icq_to_bic(rq->elv.icq);
25918 ++ const int rw = rq_data_dir(rq);
25919 ++ const int is_sync = rq_is_sync(rq);
25920 ++ struct bfq_queue *bfqq;
25921 ++ struct bfq_group *bfqg;
25922 ++ unsigned long flags;
25923 ++
25924 ++ might_sleep_if(gfp_mask & __GFP_WAIT);
25925 ++
25926 ++ bfq_changed_ioprio(bic);
25927 ++
25928 ++ spin_lock_irqsave(q->queue_lock, flags);
25929 ++
25930 ++ if (bic == NULL)
25931 ++ goto queue_fail;
25932 ++
25933 ++ bfqg = bfq_bic_update_cgroup(bic);
25934 ++
25935 ++new_queue:
25936 ++ bfqq = bic_to_bfqq(bic, is_sync);
25937 ++ if (bfqq == NULL || bfqq == &bfqd->oom_bfqq) {
25938 ++ bfqq = bfq_get_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
25939 ++ bic_set_bfqq(bic, bfqq, is_sync);
25940 ++ } else {
25941 ++ /*
25942 ++ * If the queue was seeky for too long, break it apart.
25943 ++ */
25944 ++ if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
25945 ++ bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
25946 ++ bfqq = bfq_split_bfqq(bic, bfqq);
25947 ++ if (!bfqq)
25948 ++ goto new_queue;
25949 ++ }
25950 ++
25951 ++ /*
25952 ++ * Check to see if this queue is scheduled to merge with
25953 ++ * another closely cooperating queue. The merging of queues
25954 ++ * happens here as it must be done in process context.
25955 ++ * The reference on new_bfqq was taken in merge_bfqqs.
25956 ++ */
25957 ++ if (bfqq->new_bfqq != NULL)
25958 ++ bfqq = bfq_merge_bfqqs(bfqd, bic, bfqq);
25959 ++ }
25960 ++
25961 ++ bfqq->allocated[rw]++;
25962 ++ atomic_inc(&bfqq->ref);
25963 ++ bfq_log_bfqq(bfqd, bfqq, "set_request: bfqq %p, %d", bfqq,
25964 ++ atomic_read(&bfqq->ref));
25965 ++
25966 ++ rq->elv.priv[0] = bic;
25967 ++ rq->elv.priv[1] = bfqq;
25968 ++
25969 ++ spin_unlock_irqrestore(q->queue_lock, flags);
25970 ++
25971 ++ return 0;
25972 ++
25973 ++queue_fail:
25974 ++ bfq_schedule_dispatch(bfqd);
25975 ++ spin_unlock_irqrestore(q->queue_lock, flags);
25976 ++
25977 ++ return 1;
25978 ++}
25979 ++
25980 ++static void bfq_kick_queue(struct work_struct *work)
25981 ++{
25982 ++ struct bfq_data *bfqd =
25983 ++ container_of(work, struct bfq_data, unplug_work);
25984 ++ struct request_queue *q = bfqd->queue;
25985 ++
25986 ++ spin_lock_irq(q->queue_lock);
25987 ++ __blk_run_queue(q);
25988 ++ spin_unlock_irq(q->queue_lock);
25989 ++}
25990 ++
25991 ++/*
25992 ++ * Handler of the expiration of the timer running if the active_queue
25993 ++ * is idling inside its time slice.
25994 ++ */
25995 ++static void bfq_idle_slice_timer(unsigned long data)
25996 ++{
25997 ++ struct bfq_data *bfqd = (struct bfq_data *)data;
25998 ++ struct bfq_queue *bfqq;
25999 ++ unsigned long flags;
26000 ++ enum bfqq_expiration reason;
26001 ++
26002 ++ spin_lock_irqsave(bfqd->queue->queue_lock, flags);
26003 ++
26004 ++ bfqq = bfqd->active_queue;
26005 ++ /*
26006 ++ * Theoretical race here: active_queue can be NULL or different
26007 ++ * from the queue that was idling if the timer handler spins on
26008 ++ * the queue_lock and a new request arrives for the current
26009 ++ * queue and there is a full dispatch cycle that changes the
26010 ++ * active_queue. This can hardly happen, but in the worst case
26011 ++ * we just expire a queue too early.
26012 ++ */
26013 ++ if (bfqq != NULL) {
26014 ++ bfq_log_bfqq(bfqd, bfqq, "slice_timer expired");
26015 ++ if (bfq_bfqq_budget_timeout(bfqq))
26016 ++ /*
26017 ++ * Also here the queue can be safely expired
26018 ++ * for budget timeout without wasting
26019 ++ * guarantees
26020 ++ */
26021 ++ reason = BFQ_BFQQ_BUDGET_TIMEOUT;
26022 ++ else if (bfqq->queued[0] == 0 && bfqq->queued[1] == 0)
26023 ++ /*
26024 ++ * The queue may not be empty upon timer expiration,
26025 ++ * because we may not disable the timer when the first
26026 ++ * request of the active queue arrives during
26027 ++ * disk idling
26028 ++ */
26029 ++ reason = BFQ_BFQQ_TOO_IDLE;
26030 ++ else
26031 ++ goto schedule_dispatch;
26032 ++
26033 ++ bfq_bfqq_expire(bfqd, bfqq, 1, reason);
26034 ++ }
26035 ++
26036 ++schedule_dispatch:
26037 ++ bfq_schedule_dispatch(bfqd);
26038 ++
26039 ++ spin_unlock_irqrestore(bfqd->queue->queue_lock, flags);
26040 ++}
26041 ++
26042 ++static void bfq_shutdown_timer_wq(struct bfq_data *bfqd)
26043 ++{
26044 ++ del_timer_sync(&bfqd->idle_slice_timer);
26045 ++ cancel_work_sync(&bfqd->unplug_work);
26046 ++}
26047 ++
26048 ++static inline void __bfq_put_async_bfqq(struct bfq_data *bfqd,
26049 ++ struct bfq_queue **bfqq_ptr)
26050 ++{
26051 ++ struct bfq_group *root_group = bfqd->root_group;
26052 ++ struct bfq_queue *bfqq = *bfqq_ptr;
26053 ++
26054 ++ bfq_log(bfqd, "put_async_bfqq: %p", bfqq);
26055 ++ if (bfqq != NULL) {
26056 ++ bfq_bfqq_move(bfqd, bfqq, &bfqq->entity, root_group);
26057 ++ bfq_log_bfqq(bfqd, bfqq, "put_async_bfqq: putting %p, %d",
26058 ++ bfqq, atomic_read(&bfqq->ref));
26059 ++ bfq_put_queue(bfqq);
26060 ++ *bfqq_ptr = NULL;
26061 ++ }
26062 ++}
26063 ++
26064 ++/*
26065 ++ * Release all the bfqg references to its async queues. If we are
26066 ++ * deallocating the group, these queues may still contain requests, so
26067 ++ * we reparent them to the root cgroup (i.e., the only one that will
26068 ++ * exist for sure until all the requests on a device are gone).
26069 ++ */
26070 ++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
26071 ++{
26072 ++ int i, j;
26073 ++
26074 ++ for (i = 0; i < 2; i++)
26075 ++ for (j = 0; j < IOPRIO_BE_NR; j++)
26076 ++ __bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j]);
26077 ++
26078 ++ __bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq);
26079 ++}
26080 ++
26081 ++static void bfq_exit_queue(struct elevator_queue *e)
26082 ++{
26083 ++ struct bfq_data *bfqd = e->elevator_data;
26084 ++ struct request_queue *q = bfqd->queue;
26085 ++ struct bfq_queue *bfqq, *n;
26086 ++
26087 ++ bfq_shutdown_timer_wq(bfqd);
26088 ++
26089 ++ spin_lock_irq(q->queue_lock);
26090 ++
26091 ++ BUG_ON(bfqd->active_queue != NULL);
26092 ++ list_for_each_entry_safe(bfqq, n, &bfqd->idle_list, bfqq_list)
26093 ++ bfq_deactivate_bfqq(bfqd, bfqq, 0);
26094 ++
26095 ++ bfq_disconnect_groups(bfqd);
26096 ++ spin_unlock_irq(q->queue_lock);
26097 ++
26098 ++ bfq_shutdown_timer_wq(bfqd);
26099 ++
26100 ++ synchronize_rcu();
26101 ++
26102 ++ BUG_ON(timer_pending(&bfqd->idle_slice_timer));
26103 ++
26104 ++ bfq_free_root_group(bfqd);
26105 ++ kfree(bfqd);
26106 ++}
26107 ++
26108 ++static int bfq_init_queue(struct request_queue *q)
26109 ++{
26110 ++ struct bfq_group *bfqg;
26111 ++ struct bfq_data *bfqd;
26112 ++
26113 ++ bfqd = kmalloc_node(sizeof(*bfqd), GFP_KERNEL | __GFP_ZERO, q->node);
26114 ++ if (bfqd == NULL)
26115 ++ return -ENOMEM;
26116 ++
26117 ++ /*
26118 ++ * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues.
26119 ++ * Grab a permanent reference to it, so that the normal code flow
26120 ++ * will not attempt to free it.
26121 ++ */
26122 ++ bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, 1, 0);
26123 ++ atomic_inc(&bfqd->oom_bfqq.ref);
26124 ++
26125 ++ bfqd->queue = q;
26126 ++ q->elevator->elevator_data = bfqd;
26127 ++
26128 ++ bfqg = bfq_alloc_root_group(bfqd, q->node);
26129 ++ if (bfqg == NULL) {
26130 ++ kfree(bfqd);
26131 ++ return -ENOMEM;
26132 ++ }
26133 ++
26134 ++ bfqd->root_group = bfqg;
26135 ++
26136 ++ init_timer(&bfqd->idle_slice_timer);
26137 ++ bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
26138 ++ bfqd->idle_slice_timer.data = (unsigned long)bfqd;
26139 ++
26140 ++ bfqd->rq_pos_tree = RB_ROOT;
26141 ++
26142 ++ INIT_WORK(&bfqd->unplug_work, bfq_kick_queue);
26143 ++
26144 ++ INIT_LIST_HEAD(&bfqd->active_list);
26145 ++ INIT_LIST_HEAD(&bfqd->idle_list);
26146 ++
26147 ++ bfqd->hw_tag = -1;
26148 ++
26149 ++ bfqd->bfq_max_budget = bfq_default_max_budget;
26150 ++
26151 ++ bfqd->bfq_quantum = bfq_quantum;
26152 ++ bfqd->bfq_fifo_expire[0] = bfq_fifo_expire[0];
26153 ++ bfqd->bfq_fifo_expire[1] = bfq_fifo_expire[1];
26154 ++ bfqd->bfq_back_max = bfq_back_max;
26155 ++ bfqd->bfq_back_penalty = bfq_back_penalty;
26156 ++ bfqd->bfq_slice_idle = bfq_slice_idle;
26157 ++ bfqd->bfq_class_idle_last_service = 0;
26158 ++ bfqd->bfq_max_budget_async_rq = bfq_max_budget_async_rq;
26159 ++ bfqd->bfq_timeout[BLK_RW_ASYNC] = bfq_timeout_async;
26160 ++ bfqd->bfq_timeout[BLK_RW_SYNC] = bfq_timeout_sync;
26161 ++
26162 ++ bfqd->low_latency = true;
26163 ++
26164 ++ bfqd->bfq_raising_coeff = 20;
26165 ++ bfqd->bfq_raising_rt_max_time = msecs_to_jiffies(300);
26166 ++ bfqd->bfq_raising_max_time = 0;
26167 ++ bfqd->bfq_raising_min_idle_time = msecs_to_jiffies(2000);
26168 ++ bfqd->bfq_raising_min_inter_arr_async = msecs_to_jiffies(500);
26169 ++ bfqd->bfq_raising_max_softrt_rate = 7000;
26170 ++
26171 ++ /* Initially estimate the device's peak rate as the reference rate */
26172 ++ if (blk_queue_nonrot(bfqd->queue)) {
26173 ++ bfqd->RT_prod = R_nonrot * T_nonrot;
26174 ++ bfqd->peak_rate = R_nonrot;
26175 ++ } else {
26176 ++ bfqd->RT_prod = R_rot * T_rot;
26177 ++ bfqd->peak_rate = R_rot;
26178 ++ }
26179 ++
26180 ++ return 0;
26181 ++}
26182 ++
26183 ++static void bfq_slab_kill(void)
26184 ++{
26185 ++ if (bfq_pool != NULL)
26186 ++ kmem_cache_destroy(bfq_pool);
26187 ++}
26188 ++
26189 ++static int __init bfq_slab_setup(void)
26190 ++{
26191 ++ bfq_pool = KMEM_CACHE(bfq_queue, 0);
26192 ++ if (bfq_pool == NULL)
26193 ++ return -ENOMEM;
26194 ++ return 0;
26195 ++}
26196 ++
26197 ++static ssize_t bfq_var_show(unsigned int var, char *page)
26198 ++{
26199 ++ return sprintf(page, "%d\n", var);
26200 ++}
26201 ++
26202 ++static ssize_t bfq_var_store(unsigned long *var, const char *page, size_t count)
26203 ++{
26204 ++ unsigned long new_val;
26205 ++ int ret = strict_strtoul(page, 10, &new_val);
26206 ++
26207 ++ if (ret == 0)
26208 ++ *var = new_val;
26209 ++
26210 ++ return count;
26211 ++}
26212 ++
26213 ++static ssize_t bfq_raising_max_time_show(struct elevator_queue *e, char *page)
26214 ++{
26215 ++ struct bfq_data *bfqd = e->elevator_data;
26216 ++ return sprintf(page, "%d\n", bfqd->bfq_raising_max_time > 0 ?
26217 ++ jiffies_to_msecs(bfqd->bfq_raising_max_time) :
26218 ++ jiffies_to_msecs(bfq_wrais_duration(bfqd)));
26219 ++}
26220 ++
26221 ++static ssize_t bfq_weights_show(struct elevator_queue *e, char *page)
26222 ++{
26223 ++ struct bfq_queue *bfqq;
26224 ++ struct bfq_data *bfqd = e->elevator_data;
26225 ++ ssize_t num_char = 0;
26226 ++
26227 ++ num_char += sprintf(page + num_char, "Tot reqs queued %d\n\n",
26228 ++ bfqd->queued);
26229 ++
26230 ++ spin_lock_irq(bfqd->queue->queue_lock);
26231 ++
26232 ++ num_char += sprintf(page + num_char, "Active:\n");
26233 ++ list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list) {
26234 ++ num_char += sprintf(page + num_char,
26235 ++ "pid%d: weight %hu, nr_queued %d %d,"
26236 ++ " dur %d/%u\n",
26237 ++ bfqq->pid,
26238 ++ bfqq->entity.weight,
26239 ++ bfqq->queued[0],
26240 ++ bfqq->queued[1],
26241 ++ jiffies_to_msecs(jiffies -
26242 ++ bfqq->last_rais_start_finish),
26243 ++ jiffies_to_msecs(bfqq->raising_cur_max_time));
26244 ++ }
26245 ++
26246 ++ num_char += sprintf(page + num_char, "Idle:\n");
26247 ++ list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list) {
26248 ++ num_char += sprintf(page + num_char,
26249 ++ "pid%d: weight %hu, dur %d/%u\n",
26250 ++ bfqq->pid,
26251 ++ bfqq->entity.weight,
26252 ++ jiffies_to_msecs(jiffies -
26253 ++ bfqq->last_rais_start_finish),
26254 ++ jiffies_to_msecs(bfqq->raising_cur_max_time));
26255 ++ }
26256 ++
26257 ++ spin_unlock_irq(bfqd->queue->queue_lock);
26258 ++
26259 ++ return num_char;
26260 ++}
26261 ++
26262 ++#define SHOW_FUNCTION(__FUNC, __VAR, __CONV) \
26263 ++static ssize_t __FUNC(struct elevator_queue *e, char *page) \
26264 ++{ \
26265 ++ struct bfq_data *bfqd = e->elevator_data; \
26266 ++ unsigned int __data = __VAR; \
26267 ++ if (__CONV) \
26268 ++ __data = jiffies_to_msecs(__data); \
26269 ++ return bfq_var_show(__data, (page)); \
26270 ++}
26271 ++SHOW_FUNCTION(bfq_quantum_show, bfqd->bfq_quantum, 0);
26272 ++SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 1);
26273 ++SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 1);
26274 ++SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0);
26275 ++SHOW_FUNCTION(bfq_back_seek_penalty_show, bfqd->bfq_back_penalty, 0);
26276 ++SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 1);
26277 ++SHOW_FUNCTION(bfq_max_budget_show, bfqd->bfq_user_max_budget, 0);
26278 ++SHOW_FUNCTION(bfq_max_budget_async_rq_show, bfqd->bfq_max_budget_async_rq, 0);
26279 ++SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout[BLK_RW_SYNC], 1);
26280 ++SHOW_FUNCTION(bfq_timeout_async_show, bfqd->bfq_timeout[BLK_RW_ASYNC], 1);
26281 ++SHOW_FUNCTION(bfq_low_latency_show, bfqd->low_latency, 0);
26282 ++SHOW_FUNCTION(bfq_raising_coeff_show, bfqd->bfq_raising_coeff, 0);
26283 ++SHOW_FUNCTION(bfq_raising_rt_max_time_show, bfqd->bfq_raising_rt_max_time, 1);
26284 ++SHOW_FUNCTION(bfq_raising_min_idle_time_show, bfqd->bfq_raising_min_idle_time,
26285 ++ 1);
26286 ++SHOW_FUNCTION(bfq_raising_min_inter_arr_async_show,
26287 ++ bfqd->bfq_raising_min_inter_arr_async,
26288 ++ 1);
26289 ++SHOW_FUNCTION(bfq_raising_max_softrt_rate_show,
26290 ++ bfqd->bfq_raising_max_softrt_rate, 0);
26291 ++#undef SHOW_FUNCTION
26292 ++
26293 ++#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \
26294 ++static ssize_t \
26295 ++__FUNC(struct elevator_queue *e, const char *page, size_t count) \
26296 ++{ \
26297 ++ struct bfq_data *bfqd = e->elevator_data; \
26298 ++ unsigned long uninitialized_var(__data); \
26299 ++ int ret = bfq_var_store(&__data, (page), count); \
26300 ++ if (__data < (MIN)) \
26301 ++ __data = (MIN); \
26302 ++ else if (__data > (MAX)) \
26303 ++ __data = (MAX); \
26304 ++ if (__CONV) \
26305 ++ *(__PTR) = msecs_to_jiffies(__data); \
26306 ++ else \
26307 ++ *(__PTR) = __data; \
26308 ++ return ret; \
26309 ++}
26310 ++STORE_FUNCTION(bfq_quantum_store, &bfqd->bfq_quantum, 1, INT_MAX, 0);
26311 ++STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1,
26312 ++ INT_MAX, 1);
26313 ++STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1,
26314 ++ INT_MAX, 1);
26315 ++STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0);
26316 ++STORE_FUNCTION(bfq_back_seek_penalty_store, &bfqd->bfq_back_penalty, 1,
26317 ++ INT_MAX, 0);
26318 ++STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 1);
26319 ++STORE_FUNCTION(bfq_max_budget_async_rq_store, &bfqd->bfq_max_budget_async_rq,
26320 ++ 1, INT_MAX, 0);
26321 ++STORE_FUNCTION(bfq_timeout_async_store, &bfqd->bfq_timeout[BLK_RW_ASYNC], 0,
26322 ++ INT_MAX, 1);
26323 ++STORE_FUNCTION(bfq_raising_coeff_store, &bfqd->bfq_raising_coeff, 1,
26324 ++ INT_MAX, 0);
26325 ++STORE_FUNCTION(bfq_raising_max_time_store, &bfqd->bfq_raising_max_time, 0,
26326 ++ INT_MAX, 1);
26327 ++STORE_FUNCTION(bfq_raising_rt_max_time_store, &bfqd->bfq_raising_rt_max_time, 0,
26328 ++ INT_MAX, 1);
26329 ++STORE_FUNCTION(bfq_raising_min_idle_time_store,
26330 ++ &bfqd->bfq_raising_min_idle_time, 0, INT_MAX, 1);
26331 ++STORE_FUNCTION(bfq_raising_min_inter_arr_async_store,
26332 ++ &bfqd->bfq_raising_min_inter_arr_async, 0, INT_MAX, 1);
26333 ++STORE_FUNCTION(bfq_raising_max_softrt_rate_store,
26334 ++ &bfqd->bfq_raising_max_softrt_rate, 0, INT_MAX, 0);
26335 ++#undef STORE_FUNCTION
26336 ++
26337 ++/* do nothing for the moment */
26338 ++static ssize_t bfq_weights_store(struct elevator_queue *e,
26339 ++ const char *page, size_t count)
26340 ++{
26341 ++ return count;
26342 ++}
26343 ++
26344 ++static inline unsigned long bfq_estimated_max_budget(struct bfq_data *bfqd)
26345 ++{
26346 ++ u64 timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
26347 ++
26348 ++ if (bfqd->peak_rate_samples >= BFQ_PEAK_RATE_SAMPLES)
26349 ++ return bfq_calc_max_budget(bfqd->peak_rate, timeout);
26350 ++ else
26351 ++ return bfq_default_max_budget;
26352 ++}
26353 ++
26354 ++static ssize_t bfq_max_budget_store(struct elevator_queue *e,
26355 ++ const char *page, size_t count)
26356 ++{
26357 ++ struct bfq_data *bfqd = e->elevator_data;
26358 ++ unsigned long uninitialized_var(__data);
26359 ++ int ret = bfq_var_store(&__data, (page), count);
26360 ++
26361 ++ if (__data == 0)
26362 ++ bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
26363 ++ else {
26364 ++ if (__data > INT_MAX)
26365 ++ __data = INT_MAX;
26366 ++ bfqd->bfq_max_budget = __data;
26367 ++ }
26368 ++
26369 ++ bfqd->bfq_user_max_budget = __data;
26370 ++
26371 ++ return ret;
26372 ++}
26373 ++
26374 ++static ssize_t bfq_timeout_sync_store(struct elevator_queue *e,
26375 ++ const char *page, size_t count)
26376 ++{
26377 ++ struct bfq_data *bfqd = e->elevator_data;
26378 ++ unsigned long uninitialized_var(__data);
26379 ++ int ret = bfq_var_store(&__data, (page), count);
26380 ++
26381 ++ if (__data < 1)
26382 ++ __data = 1;
26383 ++ else if (__data > INT_MAX)
26384 ++ __data = INT_MAX;
26385 ++
26386 ++ bfqd->bfq_timeout[BLK_RW_SYNC] = msecs_to_jiffies(__data);
26387 ++ if (bfqd->bfq_user_max_budget == 0)
26388 ++ bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
26389 ++
26390 ++ return ret;
26391 ++}
26392 ++
26393 ++static ssize_t bfq_low_latency_store(struct elevator_queue *e,
26394 ++ const char *page, size_t count)
26395 ++{
26396 ++ struct bfq_data *bfqd = e->elevator_data;
26397 ++ unsigned long uninitialized_var(__data);
26398 ++ int ret = bfq_var_store(&__data, (page), count);
26399 ++
26400 ++ if (__data > 1)
26401 ++ __data = 1;
26402 ++ if (__data == 0 && bfqd->low_latency != 0)
26403 ++ bfq_end_raising(bfqd);
26404 ++ bfqd->low_latency = __data;
26405 ++
26406 ++ return ret;
26407 ++}
26408 ++
26409 ++#define BFQ_ATTR(name) \
26410 ++ __ATTR(name, S_IRUGO|S_IWUSR, bfq_##name##_show, bfq_##name##_store)
26411 ++
26412 ++static struct elv_fs_entry bfq_attrs[] = {
26413 ++ BFQ_ATTR(quantum),
26414 ++ BFQ_ATTR(fifo_expire_sync),
26415 ++ BFQ_ATTR(fifo_expire_async),
26416 ++ BFQ_ATTR(back_seek_max),
26417 ++ BFQ_ATTR(back_seek_penalty),
26418 ++ BFQ_ATTR(slice_idle),
26419 ++ BFQ_ATTR(max_budget),
26420 ++ BFQ_ATTR(max_budget_async_rq),
26421 ++ BFQ_ATTR(timeout_sync),
26422 ++ BFQ_ATTR(timeout_async),
26423 ++ BFQ_ATTR(low_latency),
26424 ++ BFQ_ATTR(raising_coeff),
26425 ++ BFQ_ATTR(raising_max_time),
26426 ++ BFQ_ATTR(raising_rt_max_time),
26427 ++ BFQ_ATTR(raising_min_idle_time),
26428 ++ BFQ_ATTR(raising_min_inter_arr_async),
26429 ++ BFQ_ATTR(raising_max_softrt_rate),
26430 ++ BFQ_ATTR(weights),
26431 ++ __ATTR_NULL
26432 ++};
26433 ++
26434 ++static struct elevator_type iosched_bfq = {
26435 ++ .ops = {
26436 ++ .elevator_merge_fn = bfq_merge,
26437 ++ .elevator_merged_fn = bfq_merged_request,
26438 ++ .elevator_merge_req_fn = bfq_merged_requests,
26439 ++ .elevator_allow_merge_fn = bfq_allow_merge,
26440 ++ .elevator_dispatch_fn = bfq_dispatch_requests,
26441 ++ .elevator_add_req_fn = bfq_insert_request,
26442 ++ .elevator_activate_req_fn = bfq_activate_request,
26443 ++ .elevator_deactivate_req_fn = bfq_deactivate_request,
26444 ++ .elevator_completed_req_fn = bfq_completed_request,
26445 ++ .elevator_former_req_fn = elv_rb_former_request,
26446 ++ .elevator_latter_req_fn = elv_rb_latter_request,
26447 ++ .elevator_init_icq_fn = bfq_init_icq,
26448 ++ .elevator_exit_icq_fn = bfq_exit_icq,
26449 ++ .elevator_set_req_fn = bfq_set_request,
26450 ++ .elevator_put_req_fn = bfq_put_request,
26451 ++ .elevator_may_queue_fn = bfq_may_queue,
26452 ++ .elevator_init_fn = bfq_init_queue,
26453 ++ .elevator_exit_fn = bfq_exit_queue,
26454 ++ },
26455 ++ .icq_size = sizeof(struct bfq_io_cq),
26456 ++ .icq_align = __alignof__(struct bfq_io_cq),
26457 ++ .elevator_attrs = bfq_attrs,
26458 ++ .elevator_name = "bfq",
26459 ++ .elevator_owner = THIS_MODULE,
26460 ++};
26461 ++
26462 ++static int __init bfq_init(void)
26463 ++{
26464 ++ /*
26465 ++ * Can be 0 on HZ < 1000 setups.
26466 ++ */
26467 ++ if (bfq_slice_idle == 0)
26468 ++ bfq_slice_idle = 1;
26469 ++
26470 ++ if (bfq_timeout_async == 0)
26471 ++ bfq_timeout_async = 1;
26472 ++
26473 ++ if (bfq_slab_setup())
26474 ++ return -ENOMEM;
26475 ++
26476 ++ elv_register(&iosched_bfq);
26477 ++
26478 ++ return 0;
26479 ++}
26480 ++
26481 ++static void __exit bfq_exit(void)
26482 ++{
26483 ++ elv_unregister(&iosched_bfq);
26484 ++ bfq_slab_kill();
26485 ++}
26486 ++
26487 ++module_init(bfq_init);
26488 ++module_exit(bfq_exit);
26489 ++
26490 ++MODULE_AUTHOR("Fabio Checconi, Paolo Valente");
26491 ++MODULE_LICENSE("GPL");
26492 ++MODULE_DESCRIPTION("Budget Fair Queueing IO scheduler");
26493 +diff --git a/block/bfq-sched.c b/block/bfq-sched.c
26494 +new file mode 100644
26495 +index 0000000..03f8061
26496 +--- /dev/null
26497 ++++ b/block/bfq-sched.c
26498 +@@ -0,0 +1,1072 @@
26499 ++/*
26500 ++ * BFQ: Hierarchical B-WF2Q+ scheduler.
26501 ++ *
26502 ++ * Based on ideas and code from CFQ:
26503 ++ * Copyright (C) 2003 Jens Axboe <axboe@××××××.dk>
26504 ++ *
26505 ++ * Copyright (C) 2008 Fabio Checconi <fabio@×××××××××××××.it>
26506 ++ * Paolo Valente <paolo.valente@×××××××.it>
26507 ++ *
26508 ++ * Copyright (C) 2010 Paolo Valente <paolo.valente@×××××××.it>
26509 ++ */
26510 ++
26511 ++#ifdef CONFIG_CGROUP_BFQIO
26512 ++#define for_each_entity(entity) \
26513 ++ for (; entity != NULL; entity = entity->parent)
26514 ++
26515 ++#define for_each_entity_safe(entity, parent) \
26516 ++ for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
26517 ++
26518 ++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
26519 ++ int extract,
26520 ++ struct bfq_data *bfqd);
26521 ++
26522 ++static inline void bfq_update_budget(struct bfq_entity *next_active)
26523 ++{
26524 ++ struct bfq_entity *bfqg_entity;
26525 ++ struct bfq_group *bfqg;
26526 ++ struct bfq_sched_data *group_sd;
26527 ++
26528 ++ BUG_ON(next_active == NULL);
26529 ++
26530 ++ group_sd = next_active->sched_data;
26531 ++
26532 ++ bfqg = container_of(group_sd, struct bfq_group, sched_data);
26533 ++ /*
26534 ++ * bfq_group's my_entity field is not NULL only if the group
26535 ++ * is not the root group. We must not touch the root entity
26536 ++ * as it must never become an active entity.
26537 ++ */
26538 ++ bfqg_entity = bfqg->my_entity;
26539 ++ if (bfqg_entity != NULL)
26540 ++ bfqg_entity->budget = next_active->budget;
26541 ++}
26542 ++
26543 ++static int bfq_update_next_active(struct bfq_sched_data *sd)
26544 ++{
26545 ++ struct bfq_entity *next_active;
26546 ++
26547 ++ if (sd->active_entity != NULL)
26548 ++ /* will update/requeue at the end of service */
26549 ++ return 0;
26550 ++
26551 ++ /*
26552 ++ * NOTE: this can be improved in many ways, such as returning
26553 ++ * 1 (and thus propagating the update upwards) only when the
26554 ++ * budget changes, or caching the bfqq that will be scheduled
26555 ++ * next from this subtree. For now we worry more about
26556 ++ * correctness than about performance...
26557 ++ */
26558 ++ next_active = bfq_lookup_next_entity(sd, 0, NULL);
26559 ++ sd->next_active = next_active;
26560 ++
26561 ++ if (next_active != NULL)
26562 ++ bfq_update_budget(next_active);
26563 ++
26564 ++ return 1;
26565 ++}
26566 ++
26567 ++static inline void bfq_check_next_active(struct bfq_sched_data *sd,
26568 ++ struct bfq_entity *entity)
26569 ++{
26570 ++ BUG_ON(sd->next_active != entity);
26571 ++}
26572 ++#else
26573 ++#define for_each_entity(entity) \
26574 ++ for (; entity != NULL; entity = NULL)
26575 ++
26576 ++#define for_each_entity_safe(entity, parent) \
26577 ++ for (parent = NULL; entity != NULL; entity = parent)
26578 ++
26579 ++static inline int bfq_update_next_active(struct bfq_sched_data *sd)
26580 ++{
26581 ++ return 0;
26582 ++}
26583 ++
26584 ++static inline void bfq_check_next_active(struct bfq_sched_data *sd,
26585 ++ struct bfq_entity *entity)
26586 ++{
26587 ++}
26588 ++
26589 ++static inline void bfq_update_budget(struct bfq_entity *next_active)
26590 ++{
26591 ++}
26592 ++#endif
26593 ++
26594 ++/*
26595 ++ * Shift for timestamp calculations. This actually limits the maximum
26596 ++ * service allowed in one timestamp delta (small shift values increase it),
26597 ++ * the maximum total weight that can be used for the queues in the system
26598 ++ * (big shift values increase it), and the period of virtual time wraparounds.
26599 ++ */
26600 ++#define WFQ_SERVICE_SHIFT 22
26601 ++
26602 ++/**
26603 ++ * bfq_gt - compare two timestamps.
26604 ++ * @a: first ts.
26605 ++ * @b: second ts.
26606 ++ *
26607 ++ * Return @a > @b, dealing with wrapping correctly.
26608 ++ */
26609 ++static inline int bfq_gt(u64 a, u64 b)
26610 ++{
26611 ++ return (s64)(a - b) > 0;
26612 ++}
26613 ++
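The cast-and-subtract comparison above stays correct across u64 wraparound as long as the two timestamps are less than 2^63 apart. A small standalone demonstration (not part of the patch; toy_gt mirrors bfq_gt):

/* Standalone sketch only: why the signed-difference trick handles
 * wraparound where a plain comparison does not. */
#include <stdio.h>
#include <stdint.h>

static int toy_gt(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) > 0;
}

int main(void)
{
	uint64_t before_wrap = UINT64_MAX - 5;  /* just before wrapping */
	uint64_t after_wrap = 10;               /* just after wrapping */

	printf("plain  a > b: %d\n", after_wrap > before_wrap);        /* 0 */
	printf("toy_gt a > b: %d\n", toy_gt(after_wrap, before_wrap)); /* 1 */
	return 0;
}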
26614 ++static inline struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity)
26615 ++{
26616 ++ struct bfq_queue *bfqq = NULL;
26617 ++
26618 ++ BUG_ON(entity == NULL);
26619 ++
26620 ++ if (entity->my_sched_data == NULL)
26621 ++ bfqq = container_of(entity, struct bfq_queue, entity);
26622 ++
26623 ++ return bfqq;
26624 ++}
26625 ++
26626 ++
26627 ++/**
26628 ++ * bfq_delta - map service into the virtual time domain.
26629 ++ * @service: amount of service.
26630 ++ * @weight: scale factor (weight of an entity or weight sum).
26631 ++ */
26632 ++static inline u64 bfq_delta(unsigned long service,
26633 ++ unsigned long weight)
26634 ++{
26635 ++ u64 d = (u64)service << WFQ_SERVICE_SHIFT;
26636 ++
26637 ++ do_div(d, weight);
26638 ++ return d;
26639 ++}
26640 ++
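In other words, an entity's virtual-time consumption is its service scaled by 2^WFQ_SERVICE_SHIFT and divided by its weight, so a heavier entity advances its finish time more slowly for the same amount of service. A standalone sketch (not part of the patch; names and the service value are made up):

/* Standalone sketch only: the service-to-virtual-time mapping used by
 * bfq_delta(). */
#include <stdio.h>
#include <stdint.h>

#define TOY_SERVICE_SHIFT 22

static uint64_t toy_delta(unsigned long service, unsigned long weight)
{
	return ((uint64_t)service << TOY_SERVICE_SHIFT) / weight;
}

int main(void)
{
	unsigned long service = 4096;  /* hypothetical service, in sectors */

	printf("weight 1: vtime delta %llu\n",
	       (unsigned long long)toy_delta(service, 1));
	printf("weight 8: vtime delta %llu\n",
	       (unsigned long long)toy_delta(service, 8));
	return 0;
}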
26641 ++/**
26642 ++ * bfq_calc_finish - assign the finish time to an entity.
26643 ++ * @entity: the entity to act upon.
26644 ++ * @service: the service to be charged to the entity.
26645 ++ */
26646 ++static inline void bfq_calc_finish(struct bfq_entity *entity,
26647 ++ unsigned long service)
26648 ++{
26649 ++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
26650 ++
26651 ++ BUG_ON(entity->weight == 0);
26652 ++
26653 ++ entity->finish = entity->start +
26654 ++ bfq_delta(service, entity->weight);
26655 ++
26656 ++ if (bfqq != NULL) {
26657 ++ bfq_log_bfqq(bfqq->bfqd, bfqq,
26658 ++ "calc_finish: serv %lu, w %d",
26659 ++ service, entity->weight);
26660 ++ bfq_log_bfqq(bfqq->bfqd, bfqq,
26661 ++ "calc_finish: start %llu, finish %llu, delta %llu",
26662 ++ entity->start, entity->finish,
26663 ++ bfq_delta(service, entity->weight));
26664 ++ }
26665 ++}
26666 ++
26667 ++/**
26668 ++ * bfq_entity_of - get an entity from a node.
26669 ++ * @node: the node field of the entity.
26670 ++ *
26671 ++ * Convert a node pointer to the relative entity. This is used only
26672 ++ * to simplify the logic of some functions and not as the generic
26673 ++ * conversion mechanism because, e.g., in the tree walking functions,
26674 ++ * the check for a %NULL value would be redundant.
26675 ++ */
26676 ++static inline struct bfq_entity *bfq_entity_of(struct rb_node *node)
26677 ++{
26678 ++ struct bfq_entity *entity = NULL;
26679 ++
26680 ++ if (node != NULL)
26681 ++ entity = rb_entry(node, struct bfq_entity, rb_node);
26682 ++
26683 ++ return entity;
26684 ++}
26685 ++
26686 ++/**
26687 ++ * bfq_extract - remove an entity from a tree.
26688 ++ * @root: the tree root.
26689 ++ * @entity: the entity to remove.
26690 ++ */
26691 ++static inline void bfq_extract(struct rb_root *root,
26692 ++ struct bfq_entity *entity)
26693 ++{
26694 ++ BUG_ON(entity->tree != root);
26695 ++
26696 ++ entity->tree = NULL;
26697 ++ rb_erase(&entity->rb_node, root);
26698 ++}
26699 ++
26700 ++/**
26701 ++ * bfq_idle_extract - extract an entity from the idle tree.
26702 ++ * @st: the service tree of the owning @entity.
26703 ++ * @entity: the entity being removed.
26704 ++ */
26705 ++static void bfq_idle_extract(struct bfq_service_tree *st,
26706 ++ struct bfq_entity *entity)
26707 ++{
26708 ++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
26709 ++ struct rb_node *next;
26710 ++
26711 ++ BUG_ON(entity->tree != &st->idle);
26712 ++
26713 ++ if (entity == st->first_idle) {
26714 ++ next = rb_next(&entity->rb_node);
26715 ++ st->first_idle = bfq_entity_of(next);
26716 ++ }
26717 ++
26718 ++ if (entity == st->last_idle) {
26719 ++ next = rb_prev(&entity->rb_node);
26720 ++ st->last_idle = bfq_entity_of(next);
26721 ++ }
26722 ++
26723 ++ bfq_extract(&st->idle, entity);
26724 ++
26725 ++ if (bfqq != NULL)
26726 ++ list_del(&bfqq->bfqq_list);
26727 ++}
26728 ++
26729 ++/**
26730 ++ * bfq_insert - generic tree insertion.
26731 ++ * @root: tree root.
26732 ++ * @entity: entity to insert.
26733 ++ *
26734 ++ * This is used for the idle and the active tree, since they are both
26735 ++ * ordered by finish time.
26736 ++ */
26737 ++static void bfq_insert(struct rb_root *root, struct bfq_entity *entity)
26738 ++{
26739 ++ struct bfq_entity *entry;
26740 ++ struct rb_node **node = &root->rb_node;
26741 ++ struct rb_node *parent = NULL;
26742 ++
26743 ++ BUG_ON(entity->tree != NULL);
26744 ++
26745 ++ while (*node != NULL) {
26746 ++ parent = *node;
26747 ++ entry = rb_entry(parent, struct bfq_entity, rb_node);
26748 ++
26749 ++ if (bfq_gt(entry->finish, entity->finish))
26750 ++ node = &parent->rb_left;
26751 ++ else
26752 ++ node = &parent->rb_right;
26753 ++ }
26754 ++
26755 ++ rb_link_node(&entity->rb_node, parent, node);
26756 ++ rb_insert_color(&entity->rb_node, root);
26757 ++
26758 ++ entity->tree = root;
26759 ++}
26760 ++
26761 ++/**
26762 ++ * bfq_update_min - update the min_start field of an entity.
26763 ++ * @entity: the entity to update.
26764 ++ * @node: one of its children.
26765 ++ *
26766 ++ * This function is called when @entity may store an invalid value for
26767 ++ * min_start due to updates to the active tree. The function assumes
26768 ++ * that the subtree rooted at @node (which may be its left or its right
26769 ++ * child) has a valid min_start value.
26770 ++ */
26771 ++static inline void bfq_update_min(struct bfq_entity *entity,
26772 ++ struct rb_node *node)
26773 ++{
26774 ++ struct bfq_entity *child;
26775 ++
26776 ++ if (node != NULL) {
26777 ++ child = rb_entry(node, struct bfq_entity, rb_node);
26778 ++ if (bfq_gt(entity->min_start, child->min_start))
26779 ++ entity->min_start = child->min_start;
26780 ++ }
26781 ++}
26782 ++
26783 ++/**
26784 ++ * bfq_update_active_node - recalculate min_start.
26785 ++ * @node: the node to update.
26786 ++ *
26787 ++ * @node may have changed position or one of its children may have moved,
26788 ++ * this function updates its min_start value. The left and right subtrees
26789 ++ * are assumed to hold a correct min_start value.
26790 ++ */
26791 ++static inline void bfq_update_active_node(struct rb_node *node)
26792 ++{
26793 ++ struct bfq_entity *entity = rb_entry(node, struct bfq_entity, rb_node);
26794 ++
26795 ++ entity->min_start = entity->start;
26796 ++ bfq_update_min(entity, node->rb_right);
26797 ++ bfq_update_min(entity, node->rb_left);
26798 ++}
26799 ++
26800 ++/**
26801 ++ * bfq_update_active_tree - update min_start for the whole active tree.
26802 ++ * @node: the starting node.
26803 ++ *
26804 ++ * @node must be the deepest modified node after an update. This function
26805 ++ * updates its min_start using the values held by its children, assuming
26806 ++ * that they did not change, and then updates all the nodes that may have
26807 ++ * changed in the path to the root. The only nodes that may have changed
26808 ++ * are the ones in the path or their siblings.
26809 ++ */
26810 ++static void bfq_update_active_tree(struct rb_node *node)
26811 ++{
26812 ++ struct rb_node *parent;
26813 ++
26814 ++up:
26815 ++ bfq_update_active_node(node);
26816 ++
26817 ++ parent = rb_parent(node);
26818 ++ if (parent == NULL)
26819 ++ return;
26820 ++
26821 ++ if (node == parent->rb_left && parent->rb_right != NULL)
26822 ++ bfq_update_active_node(parent->rb_right);
26823 ++ else if (parent->rb_left != NULL)
26824 ++ bfq_update_active_node(parent->rb_left);
26825 ++
26826 ++ node = parent;
26827 ++ goto up;
26828 ++}
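/*
 * [Editor's illustrative sketch, not part of the patch] The two helpers
 * above maintain an "augmented" tree: every node caches the minimum
 * start time found in its subtree.  A minimal stand-alone model of that
 * idea, using a hypothetical toy_node type instead of rb_node/bfq_entity:
 */
struct toy_node {
	unsigned long long start;      /* S_i of this entity */
	unsigned long long min_start;  /* minimum S_i over the subtree */
	struct toy_node *left, *right;
};

/* Mirrors bfq_update_min(): pull a smaller min_start up from a child. */
static void toy_update_min(struct toy_node *node, struct toy_node *child)
{
	if (child != NULL && child->min_start < node->min_start)
		node->min_start = child->min_start;
}

/* Mirrors bfq_update_active_node(): recompute min_start from children. */
static void toy_update_node(struct toy_node *node)
{
	node->min_start = node->start;
	toy_update_min(node, node->left);
	toy_update_min(node, node->right);
}

/*
 * With root->start == 30, root->left->start == 10 and
 * root->right->start == 50 (both leaves already up to date),
 * toy_update_node(root) leaves root->min_start == 10, so a lookup can
 * tell in O(1) whether a subtree contains an eligible (start <= vtime)
 * entity.  The real code walks this update up to the root after every
 * insertion or extraction, as bfq_update_active_tree() does, and uses
 * bfq_gt() instead of plain comparisons to cope with timestamp
 * wraparound.
 */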
26829 ++
26830 ++/**
26831 ++ * bfq_active_insert - insert an entity in the active tree of its group/device.
26832 ++ * @st: the service tree of the entity.
26833 ++ * @entity: the entity being inserted.
26834 ++ *
26835 ++ * The active tree is ordered by finish time, but an extra key is kept
 26836 ++ * for each node, containing the minimum value for the start times of
26837 ++ * its children (and the node itself), so it's possible to search for
26838 ++ * the eligible node with the lowest finish time in logarithmic time.
26839 ++ */
26840 ++static void bfq_active_insert(struct bfq_service_tree *st,
26841 ++ struct bfq_entity *entity)
26842 ++{
26843 ++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
26844 ++ struct rb_node *node = &entity->rb_node;
26845 ++
26846 ++ bfq_insert(&st->active, entity);
26847 ++
26848 ++ if (node->rb_left != NULL)
26849 ++ node = node->rb_left;
26850 ++ else if (node->rb_right != NULL)
26851 ++ node = node->rb_right;
26852 ++
26853 ++ bfq_update_active_tree(node);
26854 ++
26855 ++ if (bfqq != NULL)
26856 ++ list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list);
26857 ++}
26858 ++
26859 ++/**
26860 ++ * bfq_ioprio_to_weight - calc a weight from an ioprio.
26861 ++ * @ioprio: the ioprio value to convert.
26862 ++ */
26863 ++static unsigned short bfq_ioprio_to_weight(int ioprio)
26864 ++{
26865 ++ WARN_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
26866 ++ return IOPRIO_BE_NR - ioprio;
26867 ++}
26868 ++
26869 ++/**
26870 ++ * bfq_weight_to_ioprio - calc an ioprio from a weight.
26871 ++ * @weight: the weight value to convert.
26872 ++ *
 26873 ++ * To preserve as much as possible the old ioprio-only user interface,
 26874 ++ * 0 is used as an escape ioprio value for weights (numerically) equal to
 26875 ++ * or larger than IOPRIO_BE_NR.
26876 ++ */
26877 ++static unsigned short bfq_weight_to_ioprio(int weight)
26878 ++{
26879 ++ WARN_ON(weight < BFQ_MIN_WEIGHT || weight > BFQ_MAX_WEIGHT);
26880 ++ return IOPRIO_BE_NR - weight < 0 ? 0 : IOPRIO_BE_NR - weight;
26881 ++}
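/*
 * [Editor's worked example, not part of the patch] The two conversions
 * above simply mirror an ioprio around IOPRIO_BE_NR (8 on Linux):
 *
 *   ioprio 0 (highest BE priority)  ->  weight 8
 *   ioprio 4 (the default)          ->  weight 4
 *   ioprio 7 (lowest BE priority)   ->  weight 1
 *
 * and bfq_weight_to_ioprio() clamps the result at 0, so any weight of
 * IOPRIO_BE_NR or more (e.g. a cgroup weight of 100) maps back to
 * ioprio 0, the "escape value" mentioned in the comment above.
 */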
26882 ++
26883 ++static inline void bfq_get_entity(struct bfq_entity *entity)
26884 ++{
26885 ++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
26886 ++ struct bfq_sched_data *sd;
26887 ++
26888 ++ if (bfqq != NULL) {
26889 ++ sd = entity->sched_data;
26890 ++ atomic_inc(&bfqq->ref);
26891 ++ bfq_log_bfqq(bfqq->bfqd, bfqq, "get_entity: %p %d",
26892 ++ bfqq, atomic_read(&bfqq->ref));
26893 ++ }
26894 ++}
26895 ++
26896 ++/**
26897 ++ * bfq_find_deepest - find the deepest node that an extraction can modify.
26898 ++ * @node: the node being removed.
26899 ++ *
26900 ++ * Do the first step of an extraction in an rb tree, looking for the
26901 ++ * node that will replace @node, and returning the deepest node that
26902 ++ * the following modifications to the tree can touch. If @node is the
26903 ++ * last node in the tree return %NULL.
26904 ++ */
26905 ++static struct rb_node *bfq_find_deepest(struct rb_node *node)
26906 ++{
26907 ++ struct rb_node *deepest;
26908 ++
26909 ++ if (node->rb_right == NULL && node->rb_left == NULL)
26910 ++ deepest = rb_parent(node);
26911 ++ else if (node->rb_right == NULL)
26912 ++ deepest = node->rb_left;
26913 ++ else if (node->rb_left == NULL)
26914 ++ deepest = node->rb_right;
26915 ++ else {
26916 ++ deepest = rb_next(node);
26917 ++ if (deepest->rb_right != NULL)
26918 ++ deepest = deepest->rb_right;
26919 ++ else if (rb_parent(deepest) != node)
26920 ++ deepest = rb_parent(deepest);
26921 ++ }
26922 ++
26923 ++ return deepest;
26924 ++}
26925 ++
26926 ++/**
26927 ++ * bfq_active_extract - remove an entity from the active tree.
26928 ++ * @st: the service_tree containing the tree.
26929 ++ * @entity: the entity being removed.
26930 ++ */
26931 ++static void bfq_active_extract(struct bfq_service_tree *st,
26932 ++ struct bfq_entity *entity)
26933 ++{
26934 ++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
26935 ++ struct rb_node *node;
26936 ++
26937 ++ node = bfq_find_deepest(&entity->rb_node);
26938 ++ bfq_extract(&st->active, entity);
26939 ++
26940 ++ if (node != NULL)
26941 ++ bfq_update_active_tree(node);
26942 ++
26943 ++ if (bfqq != NULL)
26944 ++ list_del(&bfqq->bfqq_list);
26945 ++}
26946 ++
26947 ++/**
26948 ++ * bfq_idle_insert - insert an entity into the idle tree.
26949 ++ * @st: the service tree containing the tree.
26950 ++ * @entity: the entity to insert.
26951 ++ */
26952 ++static void bfq_idle_insert(struct bfq_service_tree *st,
26953 ++ struct bfq_entity *entity)
26954 ++{
26955 ++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
26956 ++ struct bfq_entity *first_idle = st->first_idle;
26957 ++ struct bfq_entity *last_idle = st->last_idle;
26958 ++
26959 ++ if (first_idle == NULL || bfq_gt(first_idle->finish, entity->finish))
26960 ++ st->first_idle = entity;
26961 ++ if (last_idle == NULL || bfq_gt(entity->finish, last_idle->finish))
26962 ++ st->last_idle = entity;
26963 ++
26964 ++ bfq_insert(&st->idle, entity);
26965 ++
26966 ++ if (bfqq != NULL)
26967 ++ list_add(&bfqq->bfqq_list, &bfqq->bfqd->idle_list);
26968 ++}
26969 ++
26970 ++/**
26971 ++ * bfq_forget_entity - remove an entity from the wfq trees.
26972 ++ * @st: the service tree.
26973 ++ * @entity: the entity being removed.
26974 ++ *
26975 ++ * Update the device status and forget everything about @entity, putting
26976 ++ * the device reference to it, if it is a queue. Entities belonging to
26977 ++ * groups are not refcounted.
26978 ++ */
26979 ++static void bfq_forget_entity(struct bfq_service_tree *st,
26980 ++ struct bfq_entity *entity)
26981 ++{
26982 ++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
26983 ++ struct bfq_sched_data *sd;
26984 ++
26985 ++ BUG_ON(!entity->on_st);
26986 ++
26987 ++ entity->on_st = 0;
26988 ++ st->wsum -= entity->weight;
26989 ++ if (bfqq != NULL) {
26990 ++ sd = entity->sched_data;
26991 ++ bfq_log_bfqq(bfqq->bfqd, bfqq, "forget_entity: %p %d",
26992 ++ bfqq, atomic_read(&bfqq->ref));
26993 ++ bfq_put_queue(bfqq);
26994 ++ }
26995 ++}
26996 ++
26997 ++/**
26998 ++ * bfq_put_idle_entity - release the idle tree ref of an entity.
26999 ++ * @st: service tree for the entity.
27000 ++ * @entity: the entity being released.
27001 ++ */
27002 ++static void bfq_put_idle_entity(struct bfq_service_tree *st,
27003 ++ struct bfq_entity *entity)
27004 ++{
27005 ++ bfq_idle_extract(st, entity);
27006 ++ bfq_forget_entity(st, entity);
27007 ++}
27008 ++
27009 ++/**
27010 ++ * bfq_forget_idle - update the idle tree if necessary.
27011 ++ * @st: the service tree to act upon.
27012 ++ *
27013 ++ * To preserve the global O(log N) complexity we only remove one entry here;
27014 ++ * as the idle tree will not grow indefinitely this can be done safely.
27015 ++ */
27016 ++static void bfq_forget_idle(struct bfq_service_tree *st)
27017 ++{
27018 ++ struct bfq_entity *first_idle = st->first_idle;
27019 ++ struct bfq_entity *last_idle = st->last_idle;
27020 ++
27021 ++ if (RB_EMPTY_ROOT(&st->active) && last_idle != NULL &&
27022 ++ !bfq_gt(last_idle->finish, st->vtime)) {
27023 ++ /*
27024 ++ * Forget the whole idle tree, increasing the vtime past
27025 ++ * the last finish time of idle entities.
27026 ++ */
27027 ++ st->vtime = last_idle->finish;
27028 ++ }
27029 ++
27030 ++ if (first_idle != NULL && !bfq_gt(first_idle->finish, st->vtime))
27031 ++ bfq_put_idle_entity(st, first_idle);
27032 ++}
27033 ++
27034 ++static struct bfq_service_tree *
27035 ++__bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
27036 ++ struct bfq_entity *entity)
27037 ++{
27038 ++ struct bfq_service_tree *new_st = old_st;
27039 ++
27040 ++ if (entity->ioprio_changed) {
27041 ++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
27042 ++
27043 ++ BUG_ON(old_st->wsum < entity->weight);
27044 ++ old_st->wsum -= entity->weight;
27045 ++
27046 ++ if (entity->new_weight != entity->orig_weight) {
27047 ++ entity->orig_weight = entity->new_weight;
27048 ++ entity->ioprio =
27049 ++ bfq_weight_to_ioprio(entity->orig_weight);
27050 ++ } else if (entity->new_ioprio != entity->ioprio) {
27051 ++ entity->ioprio = entity->new_ioprio;
27052 ++ entity->orig_weight =
27053 ++ bfq_ioprio_to_weight(entity->ioprio);
27054 ++ } else
27055 ++ entity->new_weight = entity->orig_weight =
27056 ++ bfq_ioprio_to_weight(entity->ioprio);
27057 ++
27058 ++ entity->ioprio_class = entity->new_ioprio_class;
27059 ++ entity->ioprio_changed = 0;
27060 ++
27061 ++ /*
 27062 ++	 * NOTE: here we may be changing the weight too early;
27063 ++ * this will cause unfairness. The correct approach
27064 ++ * would have required additional complexity to defer
27065 ++ * weight changes to the proper time instants (i.e.,
27066 ++ * when entity->finish <= old_st->vtime).
27067 ++ */
27068 ++ new_st = bfq_entity_service_tree(entity);
27069 ++ entity->weight = entity->orig_weight *
27070 ++ (bfqq != NULL ? bfqq->raising_coeff : 1);
27071 ++ new_st->wsum += entity->weight;
27072 ++
27073 ++ if (new_st != old_st)
27074 ++ entity->start = new_st->vtime;
27075 ++ }
27076 ++
27077 ++ return new_st;
27078 ++}
27079 ++
27080 ++/**
27081 ++ * bfq_bfqq_served - update the scheduler status after selection for service.
27082 ++ * @bfqq: the queue being served.
27083 ++ * @served: bytes to transfer.
27084 ++ *
27085 ++ * NOTE: this can be optimized, as the timestamps of upper level entities
 27086 ++ * are synchronized every time a new bfqq is selected for service. For now,
27087 ++ * we keep it to better check consistency.
27088 ++ */
27089 ++static void bfq_bfqq_served(struct bfq_queue *bfqq, unsigned long served)
27090 ++{
27091 ++ struct bfq_entity *entity = &bfqq->entity;
27092 ++ struct bfq_service_tree *st;
27093 ++
27094 ++ for_each_entity(entity) {
27095 ++ st = bfq_entity_service_tree(entity);
27096 ++
27097 ++ entity->service += served;
27098 ++ BUG_ON(entity->service > entity->budget);
27099 ++ BUG_ON(st->wsum == 0);
27100 ++
27101 ++ st->vtime += bfq_delta(served, st->wsum);
27102 ++ bfq_forget_idle(st);
27103 ++ }
27104 ++ bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %lu secs", served);
27105 ++}
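/*
 * [Editor's worked example, not part of the patch] The key line above is
 * st->vtime += bfq_delta(served, st->wsum): the virtual time advances in
 * proportion to served / wsum.  With two backlogged queues of weights 4
 * and 6, wsum == 10, so serving 20 sectors advances vtime by 20/10 = 2
 * units of (scaled) service, against 20/4 = 5 for a single queue of
 * weight 4.  The larger the total weight on the tree, the slower vtime
 * moves per sector served, which is what lets each entity's start/finish
 * timestamps encode its weighted share of the device.
 */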
27106 ++
27107 ++/**
27108 ++ * bfq_bfqq_charge_full_budget - set the service to the entity budget.
27109 ++ * @bfqq: the queue that needs a service update.
27110 ++ *
27111 ++ * When it's not possible to be fair in the service domain, because
27112 ++ * a queue is not consuming its budget fast enough (the meaning of
27113 ++ * fast depends on the timeout parameter), we charge it a full
27114 ++ * budget. In this way we should obtain a sort of time-domain
27115 ++ * fairness among all the seeky/slow queues.
27116 ++ */
27117 ++static inline void bfq_bfqq_charge_full_budget(struct bfq_queue *bfqq)
27118 ++{
27119 ++ struct bfq_entity *entity = &bfqq->entity;
27120 ++
27121 ++ bfq_log_bfqq(bfqq->bfqd, bfqq, "charge_full_budget");
27122 ++
27123 ++ bfq_bfqq_served(bfqq, entity->budget - entity->service);
27124 ++}
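/*
 * [Editor's worked example, not part of the patch] If a queue with a
 * budget of 16384 sectors is expired after consuming only 4096 (e.g.
 * because it was too slow or too seeky), the call above accounts the
 * remaining 16384 - 4096 = 12288 sectors as if they had been served.
 * The queue's next timestamps are therefore computed as if the whole
 * budget had been used, so slow queues pay in the time domain for
 * service they did not actually consume.
 */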
27125 ++
27126 ++/**
27127 ++ * __bfq_activate_entity - activate an entity.
27128 ++ * @entity: the entity being activated.
27129 ++ *
27130 ++ * Called whenever an entity is activated, i.e., it is not active and one
27131 ++ * of its children receives a new request, or has to be reactivated due to
27132 ++ * budget exhaustion. It uses the current budget of the entity (and the
27133 ++ * service received if @entity is active) of the queue to calculate its
27134 ++ * timestamps.
27135 ++ */
27136 ++static void __bfq_activate_entity(struct bfq_entity *entity)
27137 ++{
27138 ++ struct bfq_sched_data *sd = entity->sched_data;
27139 ++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
27140 ++
27141 ++ if (entity == sd->active_entity) {
27142 ++ BUG_ON(entity->tree != NULL);
27143 ++ /*
27144 ++ * If we are requeueing the current entity we have
 27145 ++		 * to take care not to charge it for service it has
27146 ++ * not received.
27147 ++ */
27148 ++ bfq_calc_finish(entity, entity->service);
27149 ++ entity->start = entity->finish;
27150 ++ sd->active_entity = NULL;
27151 ++ } else if (entity->tree == &st->active) {
27152 ++ /*
27153 ++ * Requeueing an entity due to a change of some
27154 ++ * next_active entity below it. We reuse the old
27155 ++ * start time.
27156 ++ */
27157 ++ bfq_active_extract(st, entity);
27158 ++ } else if (entity->tree == &st->idle) {
27159 ++ /*
27160 ++ * Must be on the idle tree, bfq_idle_extract() will
27161 ++ * check for that.
27162 ++ */
27163 ++ bfq_idle_extract(st, entity);
27164 ++ entity->start = bfq_gt(st->vtime, entity->finish) ?
27165 ++ st->vtime : entity->finish;
27166 ++ } else {
27167 ++ /*
27168 ++ * The finish time of the entity may be invalid, and
27169 ++ * it is in the past for sure, otherwise the queue
27170 ++ * would have been on the idle tree.
27171 ++ */
27172 ++ entity->start = st->vtime;
27173 ++ st->wsum += entity->weight;
27174 ++ bfq_get_entity(entity);
27175 ++
27176 ++ BUG_ON(entity->on_st);
27177 ++ entity->on_st = 1;
27178 ++ }
27179 ++
27180 ++ st = __bfq_entity_update_weight_prio(st, entity);
27181 ++ bfq_calc_finish(entity, entity->budget);
27182 ++ bfq_active_insert(st, entity);
27183 ++}
27184 ++
27185 ++/**
27186 ++ * bfq_activate_entity - activate an entity and its ancestors if necessary.
27187 ++ * @entity: the entity to activate.
27188 ++ *
27189 ++ * Activate @entity and all the entities on the path from it to the root.
27190 ++ */
27191 ++static void bfq_activate_entity(struct bfq_entity *entity)
27192 ++{
27193 ++ struct bfq_sched_data *sd;
27194 ++
27195 ++ for_each_entity(entity) {
27196 ++ __bfq_activate_entity(entity);
27197 ++
27198 ++ sd = entity->sched_data;
27199 ++ if (!bfq_update_next_active(sd))
27200 ++ /*
27201 ++ * No need to propagate the activation to the
27202 ++ * upper entities, as they will be updated when
27203 ++ * the active entity is rescheduled.
27204 ++ */
27205 ++ break;
27206 ++ }
27207 ++}
27208 ++
27209 ++/**
27210 ++ * __bfq_deactivate_entity - deactivate an entity from its service tree.
27211 ++ * @entity: the entity to deactivate.
27212 ++ * @requeue: if false, the entity will not be put into the idle tree.
27213 ++ *
27214 ++ * Deactivate an entity, independently from its previous state. If the
27215 ++ * entity was not on a service tree just return, otherwise if it is on
27216 ++ * any scheduler tree, extract it from that tree, and if necessary
 27217 ++ * and if the caller specified @requeue, put it on the idle tree.
27218 ++ *
27219 ++ * Return %1 if the caller should update the entity hierarchy, i.e.,
27220 ++ * if the entity was under service or if it was the next_active for
27221 ++ * its sched_data; return %0 otherwise.
27222 ++ */
27223 ++static int __bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
27224 ++{
27225 ++ struct bfq_sched_data *sd = entity->sched_data;
27226 ++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
27227 ++ int was_active = entity == sd->active_entity;
27228 ++ int ret = 0;
27229 ++
27230 ++ if (!entity->on_st)
27231 ++ return 0;
27232 ++
27233 ++ BUG_ON(was_active && entity->tree != NULL);
27234 ++
27235 ++ if (was_active) {
27236 ++ bfq_calc_finish(entity, entity->service);
27237 ++ sd->active_entity = NULL;
27238 ++ } else if (entity->tree == &st->active)
27239 ++ bfq_active_extract(st, entity);
27240 ++ else if (entity->tree == &st->idle)
27241 ++ bfq_idle_extract(st, entity);
27242 ++ else if (entity->tree != NULL)
27243 ++ BUG();
27244 ++
27245 ++ if (was_active || sd->next_active == entity)
27246 ++ ret = bfq_update_next_active(sd);
27247 ++
27248 ++ if (!requeue || !bfq_gt(entity->finish, st->vtime))
27249 ++ bfq_forget_entity(st, entity);
27250 ++ else
27251 ++ bfq_idle_insert(st, entity);
27252 ++
27253 ++ BUG_ON(sd->active_entity == entity);
27254 ++ BUG_ON(sd->next_active == entity);
27255 ++
27256 ++ return ret;
27257 ++}
27258 ++
27259 ++/**
27260 ++ * bfq_deactivate_entity - deactivate an entity.
27261 ++ * @entity: the entity to deactivate.
27262 ++ * @requeue: true if the entity can be put on the idle tree
27263 ++ */
27264 ++static void bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
27265 ++{
27266 ++ struct bfq_sched_data *sd;
27267 ++ struct bfq_entity *parent;
27268 ++
27269 ++ for_each_entity_safe(entity, parent) {
27270 ++ sd = entity->sched_data;
27271 ++
27272 ++ if (!__bfq_deactivate_entity(entity, requeue))
27273 ++ /*
27274 ++ * The parent entity is still backlogged, and
27275 ++ * we don't need to update it as it is still
27276 ++ * under service.
27277 ++ */
27278 ++ break;
27279 ++
27280 ++ if (sd->next_active != NULL)
27281 ++ /*
27282 ++ * The parent entity is still backlogged and
27283 ++ * the budgets on the path towards the root
27284 ++ * need to be updated.
27285 ++ */
27286 ++ goto update;
27287 ++
27288 ++ /*
 27289 ++		 * If we reach this point, the parent is no longer backlogged and
27290 ++ * we want to propagate the dequeue upwards.
27291 ++ */
27292 ++ requeue = 1;
27293 ++ }
27294 ++
27295 ++ return;
27296 ++
27297 ++update:
27298 ++ entity = parent;
27299 ++ for_each_entity(entity) {
27300 ++ __bfq_activate_entity(entity);
27301 ++
27302 ++ sd = entity->sched_data;
27303 ++ if (!bfq_update_next_active(sd))
27304 ++ break;
27305 ++ }
27306 ++}
27307 ++
27308 ++/**
27309 ++ * bfq_update_vtime - update vtime if necessary.
27310 ++ * @st: the service tree to act upon.
27311 ++ *
27312 ++ * If necessary update the service tree vtime to have at least one
27313 ++ * eligible entity, skipping to its start time. Assumes that the
27314 ++ * active tree of the device is not empty.
27315 ++ *
27316 ++ * NOTE: this hierarchical implementation updates vtimes quite often,
 27317 ++ * so we may end up with reactivated tasks getting timestamps after a
27318 ++ * vtime skip done because we needed a ->first_active entity on some
27319 ++ * intermediate node.
27320 ++ */
27321 ++static void bfq_update_vtime(struct bfq_service_tree *st)
27322 ++{
27323 ++ struct bfq_entity *entry;
27324 ++ struct rb_node *node = st->active.rb_node;
27325 ++
27326 ++ entry = rb_entry(node, struct bfq_entity, rb_node);
27327 ++ if (bfq_gt(entry->min_start, st->vtime)) {
27328 ++ st->vtime = entry->min_start;
27329 ++ bfq_forget_idle(st);
27330 ++ }
27331 ++}
27332 ++
27333 ++/**
 27334 ++ * bfq_first_active_entity - find the eligible entity with the smallest finish time.
27335 ++ * @st: the service tree to select from.
27336 ++ *
27337 ++ * This function searches the first schedulable entity, starting from the
27338 ++ * root of the tree and going on the left every time on this side there is
27339 ++ * a subtree with at least one eligible (start >= vtime) entity. The path
27340 ++ * on the right is followed only if a) the left subtree contains no eligible
27341 ++ * entities and b) no eligible entity has been found yet.
27342 ++ */
27343 ++static struct bfq_entity *bfq_first_active_entity(struct bfq_service_tree *st)
27344 ++{
27345 ++ struct bfq_entity *entry, *first = NULL;
27346 ++ struct rb_node *node = st->active.rb_node;
27347 ++
27348 ++ while (node != NULL) {
27349 ++ entry = rb_entry(node, struct bfq_entity, rb_node);
27350 ++left:
27351 ++ if (!bfq_gt(entry->start, st->vtime))
27352 ++ first = entry;
27353 ++
27354 ++ BUG_ON(bfq_gt(entry->min_start, st->vtime));
27355 ++
27356 ++ if (node->rb_left != NULL) {
27357 ++ entry = rb_entry(node->rb_left,
27358 ++ struct bfq_entity, rb_node);
27359 ++ if (!bfq_gt(entry->min_start, st->vtime)) {
27360 ++ node = node->rb_left;
27361 ++ goto left;
27362 ++ }
27363 ++ }
27364 ++ if (first != NULL)
27365 ++ break;
27366 ++ node = node->rb_right;
27367 ++ }
27368 ++
27369 ++ BUG_ON(first == NULL && !RB_EMPTY_ROOT(&st->active));
27370 ++ return first;
27371 ++}
27372 ++
27373 ++/**
27374 ++ * __bfq_lookup_next_entity - return the first eligible entity in @st.
27375 ++ * @st: the service tree.
27376 ++ *
27377 ++ * Update the virtual time in @st and return the first eligible entity
27378 ++ * it contains.
27379 ++ */
27380 ++static struct bfq_entity *__bfq_lookup_next_entity(struct bfq_service_tree *st,
27381 ++ bool force)
27382 ++{
27383 ++ struct bfq_entity *entity, *new_next_active = NULL;
27384 ++
27385 ++ if (RB_EMPTY_ROOT(&st->active))
27386 ++ return NULL;
27387 ++
27388 ++ bfq_update_vtime(st);
27389 ++ entity = bfq_first_active_entity(st);
27390 ++ BUG_ON(bfq_gt(entity->start, st->vtime));
27391 ++
27392 ++ /*
27393 ++ * If the chosen entity does not match with the sched_data's
 27394 ++	 * next_active and we are forcibly serving the IDLE priority
 27395 ++	 * class tree, bubble up the budget update.
27396 ++ */
27397 ++ if (unlikely(force && entity != entity->sched_data->next_active)) {
27398 ++ new_next_active = entity;
27399 ++ for_each_entity(new_next_active)
27400 ++ bfq_update_budget(new_next_active);
27401 ++ }
27402 ++
27403 ++ return entity;
27404 ++}
27405 ++
27406 ++/**
27407 ++ * bfq_lookup_next_entity - return the first eligible entity in @sd.
27408 ++ * @sd: the sched_data.
27409 ++ * @extract: if true the returned entity will be also extracted from @sd.
27410 ++ *
27411 ++ * NOTE: since we cache the next_active entity at each level of the
27412 ++ * hierarchy, the complexity of the lookup can be decreased with
 27413 ++ * absolutely no effort by just returning the cached next_active value;
 27414 ++ * we prefer to do full lookups to test the consistency of the data
 27415 ++ * structures.
27416 ++ */
27417 ++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
27418 ++ int extract,
27419 ++ struct bfq_data *bfqd)
27420 ++{
27421 ++ struct bfq_service_tree *st = sd->service_tree;
27422 ++ struct bfq_entity *entity;
 27423 ++	int i = 0;
27424 ++
27425 ++ BUG_ON(sd->active_entity != NULL);
27426 ++
27427 ++ if (bfqd != NULL &&
27428 ++ jiffies - bfqd->bfq_class_idle_last_service > BFQ_CL_IDLE_TIMEOUT) {
27429 ++ entity = __bfq_lookup_next_entity(st + BFQ_IOPRIO_CLASSES - 1, true);
27430 ++ if (entity != NULL) {
27431 ++ i = BFQ_IOPRIO_CLASSES - 1;
27432 ++ bfqd->bfq_class_idle_last_service = jiffies;
27433 ++ sd->next_active = entity;
27434 ++ }
27435 ++ }
27436 ++ for (; i < BFQ_IOPRIO_CLASSES; i++) {
27437 ++ entity = __bfq_lookup_next_entity(st + i, false);
27438 ++ if (entity != NULL) {
27439 ++ if (extract) {
27440 ++ bfq_check_next_active(sd, entity);
27441 ++ bfq_active_extract(st + i, entity);
27442 ++ sd->active_entity = entity;
27443 ++ sd->next_active = NULL;
27444 ++ }
27445 ++ break;
27446 ++ }
27447 ++ }
27448 ++
27449 ++ return entity;
27450 ++}
27451 ++
27452 ++/*
27453 ++ * Get next queue for service.
27454 ++ */
27455 ++static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
27456 ++{
27457 ++ struct bfq_entity *entity = NULL;
27458 ++ struct bfq_sched_data *sd;
27459 ++ struct bfq_queue *bfqq;
27460 ++
27461 ++ BUG_ON(bfqd->active_queue != NULL);
27462 ++
27463 ++ if (bfqd->busy_queues == 0)
27464 ++ return NULL;
27465 ++
27466 ++ sd = &bfqd->root_group->sched_data;
27467 ++ for (; sd != NULL; sd = entity->my_sched_data) {
27468 ++ entity = bfq_lookup_next_entity(sd, 1, bfqd);
27469 ++ BUG_ON(entity == NULL);
27470 ++ entity->service = 0;
27471 ++ }
27472 ++
27473 ++ bfqq = bfq_entity_to_bfqq(entity);
27474 ++ BUG_ON(bfqq == NULL);
27475 ++
27476 ++ return bfqq;
27477 ++}
27478 ++
27479 ++/*
27480 ++ * Forced extraction of the given queue.
27481 ++ */
27482 ++static void bfq_get_next_queue_forced(struct bfq_data *bfqd,
27483 ++ struct bfq_queue *bfqq)
27484 ++{
27485 ++ struct bfq_entity *entity;
27486 ++ struct bfq_sched_data *sd;
27487 ++
27488 ++ BUG_ON(bfqd->active_queue != NULL);
27489 ++
27490 ++ entity = &bfqq->entity;
27491 ++ /*
27492 ++ * Bubble up extraction/update from the leaf to the root.
27493 ++ */
27494 ++ for_each_entity(entity) {
27495 ++ sd = entity->sched_data;
27496 ++ bfq_update_budget(entity);
27497 ++ bfq_update_vtime(bfq_entity_service_tree(entity));
27498 ++ bfq_active_extract(bfq_entity_service_tree(entity), entity);
27499 ++ sd->active_entity = entity;
27500 ++ sd->next_active = NULL;
27501 ++ entity->service = 0;
27502 ++ }
27503 ++
27504 ++ return;
27505 ++}
27506 ++
27507 ++static void __bfq_bfqd_reset_active(struct bfq_data *bfqd)
27508 ++{
27509 ++ if (bfqd->active_bic != NULL) {
27510 ++ put_io_context(bfqd->active_bic->icq.ioc);
27511 ++ bfqd->active_bic = NULL;
27512 ++ }
27513 ++
27514 ++ bfqd->active_queue = NULL;
27515 ++ del_timer(&bfqd->idle_slice_timer);
27516 ++}
27517 ++
27518 ++static void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
27519 ++ int requeue)
27520 ++{
27521 ++ struct bfq_entity *entity = &bfqq->entity;
27522 ++
27523 ++ if (bfqq == bfqd->active_queue)
27524 ++ __bfq_bfqd_reset_active(bfqd);
27525 ++
27526 ++ bfq_deactivate_entity(entity, requeue);
27527 ++}
27528 ++
27529 ++static void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
27530 ++{
27531 ++ struct bfq_entity *entity = &bfqq->entity;
27532 ++
27533 ++ bfq_activate_entity(entity);
27534 ++}
27535 ++
27536 ++/*
27537 ++ * Called when the bfqq no longer has requests pending, remove it from
27538 ++ * the service tree.
27539 ++ */
27540 ++static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
27541 ++ int requeue)
27542 ++{
27543 ++ BUG_ON(!bfq_bfqq_busy(bfqq));
27544 ++ BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
27545 ++
27546 ++ bfq_log_bfqq(bfqd, bfqq, "del from busy");
27547 ++
27548 ++ bfq_clear_bfqq_busy(bfqq);
27549 ++
27550 ++ BUG_ON(bfqd->busy_queues == 0);
27551 ++ bfqd->busy_queues--;
27552 ++
27553 ++ bfq_deactivate_bfqq(bfqd, bfqq, requeue);
27554 ++}
27555 ++
27556 ++/*
27557 ++ * Called when an inactive queue receives a new request.
27558 ++ */
27559 ++static void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
27560 ++{
27561 ++ BUG_ON(bfq_bfqq_busy(bfqq));
27562 ++ BUG_ON(bfqq == bfqd->active_queue);
27563 ++
27564 ++ bfq_log_bfqq(bfqd, bfqq, "add to busy");
27565 ++
27566 ++ bfq_activate_bfqq(bfqd, bfqq);
27567 ++
27568 ++ bfq_mark_bfqq_busy(bfqq);
27569 ++ bfqd->busy_queues++;
27570 ++}
27571 +diff --git a/block/bfq.h b/block/bfq.h
27572 +new file mode 100644
27573 +index 0000000..48ecde9
27574 +--- /dev/null
27575 ++++ b/block/bfq.h
27576 +@@ -0,0 +1,603 @@
27577 ++/*
27578 ++ * BFQ-v6r2 for 3.10.0: data structures and common functions prototypes.
27579 ++ *
27580 ++ * Based on ideas and code from CFQ:
27581 ++ * Copyright (C) 2003 Jens Axboe <axboe@××××××.dk>
27582 ++ *
27583 ++ * Copyright (C) 2008 Fabio Checconi <fabio@×××××××××××××.it>
27584 ++ * Paolo Valente <paolo.valente@×××××××.it>
27585 ++ *
27586 ++ * Copyright (C) 2010 Paolo Valente <paolo.valente@×××××××.it>
27587 ++ */
27588 ++
27589 ++#ifndef _BFQ_H
27590 ++#define _BFQ_H
27591 ++
27592 ++#include <linux/blktrace_api.h>
27593 ++#include <linux/hrtimer.h>
27594 ++#include <linux/ioprio.h>
27595 ++#include <linux/rbtree.h>
27596 ++
27597 ++#define BFQ_IOPRIO_CLASSES 3
27598 ++#define BFQ_CL_IDLE_TIMEOUT HZ/5
27599 ++
27600 ++#define BFQ_MIN_WEIGHT 1
27601 ++#define BFQ_MAX_WEIGHT 1000
27602 ++
27603 ++#define BFQ_DEFAULT_GRP_WEIGHT 10
27604 ++#define BFQ_DEFAULT_GRP_IOPRIO 0
27605 ++#define BFQ_DEFAULT_GRP_CLASS IOPRIO_CLASS_BE
27606 ++
27607 ++struct bfq_entity;
27608 ++
27609 ++/**
27610 ++ * struct bfq_service_tree - per ioprio_class service tree.
27611 ++ * @active: tree for active entities (i.e., those backlogged).
27612 ++ * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
27613 ++ * @first_idle: idle entity with minimum F_i.
27614 ++ * @last_idle: idle entity with maximum F_i.
27615 ++ * @vtime: scheduler virtual time.
27616 ++ * @wsum: scheduler weight sum; active and idle entities contribute to it.
27617 ++ *
27618 ++ * Each service tree represents a B-WF2Q+ scheduler on its own. Each
27619 ++ * ioprio_class has its own independent scheduler, and so its own
27620 ++ * bfq_service_tree. All the fields are protected by the queue lock
27621 ++ * of the containing bfqd.
27622 ++ */
27623 ++struct bfq_service_tree {
27624 ++ struct rb_root active;
27625 ++ struct rb_root idle;
27626 ++
27627 ++ struct bfq_entity *first_idle;
27628 ++ struct bfq_entity *last_idle;
27629 ++
27630 ++ u64 vtime;
27631 ++ unsigned long wsum;
27632 ++};
27633 ++
27634 ++/**
27635 ++ * struct bfq_sched_data - multi-class scheduler.
27636 ++ * @active_entity: entity under service.
27637 ++ * @next_active: head-of-the-line entity in the scheduler.
27638 ++ * @service_tree: array of service trees, one per ioprio_class.
27639 ++ *
27640 ++ * bfq_sched_data is the basic scheduler queue. It supports three
27641 ++ * ioprio_classes, and can be used either as a toplevel queue or as
27642 ++ * an intermediate queue on a hierarchical setup.
27643 ++ * @next_active points to the active entity of the sched_data service
27644 ++ * trees that will be scheduled next.
27645 ++ *
27646 ++ * The supported ioprio_classes are the same as in CFQ, in descending
27647 ++ * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
27648 ++ * Requests from higher priority queues are served before all the
 27649 ++ * requests from lower priority queues; among queues of the same
 27650 ++ * class, requests are served according to B-WF2Q+.
27651 ++ * All the fields are protected by the queue lock of the containing bfqd.
27652 ++ */
27653 ++struct bfq_sched_data {
27654 ++ struct bfq_entity *active_entity;
27655 ++ struct bfq_entity *next_active;
27656 ++ struct bfq_service_tree service_tree[BFQ_IOPRIO_CLASSES];
27657 ++};
27658 ++
27659 ++/**
27660 ++ * struct bfq_entity - schedulable entity.
27661 ++ * @rb_node: service_tree member.
27662 ++ * @on_st: flag, true if the entity is on a tree (either the active or
27663 ++ * the idle one of its service_tree).
27664 ++ * @finish: B-WF2Q+ finish timestamp (aka F_i).
27665 ++ * @start: B-WF2Q+ start timestamp (aka S_i).
27666 ++ * @tree: tree the entity is enqueued into; %NULL if not on a tree.
27667 ++ * @min_start: minimum start time of the (active) subtree rooted at
27668 ++ * this entity; used for O(log N) lookups into active trees.
27669 ++ * @service: service received during the last round of service.
27670 ++ * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
27671 ++ * @weight: weight of the queue
27672 ++ * @parent: parent entity, for hierarchical scheduling.
27673 ++ * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
27674 ++ * associated scheduler queue, %NULL on leaf nodes.
27675 ++ * @sched_data: the scheduler queue this entity belongs to.
27676 ++ * @ioprio: the ioprio in use.
27677 ++ * @new_weight: when a weight change is requested, the new weight value.
27678 ++ * @orig_weight: original weight, used to implement weight boosting
27679 ++ * @new_ioprio: when an ioprio change is requested, the new ioprio value.
27680 ++ * @ioprio_class: the ioprio_class in use.
27681 ++ * @new_ioprio_class: when an ioprio_class change is requested, the new
27682 ++ * ioprio_class value.
27683 ++ * @ioprio_changed: flag, true when the user requested a weight, ioprio or
27684 ++ * ioprio_class change.
27685 ++ *
27686 ++ * A bfq_entity is used to represent either a bfq_queue (leaf node in the
27687 ++ * cgroup hierarchy) or a bfq_group into the upper level scheduler. Each
27688 ++ * entity belongs to the sched_data of the parent group in the cgroup
27689 ++ * hierarchy. Non-leaf entities have also their own sched_data, stored
27690 ++ * in @my_sched_data.
27691 ++ *
27692 ++ * Each entity stores independently its priority values; this would
27693 ++ * allow different weights on different devices, but this
27694 ++ * functionality is not exported to userspace by now. Priorities and
27695 ++ * weights are updated lazily, first storing the new values into the
27696 ++ * new_* fields, then setting the @ioprio_changed flag. As soon as
27697 ++ * there is a transition in the entity state that allows the priority
27698 ++ * update to take place the effective and the requested priority
27699 ++ * values are synchronized.
27700 ++ *
27701 ++ * Unless cgroups are used, the weight value is calculated from the
27702 ++ * ioprio to export the same interface as CFQ. When dealing with
27703 ++ * ``well-behaved'' queues (i.e., queues that do not spend too much
27704 ++ * time to consume their budget and have true sequential behavior, and
27705 ++ * when there are no external factors breaking anticipation) the
27706 ++ * relative weights at each level of the cgroups hierarchy should be
27707 ++ * guaranteed. All the fields are protected by the queue lock of the
27708 ++ * containing bfqd.
27709 ++ */
27710 ++struct bfq_entity {
27711 ++ struct rb_node rb_node;
27712 ++
27713 ++ int on_st;
27714 ++
27715 ++ u64 finish;
27716 ++ u64 start;
27717 ++
27718 ++ struct rb_root *tree;
27719 ++
27720 ++ u64 min_start;
27721 ++
27722 ++ unsigned long service, budget;
27723 ++ unsigned short weight, new_weight;
27724 ++ unsigned short orig_weight;
27725 ++
27726 ++ struct bfq_entity *parent;
27727 ++
27728 ++ struct bfq_sched_data *my_sched_data;
27729 ++ struct bfq_sched_data *sched_data;
27730 ++
27731 ++ unsigned short ioprio, new_ioprio;
27732 ++ unsigned short ioprio_class, new_ioprio_class;
27733 ++
27734 ++ int ioprio_changed;
27735 ++};
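/*
 * [Editor's worked example, not part of the patch] As the field comments
 * above state, F_i = S_i + budget / weight.  An entity activated with
 * start == 1000, budget == 8192 and weight == 4 gets finish == 1000 +
 * 8192/4 == 3048, while an entity with the same start and budget but
 * weight == 8 gets finish == 1000 + 8192/8 == 2024.  Among eligible
 * entities (start <= vtime) the scheduler picks the smallest finish
 * time, so doubling the weight roughly halves the wait before the
 * entity's budget is scheduled.
 */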
27736 ++
27737 ++struct bfq_group;
27738 ++
27739 ++/**
27740 ++ * struct bfq_queue - leaf schedulable entity.
27741 ++ * @ref: reference counter.
27742 ++ * @bfqd: parent bfq_data.
27743 ++ * @new_bfqq: shared bfq_queue if queue is cooperating with
27744 ++ * one or more other queues.
27745 ++ * @pos_node: request-position tree member (see bfq_data's @rq_pos_tree).
27746 ++ * @pos_root: request-position tree root (see bfq_data's @rq_pos_tree).
27747 ++ * @sort_list: sorted list of pending requests.
27748 ++ * @next_rq: if fifo isn't expired, next request to serve.
27749 ++ * @queued: nr of requests queued in @sort_list.
27750 ++ * @allocated: currently allocated requests.
27751 ++ * @meta_pending: pending metadata requests.
27752 ++ * @fifo: fifo list of requests in sort_list.
27753 ++ * @entity: entity representing this queue in the scheduler.
27754 ++ * @max_budget: maximum budget allowed from the feedback mechanism.
27755 ++ * @budget_timeout: budget expiration (in jiffies).
27756 ++ * @dispatched: number of requests on the dispatch list or inside driver.
27757 ++ * @org_ioprio: saved ioprio during boosted periods.
27758 ++ * @flags: status flags.
27759 ++ * @bfqq_list: node for active/idle bfqq list inside our bfqd.
27760 ++ * @seek_samples: number of seeks sampled
27761 ++ * @seek_total: sum of the distances of the seeks sampled
27762 ++ * @seek_mean: mean seek distance
27763 ++ * @last_request_pos: position of the last request enqueued
27764 ++ * @pid: pid of the process owning the queue, used for logging purposes.
27765 ++ * @last_rais_start_time: last (idle -> weight-raised) transition attempt
27766 ++ * @raising_cur_max_time: current max raising time for this queue
27767 ++ *
 27768 ++ * A bfq_queue is a leaf request queue; it can be associated with an io_context
27769 ++ * or more (if it is an async one). @cgroup holds a reference to the
27770 ++ * cgroup, to be sure that it does not disappear while a bfqq still
27771 ++ * references it (mostly to avoid races between request issuing and task
 27772 ++ * migration followed by cgroup destruction).
27773 ++ * All the fields are protected by the queue lock of the containing bfqd.
27774 ++ */
27775 ++struct bfq_queue {
27776 ++ atomic_t ref;
27777 ++ struct bfq_data *bfqd;
27778 ++
27779 ++ /* fields for cooperating queues handling */
27780 ++ struct bfq_queue *new_bfqq;
27781 ++ struct rb_node pos_node;
27782 ++ struct rb_root *pos_root;
27783 ++
27784 ++ struct rb_root sort_list;
27785 ++ struct request *next_rq;
27786 ++ int queued[2];
27787 ++ int allocated[2];
27788 ++ int meta_pending;
27789 ++ struct list_head fifo;
27790 ++
27791 ++ struct bfq_entity entity;
27792 ++
27793 ++ unsigned long max_budget;
27794 ++ unsigned long budget_timeout;
27795 ++
27796 ++ int dispatched;
27797 ++
27798 ++ unsigned short org_ioprio;
27799 ++
27800 ++ unsigned int flags;
27801 ++
27802 ++ struct list_head bfqq_list;
27803 ++
27804 ++ unsigned int seek_samples;
27805 ++ u64 seek_total;
27806 ++ sector_t seek_mean;
27807 ++ sector_t last_request_pos;
27808 ++
27809 ++ pid_t pid;
27810 ++
27811 ++ /* weight-raising fields */
27812 ++ unsigned int raising_cur_max_time;
27813 ++ u64 last_rais_start_finish, soft_rt_next_start;
27814 ++ unsigned int raising_coeff;
27815 ++};
27816 ++
27817 ++/**
27818 ++ * struct bfq_ttime - per process thinktime stats.
27819 ++ * @ttime_total: total process thinktime
27820 ++ * @ttime_samples: number of thinktime samples
27821 ++ * @ttime_mean: average process thinktime
27822 ++ */
27823 ++struct bfq_ttime {
27824 ++ unsigned long last_end_request;
27825 ++
27826 ++ unsigned long ttime_total;
27827 ++ unsigned long ttime_samples;
27828 ++ unsigned long ttime_mean;
27829 ++};
27830 ++
27831 ++/**
27832 ++ * struct bfq_io_cq - per (request_queue, io_context) structure.
27833 ++ * @icq: associated io_cq structure
27834 ++ * @bfqq: array of two process queues, the sync and the async
27835 ++ * @ttime: associated @bfq_ttime struct
27836 ++ */
27837 ++struct bfq_io_cq {
27838 ++ struct io_cq icq; /* must be the first member */
27839 ++ struct bfq_queue *bfqq[2];
27840 ++ struct bfq_ttime ttime;
27841 ++ int ioprio;
27842 ++};
27843 ++
27844 ++/**
27845 ++ * struct bfq_data - per device data structure.
27846 ++ * @queue: request queue for the managed device.
27847 ++ * @root_group: root bfq_group for the device.
27848 ++ * @rq_pos_tree: rbtree sorted by next_request position,
27849 ++ * used when determining if two or more queues
27850 ++ * have interleaving requests (see bfq_close_cooperator).
27851 ++ * @busy_queues: number of bfq_queues containing requests (including the
27852 ++ * queue under service, even if it is idling).
27853 ++ * @queued: number of queued requests.
27854 ++ * @rq_in_driver: number of requests dispatched and waiting for completion.
27855 ++ * @sync_flight: number of sync requests in the driver.
27856 ++ * @max_rq_in_driver: max number of reqs in driver in the last @hw_tag_samples
 27857 ++ * completed requests.
27858 ++ * @hw_tag_samples: nr of samples used to calculate hw_tag.
27859 ++ * @hw_tag: flag set to one if the driver is showing a queueing behavior.
27860 ++ * @budgets_assigned: number of budgets assigned.
27861 ++ * @idle_slice_timer: timer set when idling for the next sequential request
27862 ++ * from the queue under service.
27863 ++ * @unplug_work: delayed work to restart dispatching on the request queue.
27864 ++ * @active_queue: bfq_queue under service.
27865 ++ * @active_bic: bfq_io_cq (bic) associated with the @active_queue.
27866 ++ * @last_position: on-disk position of the last served request.
27867 ++ * @last_budget_start: beginning of the last budget.
27868 ++ * @last_idling_start: beginning of the last idle slice.
27869 ++ * @peak_rate: peak transfer rate observed for a budget.
27870 ++ * @peak_rate_samples: number of samples used to calculate @peak_rate.
27871 ++ * @bfq_max_budget: maximum budget allotted to a bfq_queue before rescheduling.
27872 ++ * @group_list: list of all the bfq_groups active on the device.
27873 ++ * @active_list: list of all the bfq_queues active on the device.
27874 ++ * @idle_list: list of all the bfq_queues idle on the device.
27875 ++ * @bfq_quantum: max number of requests dispatched per dispatch round.
27876 ++ * @bfq_fifo_expire: timeout for async/sync requests; when it expires
27877 ++ * requests are served in fifo order.
27878 ++ * @bfq_back_penalty: weight of backward seeks wrt forward ones.
27879 ++ * @bfq_back_max: maximum allowed backward seek.
27880 ++ * @bfq_slice_idle: maximum idling time.
27881 ++ * @bfq_user_max_budget: user-configured max budget value (0 for auto-tuning).
27882 ++ * @bfq_max_budget_async_rq: maximum budget (in nr of requests) allotted to
27883 ++ * async queues.
27884 ++ * @bfq_timeout: timeout for bfq_queues to consume their budget; used to
 27885 ++ * prevent seeky queues from imposing long latencies on
 27886 ++ * well-behaved ones (this also implies that seeky queues cannot
27887 ++ * receive guarantees in the service domain; after a timeout
27888 ++ * they are charged for the whole allocated budget, to try
27889 ++ * to preserve a behavior reasonably fair among them, but
27890 ++ * without service-domain guarantees).
27891 ++ * @bfq_raising_coeff: Maximum factor by which the weight of a boosted
27892 ++ * queue is multiplied
27893 ++ * @bfq_raising_max_time: maximum duration of a weight-raising period (jiffies)
27894 ++ * @bfq_raising_rt_max_time: maximum duration for soft real-time processes
27895 ++ * @bfq_raising_min_idle_time: minimum idle period after which weight-raising
27896 ++ * may be reactivated for a queue (in jiffies)
27897 ++ * @bfq_raising_min_inter_arr_async: minimum period between request arrivals
27898 ++ * after which weight-raising may be
27899 ++ * reactivated for an already busy queue
27900 ++ * (in jiffies)
27901 ++ * @bfq_raising_max_softrt_rate: max service-rate for a soft real-time queue,
 27902 ++ * sectors per second
27903 ++ * @RT_prod: cached value of the product R*T used for computing the maximum
27904 ++ * duration of the weight raising automatically
27905 ++ * @oom_bfqq: fallback dummy bfqq for extreme OOM conditions
27906 ++ *
27907 ++ * All the fields are protected by the @queue lock.
27908 ++ */
27909 ++struct bfq_data {
27910 ++ struct request_queue *queue;
27911 ++
27912 ++ struct bfq_group *root_group;
27913 ++
27914 ++ struct rb_root rq_pos_tree;
27915 ++
27916 ++ int busy_queues;
27917 ++ int queued;
27918 ++ int rq_in_driver;
27919 ++ int sync_flight;
27920 ++
27921 ++ int max_rq_in_driver;
27922 ++ int hw_tag_samples;
27923 ++ int hw_tag;
27924 ++
27925 ++ int budgets_assigned;
27926 ++
27927 ++ struct timer_list idle_slice_timer;
27928 ++ struct work_struct unplug_work;
27929 ++
27930 ++ struct bfq_queue *active_queue;
27931 ++ struct bfq_io_cq *active_bic;
27932 ++
27933 ++ sector_t last_position;
27934 ++
27935 ++ ktime_t last_budget_start;
27936 ++ ktime_t last_idling_start;
27937 ++ int peak_rate_samples;
27938 ++ u64 peak_rate;
27939 ++ unsigned long bfq_max_budget;
27940 ++
27941 ++ struct hlist_head group_list;
27942 ++ struct list_head active_list;
27943 ++ struct list_head idle_list;
27944 ++
27945 ++ unsigned int bfq_quantum;
27946 ++ unsigned int bfq_fifo_expire[2];
27947 ++ unsigned int bfq_back_penalty;
27948 ++ unsigned int bfq_back_max;
27949 ++ unsigned int bfq_slice_idle;
27950 ++ u64 bfq_class_idle_last_service;
27951 ++
27952 ++ unsigned int bfq_user_max_budget;
27953 ++ unsigned int bfq_max_budget_async_rq;
27954 ++ unsigned int bfq_timeout[2];
27955 ++
27956 ++ bool low_latency;
27957 ++
27958 ++ /* parameters of the low_latency heuristics */
27959 ++ unsigned int bfq_raising_coeff;
27960 ++ unsigned int bfq_raising_max_time;
27961 ++ unsigned int bfq_raising_rt_max_time;
27962 ++ unsigned int bfq_raising_min_idle_time;
27963 ++ unsigned int bfq_raising_min_inter_arr_async;
27964 ++ unsigned int bfq_raising_max_softrt_rate;
27965 ++ u64 RT_prod;
27966 ++
27967 ++ struct bfq_queue oom_bfqq;
27968 ++};
27969 ++
27970 ++enum bfqq_state_flags {
27971 ++ BFQ_BFQQ_FLAG_busy = 0, /* has requests or is under service */
27972 ++ BFQ_BFQQ_FLAG_wait_request, /* waiting for a request */
27973 ++ BFQ_BFQQ_FLAG_must_alloc, /* must be allowed rq alloc */
27974 ++ BFQ_BFQQ_FLAG_fifo_expire, /* FIFO checked in this slice */
27975 ++ BFQ_BFQQ_FLAG_idle_window, /* slice idling enabled */
27976 ++ BFQ_BFQQ_FLAG_prio_changed, /* task priority has changed */
27977 ++ BFQ_BFQQ_FLAG_sync, /* synchronous queue */
27978 ++ BFQ_BFQQ_FLAG_budget_new, /* no completion with this budget */
27979 ++ BFQ_BFQQ_FLAG_coop, /* bfqq is shared */
 27980 ++	BFQ_BFQQ_FLAG_split_coop,	/* shared bfqq will be split */
27981 ++ BFQ_BFQQ_FLAG_some_coop_idle, /* some cooperator is inactive */
27982 ++};
27983 ++
27984 ++#define BFQ_BFQQ_FNS(name) \
27985 ++static inline void bfq_mark_bfqq_##name(struct bfq_queue *bfqq) \
27986 ++{ \
27987 ++ (bfqq)->flags |= (1 << BFQ_BFQQ_FLAG_##name); \
27988 ++} \
27989 ++static inline void bfq_clear_bfqq_##name(struct bfq_queue *bfqq) \
27990 ++{ \
27991 ++ (bfqq)->flags &= ~(1 << BFQ_BFQQ_FLAG_##name); \
27992 ++} \
27993 ++static inline int bfq_bfqq_##name(const struct bfq_queue *bfqq) \
27994 ++{ \
27995 ++ return ((bfqq)->flags & (1 << BFQ_BFQQ_FLAG_##name)) != 0; \
27996 ++}
27997 ++
27998 ++BFQ_BFQQ_FNS(busy);
27999 ++BFQ_BFQQ_FNS(wait_request);
28000 ++BFQ_BFQQ_FNS(must_alloc);
28001 ++BFQ_BFQQ_FNS(fifo_expire);
28002 ++BFQ_BFQQ_FNS(idle_window);
28003 ++BFQ_BFQQ_FNS(prio_changed);
28004 ++BFQ_BFQQ_FNS(sync);
28005 ++BFQ_BFQQ_FNS(budget_new);
28006 ++BFQ_BFQQ_FNS(coop);
28007 ++BFQ_BFQQ_FNS(split_coop);
28008 ++BFQ_BFQQ_FNS(some_coop_idle);
28009 ++#undef BFQ_BFQQ_FNS
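/*
 * [Editor's note, not part of the patch] For reference, the macro block
 * above expands, per flag, into a mark/clear/test triplet.  E.g.
 * BFQ_BFQQ_FNS(busy) generates code equivalent to:
 *
 *   static inline void bfq_mark_bfqq_busy(struct bfq_queue *bfqq)
 *   {
 *           bfqq->flags |= (1 << BFQ_BFQQ_FLAG_busy);
 *   }
 *   static inline void bfq_clear_bfqq_busy(struct bfq_queue *bfqq)
 *   {
 *           bfqq->flags &= ~(1 << BFQ_BFQQ_FLAG_busy);
 *   }
 *   static inline int bfq_bfqq_busy(const struct bfq_queue *bfqq)
 *   {
 *           return (bfqq->flags & (1 << BFQ_BFQQ_FLAG_busy)) != 0;
 *   }
 *
 * which is why the scheduler code (e.g. bfq_del_bfqq_busy() earlier in
 * this patch) can test bfq_bfqq_busy(bfqq) without open-coding any bit
 * arithmetic.
 */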
28010 ++
28011 ++/* Logging facilities. */
28012 ++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \
28013 ++ blk_add_trace_msg((bfqd)->queue, "bfq%d " fmt, (bfqq)->pid, ##args)
28014 ++
28015 ++#define bfq_log(bfqd, fmt, args...) \
28016 ++ blk_add_trace_msg((bfqd)->queue, "bfq " fmt, ##args)
28017 ++
28018 ++/* Expiration reasons. */
28019 ++enum bfqq_expiration {
28020 ++ BFQ_BFQQ_TOO_IDLE = 0, /* queue has been idling for too long */
28021 ++ BFQ_BFQQ_BUDGET_TIMEOUT, /* budget took too long to be used */
28022 ++ BFQ_BFQQ_BUDGET_EXHAUSTED, /* budget consumed */
28023 ++ BFQ_BFQQ_NO_MORE_REQUESTS, /* the queue has no more requests */
28024 ++};
28025 ++
28026 ++#ifdef CONFIG_CGROUP_BFQIO
28027 ++/**
28028 ++ * struct bfq_group - per (device, cgroup) data structure.
28029 ++ * @entity: schedulable entity to insert into the parent group sched_data.
28030 ++ * @sched_data: own sched_data, to contain child entities (they may be
28031 ++ * both bfq_queues and bfq_groups).
28032 ++ * @group_node: node to be inserted into the bfqio_cgroup->group_data
28033 ++ * list of the containing cgroup's bfqio_cgroup.
28034 ++ * @bfqd_node: node to be inserted into the @bfqd->group_list list
28035 ++ * of the groups active on the same device; used for cleanup.
28036 ++ * @bfqd: the bfq_data for the device this group acts upon.
28037 ++ * @async_bfqq: array of async queues for all the tasks belonging to
28038 ++ * the group, one queue per ioprio value per ioprio_class,
28039 ++ * except for the idle class that has only one queue.
28040 ++ * @async_idle_bfqq: async queue for the idle class (ioprio is ignored).
28041 ++ * @my_entity: pointer to @entity, %NULL for the toplevel group; used
28042 ++ * to avoid too many special cases during group creation/migration.
28043 ++ *
28044 ++ * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup
28045 ++ * there is a set of bfq_groups, each one collecting the lower-level
28046 ++ * entities belonging to the group that are acting on the same device.
28047 ++ *
28048 ++ * Locking works as follows:
28049 ++ * o @group_node is protected by the bfqio_cgroup lock, and is accessed
28050 ++ * via RCU from its readers.
28051 ++ * o @bfqd is protected by the queue lock, RCU is used to access it
28052 ++ * from the readers.
28053 ++ * o All the other fields are protected by the @bfqd queue lock.
28054 ++ */
28055 ++struct bfq_group {
28056 ++ struct bfq_entity entity;
28057 ++ struct bfq_sched_data sched_data;
28058 ++
28059 ++ struct hlist_node group_node;
28060 ++ struct hlist_node bfqd_node;
28061 ++
28062 ++ void *bfqd;
28063 ++
28064 ++ struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
28065 ++ struct bfq_queue *async_idle_bfqq;
28066 ++
28067 ++ struct bfq_entity *my_entity;
28068 ++};
28069 ++
28070 ++/**
28071 ++ * struct bfqio_cgroup - bfq cgroup data structure.
28072 ++ * @css: subsystem state for bfq in the containing cgroup.
28073 ++ * @weight: cgroup weight.
28074 ++ * @ioprio: cgroup ioprio.
28075 ++ * @ioprio_class: cgroup ioprio_class.
28076 ++ * @lock: spinlock that protects @ioprio, @ioprio_class and @group_data.
28077 ++ * @group_data: list containing the bfq_group belonging to this cgroup.
28078 ++ *
28079 ++ * @group_data is accessed using RCU, with @lock protecting the updates,
28080 ++ * @ioprio and @ioprio_class are protected by @lock.
28081 ++ */
28082 ++struct bfqio_cgroup {
28083 ++ struct cgroup_subsys_state css;
28084 ++
28085 ++ unsigned short weight, ioprio, ioprio_class;
28086 ++
28087 ++ spinlock_t lock;
28088 ++ struct hlist_head group_data;
28089 ++};
28090 ++#else
28091 ++struct bfq_group {
28092 ++ struct bfq_sched_data sched_data;
28093 ++
28094 ++ struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
28095 ++ struct bfq_queue *async_idle_bfqq;
28096 ++};
28097 ++#endif
28098 ++
28099 ++static inline struct bfq_service_tree *
28100 ++bfq_entity_service_tree(struct bfq_entity *entity)
28101 ++{
28102 ++ struct bfq_sched_data *sched_data = entity->sched_data;
28103 ++ unsigned int idx = entity->ioprio_class - 1;
28104 ++
28105 ++ BUG_ON(idx >= BFQ_IOPRIO_CLASSES);
28106 ++ BUG_ON(sched_data == NULL);
28107 ++
28108 ++ return sched_data->service_tree + idx;
28109 ++}
28110 ++
28111 ++static inline struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic,
28112 ++ int is_sync)
28113 ++{
28114 ++ return bic->bfqq[!!is_sync];
28115 ++}
28116 ++
28117 ++static inline void bic_set_bfqq(struct bfq_io_cq *bic,
28118 ++ struct bfq_queue *bfqq, int is_sync)
28119 ++{
28120 ++ bic->bfqq[!!is_sync] = bfqq;
28121 ++}
28122 ++
28123 ++static inline struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic)
28124 ++{
28125 ++ return bic->icq.q->elevator->elevator_data;
28126 ++}
28127 ++
28128 ++/**
28129 ++ * bfq_get_bfqd_locked - get a lock to a bfqd using a RCU protected pointer.
28130 ++ * @ptr: a pointer to a bfqd.
28131 ++ * @flags: storage for the flags to be saved.
28132 ++ *
28133 ++ * This function allows bfqg->bfqd to be protected by the
 28134 ++ * queue lock of the bfqd it references; the pointer is dereferenced
28135 ++ * under RCU, so the storage for bfqd is assured to be safe as long
28136 ++ * as the RCU read side critical section does not end. After the
28137 ++ * bfqd->queue->queue_lock is taken the pointer is rechecked, to be
28138 ++ * sure that no other writer accessed it. If we raced with a writer,
28139 ++ * the function returns NULL, with the queue unlocked, otherwise it
28140 ++ * returns the dereferenced pointer, with the queue locked.
28141 ++ */
28142 ++static inline struct bfq_data *bfq_get_bfqd_locked(void **ptr,
28143 ++ unsigned long *flags)
28144 ++{
28145 ++ struct bfq_data *bfqd;
28146 ++
28147 ++ rcu_read_lock();
28148 ++ bfqd = rcu_dereference(*(struct bfq_data **)ptr);
28149 ++
28150 ++ if (bfqd != NULL) {
28151 ++ spin_lock_irqsave(bfqd->queue->queue_lock, *flags);
28152 ++ if (*ptr == bfqd)
28153 ++ goto out;
28154 ++ spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
28155 ++ }
28156 ++
28157 ++ bfqd = NULL;
28158 ++out:
28159 ++ rcu_read_unlock();
28160 ++ return bfqd;
28161 ++}
28162 ++
28163 ++static inline void bfq_put_bfqd_unlock(struct bfq_data *bfqd,
28164 ++ unsigned long *flags)
28165 ++{
28166 ++ spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
28167 ++}
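/*
 * [Editor's usage sketch, not part of the patch] A caller holding only an
 * RCU-protected bfqg->bfqd pointer would pair the two helpers above like
 * this ('bfqg' is assumed to be a valid struct bfq_group pointer):
 *
 *   unsigned long flags;
 *   struct bfq_data *bfqd = bfq_get_bfqd_locked(&bfqg->bfqd, &flags);
 *
 *   if (bfqd != NULL) {
 *           ... work under bfqd->queue->queue_lock ...
 *           bfq_put_bfqd_unlock(bfqd, &flags);
 *   }
 *
 * A NULL return means a writer changed the pointer in the meantime and
 * no lock is held, so the caller simply skips the work.
 */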
28168 ++
28169 ++static void bfq_changed_ioprio(struct bfq_io_cq *bic);
28170 ++static void bfq_put_queue(struct bfq_queue *bfqq);
28171 ++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq);
28172 ++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
28173 ++ struct bfq_group *bfqg, int is_sync,
28174 ++ struct bfq_io_cq *bic, gfp_t gfp_mask);
28175 ++static void bfq_end_raising_async_queues(struct bfq_data *bfqd,
28176 ++ struct bfq_group *bfqg);
28177 ++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
28178 ++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq);
28179 ++#endif
28180 +diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
28181 +index ffa1d1f..e5e6b0d 100644
28182 +--- a/include/linux/cgroup_subsys.h
28183 ++++ b/include/linux/cgroup_subsys.h
28184 +@@ -85,7 +85,7 @@ SUBSYS(bcache)
28185 +
28186 + /* */
28187 +
28188 +-#ifdef CONFIG_CGROUP_BFQIO
28189 ++#if IS_SUBSYS_ENABLED(CONFIG_CGROUP_BFQIO)
28190 + SUBSYS(bfqio)
28191 + #endif
28192 +
28193 +--
28194 +1.8.1.4
28195 +
28196
28197 Added: genpatches-2.6/trunk/3.10.7/1803_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v6r2-for-3.10.0.patch1
28198 ===================================================================
28199 --- genpatches-2.6/trunk/3.10.7/1803_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v6r2-for-3.10.0.patch1 (rev 0)
28200 +++ genpatches-2.6/trunk/3.10.7/1803_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v6r2-for-3.10.0.patch1 2013-08-29 12:09:12 UTC (rev 2497)
28201 @@ -0,0 +1,1049 @@
28202 +From 9204dcb026a40cd2cb4310fecf788924d0fbec8d Mon Sep 17 00:00:00 2001
28203 +From: Mauro Andreolini <mauro.andreolini@×××××××.it>
28204 +Date: Fri, 14 Jun 2013 13:46:47 +0200
28205 +Subject: [PATCH 3/3] block, bfq: add Early Queue Merge (EQM) to BFQ-v6r2 for
28206 + 3.10.0
28207 +
28208 +A set of processes may happen to perform interleaved reads, i.e., requests
28209 +whose union would give rise to a sequential read pattern. There are two
28210 +typical cases: in the first case, processes read fixed-size chunks of
28211 +data at a fixed distance from each other, while in the second case processes
28212 +may read variable-size chunks at variable distances. The latter case occurs
28213 +for example with KVM, which splits the I/O generated by the guest into
28214 +multiple chunks, and lets these chunks be served by a pool of cooperating
28215 +processes, iteratively assigning the next chunk of I/O to the first
28216 +available process. CFQ uses actual queue merging for the first type of
28217 +processes, whereas it uses preemption to get a sequential read pattern out
28218 +of the read requests performed by the second type of processes. In the end
28219 +it uses two different mechanisms to achieve the same goal: boosting the
28220 +throughput with interleaved I/O.
28221 +
28222 +This patch introduces Early Queue Merge (EQM), a unified mechanism to get a
28223 +sequential read pattern with both types of processes. The main idea is
28224 +checking newly arrived requests against the next request of the active queue
28225 +both in case of actual request insert and in case of request merge. By doing
 28226 ++so, both types of processes can be handled by just merging their queues.
28227 +EQM is then simpler and more compact than the pair of mechanisms used in
28228 +CFQ.
28229 +
28230 +Finally, EQM also preserves the typical low-latency properties of BFQ, by
28231 +properly restoring the weight-raising state of a queue when it gets back to
28232 +a non-merged state.
28233 +
28234 +Signed-off-by: Mauro Andreolini <mauro.andreolini@×××××××.it>
28235 +Signed-off-by: Arianna Avanzini <avanzini.arianna@×××××.com>
28236 +Reviewed-by: Paolo Valente <paolo.valente@×××××××.it>
28237 +---
28238 + block/bfq-iosched.c | 653 ++++++++++++++++++++++++++++++++++++----------------
28239 + block/bfq-sched.c | 28 ---
28240 + block/bfq.h | 16 ++
28241 + 3 files changed, 466 insertions(+), 231 deletions(-)
28242 +
28243 +diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
28244 +index b230927..bc57923 100644
28245 +--- a/block/bfq-iosched.c
28246 ++++ b/block/bfq-iosched.c
28247 +@@ -444,6 +444,43 @@ static inline unsigned int bfq_wrais_duration(struct bfq_data *bfqd)
28248 + return dur;
28249 + }
28250 +
28251 ++static inline void
28252 ++bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
28253 ++{
28254 ++ if (bic->saved_idle_window)
28255 ++ bfq_mark_bfqq_idle_window(bfqq);
28256 ++ else
28257 ++ bfq_clear_bfqq_idle_window(bfqq);
28258 ++ if (bic->raising_time_left && bfqq->bfqd->low_latency) {
28259 ++ /*
28260 ++ * Start a weight raising period with the duration given by
28261 ++ * the raising_time_left snapshot.
28262 ++ */
28263 ++ bfqq->raising_coeff = bfqq->bfqd->bfq_raising_coeff;
28264 ++ bfqq->raising_cur_max_time = bic->raising_time_left;
28265 ++ bfqq->last_rais_start_finish = jiffies;
28266 ++ }
28267 ++ /*
28268 ++ * Clear raising_time_left to prevent bfq_bfqq_save_state() from
28269 ++ * getting confused about the queue's need of a weight-raising
28270 ++ * period.
28271 ++ */
28272 ++ bic->raising_time_left = 0;
28273 ++}
28274 ++
28275 ++/*
28276 ++ * Must be called with the queue_lock held.
28277 ++ */
28278 ++static int bfqq_process_refs(struct bfq_queue *bfqq)
28279 ++{
28280 ++ int process_refs, io_refs;
28281 ++
28282 ++ io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
28283 ++ process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
28284 ++ BUG_ON(process_refs < 0);
28285 ++ return process_refs;
28286 ++}
28287 ++
28288 + static void bfq_add_rq_rb(struct request *rq)
28289 + {
28290 + struct bfq_queue *bfqq = RQ_BFQQ(rq);
28291 +@@ -483,11 +520,20 @@ static void bfq_add_rq_rb(struct request *rq)
28292 + if (! bfqd->low_latency)
28293 + goto add_bfqq_busy;
28294 +
28295 ++ if (bfq_bfqq_just_split(bfqq))
28296 ++ goto set_ioprio_changed;
28297 ++
28298 + /*
28299 +- * If the queue is not being boosted and has been idle
28300 +- * for enough time, start a weight-raising period
28301 ++ * If the queue:
28302 ++ * - is not being boosted,
28303 ++ * - has been idle for enough time,
28304 ++ * - is not a sync queue or is linked to a bfq_io_cq (it is
28305 ++ * shared "for its nature" or it is not shared and its
28306 ++ * requests have not been redirected to a shared queue)
28307 ++ * start a weight-raising period.
28308 + */
28309 +- if(old_raising_coeff == 1 && (idle_for_long_time || soft_rt)) {
28310 ++ if(old_raising_coeff == 1 && (idle_for_long_time || soft_rt) &&
28311 ++ (!bfq_bfqq_sync(bfqq) || bfqq->bic != NULL)) {
28312 + bfqq->raising_coeff = bfqd->bfq_raising_coeff;
28313 + if (idle_for_long_time)
28314 + bfqq->raising_cur_max_time =
28315 +@@ -517,6 +563,7 @@ static void bfq_add_rq_rb(struct request *rq)
28316 + raising_cur_max_time));
28317 + }
28318 + }
28319 ++set_ioprio_changed:
28320 + if (old_raising_coeff != bfqq->raising_coeff)
28321 + entity->ioprio_changed = 1;
28322 + add_bfqq_busy:
28323 +@@ -695,89 +742,35 @@ static void bfq_end_raising(struct bfq_data *bfqd)
28324 + spin_unlock_irq(bfqd->queue->queue_lock);
28325 + }
28326 +
28327 +-static int bfq_allow_merge(struct request_queue *q, struct request *rq,
28328 +- struct bio *bio)
28329 ++static inline sector_t bfq_io_struct_pos(void *io_struct, bool request)
28330 + {
28331 +- struct bfq_data *bfqd = q->elevator->elevator_data;
28332 +- struct bfq_io_cq *bic;
28333 +- struct bfq_queue *bfqq;
28334 +-
28335 +- /*
28336 +- * Disallow merge of a sync bio into an async request.
28337 +- */
28338 +- if (bfq_bio_sync(bio) && !rq_is_sync(rq))
28339 +- return 0;
28340 +-
28341 +- /*
28342 +- * Lookup the bfqq that this bio will be queued with. Allow
28343 +- * merge only if rq is queued there.
28344 +- * Queue lock is held here.
28345 +- */
28346 +- bic = bfq_bic_lookup(bfqd, current->io_context);
28347 +- if (bic == NULL)
28348 +- return 0;
28349 +-
28350 +- bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
28351 +- return bfqq == RQ_BFQQ(rq);
28352 +-}
28353 +-
28354 +-static void __bfq_set_active_queue(struct bfq_data *bfqd,
28355 +- struct bfq_queue *bfqq)
28356 +-{
28357 +- if (bfqq != NULL) {
28358 +- bfq_mark_bfqq_must_alloc(bfqq);
28359 +- bfq_mark_bfqq_budget_new(bfqq);
28360 +- bfq_clear_bfqq_fifo_expire(bfqq);
28361 +-
28362 +- bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
28363 +-
28364 +- bfq_log_bfqq(bfqd, bfqq, "set_active_queue, cur-budget = %lu",
28365 +- bfqq->entity.budget);
28366 +- }
28367 +-
28368 +- bfqd->active_queue = bfqq;
28369 +-}
28370 +-
28371 +-/*
28372 +- * Get and set a new active queue for service.
28373 +- */
28374 +-static struct bfq_queue *bfq_set_active_queue(struct bfq_data *bfqd,
28375 +- struct bfq_queue *bfqq)
28376 +-{
28377 +- if (!bfqq)
28378 +- bfqq = bfq_get_next_queue(bfqd);
28379 ++ if (request)
28380 ++ return blk_rq_pos(io_struct);
28381 + else
28382 +- bfq_get_next_queue_forced(bfqd, bfqq);
28383 +-
28384 +- __bfq_set_active_queue(bfqd, bfqq);
28385 +- return bfqq;
28386 ++ return ((struct bio *)io_struct)->bi_sector;
28387 + }
28388 +
28389 +-static inline sector_t bfq_dist_from_last(struct bfq_data *bfqd,
28390 +- struct request *rq)
28391 ++static inline sector_t bfq_dist_from(sector_t pos1,
28392 ++ sector_t pos2)
28393 + {
28394 +- if (blk_rq_pos(rq) >= bfqd->last_position)
28395 +- return blk_rq_pos(rq) - bfqd->last_position;
28396 ++ if (pos1 >= pos2)
28397 ++ return pos1 - pos2;
28398 + else
28399 +- return bfqd->last_position - blk_rq_pos(rq);
28400 ++ return pos2 - pos1;
28401 + }
28402 +
28403 +-/*
28404 +- * Return true if bfqq has no request pending and rq is close enough to
28405 +- * bfqd->last_position, or if rq is closer to bfqd->last_position than
28406 +- * bfqq->next_rq
28407 +- */
28408 +-static inline int bfq_rq_close(struct bfq_data *bfqd, struct request *rq)
28409 ++static inline int bfq_rq_close_to_sector(void *io_struct, bool request,
28410 ++ sector_t sector)
28411 + {
28412 +- return bfq_dist_from_last(bfqd, rq) <= BFQQ_SEEK_THR;
28413 ++ return bfq_dist_from(bfq_io_struct_pos(io_struct, request), sector) <=
28414 ++ BFQQ_SEEK_THR;
28415 + }
28416 +
28417 +-static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
28418 ++static struct bfq_queue *bfqq_close(struct bfq_data *bfqd, sector_t sector)
28419 + {
28420 + struct rb_root *root = &bfqd->rq_pos_tree;
28421 + struct rb_node *parent, *node;
28422 + struct bfq_queue *__bfqq;
28423 +- sector_t sector = bfqd->last_position;
28424 +
28425 + if (RB_EMPTY_ROOT(root))
28426 + return NULL;
28427 +@@ -796,7 +789,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
28428 + * position).
28429 + */
28430 + __bfqq = rb_entry(parent, struct bfq_queue, pos_node);
28431 +- if (bfq_rq_close(bfqd, __bfqq->next_rq))
28432 ++ if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
28433 + return __bfqq;
28434 +
28435 + if (blk_rq_pos(__bfqq->next_rq) < sector)
28436 +@@ -807,7 +800,7 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
28437 + return NULL;
28438 +
28439 + __bfqq = rb_entry(node, struct bfq_queue, pos_node);
28440 +- if (bfq_rq_close(bfqd, __bfqq->next_rq))
28441 ++ if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
28442 + return __bfqq;
28443 +
28444 + return NULL;
28445 +@@ -816,14 +809,12 @@ static struct bfq_queue *bfqq_close(struct bfq_data *bfqd)
28446 + /*
28447 + * bfqd - obvious
28448 + * cur_bfqq - passed in so that we don't decide that the current queue
28449 +- * is closely cooperating with itself.
28450 +- *
28451 +- * We are assuming that cur_bfqq has dispatched at least one request,
28452 +- * and that bfqd->last_position reflects a position on the disk associated
28453 +- * with the I/O issued by cur_bfqq.
28454 ++ * is closely cooperating with itself
28455 ++ * sector - used as a reference point to search for a close queue
28456 + */
28457 + static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
28458 +- struct bfq_queue *cur_bfqq)
28459 ++ struct bfq_queue *cur_bfqq,
28460 ++ sector_t sector)
28461 + {
28462 + struct bfq_queue *bfqq;
28463 +
28464 +@@ -843,7 +834,7 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
28465 + * working closely on the same area of the disk. In that case,
28466 + * we can group them together and don't waste time idling.
28467 + */
28468 +- bfqq = bfqq_close(bfqd);
28469 ++ bfqq = bfqq_close(bfqd, sector);
28470 + if (bfqq == NULL || bfqq == cur_bfqq)
28471 + return NULL;
28472 +
28473 +@@ -870,6 +861,275 @@ static struct bfq_queue *bfq_close_cooperator(struct bfq_data *bfqd,
28474 + return bfqq;
28475 + }
28476 +
28477 ++static struct bfq_queue *
28478 ++bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
28479 ++{
28480 ++ int process_refs, new_process_refs;
28481 ++ struct bfq_queue *__bfqq;
28482 ++
28483 ++ /*
28484 ++ * If there are no process references on the new_bfqq, then it is
28485 ++ * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
28486 ++ * may have dropped their last reference (not just their last process
28487 ++ * reference).
28488 ++ */
28489 ++ if (!bfqq_process_refs(new_bfqq))
28490 ++ return NULL;
28491 ++
28492 ++ /* Avoid a circular list and skip interim queue merges. */
28493 ++ while ((__bfqq = new_bfqq->new_bfqq)) {
28494 ++ if (__bfqq == bfqq)
28495 ++ return NULL;
28496 ++ new_bfqq = __bfqq;
28497 ++ }
28498 ++
28499 ++ process_refs = bfqq_process_refs(bfqq);
28500 ++ new_process_refs = bfqq_process_refs(new_bfqq);
28501 ++ /*
28502 ++ * If the process for the bfqq has gone away, there is no
28503 ++ * sense in merging the queues.
28504 ++ */
28505 ++ if (process_refs == 0 || new_process_refs == 0)
28506 ++ return NULL;
28507 ++
28508 ++ bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
28509 ++ new_bfqq->pid);
28510 ++
28511 ++ /*
28512 ++ * Merging is just a redirection: the requests of the process owning
28513 ++ * one of the two queues are redirected to the other queue. The latter
28514 ++ * queue, in its turn, is set as shared if this is the first time that
28515 ++ * the requests of some process are redirected to it.
28516 ++ *
28517 ++ * We redirect bfqq to new_bfqq and not the opposite, because we
28518 ++ * are in the context of the process owning bfqq, hence we have the
28519 ++ * io_cq of this process. So we can immediately configure this io_cq
28520 ++ * to redirect the requests of the process to new_bfqq.
28521 ++ *
28522 ++ * NOTE, even if new_bfqq coincides with the active queue, the io_cq of
28523 ++ * new_bfqq is not available, because, if the active queue is shared,
28524 ++ * bfqd->active_bic may not point to the io_cq of the active queue.
28525 ++ * Redirecting the requests of the process owning bfqq to the currently
28526 ++ * active queue is in any case the best option, as we feed the active queue
28527 ++ * with new requests close to the last request served and, by doing so,
28528 ++ * hopefully increase the throughput.
28529 ++ */
28530 ++ bfqq->new_bfqq = new_bfqq;
28531 ++ atomic_add(process_refs, &new_bfqq->ref);
28532 ++ return new_bfqq;
28533 ++}
28534 ++
28535 ++/*
28536 ++ * Attempt to schedule a merge of bfqq with the currently active queue or
28537 ++ * with a close queue among the scheduled queues.
28538 ++ * Return NULL if no merge was scheduled, a pointer to the shared bfq_queue
28539 ++ * structure otherwise.
28540 ++ */
28541 ++static struct bfq_queue *
28542 ++bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
28543 ++ void *io_struct, bool request)
28544 ++{
28545 ++ struct bfq_queue *active_bfqq, *new_bfqq;
28546 ++
28547 ++ if (bfqq->new_bfqq)
28548 ++ return bfqq->new_bfqq;
28549 ++
28550 ++ if (!io_struct)
28551 ++ return NULL;
28552 ++
28553 ++ active_bfqq = bfqd->active_queue;
28554 ++
28555 ++ if (active_bfqq == NULL || active_bfqq == bfqq || !bfqd->active_bic)
28556 ++ goto check_scheduled;
28557 ++
28558 ++ if (bfq_class_idle(active_bfqq) || bfq_class_idle(bfqq))
28559 ++ goto check_scheduled;
28560 ++
28561 ++ if (bfq_class_rt(active_bfqq) != bfq_class_rt(bfqq))
28562 ++ goto check_scheduled;
28563 ++
28564 ++ if (active_bfqq->entity.parent != bfqq->entity.parent)
28565 ++ goto check_scheduled;
28566 ++
28567 ++ if (bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) &&
28568 ++ bfq_bfqq_sync(active_bfqq) && bfq_bfqq_sync(bfqq))
28569 ++ if ((new_bfqq = bfq_setup_merge(bfqq, active_bfqq)))
28570 ++ return new_bfqq; /* Merge with the active queue */
28571 ++
28572 ++ /*
28573 ++ * Check whether there is a cooperator among currently scheduled
28574 ++ * queues. The only thing we need is that the bio/request is not
28575 ++ * NULL, as we need it to establish whether a cooperator exists.
28576 ++ */
28577 ++check_scheduled:
28578 ++ new_bfqq = bfq_close_cooperator(bfqd, bfqq,
28579 ++ bfq_io_struct_pos(io_struct, request));
28580 ++ if (new_bfqq)
28581 ++ return bfq_setup_merge(bfqq, new_bfqq);
28582 ++
28583 ++ return NULL;
28584 ++}
28585 ++
28586 ++static inline void
28587 ++bfq_bfqq_save_state(struct bfq_queue *bfqq)
28588 ++{
28589 ++ /*
28590 ++ * If bfqq->bic == NULL, the queue is already shared or its requests
28591 ++ * have already been redirected to a shared queue; both idle window
28592 ++ * and weight raising state have already been saved. Do nothing.
28593 ++ */
28594 ++ if (bfqq->bic == NULL)
28595 ++ return;
28596 ++ if (bfqq->bic->raising_time_left)
28597 ++ /*
28598 ++ * This is the queue of a just-started process, and would
28599 ++ * deserve weight raising: we set raising_time_left to the full
28600 ++ * weight-raising duration to trigger weight-raising when and
28601 ++ * if the queue is split and the first request of the queue
28602 ++ * is enqueued.
28603 ++ */
28604 ++ bfqq->bic->raising_time_left = bfq_wrais_duration(bfqq->bfqd);
28605 ++ else if (bfqq->raising_coeff > 1) {
28606 ++ unsigned long wrais_duration =
28607 ++ jiffies - bfqq->last_rais_start_finish;
28608 ++ /*
28609 ++ * It may happen that a queue's weight raising period lasts
28610 ++ * longer than its raising_cur_max_time, as weight raising is
28611 ++ * handled only when a request is enqueued or dispatched (it
28612 ++ * does not use any timer). If the weight raising period is
28613 ++ * about to end, don't save it.
28614 ++ */
28615 ++ if (bfqq->raising_cur_max_time <= wrais_duration)
28616 ++ bfqq->bic->raising_time_left = 0;
28617 ++ else
28618 ++ bfqq->bic->raising_time_left =
28619 ++ bfqq->raising_cur_max_time - wrais_duration;
28620 ++ /*
28621 ++ * The bfq_queue is becoming shared or the requests of the
28622 ++ * process owning the queue are being redirected to a shared
28623 ++ * queue. Stop the weight raising period of the queue, as in
28624 ++ * both cases it should not be owned by an interactive or soft
28625 ++ * real-time application.
28626 ++ */
28627 ++ bfq_bfqq_end_raising(bfqq);
28628 ++ } else
28629 ++ bfqq->bic->raising_time_left = 0;
28630 ++ bfqq->bic->saved_idle_window = bfq_bfqq_idle_window(bfqq);
28631 ++}
28632 ++
28633 ++static inline void
28634 ++bfq_get_bic_reference(struct bfq_queue *bfqq)
28635 ++{
28636 ++ /*
28637 ++ * If bfqq->bic has a non-NULL value, the bic to which it belongs
28638 ++ * is about to begin using a shared bfq_queue.
28639 ++ */
28640 ++ if (bfqq->bic)
28641 ++ atomic_long_inc(&bfqq->bic->icq.ioc->refcount);
28642 ++}
28643 ++
28644 ++static void
28645 ++bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
28646 ++ struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
28647 ++{
28648 ++ bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
28649 ++ (long unsigned)new_bfqq->pid);
28650 ++ /* Save weight raising and idle window of the merged queues */
28651 ++ bfq_bfqq_save_state(bfqq);
28652 ++ bfq_bfqq_save_state(new_bfqq);
28653 ++ /*
28654 ++ * Grab a reference to the bic, to prevent it from being destroyed
28655 ++ * before being possibly touched by a bfq_split_bfqq().
28656 ++ */
28657 ++ bfq_get_bic_reference(bfqq);
28658 ++ bfq_get_bic_reference(new_bfqq);
28659 ++ /* Merge queues (that is, let bic redirect its requests to new_bfqq) */
28660 ++ bic_set_bfqq(bic, new_bfqq, 1);
28661 ++ bfq_mark_bfqq_coop(new_bfqq);
28662 ++ /*
28663 ++ * new_bfqq now belongs to at least two bics (it is a shared queue): set
28664 ++ * new_bfqq->bic to NULL. bfqq either:
28665 ++ * - does not belong to any bic any more, and hence bfqq->bic must
28666 ++ * be set to NULL, or
28667 ++ * - is a queue whose owning bics have already been redirected to a
28668 ++ * different queue, hence the queue is destined to not belong to any
28669 ++ * bic soon and bfqq->bic is already NULL (therefore the next
28670 ++ * assignment causes no harm).
28671 ++ */
28672 ++ new_bfqq->bic = NULL;
28673 ++ bfqq->bic = NULL;
28674 ++ bfq_put_queue(bfqq);
28675 ++}
28676 ++
28677 ++static int bfq_allow_merge(struct request_queue *q, struct request *rq,
28678 ++ struct bio *bio)
28679 ++{
28680 ++ struct bfq_data *bfqd = q->elevator->elevator_data;
28681 ++ struct bfq_io_cq *bic;
28682 ++ struct bfq_queue *bfqq, *new_bfqq;
28683 ++
28684 ++ /*
28685 ++ * Disallow merge of a sync bio into an async request.
28686 ++ */
28687 ++ if (bfq_bio_sync(bio) && !rq_is_sync(rq))
28688 ++ return 0;
28689 ++
28690 ++ /*
28691 ++ * Lookup the bfqq that this bio will be queued with. Allow
28692 ++ * merge only if rq is queued there.
28693 ++ * Queue lock is held here.
28694 ++ */
28695 ++ bic = bfq_bic_lookup(bfqd, current->io_context);
28696 ++ if (bic == NULL)
28697 ++ return 0;
28698 ++
28699 ++ bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
28700 ++ /*
28701 ++ * We take advantage of this function to perform an early merge
28702 ++ * of the queues of possible cooperating processes.
28703 ++ */
28704 ++ if (bfqq != NULL &&
28705 ++ (new_bfqq = bfq_setup_cooperator(bfqd, bfqq, bio, false))) {
28706 ++ bfq_merge_bfqqs(bfqd, bic, bfqq, new_bfqq);
28707 ++ /*
28708 ++ * If we get here, the bio will be queued in the shared queue,
28709 ++ * i.e., new_bfqq, so use new_bfqq to decide whether bio and
28710 ++ * rq can be merged.
28711 ++ */
28712 ++ bfqq = new_bfqq;
28713 ++ }
28714 ++
28715 ++ return bfqq == RQ_BFQQ(rq);
28716 ++}
28717 ++
28718 ++static void __bfq_set_active_queue(struct bfq_data *bfqd,
28719 ++ struct bfq_queue *bfqq)
28720 ++{
28721 ++ if (bfqq != NULL) {
28722 ++ bfq_mark_bfqq_must_alloc(bfqq);
28723 ++ bfq_mark_bfqq_budget_new(bfqq);
28724 ++ bfq_clear_bfqq_fifo_expire(bfqq);
28725 ++
28726 ++ bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
28727 ++
28728 ++ bfq_log_bfqq(bfqd, bfqq, "set_active_queue, cur-budget = %lu",
28729 ++ bfqq->entity.budget);
28730 ++ }
28731 ++
28732 ++ bfqd->active_queue = bfqq;
28733 ++}
28734 ++
28735 ++/*
28736 ++ * Get and set a new active queue for service.
28737 ++ */
28738 ++static struct bfq_queue *bfq_set_active_queue(struct bfq_data *bfqd)
28739 ++{
28740 ++ struct bfq_queue *bfqq = bfq_get_next_queue(bfqd);
28741 ++
28742 ++ __bfq_set_active_queue(bfqd, bfqq);
28743 ++ return bfqq;
28744 ++}
28745 ++
28746 + /*
28747 + * If enough samples have been computed, return the current max budget
28748 + * stored in bfqd, which is dynamically updated according to the
28749 +@@ -1017,63 +1277,6 @@ static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
28750 + return rq;
28751 + }
28752 +
28753 +-/*
28754 +- * Must be called with the queue_lock held.
28755 +- */
28756 +-static int bfqq_process_refs(struct bfq_queue *bfqq)
28757 +-{
28758 +- int process_refs, io_refs;
28759 +-
28760 +- io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
28761 +- process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
28762 +- BUG_ON(process_refs < 0);
28763 +- return process_refs;
28764 +-}
28765 +-
28766 +-static void bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
28767 +-{
28768 +- int process_refs, new_process_refs;
28769 +- struct bfq_queue *__bfqq;
28770 +-
28771 +- /*
28772 +- * If there are no process references on the new_bfqq, then it is
28773 +- * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
28774 +- * may have dropped their last reference (not just their last process
28775 +- * reference).
28776 +- */
28777 +- if (!bfqq_process_refs(new_bfqq))
28778 +- return;
28779 +-
28780 +- /* Avoid a circular list and skip interim queue merges. */
28781 +- while ((__bfqq = new_bfqq->new_bfqq)) {
28782 +- if (__bfqq == bfqq)
28783 +- return;
28784 +- new_bfqq = __bfqq;
28785 +- }
28786 +-
28787 +- process_refs = bfqq_process_refs(bfqq);
28788 +- new_process_refs = bfqq_process_refs(new_bfqq);
28789 +- /*
28790 +- * If the process for the bfqq has gone away, there is no
28791 +- * sense in merging the queues.
28792 +- */
28793 +- if (process_refs == 0 || new_process_refs == 0)
28794 +- return;
28795 +-
28796 +- /*
28797 +- * Merge in the direction of the lesser amount of work.
28798 +- */
28799 +- if (new_process_refs >= process_refs) {
28800 +- bfqq->new_bfqq = new_bfqq;
28801 +- atomic_add(process_refs, &new_bfqq->ref);
28802 +- } else {
28803 +- new_bfqq->new_bfqq = bfqq;
28804 +- atomic_add(new_process_refs, &bfqq->ref);
28805 +- }
28806 +- bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
28807 +- new_bfqq->pid);
28808 +-}
28809 +-
28810 + static inline unsigned long bfq_bfqq_budget_left(struct bfq_queue *bfqq)
28811 + {
28812 + struct bfq_entity *entity = &bfqq->entity;
28813 +@@ -1493,6 +1696,14 @@ static inline int bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq)
28814 + * is likely to boost the disk throughput);
28815 + * - the queue is weight-raised (waiting for the request is necessary for
28816 + * providing the queue with fairness and latency guarantees).
28817 ++ *
28818 ++ * In any case, idling can be disabled for cooperation issues, if
28819 ++ * 1) there is a close cooperator for the queue, or
28820 ++ * 2) the queue is shared and some cooperator is likely to be idle (in this
28821 ++ * case, by not arming the idle timer, we try to slow down the queue, to
28822 ++ * prevent the zones of the disk accessed by the active cooperators to
28823 ++ * become too distant from the zone that will be accessed by the currently
28824 ++ * idle cooperators).
28825 + */
28826 + static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq,
28827 + int budg_timeout)
28828 +@@ -1507,7 +1718,7 @@ static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq,
28829 + (bfqd->rq_in_driver == 0 ||
28830 + budg_timeout ||
28831 + bfqq->raising_coeff > 1) &&
28832 +- !bfq_close_cooperator(bfqd, bfqq) &&
28833 ++ !bfq_close_cooperator(bfqd, bfqq, bfqd->last_position) &&
28834 + (!bfq_bfqq_coop(bfqq) ||
28835 + !bfq_bfqq_some_coop_idle(bfqq)) &&
28836 + !bfq_queue_nonrot_noidle(bfqd, bfqq));
28837 +@@ -1519,7 +1730,7 @@ static inline bool bfq_bfqq_must_idle(struct bfq_queue *bfqq,
28838 + */
28839 + static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
28840 + {
28841 +- struct bfq_queue *bfqq, *new_bfqq = NULL;
28842 ++ struct bfq_queue *bfqq;
28843 + struct request *next_rq;
28844 + enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT;
28845 + int budg_timeout;
28846 +@@ -1530,17 +1741,6 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
28847 +
28848 + bfq_log_bfqq(bfqd, bfqq, "select_queue: already active queue");
28849 +
28850 +- /*
28851 +- * If another queue has a request waiting within our mean seek
28852 +- * distance, let it run. The expire code will check for close
28853 +- * cooperators and put the close queue at the front of the
28854 +- * service tree. If possible, merge the expiring queue with the
28855 +- * new bfqq.
28856 +- */
28857 +- new_bfqq = bfq_close_cooperator(bfqd, bfqq);
28858 +- if (new_bfqq != NULL && bfqq->new_bfqq == NULL)
28859 +- bfq_setup_merge(bfqq, new_bfqq);
28860 +-
28861 + budg_timeout = bfq_may_expire_for_budg_timeout(bfqq);
28862 + if (budg_timeout &&
28863 + !bfq_bfqq_must_idle(bfqq, budg_timeout))
28864 +@@ -1577,10 +1777,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
28865 + bfq_clear_bfqq_wait_request(bfqq);
28866 + del_timer(&bfqd->idle_slice_timer);
28867 + }
28868 +- if (new_bfqq == NULL)
28869 +- goto keep_queue;
28870 +- else
28871 +- goto expire;
28872 ++ goto keep_queue;
28873 + }
28874 + }
28875 +
28876 +@@ -1589,26 +1786,19 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
28877 + * queue still has requests in flight or is idling for a new request,
28878 + * then keep it.
28879 + */
28880 +- if (new_bfqq == NULL && (timer_pending(&bfqd->idle_slice_timer) ||
28881 ++ if (timer_pending(&bfqd->idle_slice_timer) ||
28882 + (bfqq->dispatched != 0 &&
28883 + (bfq_bfqq_idle_window(bfqq) || bfqq->raising_coeff > 1) &&
28884 +- !bfq_queue_nonrot_noidle(bfqd, bfqq)))) {
28885 ++ !bfq_queue_nonrot_noidle(bfqd, bfqq))) {
28886 + bfqq = NULL;
28887 + goto keep_queue;
28888 +- } else if (new_bfqq != NULL && timer_pending(&bfqd->idle_slice_timer)) {
28889 +- /*
28890 +- * Expiring the queue because there is a close cooperator,
28891 +- * cancel timer.
28892 +- */
28893 +- bfq_clear_bfqq_wait_request(bfqq);
28894 +- del_timer(&bfqd->idle_slice_timer);
28895 + }
28896 +
28897 + reason = BFQ_BFQQ_NO_MORE_REQUESTS;
28898 + expire:
28899 + bfq_bfqq_expire(bfqd, bfqq, 0, reason);
28900 + new_queue:
28901 +- bfqq = bfq_set_active_queue(bfqd, new_bfqq);
28902 ++ bfqq = bfq_set_active_queue(bfqd);
28903 + bfq_log(bfqd, "select_queue: new queue %d returned",
28904 + bfqq != NULL ? bfqq->pid : 0);
28905 + keep_queue:
28906 +@@ -1617,9 +1807,8 @@ keep_queue:
28907 +
28908 + static void update_raising_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
28909 + {
28910 ++ struct bfq_entity *entity = &bfqq->entity;
28911 + if (bfqq->raising_coeff > 1) { /* queue is being boosted */
28912 +- struct bfq_entity *entity = &bfqq->entity;
28913 +-
28914 + bfq_log_bfqq(bfqd, bfqq,
28915 + "raising period dur %u/%u msec, "
28916 + "old raising coeff %u, w %d(%d)",
28917 +@@ -1656,12 +1845,14 @@ static void update_raising_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
28918 + jiffies_to_msecs(bfqq->
28919 + raising_cur_max_time));
28920 + bfq_bfqq_end_raising(bfqq);
28921 +- __bfq_entity_update_weight_prio(
28922 +- bfq_entity_service_tree(entity),
28923 +- entity);
28924 + }
28925 + }
28926 + }
28927 ++ /* Update weight both if it must be raised and if it must be lowered */
28928 ++ if ((entity->weight > entity->orig_weight) != (bfqq->raising_coeff > 1))
28929 ++ __bfq_entity_update_weight_prio(
28930 ++ bfq_entity_service_tree(entity),
28931 ++ entity);
28932 + }
28933 +
28934 + /*
28935 +@@ -1901,6 +2092,25 @@ static void bfq_init_icq(struct io_cq *icq)
28936 + struct bfq_io_cq *bic = icq_to_bic(icq);
28937 +
28938 + bic->ttime.last_end_request = jiffies;
28939 ++ /*
28940 ++ * A newly created bic indicates that the process has just
28941 ++ * started doing I/O, and is probably mapping into memory its
28942 ++ * executable and libraries: it definitely needs weight raising.
28943 ++ * There is however the possibility that the process performs,
28944 ++ * for a while, I/O close to some other process. EQM intercepts
28945 ++ * this behavior and may merge the queue corresponding to the
28946 ++ * process with some other queue, BEFORE the weight of the queue
28947 ++ * is raised. Merged queues are not weight-raised (they are assumed
28948 ++ * to belong to processes that benefit only from high throughput).
28949 ++ * If the merge is basically the consequence of an accident, then
28950 ++ * the queue will be split soon and will get back its old weight.
28951 ++ * It is then important to write down somewhere that this queue
28952 ++ * does need weight raising, even if it did not make it to get its
28953 ++ * weight raised before being merged. To this purpose, we overload
28954 ++ * the field raising_time_left and assign 1 to it, to mark the queue
28955 ++ * as needing weight raising.
28956 ++ */
28957 ++ bic->raising_time_left = 1;
28958 + }
28959 +
28960 + static void bfq_exit_icq(struct io_cq *icq)
28961 +@@ -1914,6 +2124,13 @@ static void bfq_exit_icq(struct io_cq *icq)
28962 + }
28963 +
28964 + if (bic->bfqq[BLK_RW_SYNC]) {
28965 ++ /*
28966 ++ * If the bic is using a shared queue, put the reference
28967 ++ * taken on the io_context when the bic started using a
28968 ++ * shared bfq_queue.
28969 ++ */
28970 ++ if (bfq_bfqq_coop(bic->bfqq[BLK_RW_SYNC]))
28971 ++ put_io_context(icq->ioc);
28972 + bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
28973 + bic->bfqq[BLK_RW_SYNC] = NULL;
28974 + }
28975 +@@ -2211,6 +2428,10 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
28976 + if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
28977 + return;
28978 +
28979 ++ /* Idle window just restored, statistics are meaningless. */
28980 ++ if (bfq_bfqq_just_split(bfqq))
28981 ++ return;
28982 ++
28983 + enable_idle = bfq_bfqq_idle_window(bfqq);
28984 +
28985 + if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
28986 +@@ -2251,6 +2472,7 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
28987 + if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
28988 + !BFQQ_SEEKY(bfqq))
28989 + bfq_update_idle_window(bfqd, bfqq, bic);
28990 ++ bfq_clear_bfqq_just_split(bfqq);
28991 +
28992 + bfq_log_bfqq(bfqd, bfqq,
28993 + "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
28994 +@@ -2302,13 +2524,45 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
28995 + static void bfq_insert_request(struct request_queue *q, struct request *rq)
28996 + {
28997 + struct bfq_data *bfqd = q->elevator->elevator_data;
28998 +- struct bfq_queue *bfqq = RQ_BFQQ(rq);
28999 ++ struct bfq_queue *bfqq = RQ_BFQQ(rq), *new_bfqq;
29000 +
29001 + assert_spin_locked(bfqd->queue->queue_lock);
29002 ++
29003 ++ /*
29004 ++ * An unplug may trigger a requeue of a request from the device
29005 ++ * driver: make sure we are in process context while trying to
29006 ++ * merge two bfq_queues.
29007 ++ */
29008 ++ if (!in_interrupt() &&
29009 ++ (new_bfqq = bfq_setup_cooperator(bfqd, bfqq, rq, true))) {
29010 ++ if (bic_to_bfqq(RQ_BIC(rq), 1) != bfqq)
29011 ++ new_bfqq = bic_to_bfqq(RQ_BIC(rq), 1);
29012 ++ /*
29013 ++ * Release the request's reference to the old bfqq
29014 ++ * and make sure one is taken to the shared queue.
29015 ++ */
29016 ++ new_bfqq->allocated[rq_data_dir(rq)]++;
29017 ++ bfqq->allocated[rq_data_dir(rq)]--;
29018 ++ atomic_inc(&new_bfqq->ref);
29019 ++ bfq_put_queue(bfqq);
29020 ++ if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq)
29021 ++ bfq_merge_bfqqs(bfqd, RQ_BIC(rq), bfqq, new_bfqq);
29022 ++ rq->elv.priv[1] = new_bfqq;
29023 ++ bfqq = new_bfqq;
29024 ++ }
29025 ++
29026 + bfq_init_prio_data(bfqq, RQ_BIC(rq));
29027 +
29028 + bfq_add_rq_rb(rq);
29029 +
29030 ++ /*
29031 ++ * Here a newly-created bfq_queue has already started a weight-raising
29032 ++ * period: clear raising_time_left to prevent bfq_bfqq_save_state()
29033 ++ * from assigning it a full weight-raising period. See the detailed
29034 ++ * comments about this field in bfq_init_icq().
29035 ++ */
29036 ++ if (bfqq->bic != NULL)
29037 ++ bfqq->bic->raising_time_left = 0;
29038 + rq_set_fifo_time(rq, jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)]);
29039 + list_add_tail(&rq->queuelist, &bfqq->fifo);
29040 +
29041 +@@ -2371,15 +2625,6 @@ static void bfq_completed_request(struct request_queue *q, struct request *rq)
29042 + if (bfq_bfqq_budget_new(bfqq))
29043 + bfq_set_budget_timeout(bfqd);
29044 +
29045 +- /* Idling is disabled also for cooperation issues:
29046 +- * 1) there is a close cooperator for the queue, or
29047 +- * 2) the queue is shared and some cooperator is likely
29048 +- * to be idle (in this case, by not arming the idle timer,
29049 +- * we try to slow down the queue, to prevent the zones
29050 +- * of the disk accessed by the active cooperators to become
29051 +- * too distant from the zone that will be accessed by the
29052 +- * currently idle cooperators)
29053 +- */
29054 + if (bfq_bfqq_must_idle(bfqq, budg_timeout))
29055 + bfq_arm_slice_timer(bfqd);
29056 + else if (budg_timeout)
29057 +@@ -2449,18 +2694,6 @@ static void bfq_put_request(struct request *rq)
29058 + }
29059 + }
29060 +
29061 +-static struct bfq_queue *
29062 +-bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
29063 +- struct bfq_queue *bfqq)
29064 +-{
29065 +- bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
29066 +- (long unsigned)bfqq->new_bfqq->pid);
29067 +- bic_set_bfqq(bic, bfqq->new_bfqq, 1);
29068 +- bfq_mark_bfqq_coop(bfqq->new_bfqq);
29069 +- bfq_put_queue(bfqq);
29070 +- return bic_to_bfqq(bic, 1);
29071 +-}
29072 +-
29073 + /*
29074 + * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
29075 + * was the last process referring to said bfqq.
29076 +@@ -2469,6 +2702,9 @@ static struct bfq_queue *
29077 + bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
29078 + {
29079 + bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
29080 ++
29081 ++ put_io_context(bic->icq.ioc);
29082 ++
29083 + if (bfqq_process_refs(bfqq) == 1) {
29084 + bfqq->pid = current->pid;
29085 + bfq_clear_bfqq_some_coop_idle(bfqq);
29086 +@@ -2498,6 +2734,7 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
29087 + struct bfq_queue *bfqq;
29088 + struct bfq_group *bfqg;
29089 + unsigned long flags;
29090 ++ bool split = false;
29091 +
29092 + might_sleep_if(gfp_mask & __GFP_WAIT);
29093 +
29094 +@@ -2516,24 +2753,14 @@ new_queue:
29095 + bfqq = bfq_get_queue(bfqd, bfqg, is_sync, bic, gfp_mask);
29096 + bic_set_bfqq(bic, bfqq, is_sync);
29097 + } else {
29098 +- /*
29099 +- * If the queue was seeky for too long, break it apart.
29100 +- */
29101 ++ /* If the queue was seeky for too long, break it apart. */
29102 + if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
29103 + bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
29104 + bfqq = bfq_split_bfqq(bic, bfqq);
29105 ++ split = true;
29106 + if (!bfqq)
29107 + goto new_queue;
29108 + }
29109 +-
29110 +- /*
29111 +- * Check to see if this queue is scheduled to merge with
29112 +- * another closely cooperating queue. The merging of queues
29113 +- * happens here as it must be done in process context.
29114 +- * The reference on new_bfqq was taken in merge_bfqqs.
29115 +- */
29116 +- if (bfqq->new_bfqq != NULL)
29117 +- bfqq = bfq_merge_bfqqs(bfqd, bic, bfqq);
29118 + }
29119 +
29120 + bfqq->allocated[rw]++;
29121 +@@ -2544,6 +2771,26 @@ new_queue:
29122 + rq->elv.priv[0] = bic;
29123 + rq->elv.priv[1] = bfqq;
29124 +
29125 ++ /*
29126 ++ * If a bfq_queue has only one process reference, it is owned
29127 ++ * by only one bfq_io_cq: we can set the bic field of the
29128 ++ * bfq_queue to the address of that structure. Also, if the
29129 ++ * queue has just been split, mark a flag so that the
29130 ++ * information is available to the other scheduler hooks.
29131 ++ */
29132 ++ if (bfqq_process_refs(bfqq) == 1) {
29133 ++ bfqq->bic = bic;
29134 ++ if (split) {
29135 ++ bfq_mark_bfqq_just_split(bfqq);
29136 ++ /*
29137 ++ * If the queue has just been split from a shared queue,
29138 ++ * restore the idle window and the possible weight
29139 ++ * raising period.
29140 ++ */
29141 ++ bfq_bfqq_resume_state(bfqq, bic);
29142 ++ }
29143 ++ }
29144 ++
29145 + spin_unlock_irqrestore(q->queue_lock, flags);
29146 +
29147 + return 0;
29148 +diff --git a/block/bfq-sched.c b/block/bfq-sched.c
29149 +index 03f8061..a0edaa2 100644
29150 +--- a/block/bfq-sched.c
29151 ++++ b/block/bfq-sched.c
29152 +@@ -978,34 +978,6 @@ static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
29153 + return bfqq;
29154 + }
29155 +
29156 +-/*
29157 +- * Forced extraction of the given queue.
29158 +- */
29159 +-static void bfq_get_next_queue_forced(struct bfq_data *bfqd,
29160 +- struct bfq_queue *bfqq)
29161 +-{
29162 +- struct bfq_entity *entity;
29163 +- struct bfq_sched_data *sd;
29164 +-
29165 +- BUG_ON(bfqd->active_queue != NULL);
29166 +-
29167 +- entity = &bfqq->entity;
29168 +- /*
29169 +- * Bubble up extraction/update from the leaf to the root.
29170 +- */
29171 +- for_each_entity(entity) {
29172 +- sd = entity->sched_data;
29173 +- bfq_update_budget(entity);
29174 +- bfq_update_vtime(bfq_entity_service_tree(entity));
29175 +- bfq_active_extract(bfq_entity_service_tree(entity), entity);
29176 +- sd->active_entity = entity;
29177 +- sd->next_active = NULL;
29178 +- entity->service = 0;
29179 +- }
29180 +-
29181 +- return;
29182 +-}
29183 +-
29184 + static void __bfq_bfqd_reset_active(struct bfq_data *bfqd)
29185 + {
29186 + if (bfqd->active_bic != NULL) {
29187 +diff --git a/block/bfq.h b/block/bfq.h
29188 +index 48ecde9..bb52975 100644
29189 +--- a/block/bfq.h
29190 ++++ b/block/bfq.h
29191 +@@ -188,6 +188,8 @@ struct bfq_group;
29192 + * @pid: pid of the process owning the queue, used for logging purposes.
29193 + * @last_rais_start_time: last (idle -> weight-raised) transition attempt
29194 + * @raising_cur_max_time: current max raising time for this queue
29195 ++ * @bic: pointer to the bfq_io_cq owning the bfq_queue, set to %NULL if the
29196 ++ * queue is shared
29197 + *
29198 + * A bfq_queue is a leaf request queue; it can be associated to an io_context
29199 + * or more (if it is an async one). @cgroup holds a reference to the
29200 +@@ -231,6 +233,7 @@ struct bfq_queue {
29201 + sector_t last_request_pos;
29202 +
29203 + pid_t pid;
29204 ++ struct bfq_io_cq *bic;
29205 +
29206 + /* weight-raising fields */
29207 + unsigned int raising_cur_max_time;
29208 +@@ -257,12 +260,23 @@ struct bfq_ttime {
29209 + * @icq: associated io_cq structure
29210 + * @bfqq: array of two process queues, the sync and the async
29211 + * @ttime: associated @bfq_ttime struct
29212 ++ * @raising_time_left: snapshot of the time left before weight raising ends
29213 ++ * for the sync queue associated to this process; this
29214 ++ * snapshot is taken to remember this value while the weight
29215 ++ * raising is suspended because the queue is merged with a
29216 ++ * shared queue, and is used to set @raising_cur_max_time
29217 ++ * when the queue is split from the shared queue and its
29218 ++ * weight is raised again
29219 ++ * @saved_idle_window: same purpose as the previous field for the idle window
29220 + */
29221 + struct bfq_io_cq {
29222 + struct io_cq icq; /* must be the first member */
29223 + struct bfq_queue *bfqq[2];
29224 + struct bfq_ttime ttime;
29225 + int ioprio;
29226 ++
29227 ++ unsigned int raising_time_left;
29228 ++ unsigned int saved_idle_window;
29229 + };
29230 +
29231 + /**
29232 +@@ -403,6 +417,7 @@ enum bfqq_state_flags {
29233 + BFQ_BFQQ_FLAG_coop, /* bfqq is shared */
29234 + BFQ_BFQQ_FLAG_split_coop, /* shared bfqq will be splitted */
29235 + BFQ_BFQQ_FLAG_some_coop_idle, /* some cooperator is inactive */
29236 ++ BFQ_BFQQ_FLAG_just_split, /* queue has just been split */
29237 + };
29238 +
29239 + #define BFQ_BFQQ_FNS(name) \
29240 +@@ -430,6 +445,7 @@ BFQ_BFQQ_FNS(budget_new);
29241 + BFQ_BFQQ_FNS(coop);
29242 + BFQ_BFQQ_FNS(split_coop);
29243 + BFQ_BFQQ_FNS(some_coop_idle);
29244 ++BFQ_BFQQ_FNS(just_split);
29245 + #undef BFQ_BFQQ_FNS
29246 +
29247 + /* Logging facilities. */
29248 +--
29249 +1.8.1.4
29250 +
29251
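The commit message above describes the interleaved-read workload (for example, KVM splitting guest I/O across a pool of worker processes) that EQM is designed to detect. The sketch below is only an illustration and is not part of the imported patch: two userspace processes each read every second 64 KiB chunk of the same file, so the union of their requests is one sequential stream; with BFQ and this patch applied, their two bfq_queues become candidates for an early merge. The file path and chunk size are arbitrary assumptions.

/*
 * Illustrative only -- not part of the patch set above.  Two processes
 * read alternating 64 KiB chunks of the same file; the union of their
 * requests is one sequential stream.  This is the "fixed-size chunks at
 * a fixed distance" workload described in the EQM commit message.
 */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define CHUNK   (64 * 1024)
#define NCHUNKS 1024

static void reader(const char *path, int first_chunk)
{
	char *buf = malloc(CHUNK);
	int fd = open(path, O_RDONLY);
	int i;

	if (fd < 0 || buf == NULL)
		exit(1);
	/* Read every second chunk, starting at first_chunk. */
	for (i = first_chunk; i < NCHUNKS; i += 2)
		if (pread(fd, buf, CHUNK, (off_t)i * CHUNK) <= 0)
			break;
	close(fd);
	free(buf);
	exit(0);
}

int main(void)
{
	const char *path = "/tmp/testfile";	/* assumed test file */

	if (fork() == 0)
		reader(path, 0);	/* even chunks */
	if (fork() == 0)
		reader(path, 1);	/* odd chunks */
	wait(NULL);
	wait(NULL);
	return 0;
}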
29252 Added: genpatches-2.6/trunk/3.10.7/2400_kcopy-patch-for-infiniband-driver.patch
29253 ===================================================================
29254 --- genpatches-2.6/trunk/3.10.7/2400_kcopy-patch-for-infiniband-driver.patch (rev 0)
29255 +++ genpatches-2.6/trunk/3.10.7/2400_kcopy-patch-for-infiniband-driver.patch 2013-08-29 12:09:12 UTC (rev 2497)
29256 @@ -0,0 +1,731 @@
29257 +From 1f52075d672a9bdd0069b3ea68be266ef5c229bd Mon Sep 17 00:00:00 2001
29258 +From: Alexey Shvetsov <alexxy@g.o>
29259 +Date: Tue, 17 Jan 2012 21:08:49 +0400
29260 +Subject: [PATCH] [kcopy] Add kcopy driver
29261 +
29262 +Add the kcopy driver from QLogic to implement zero-copy transfers for the
29263 +InfiniBand PSM userspace driver.
29264 +
29265 +Signed-off-by: Alexey Shvetsov <alexxy@g.o>
29266 +---
29267 + drivers/char/Kconfig | 2 +
29268 + drivers/char/Makefile | 2 +
29269 + drivers/char/kcopy/Kconfig | 17 ++
29270 + drivers/char/kcopy/Makefile | 4 +
29271 + drivers/char/kcopy/kcopy.c | 646 +++++++++++++++++++++++++++++++++++++++++++
29272 + 5 files changed, 671 insertions(+)
29273 + create mode 100644 drivers/char/kcopy/Kconfig
29274 + create mode 100644 drivers/char/kcopy/Makefile
29275 + create mode 100644 drivers/char/kcopy/kcopy.c
29276 +
29277 +diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
29278 +index ee94686..5b81449 100644
29279 +--- a/drivers/char/Kconfig
29280 ++++ b/drivers/char/Kconfig
29281 +@@ -6,6 +6,8 @@ menu "Character devices"
29282 +
29283 + source "drivers/tty/Kconfig"
29284 +
29285 ++source "drivers/char/kcopy/Kconfig"
29286 ++
29287 + config DEVKMEM
29288 + bool "/dev/kmem virtual device support"
29289 + default y
29290 +diff --git a/drivers/char/Makefile b/drivers/char/Makefile
29291 +index 0dc5d7c..be519d6 100644
29292 +--- a/drivers/char/Makefile
29293 ++++ b/drivers/char/Makefile
29294 +@@ -62,3 +62,5 @@
29295 + js-rtc-y = rtc.o
29296 +
29297 + obj-$(CONFIG_TILE_SROM) += tile-srom.o
29298 ++
29299 ++obj-$(CONFIG_KCOPY) += kcopy/
29300 +diff --git a/drivers/char/kcopy/Kconfig b/drivers/char/kcopy/Kconfig
29301 +new file mode 100644
29302 +index 0000000..453ae52
29303 +--- /dev/null
29304 ++++ b/drivers/char/kcopy/Kconfig
29305 +@@ -0,0 +1,17 @@
29306 ++#
29307 ++# KCopy character device configuration
29308 ++#
29309 ++
29310 ++menu "KCopy"
29311 ++
29312 ++config KCOPY
29313 ++ tristate "Memory-to-memory copies using kernel assist"
29314 ++ default m
29315 ++ ---help---
29316 ++ High-performance inter-process memory copies. Can often save a
29317 ++ memory copy to shared memory in the application. Useful at least
29318 ++ for MPI applications where the point-to-point nature of vmsplice
29319 ++ and pipes can be a limiting factor in performance.
29320 ++
29321 ++endmenu
29322 ++
29323 +diff --git a/drivers/char/kcopy/Makefile b/drivers/char/kcopy/Makefile
29324 +new file mode 100644
29325 +index 0000000..9cb269b
29326 +--- /dev/null
29327 ++++ b/drivers/char/kcopy/Makefile
29328 +@@ -0,0 +1,4 @@
29329 ++#
29330 ++# Makefile for the kernel character device drivers.
29331 ++#
29332 ++obj-$(CONFIG_KCOPY) += kcopy.o
29333 +diff --git a/drivers/char/kcopy/kcopy.c b/drivers/char/kcopy/kcopy.c
29334 +new file mode 100644
29335 +index 0000000..a9f915c
29336 +--- /dev/null
29337 ++++ b/drivers/char/kcopy/kcopy.c
29338 +@@ -0,0 +1,646 @@
29339 ++#include <linux/module.h>
29340 ++#include <linux/fs.h>
29341 ++#include <linux/cdev.h>
29342 ++#include <linux/device.h>
29343 ++#include <linux/mutex.h>
29344 ++#include <linux/mman.h>
29345 ++#include <linux/highmem.h>
29346 ++#include <linux/spinlock.h>
29347 ++#include <linux/sched.h>
29348 ++#include <linux/rbtree.h>
29349 ++#include <linux/rcupdate.h>
29350 ++#include <linux/uaccess.h>
29351 ++#include <linux/slab.h>
29352 ++
29353 ++MODULE_LICENSE("GPL");
29354 ++MODULE_AUTHOR("Arthur Jones <arthur.jones@××××××.com>");
29355 ++MODULE_DESCRIPTION("QLogic kcopy driver");
29356 ++
29357 ++#define KCOPY_ABI 1
29358 ++#define KCOPY_MAX_MINORS 64
29359 ++
29360 ++struct kcopy_device {
29361 ++ struct cdev cdev;
29362 ++ struct class *class;
29363 ++ struct device *devp[KCOPY_MAX_MINORS];
29364 ++ dev_t dev;
29365 ++
29366 ++ struct kcopy_file *kf[KCOPY_MAX_MINORS];
29367 ++ struct mutex open_lock;
29368 ++};
29369 ++
29370 ++static struct kcopy_device kcopy_dev;
29371 ++
29372 ++/* per file data / one of these is shared per minor */
29373 ++struct kcopy_file {
29374 ++ int count;
29375 ++
29376 ++ /* pid indexed */
29377 ++ struct rb_root live_map_tree;
29378 ++
29379 ++ struct mutex map_lock;
29380 ++};
29381 ++
29382 ++struct kcopy_map_entry {
29383 ++ int count;
29384 ++ struct task_struct *task;
29385 ++ pid_t pid;
29386 ++ struct kcopy_file *file; /* file backpointer */
29387 ++
29388 ++ struct list_head list; /* free map list */
29389 ++ struct rb_node node; /* live map tree */
29390 ++};
29391 ++
29392 ++#define KCOPY_GET_SYSCALL 1
29393 ++#define KCOPY_PUT_SYSCALL 2
29394 ++#define KCOPY_ABI_SYSCALL 3
29395 ++
29396 ++struct kcopy_syscall {
29397 ++ __u32 tag;
29398 ++ pid_t pid;
29399 ++ __u64 n;
29400 ++ __u64 src;
29401 ++ __u64 dst;
29402 ++};
29403 ++
29404 ++static const void __user *kcopy_syscall_src(const struct kcopy_syscall *ks)
29405 ++{
29406 ++ return (const void __user *) (unsigned long) ks->src;
29407 ++}
29408 ++
29409 ++static void __user *kcopy_syscall_dst(const struct kcopy_syscall *ks)
29410 ++{
29411 ++ return (void __user *) (unsigned long) ks->dst;
29412 ++}
29413 ++
29414 ++static unsigned long kcopy_syscall_n(const struct kcopy_syscall *ks)
29415 ++{
29416 ++ return (unsigned long) ks->n;
29417 ++}
29418 ++
29419 ++static struct kcopy_map_entry *kcopy_create_entry(struct kcopy_file *file)
29420 ++{
29421 ++ struct kcopy_map_entry *kme =
29422 ++ kmalloc(sizeof(struct kcopy_map_entry), GFP_KERNEL);
29423 ++
29424 ++ if (!kme)
29425 ++ return NULL;
29426 ++
29427 ++ kme->count = 1;
29428 ++ kme->file = file;
29429 ++ kme->task = current;
29430 ++ kme->pid = current->tgid;
29431 ++ INIT_LIST_HEAD(&kme->list);
29432 ++
29433 ++ return kme;
29434 ++}
29435 ++
29436 ++static struct kcopy_map_entry *
29437 ++kcopy_lookup_pid(struct rb_root *root, pid_t pid)
29438 ++{
29439 ++ struct rb_node *node = root->rb_node;
29440 ++
29441 ++ while (node) {
29442 ++ struct kcopy_map_entry *kme =
29443 ++ container_of(node, struct kcopy_map_entry, node);
29444 ++
29445 ++ if (pid < kme->pid)
29446 ++ node = node->rb_left;
29447 ++ else if (pid > kme->pid)
29448 ++ node = node->rb_right;
29449 ++ else
29450 ++ return kme;
29451 ++ }
29452 ++
29453 ++ return NULL;
29454 ++}
29455 ++
29456 ++static int kcopy_insert(struct rb_root *root, struct kcopy_map_entry *kme)
29457 ++{
29458 ++ struct rb_node **new = &(root->rb_node);
29459 ++ struct rb_node *parent = NULL;
29460 ++
29461 ++ while (*new) {
29462 ++ struct kcopy_map_entry *tkme =
29463 ++ container_of(*new, struct kcopy_map_entry, node);
29464 ++
29465 ++ parent = *new;
29466 ++ if (kme->pid < tkme->pid)
29467 ++ new = &((*new)->rb_left);
29468 ++ else if (kme->pid > tkme->pid)
29469 ++ new = &((*new)->rb_right);
29470 ++ else {
29471 ++ printk(KERN_INFO "!!! debugging: bad rb tree !!!\n");
29472 ++ return -EINVAL;
29473 ++ }
29474 ++ }
29475 ++
29476 ++ rb_link_node(&kme->node, parent, new);
29477 ++ rb_insert_color(&kme->node, root);
29478 ++
29479 ++ return 0;
29480 ++}
29481 ++
29482 ++static int kcopy_open(struct inode *inode, struct file *filp)
29483 ++{
29484 ++ int ret;
29485 ++ const int minor = iminor(inode);
29486 ++ struct kcopy_file *kf = NULL;
29487 ++ struct kcopy_map_entry *kme;
29488 ++ struct kcopy_map_entry *okme;
29489 ++
29490 ++ if (minor < 0 || minor >= KCOPY_MAX_MINORS)
29491 ++ return -ENODEV;
29492 ++
29493 ++ mutex_lock(&kcopy_dev.open_lock);
29494 ++
29495 ++ if (!kcopy_dev.kf[minor]) {
29496 ++ kf = kmalloc(sizeof(struct kcopy_file), GFP_KERNEL);
29497 ++
29498 ++ if (!kf) {
29499 ++ ret = -ENOMEM;
29500 ++ goto bail;
29501 ++ }
29502 ++
29503 ++ kf->count = 1;
29504 ++ kf->live_map_tree = RB_ROOT;
29505 ++ mutex_init(&kf->map_lock);
29506 ++ kcopy_dev.kf[minor] = kf;
29507 ++ } else {
29508 ++ if (filp->f_flags & O_EXCL) {
29509 ++ ret = -EBUSY;
29510 ++ goto bail;
29511 ++ }
29512 ++ kcopy_dev.kf[minor]->count++;
29513 ++ }
29514 ++
29515 ++ kme = kcopy_create_entry(kcopy_dev.kf[minor]);
29516 ++ if (!kme) {
29517 ++ ret = -ENOMEM;
29518 ++ goto err_free_kf;
29519 ++ }
29520 ++
29521 ++ kf = kcopy_dev.kf[minor];
29522 ++
29523 ++ mutex_lock(&kf->map_lock);
29524 ++
29525 ++ okme = kcopy_lookup_pid(&kf->live_map_tree, kme->pid);
29526 ++ if (okme) {
29527 ++ /* pid already exists... */
29528 ++ okme->count++;
29529 ++ kfree(kme);
29530 ++ kme = okme;
29531 ++ } else
29532 ++ ret = kcopy_insert(&kf->live_map_tree, kme);
29533 ++
29534 ++ mutex_unlock(&kf->map_lock);
29535 ++
29536 ++ filp->private_data = kme;
29537 ++
29538 ++ ret = 0;
29539 ++ goto bail;
29540 ++
29541 ++err_free_kf:
29542 ++ if (kf) {
29543 ++ kcopy_dev.kf[minor] = NULL;
29544 ++ kfree(kf);
29545 ++ }
29546 ++bail:
29547 ++ mutex_unlock(&kcopy_dev.open_lock);
29548 ++ return ret;
29549 ++}
29550 ++
29551 ++static int kcopy_flush(struct file *filp, fl_owner_t id)
29552 ++{
29553 ++ struct kcopy_map_entry *kme = filp->private_data;
29554 ++ struct kcopy_file *kf = kme->file;
29555 ++
29556 ++ if (file_count(filp) == 1) {
29557 ++ mutex_lock(&kf->map_lock);
29558 ++ kme->count--;
29559 ++
29560 ++ if (!kme->count) {
29561 ++ rb_erase(&kme->node, &kf->live_map_tree);
29562 ++ kfree(kme);
29563 ++ }
29564 ++ mutex_unlock(&kf->map_lock);
29565 ++ }
29566 ++
29567 ++ return 0;
29568 ++}
29569 ++
29570 ++static int kcopy_release(struct inode *inode, struct file *filp)
29571 ++{
29572 ++ const int minor = iminor(inode);
29573 ++
29574 ++ mutex_lock(&kcopy_dev.open_lock);
29575 ++ kcopy_dev.kf[minor]->count--;
29576 ++ if (!kcopy_dev.kf[minor]->count) {
29577 ++ kfree(kcopy_dev.kf[minor]);
29578 ++ kcopy_dev.kf[minor] = NULL;
29579 ++ }
29580 ++ mutex_unlock(&kcopy_dev.open_lock);
29581 ++
29582 ++ return 0;
29583 ++}
29584 ++
29585 ++static void kcopy_put_pages(struct page **pages, int npages)
29586 ++{
29587 ++ int j;
29588 ++
29589 ++ for (j = 0; j < npages; j++)
29590 ++ put_page(pages[j]);
29591 ++}
29592 ++
29593 ++static int kcopy_validate_task(struct task_struct *p)
29594 ++{
29595 ++ return p && (uid_eq(current_euid(), task_euid(p)) || uid_eq(current_euid(), task_uid(p)));
29596 ++}
29597 ++
29598 ++static int kcopy_get_pages(struct kcopy_file *kf, pid_t pid,
29599 ++ struct page **pages, void __user *addr,
29600 ++ int write, size_t npages)
29601 ++{
29602 ++ int err;
29603 ++ struct mm_struct *mm;
29604 ++ struct kcopy_map_entry *rkme;
29605 ++
29606 ++ mutex_lock(&kf->map_lock);
29607 ++
29608 ++ rkme = kcopy_lookup_pid(&kf->live_map_tree, pid);
29609 ++ if (!rkme || !kcopy_validate_task(rkme->task)) {
29610 ++ err = -EINVAL;
29611 ++ goto bail_unlock;
29612 ++ }
29613 ++
29614 ++ mm = get_task_mm(rkme->task);
29615 ++ if (unlikely(!mm)) {
29616 ++ err = -ENOMEM;
29617 ++ goto bail_unlock;
29618 ++ }
29619 ++
29620 ++ down_read(&mm->mmap_sem);
29621 ++ err = get_user_pages(rkme->task, mm,
29622 ++ (unsigned long) addr, npages, write, 0,
29623 ++ pages, NULL);
29624 ++
29625 ++ if (err < npages && err > 0) {
29626 ++ kcopy_put_pages(pages, err);
29627 ++ err = -ENOMEM;
29628 ++ } else if (err == npages)
29629 ++ err = 0;
29630 ++
29631 ++ up_read(&mm->mmap_sem);
29632 ++
29633 ++ mmput(mm);
29634 ++
29635 ++bail_unlock:
29636 ++ mutex_unlock(&kf->map_lock);
29637 ++
29638 ++ return err;
29639 ++}
29640 ++
29641 ++static unsigned long kcopy_copy_pages_from_user(void __user *src,
29642 ++ struct page **dpages,
29643 ++ unsigned doff,
29644 ++ unsigned long n)
29645 ++{
29646 ++ struct page *dpage = *dpages;
29647 ++ char *daddr = kmap(dpage);
29648 ++ int ret = 0;
29649 ++
29650 ++ while (1) {
29651 ++ const unsigned long nleft = PAGE_SIZE - doff;
29652 ++ const unsigned long nc = (n < nleft) ? n : nleft;
29653 ++
29654 ++ /* if (copy_from_user(daddr + doff, src, nc)) { */
29655 ++ if (__copy_from_user_nocache(daddr + doff, src, nc)) {
29656 ++ ret = -EFAULT;
29657 ++ goto bail;
29658 ++ }
29659 ++
29660 ++ n -= nc;
29661 ++ if (n == 0)
29662 ++ break;
29663 ++
29664 ++ doff += nc;
29665 ++ doff &= ~PAGE_MASK;
29666 ++ if (doff == 0) {
29667 ++ kunmap(dpage);
29668 ++ dpages++;
29669 ++ dpage = *dpages;
29670 ++ daddr = kmap(dpage);
29671 ++ }
29672 ++
29673 ++ src += nc;
29674 ++ }
29675 ++
29676 ++bail:
29677 ++ kunmap(dpage);
29678 ++
29679 ++ return ret;
29680 ++}
29681 ++
29682 ++static unsigned long kcopy_copy_pages_to_user(void __user *dst,
29683 ++ struct page **spages,
29684 ++ unsigned soff,
29685 ++ unsigned long n)
29686 ++{
29687 ++ struct page *spage = *spages;
29688 ++ const char *saddr = kmap(spage);
29689 ++ int ret = 0;
29690 ++
29691 ++ while (1) {
29692 ++ const unsigned long nleft = PAGE_SIZE - soff;
29693 ++ const unsigned long nc = (n < nleft) ? n : nleft;
29694 ++
29695 ++ if (copy_to_user(dst, saddr + soff, nc)) {
29696 ++ ret = -EFAULT;
29697 ++ goto bail;
29698 ++ }
29699 ++
29700 ++ n -= nc;
29701 ++ if (n == 0)
29702 ++ break;
29703 ++
29704 ++ soff += nc;
29705 ++ soff &= ~PAGE_MASK;
29706 ++ if (soff == 0) {
29707 ++ kunmap(spage);
29708 ++ spages++;
29709 ++ spage = *spages;
29710 ++ saddr = kmap(spage);
29711 ++ }
29712 ++
29713 ++ dst += nc;
29714 ++ }
29715 ++
29716 ++bail:
29717 ++ kunmap(spage);
29718 ++
29719 ++ return ret;
29720 ++}
29721 ++
29722 ++static unsigned long kcopy_copy_to_user(void __user *dst,
29723 ++ struct kcopy_file *kf, pid_t pid,
29724 ++ void __user *src,
29725 ++ unsigned long n)
29726 ++{
29727 ++ struct page **pages;
29728 ++ const int pages_len = PAGE_SIZE / sizeof(struct page *);
29729 ++ int ret = 0;
29730 ++
29731 ++ pages = (struct page **) __get_free_page(GFP_KERNEL);
29732 ++ if (!pages) {
29733 ++ ret = -ENOMEM;
29734 ++ goto bail;
29735 ++ }
29736 ++
29737 ++ while (n) {
29738 ++ const unsigned long soff = (unsigned long) src & ~PAGE_MASK;
29739 ++ const unsigned long spages_left =
29740 ++ (soff + n + PAGE_SIZE - 1) >> PAGE_SHIFT;
29741 ++ const unsigned long spages_cp =
29742 ++ min_t(unsigned long, spages_left, pages_len);
29743 ++ const unsigned long sbytes =
29744 ++ PAGE_SIZE - soff + (spages_cp - 1) * PAGE_SIZE;
29745 ++ const unsigned long nbytes = min_t(unsigned long, sbytes, n);
29746 ++
29747 ++ ret = kcopy_get_pages(kf, pid, pages, src, 0, spages_cp);
29748 ++ if (unlikely(ret))
29749 ++ goto bail_free;
29750 ++
29751 ++ ret = kcopy_copy_pages_to_user(dst, pages, soff, nbytes);
29752 ++ kcopy_put_pages(pages, spages_cp);
29753 ++ if (ret)
29754 ++ goto bail_free;
29755 ++ dst = (char *) dst + nbytes;
29756 ++ src = (char *) src + nbytes;
29757 ++
29758 ++ n -= nbytes;
29759 ++ }
29760 ++
29761 ++bail_free:
29762 ++ free_page((unsigned long) pages);
29763 ++bail:
29764 ++ return ret;
29765 ++}
29766 ++
29767 ++static unsigned long kcopy_copy_from_user(const void __user *src,
29768 ++ struct kcopy_file *kf, pid_t pid,
29769 ++ void __user *dst,
29770 ++ unsigned long n)
29771 ++{
29772 ++ struct page **pages;
29773 ++ const int pages_len = PAGE_SIZE / sizeof(struct page *);
29774 ++ int ret = 0;
29775 ++
29776 ++ pages = (struct page **) __get_free_page(GFP_KERNEL);
29777 ++ if (!pages) {
29778 ++ ret = -ENOMEM;
29779 ++ goto bail;
29780 ++ }
29781 ++
29782 ++ while (n) {
29783 ++ const unsigned long doff = (unsigned long) dst & ~PAGE_MASK;
29784 ++ const unsigned long dpages_left =
29785 ++ (doff + n + PAGE_SIZE - 1) >> PAGE_SHIFT;
29786 ++ const unsigned long dpages_cp =
29787 ++ min_t(unsigned long, dpages_left, pages_len);
29788 ++ const unsigned long dbytes =
29789 ++ PAGE_SIZE - doff + (dpages_cp - 1) * PAGE_SIZE;
29790 ++ const unsigned long nbytes = min_t(unsigned long, dbytes, n);
29791 ++
29792 ++ ret = kcopy_get_pages(kf, pid, pages, dst, 1, dpages_cp);
29793 ++ if (unlikely(ret))
29794 ++ goto bail_free;
29795 ++
29796 ++ ret = kcopy_copy_pages_from_user((void __user *) src,
29797 ++ pages, doff, nbytes);
29798 ++ kcopy_put_pages(pages, dpages_cp);
29799 ++ if (ret)
29800 ++ goto bail_free;
29801 ++
29802 ++ dst = (char *) dst + nbytes;
29803 ++ src = (char *) src + nbytes;
29804 ++
29805 ++ n -= nbytes;
29806 ++ }
29807 ++
29808 ++bail_free:
29809 ++ free_page((unsigned long) pages);
29810 ++bail:
29811 ++ return ret;
29812 ++}
29813 ++
29814 ++static int kcopy_do_get(struct kcopy_map_entry *kme, pid_t pid,
29815 ++ const void __user *src, void __user *dst,
29816 ++ unsigned long n)
29817 ++{
29818 ++ struct kcopy_file *kf = kme->file;
29819 ++ int ret = 0;
29820 ++
29821 ++ if (n == 0) {
29822 ++ ret = -EINVAL;
29823 ++ goto bail;
29824 ++ }
29825 ++
29826 ++ ret = kcopy_copy_to_user(dst, kf, pid, (void __user *) src, n);
29827 ++
29828 ++bail:
29829 ++ return ret;
29830 ++}
29831 ++
29832 ++static int kcopy_do_put(struct kcopy_map_entry *kme, const void __user *src,
29833 ++ pid_t pid, void __user *dst,
29834 ++ unsigned long n)
29835 ++{
29836 ++ struct kcopy_file *kf = kme->file;
29837 ++ int ret = 0;
29838 ++
29839 ++ if (n == 0) {
29840 ++ ret = -EINVAL;
29841 ++ goto bail;
29842 ++ }
29843 ++
29844 ++ ret = kcopy_copy_from_user(src, kf, pid, (void __user *) dst, n);
29845 ++
29846 ++bail:
29847 ++ return ret;
29848 ++}
29849 ++
29850 ++static int kcopy_do_abi(u32 __user *dst)
29851 ++{
29852 ++ u32 val = KCOPY_ABI;
29853 ++ int err;
29854 ++
29855 ++ err = put_user(val, dst);
29856 ++ if (err)
29857 ++ return -EFAULT;
29858 ++
29859 ++ return 0;
29860 ++}
29861 ++
29862 ++ssize_t kcopy_write(struct file *filp, const char __user *data, size_t cnt,
29863 ++ loff_t *o)
29864 ++{
29865 ++ struct kcopy_map_entry *kme = filp->private_data;
29866 ++ struct kcopy_syscall ks;
29867 ++ int err = 0;
29868 ++ const void __user *src;
29869 ++ void __user *dst;
29870 ++ unsigned long n;
29871 ++
29872 ++ if (cnt != sizeof(struct kcopy_syscall)) {
29873 ++ err = -EINVAL;
29874 ++ goto bail;
29875 ++ }
29876 ++
29877 ++ err = copy_from_user(&ks, data, cnt);
29878 ++ if (unlikely(err))
29879 ++ goto bail;
29880 ++
29881 ++ src = kcopy_syscall_src(&ks);
29882 ++ dst = kcopy_syscall_dst(&ks);
29883 ++ n = kcopy_syscall_n(&ks);
29884 ++ if (ks.tag == KCOPY_GET_SYSCALL)
29885 ++ err = kcopy_do_get(kme, ks.pid, src, dst, n);
29886 ++ else if (ks.tag == KCOPY_PUT_SYSCALL)
29887 ++ err = kcopy_do_put(kme, src, ks.pid, dst, n);
29888 ++ else if (ks.tag == KCOPY_ABI_SYSCALL)
29889 ++ err = kcopy_do_abi(dst);
29890 ++ else
29891 ++ err = -EINVAL;
29892 ++
29893 ++bail:
29894 ++ return err ? err : cnt;
29895 ++}
29896 ++
29897 ++static const struct file_operations kcopy_fops = {
29898 ++ .owner = THIS_MODULE,
29899 ++ .open = kcopy_open,
29900 ++ .release = kcopy_release,
29901 ++ .flush = kcopy_flush,
29902 ++ .write = kcopy_write,
29903 ++};
29904 ++
29905 ++static int __init kcopy_init(void)
29906 ++{
29907 ++ int ret;
29908 ++ const char *name = "kcopy";
29909 ++ int i;
29910 ++ int ninit = 0;
29911 ++
29912 ++ mutex_init(&kcopy_dev.open_lock);
29913 ++
29914 ++ ret = alloc_chrdev_region(&kcopy_dev.dev, 0, KCOPY_MAX_MINORS, name);
29915 ++ if (ret)
29916 ++ goto bail;
29917 ++
29918 ++ kcopy_dev.class = class_create(THIS_MODULE, (char *) name);
29919 ++
29920 ++ if (IS_ERR(kcopy_dev.class)) {
29921 ++ ret = PTR_ERR(kcopy_dev.class);
29922 ++ printk(KERN_ERR "kcopy: Could not create "
29923 ++ "device class (err %d)\n", -ret);
29924 ++ goto bail_chrdev;
29925 ++ }
29926 ++
29927 ++ cdev_init(&kcopy_dev.cdev, &kcopy_fops);
29928 ++ ret = cdev_add(&kcopy_dev.cdev, kcopy_dev.dev, KCOPY_MAX_MINORS);
29929 ++ if (ret < 0) {
29930 ++ printk(KERN_ERR "kcopy: Could not add cdev (err %d)\n",
29931 ++ -ret);
29932 ++ goto bail_class;
29933 ++ }
29934 ++
29935 ++ for (i = 0; i < KCOPY_MAX_MINORS; i++) {
29936 ++ char devname[8];
29937 ++ const int minor = MINOR(kcopy_dev.dev) + i;
29938 ++ const dev_t dev = MKDEV(MAJOR(kcopy_dev.dev), minor);
29939 ++
29940 ++ snprintf(devname, sizeof(devname), "kcopy%02d", i);
29941 ++ kcopy_dev.devp[i] =
29942 ++ device_create(kcopy_dev.class, NULL,
29943 ++ dev, NULL, devname);
29944 ++
29945 ++ if (IS_ERR(kcopy_dev.devp[i])) {
29946 ++ ret = PTR_ERR(kcopy_dev.devp[i]);
29947 ++ printk(KERN_ERR "kcopy: Could not create "
29948 ++ "devp %d (err %d)\n", i, -ret);
29949 ++ goto bail_cdev_add;
29950 ++ }
29951 ++
29952 ++ ninit++;
29953 ++ }
29954 ++
29955 ++ ret = 0;
29956 ++ goto bail;
29957 ++
29958 ++bail_cdev_add:
29959 ++ for (i = 0; i < ninit; i++)
29960 ++ device_unregister(kcopy_dev.devp[i]);
29961 ++
29962 ++ cdev_del(&kcopy_dev.cdev);
29963 ++bail_class:
29964 ++ class_destroy(kcopy_dev.class);
29965 ++bail_chrdev:
29966 ++ unregister_chrdev_region(kcopy_dev.dev, KCOPY_MAX_MINORS);
29967 ++bail:
29968 ++ return ret;
29969 ++}
29970 ++
29971 ++static void __exit kcopy_fini(void)
29972 ++{
29973 ++ int i;
29974 ++
29975 ++ for (i = 0; i < KCOPY_MAX_MINORS; i++)
29976 ++ device_unregister(kcopy_dev.devp[i]);
29977 ++
29978 ++ cdev_del(&kcopy_dev.cdev);
29979 ++ class_destroy(kcopy_dev.class);
29980 ++ unregister_chrdev_region(kcopy_dev.dev, KCOPY_MAX_MINORS);
29981 ++}
29982 ++
29983 ++module_init(kcopy_init);
29984 ++module_exit(kcopy_fini);
29985 +--
29986 +1.7.10
29987 +
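For context on how the write()-based dispatch in kcopy_write() above is meant to be driven, the sketch below opens one of the /dev/kcopyNN nodes the driver creates and submits a single GET request. Only the flow is taken from the patch (an exact-size write of struct kcopy_syscall, with the operation selected by the tag field); the struct layout, the src/dst/n field names and the constant value are assumptions, since the driver's uapi header is not part of this excerpt.

/*
 * Hypothetical userspace sketch of the kcopy write() interface.
 * ASSUMPTIONS: the struct layout, the src/dst/n field names and the
 * KCOPY_GET_SYSCALL value are guesses; only the exact-size write and
 * the tag/pid fields are visible in the patch above.
 */
#include <stdint.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

struct kcopy_syscall {          /* assumed layout */
	uint32_t tag;           /* KCOPY_GET_SYSCALL / KCOPY_PUT_SYSCALL / KCOPY_ABI_SYSCALL */
	pid_t    pid;           /* peer process to copy from */
	uint64_t n;             /* number of bytes to copy */
	uint64_t src;           /* address in the peer's address space */
	uint64_t dst;           /* address in the caller's address space */
};

#define KCOPY_GET_SYSCALL 1     /* assumed value */

static int kcopy_get(int fd, pid_t peer, void *dst, const void *peer_src, size_t n)
{
	struct kcopy_syscall ks = {
		.tag = KCOPY_GET_SYSCALL,
		.pid = peer,
		.n   = n,
		.src = (uintptr_t)peer_src,
		.dst = (uintptr_t)dst,
	};
	ssize_t r;

	/* kcopy_write() rejects anything but sizeof(struct kcopy_syscall) */
	r = write(fd, &ks, sizeof(ks));
	return r == (ssize_t)sizeof(ks) ? 0 : -1;
}

/* usage: fd = open("/dev/kcopy00", O_WRONLY); kcopy_get(fd, peer_pid, buf, remote_addr, len); */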
29988
29989 Added: genpatches-2.6/trunk/3.10.7/2700_ThinkPad-30-brightness-control-fix.patch
29990 ===================================================================
29991 --- genpatches-2.6/trunk/3.10.7/2700_ThinkPad-30-brightness-control-fix.patch (rev 0)
29992 +++ genpatches-2.6/trunk/3.10.7/2700_ThinkPad-30-brightness-control-fix.patch 2013-08-29 12:09:12 UTC (rev 2497)
29993 @@ -0,0 +1,81 @@
29994 +diff --git a/drivers/acpi/blacklist.c b/drivers/acpi/blacklist.c
29995 +index cb96296..6c242ed 100644
29996 +--- a/drivers/acpi/blacklist.c
29997 ++++ b/drivers/acpi/blacklist.c
29998 +@@ -193,6 +193,13 @@ static int __init dmi_disable_osi_win7(const struct dmi_system_id *d)
29999 + return 0;
30000 + }
30001 +
30002 ++static int __init dmi_disable_osi_win8(const struct dmi_system_id *d)
30003 ++{
30004 ++ printk(KERN_NOTICE PREFIX "DMI detected: %s\n", d->ident);
30005 ++ acpi_osi_setup("!Windows 2012");
30006 ++ return 0;
30007 ++}
30008 ++
30009 + static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
30010 + {
30011 + .callback = dmi_disable_osi_vista,
30012 +@@ -269,6 +276,61 @@ static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
30013 + },
30014 +
30015 + /*
30016 ++ * The following Lenovo models have a broken workaround in the
30017 ++ * acpi_video backlight implementation to meet the Windows 8
30018 ++ * requirement of 101 backlight levels. Reverting to pre-Win8
30019 ++ * behavior fixes the problem.
30020 ++ */
30021 ++ {
30022 ++ .callback = dmi_disable_osi_win8,
30023 ++ .ident = "Lenovo ThinkPad L430",
30024 ++ .matches = {
30025 ++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
30026 ++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L430"),
30027 ++ },
30028 ++ },
30029 ++ {
30030 ++ .callback = dmi_disable_osi_win8,
30031 ++ .ident = "Lenovo ThinkPad T430s",
30032 ++ .matches = {
30033 ++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
30034 ++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T430s"),
30035 ++ },
30036 ++ },
30037 ++ {
30038 ++ .callback = dmi_disable_osi_win8,
30039 ++ .ident = "Lenovo ThinkPad T530",
30040 ++ .matches = {
30041 ++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
30042 ++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T530"),
30043 ++ },
30044 ++ },
30045 ++ {
30046 ++ .callback = dmi_disable_osi_win8,
30047 ++ .ident = "Lenovo ThinkPad W530",
30048 ++ .matches = {
30049 ++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
30050 ++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W530"),
30051 ++ },
30052 ++ },
30053 ++ {
30054 ++ .callback = dmi_disable_osi_win8,
30055 ++ .ident = "Lenovo ThinkPad X1 Carbon",
30056 ++ .matches = {
30057 ++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
30058 ++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X1 Carbon"),
30059 ++ },
30060 ++ },
30061 ++ {
30062 ++ .callback = dmi_disable_osi_win8,
30063 ++ .ident = "Lenovo ThinkPad X230",
30064 ++ .matches = {
30065 ++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
30066 ++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X230"),
30067 ++ },
30068 ++ },
30069 ++
30070 ++ /*
30071 + * BIOS invocation of _OSI(Linux) is almost always a BIOS bug.
30072 + * Linux ignores it, except for the machines enumerated below.
30073 + */
30074 +
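The quirk entries added above key off the DMI vendor and product-version strings. To check whether a given machine would be matched, you can compare those strings as the kernel exports them in sysfs; a small userspace sketch follows (the sysfs attribute paths are the standard DMI id entries, everything else is illustrative):

/* Print the DMI strings the quirk table above matches against. */
#include <stdio.h>

static void show(const char *path)
{
	char buf[128];
	FILE *f = fopen(path, "r");

	if (f && fgets(buf, sizeof(buf), f))
		printf("%-40s %s", path, buf);
	if (f)
		fclose(f);
}

int main(void)
{
	show("/sys/class/dmi/id/sys_vendor");        /* compared by DMI_MATCH(DMI_SYS_VENDOR, ...) */
	show("/sys/class/dmi/id/product_version");   /* compared by DMI_MATCH(DMI_PRODUCT_VERSION, ...) */
	return 0;
}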
30075
30076 Added: genpatches-2.6/trunk/3.10.7/2900_dev-root-proc-mount-fix.patch
30077 ===================================================================
30078 --- genpatches-2.6/trunk/3.10.7/2900_dev-root-proc-mount-fix.patch (rev 0)
30079 +++ genpatches-2.6/trunk/3.10.7/2900_dev-root-proc-mount-fix.patch 2013-08-29 12:09:12 UTC (rev 2497)
30080 @@ -0,0 +1,29 @@
30081 +--- a/init/do_mounts.c 2013-01-25 19:11:11.609802424 -0500
30082 ++++ b/init/do_mounts.c 2013-01-25 19:14:20.606053568 -0500
30083 +@@ -461,7 +461,10 @@ void __init change_floppy(char *fmt, ...
30084 + va_start(args, fmt);
30085 + vsprintf(buf, fmt, args);
30086 + va_end(args);
30087 +- fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
30088 ++ if (saved_root_name[0])
30089 ++ fd = sys_open(saved_root_name, O_RDWR | O_NDELAY, 0);
30090 ++ else
30091 ++ fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
30092 + if (fd >= 0) {
30093 + sys_ioctl(fd, FDEJECT, 0);
30094 + sys_close(fd);
30095 +@@ -505,7 +508,13 @@ void __init mount_root(void)
30096 + #endif
30097 + #ifdef CONFIG_BLOCK
30098 + create_dev("/dev/root", ROOT_DEV);
30099 +- mount_block_root("/dev/root", root_mountflags);
30100 ++ if (saved_root_name[0]) {
30101 ++ create_dev(saved_root_name, ROOT_DEV);
30102 ++ mount_block_root(saved_root_name, root_mountflags);
30103 ++ } else {
30104 ++ create_dev("/dev/root", ROOT_DEV);
30105 ++ mount_block_root("/dev/root", root_mountflags);
30106 ++ }
30107 + #endif
30108 + }
30109 +
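The branches added above only take effect when saved_root_name is non-empty. That buffer is filled from the root= kernel parameter elsewhere in init/do_mounts.c; roughly as follows, paraphrased from memory of mainline rather than taken from this patch:

/* Paraphrased from mainline init/do_mounts.c (not part of this patch):
 * the root= boot parameter populates saved_root_name, so the new code
 * paths above are used when booting with e.g. root=/dev/sda1. */
static char saved_root_name[64] __initdata;

static int __init root_dev_setup(char *line)
{
	strlcpy(saved_root_name, line, sizeof(saved_root_name));
	return 1;
}
__setup("root=", root_dev_setup);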
30110
30111 Added: genpatches-2.6/trunk/3.10.7/4200_fbcondecor-0.9.6.patch
30112 ===================================================================
30113 --- genpatches-2.6/trunk/3.10.7/4200_fbcondecor-0.9.6.patch (rev 0)
30114 +++ genpatches-2.6/trunk/3.10.7/4200_fbcondecor-0.9.6.patch 2013-08-29 12:09:12 UTC (rev 2497)
30115 @@ -0,0 +1,2344 @@
30116 +diff --git a/Documentation/fb/00-INDEX b/Documentation/fb/00-INDEX
30117 +index 30a7054..9b6a733 100644
30118 +--- a/Documentation/fb/00-INDEX
30119 ++++ b/Documentation/fb/00-INDEX
30120 +@@ -21,6 +21,8 @@ ep93xx-fb.txt
30121 + - info on the driver for EP93xx LCD controller.
30122 + fbcon.txt
30123 + - intro to and usage guide for the framebuffer console (fbcon).
30124 ++fbcondecor.txt
30125 ++ - info on the Framebuffer Console Decoration
30126 + framebuffer.txt
30127 + - introduction to frame buffer devices.
30128 + gxfb.txt
30129 +diff --git a/Documentation/fb/fbcondecor.txt b/Documentation/fb/fbcondecor.txt
30130 +new file mode 100644
30131 +index 0000000..15889f3
30132 +--- /dev/null
30133 ++++ b/Documentation/fb/fbcondecor.txt
30134 +@@ -0,0 +1,207 @@
30135 ++What is it?
30136 ++-----------
30137 ++
30138 ++The framebuffer decorations are a kernel feature which allows displaying a
30139 ++background picture on selected consoles.
30140 ++
30141 ++What do I need to get it to work?
30142 ++---------------------------------
30143 ++
30144 ++To get fbcondecor up-and-running you will have to:
30145 ++ 1) get a copy of splashutils [1] or a similar program
30146 ++ 2) get some fbcondecor themes
30147 ++ 3) build the kernel helper program
30148 ++ 4) build your kernel with the FB_CON_DECOR option enabled.
30149 ++
30150 ++To get fbcondecor operational right after fbcon initialization is finished, you
30151 ++will have to include a theme and the kernel helper into your initramfs image.
30152 ++Please refer to splashutils documentation for instructions on how to do that.
30153 ++
30154 ++[1] The splashutils package can be downloaded from:
30155 ++ http://dev.gentoo.org/~spock/projects/splashutils/
30156 ++
30157 ++The userspace helper
30158 ++--------------------
30159 ++
30160 ++The userspace fbcondecor helper (by default: /sbin/fbcondecor_helper) is called by the
30161 ++kernel whenever an important event occurs and the kernel needs some kind of
30162 ++job to be carried out. Important events include console switches and video
30163 ++mode switches (the kernel requests background images and configuration
30164 ++parameters for the current console). The fbcondecor helper must be accessible at
30165 ++all times. If it's not, fbcondecor will be switched off automatically.
30166 ++
30167 ++It's possible to set the path to the fbcondecor helper by writing it to
30168 ++/proc/sys/kernel/fbcondecor.
30169 ++
30170 ++*****************************************************************************
30171 ++
30172 ++The information below is mostly technical stuff. There's probably no need to
30173 ++read it unless you plan to develop a userspace helper.
30174 ++
30175 ++The fbcondecor protocol
30176 ++-----------------------
30177 ++
30178 ++The fbcondecor protocol defines a communication interface between the kernel and
30179 ++the userspace fbcondecor helper.
30180 ++
30181 ++The kernel side is responsible for:
30182 ++
30183 ++ * rendering console text, using an image as a background (instead of the
30184 ++ standard solid color fbcon uses),
30185 ++ * accepting commands from the user via ioctls on the fbcondecor device,
30186 ++ * calling the userspace helper to set things up as soon as the fb subsystem
30187 ++ is initialized.
30188 ++
30189 ++The userspace helper is responsible for everything else, including parsing
30190 ++configuration files, decompressing the image files whenever the kernel needs
30191 ++it, and communicating with the kernel if necessary.
30192 ++
30193 ++The fbcondecor protocol specifies how communication is done in both ways:
30194 ++kernel->userspace and userspace->kernel.
30195 ++
30196 ++Kernel -> Userspace
30197 ++-------------------
30198 ++
30199 ++The kernel communicates with the userspace helper by calling it and specifying
30200 ++the task to be done in a series of arguments.
30201 ++
30202 ++The arguments follow the pattern:
30203 ++<fbcondecor protocol version> <command> <parameters>
30204 ++
30205 ++All commands defined in fbcondecor protocol v2 have the following parameters:
30206 ++ virtual console
30207 ++ framebuffer number
30208 ++ theme
30209 ++
30210 ++Fbcondecor protocol v1 specified an additional 'fbcondecor mode' after the
30211 ++framebuffer number. Fbcondecor protocol v1 is deprecated and should not be used.
30212 ++
30213 ++Fbcondecor protocol v2 specifies the following commands:
30214 ++
30215 ++getpic
30216 ++------
30217 ++ The kernel issues this command to request image data. It's up to the
30218 ++ userspace helper to find a background image appropriate for the specified
30219 ++ theme and the current resolution. The userspace helper should respond by
30220 ++ issuing the FBIOCONDECOR_SETPIC ioctl.
30221 ++
30222 ++init
30223 ++----
30224 ++ The kernel issues this command after the fbcondecor device is created and
30225 ++ the fbcondecor interface is initialized. Upon receiving 'init', the userspace
30226 ++ helper should parse the kernel command line (/proc/cmdline) or otherwise
30227 ++ decide whether fbcondecor is to be activated.
30228 ++
30229 ++ To activate fbcondecor on the first console the helper should issue the
30230 ++ FBIOCONDECOR_SETCFG, FBIOCONDECOR_SETPIC and FBIOCONDECOR_SETSTATE commands,
30231 ++ in the above-mentioned order.
30232 ++
30233 ++ When the userspace helper is called in an early phase of the boot process
30234 ++ (right after the initialization of fbcon), no filesystems will be mounted.
30235 ++ The helper program should mount sysfs and then create the appropriate
30236 ++ framebuffer, fbcondecor and tty0 devices (if they don't already exist) to get
30237 ++ current display settings and to be able to communicate with the kernel side.
30238 ++ It should probably also mount the procfs to be able to parse the kernel
30239 ++ command line parameters.
30240 ++
30241 ++ Note that the console sem is not held when the kernel calls fbcondecor_helper
30242 ++ with the 'init' command. The fbcondecor helper should perform all ioctls with
30243 ++ origin set to FBCON_DECOR_IO_ORIG_USER.
30244 ++
30245 ++modechange
30246 ++----------
30247 ++ The kernel issues this command on a mode change. The helper's response should
30248 ++ be similar to the response to the 'init' command. Note that this time the
30249 ++ console sem is held and all ioctls must be performed with origin set to
30250 ++ FBCON_DECOR_IO_ORIG_KERNEL.
30251 ++
30252 ++
30253 ++Userspace -> Kernel
30254 ++-------------------
30255 ++
30256 ++Userspace programs can communicate with fbcondecor via ioctls on the
30257 ++fbcondecor device. These ioctls are to be used by both the userspace helper
30258 ++(called only by the kernel) and userspace configuration tools (run by the users).
30259 ++
30260 ++The fbcondecor helper should set the origin field to FBCON_DECOR_IO_ORIG_KERNEL
30261 ++when doing the appropriate ioctls. All userspace configuration tools should
30262 ++use FBCON_DECOR_IO_ORIG_USER. Failure to set the appropriate value in the origin
30263 ++field when performing ioctls from the kernel helper will most likely result
30264 ++in a console deadlock.
30265 ++
30266 ++FBCON_DECOR_IO_ORIG_KERNEL instructs fbcondecor not to try to acquire the console
30267 ++semaphore. Not surprisingly, FBCON_DECOR_IO_ORIG_USER instructs it to acquire
30268 ++the console sem.
30269 ++
30270 ++The framebuffer console decoration provides the following ioctls (all defined in
30271 ++linux/fb.h):
30272 ++
30273 ++FBIOCONDECOR_SETPIC
30274 ++description: loads a background picture for a virtual console
30275 ++argument: struct fbcon_decor_iowrapper*; data: struct fb_image*
30276 ++notes:
30277 ++If called for consoles other than the current foreground one, the picture data
30278 ++will be ignored.
30279 ++
30280 ++If the current virtual console is running in an 8-bpp mode, the cmap substruct
30281 ++of fb_image has to be filled appropriately: start should be set to 16 (first
30282 ++16 colors are reserved for fbcon), len to a value <= 240 and red, green and
30283 ++blue should point to valid cmap data. The transp field is ignored. The fields
30284 ++dx, dy, bg_color, fg_color in fb_image are ignored as well.
30285 ++
30286 ++FBIOCONDECOR_SETCFG
30287 ++description: sets the fbcondecor config for a virtual console
30288 ++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
30289 ++notes: The structure has to be filled with valid data.
30290 ++
30291 ++FBIOCONDECOR_GETCFG
30292 ++description: gets the fbcondecor config for a virtual console
30293 ++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
30294 ++
30295 ++FBIOCONDECOR_SETSTATE
30296 ++description: sets the fbcondecor state for a virtual console
30297 ++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
30298 ++ values: 0 = disabled, 1 = enabled.
30299 ++
30300 ++FBIOCONDECOR_GETSTATE
30301 ++description: gets the fbcondecor state for a virtual console
30302 ++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
30303 ++ values: as in FBIOCONDECOR_SETSTATE
30304 ++
30305 ++Info on used structures:
30306 ++
30307 ++Definition of struct vc_decor can be found in linux/console_decor.h. It's
30308 ++heavily commented. Note that the 'theme' field should point to a string
30309 ++no longer than FBCON_DECOR_THEME_LEN. When an FBIOCONDECOR_GETCFG call is
30310 ++performed, the theme field should point to a char buffer of length
30311 ++FBCON_DECOR_THEME_LEN.
30312 ++
30313 ++Definition of struct fbcon_decor_iowrapper can be found in linux/fb.h.
30314 ++The fields in this struct have the following meaning:
30315 ++
30316 ++vc:
30317 ++Virtual console number.
30318 ++
30319 ++origin:
30320 ++Specifies if the ioctl is performed as a response to a kernel request. The
30321 ++fbcondecor helper should set this field to FBCON_DECOR_IO_ORIG_KERNEL, userspace
30322 ++programs should set it to FBCON_DECOR_IO_ORIG_USER. This field is necessary to
30323 ++avoid console semaphore deadlocks.
30324 ++
30325 ++data:
30326 ++Pointer to a data structure appropriate for the performed ioctl. Type of
30327 ++the data struct is specified in the ioctls description.
30328 ++
30329 ++*****************************************************************************
30330 ++
30331 ++Credit
30332 ++------
30333 ++
30334 ++Original 'bootsplash' project & implementation by:
30335 ++ Volker Poplawski <volker@×××××××××.de>, Stefan Reinauer <stepan@××××.de>,
30336 ++ Steffen Winterfeldt <snwint@××××.de>, Michael Schroeder <mls@××××.de>,
30337 ++ Ken Wimer <wimer@××××.de>.
30338 ++
30339 ++Fbcondecor, fbcondecor protocol design, current implementation & docs by:
30340 ++ Michal Januszewski <spock@g.o>
30341 ++
30342 +diff --git a/drivers/Makefile b/drivers/Makefile
30343 +index 95952c8..b55db6d 100644
30344 +--- a/drivers/Makefile
30345 ++++ b/drivers/Makefile
30346 +@@ -16,4 +16,8 @@ obj-$(CONFIG_PCI) += pci/
30347 + obj-$(CONFIG_PARISC) += parisc/
30348 + obj-$(CONFIG_RAPIDIO) += rapidio/
30349 ++# tty/ comes before char/ so that the VT console is the boot-time
30350 ++# default.
30351 ++obj-y += tty/
30352 ++obj-y += char/
30353 + obj-y += video/
30354 + obj-y += idle/
30355 +@@ -37,11 +41,6 @@ obj-$(CONFIG_XEN) += xen/
30356 + # regulators early, since some subsystems rely on them to initialize
30357 + obj-$(CONFIG_REGULATOR) += regulator/
30358 +
30359 +-# tty/ comes before char/ so that the VT console is the boot-time
30360 +-# default.
30361 +-obj-y += tty/
30362 +-obj-y += char/
30363 +-
30364 + # gpu/ comes after char for AGP vs DRM startup
30365 + obj-y += gpu/
30366 +
30367 +diff --git a/drivers/video/Kconfig b/drivers/video/Kconfig
30368 +index a290be5..3a4ca32 100644
30369 +--- a/drivers/video/Kconfig
30370 ++++ b/drivers/video/Kconfig
30371 +@@ -1229,7 +1229,6 @@ config FB_MATROX
30372 + select FB_CFB_FILLRECT
30373 + select FB_CFB_COPYAREA
30374 + select FB_CFB_IMAGEBLIT
30375 +- select FB_TILEBLITTING
30376 + select FB_MACMODES if PPC_PMAC
30377 + ---help---
30378 + Say Y here if you have a Matrox Millennium, Matrox Millennium II,
30379 +diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
30380 +index c2d11fe..1be9de4 100644
30381 +--- a/drivers/video/console/Kconfig
30382 ++++ b/drivers/video/console/Kconfig
30383 +@@ -120,6 +120,19 @@ config FRAMEBUFFER_CONSOLE_ROTATION
30384 + such that other users of the framebuffer will remain normally
30385 + oriented.
30386 +
30387 ++config FB_CON_DECOR
30388 ++ bool "Support for the Framebuffer Console Decorations"
30389 ++ depends on FRAMEBUFFER_CONSOLE=y && !FB_TILEBLITTING
30390 ++ default n
30391 ++ ---help---
30392 ++ This option enables support for framebuffer console decorations which
30393 ++ makes it possible to display images in the background of the system
30394 ++ consoles. Note that userspace utilities are necessary in order to take
30395 ++ advantage of these features. Refer to Documentation/fb/fbcondecor.txt
30396 ++ for more information.
30397 ++
30398 ++ If unsure, say N.
30399 ++
30400 + config STI_CONSOLE
30401 + bool "STI text console"
30402 + depends on PARISC
30403 +diff --git a/drivers/video/console/Makefile b/drivers/video/console/Makefile
30404 +index a862e91..86bfcff 100644
30405 +--- a/drivers/video/console/Makefile
30406 ++++ b/drivers/video/console/Makefile
30407 +@@ -34,6 +34,7 @@ obj-$(CONFIG_FRAMEBUFFER_CONSOLE) += fbcon_rotate.o fbcon_cw.o fbcon_ud.o \
30408 + fbcon_ccw.o
30409 + endif
30410 +
30411 ++obj-$(CONFIG_FB_CON_DECOR) += fbcondecor.o cfbcondecor.o
30412 + obj-$(CONFIG_FB_STI) += sticore.o font.o
30413 +
30414 + ifeq ($(CONFIG_USB_SISUSBVGA_CON),y)
30415 +diff --git a/drivers/video/console/bitblit.c b/drivers/video/console/bitblit.c
30416 +index 28b1a83..33712c0 100644
30417 +--- a/drivers/video/console/bitblit.c
30418 ++++ b/drivers/video/console/bitblit.c
30419 +@@ -18,6 +18,7 @@
30420 + #include <linux/console.h>
30421 + #include <asm/types.h>
30422 + #include "fbcon.h"
30423 ++#include "fbcondecor.h"
30424 +
30425 + /*
30426 + * Accelerated handlers.
30427 +@@ -55,6 +56,13 @@ static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
30428 + area.height = height * vc->vc_font.height;
30429 + area.width = width * vc->vc_font.width;
30430 +
30431 ++ if (fbcon_decor_active(info, vc)) {
30432 ++ area.sx += vc->vc_decor.tx;
30433 ++ area.sy += vc->vc_decor.ty;
30434 ++ area.dx += vc->vc_decor.tx;
30435 ++ area.dy += vc->vc_decor.ty;
30436 ++ }
30437 ++
30438 + info->fbops->fb_copyarea(info, &area);
30439 + }
30440 +
30441 +@@ -380,11 +388,15 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
30442 + cursor.image.depth = 1;
30443 + cursor.rop = ROP_XOR;
30444 +
30445 +- if (info->fbops->fb_cursor)
30446 +- err = info->fbops->fb_cursor(info, &cursor);
30447 ++ if (fbcon_decor_active(info, vc)) {
30448 ++ fbcon_decor_cursor(info, &cursor);
30449 ++ } else {
30450 ++ if (info->fbops->fb_cursor)
30451 ++ err = info->fbops->fb_cursor(info, &cursor);
30452 +
30453 +- if (err)
30454 +- soft_cursor(info, &cursor);
30455 ++ if (err)
30456 ++ soft_cursor(info, &cursor);
30457 ++ }
30458 +
30459 + ops->cursor_reset = 0;
30460 + }
30461 +diff --git a/drivers/video/console/cfbcondecor.c b/drivers/video/console/cfbcondecor.c
30462 +new file mode 100644
30463 +index 0000000..09381d3
30464 +--- /dev/null
30465 ++++ b/drivers/video/console/cfbcondecor.c
30466 +@@ -0,0 +1,471 @@
30467 ++/*
30468 ++ * linux/drivers/video/cfbcon_decor.c -- Framebuffer decor render functions
30469 ++ *
30470 ++ * Copyright (C) 2004 Michal Januszewski <spock@g.o>
30471 ++ *
30472 ++ * Code based upon "Bootdecor" (C) 2001-2003
30473 ++ * Volker Poplawski <volker@×××××××××.de>,
30474 ++ * Stefan Reinauer <stepan@××××.de>,
30475 ++ * Steffen Winterfeldt <snwint@××××.de>,
30476 ++ * Michael Schroeder <mls@××××.de>,
30477 ++ * Ken Wimer <wimer@××××.de>.
30478 ++ *
30479 ++ * This file is subject to the terms and conditions of the GNU General Public
30480 ++ * License. See the file COPYING in the main directory of this archive for
30481 ++ * more details.
30482 ++ */
30483 ++#include <linux/module.h>
30484 ++#include <linux/types.h>
30485 ++#include <linux/fb.h>
30486 ++#include <linux/selection.h>
30487 ++#include <linux/slab.h>
30488 ++#include <linux/vt_kern.h>
30489 ++#include <asm/irq.h>
30490 ++
30491 ++#include "fbcon.h"
30492 ++#include "fbcondecor.h"
30493 ++
30494 ++#define parse_pixel(shift,bpp,type) \
30495 ++ do { \
30496 ++ if (d & (0x80 >> (shift))) \
30497 ++ dd2[(shift)] = fgx; \
30498 ++ else \
30499 ++ dd2[(shift)] = transparent ? *(type *)decor_src : bgx; \
30500 ++ decor_src += (bpp); \
30501 ++ } while (0) \
30502 ++
30503 ++extern int get_color(struct vc_data *vc, struct fb_info *info,
30504 ++ u16 c, int is_fg);
30505 ++
30506 ++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc)
30507 ++{
30508 ++ int i, j, k;
30509 ++ int minlen = min(min(info->var.red.length, info->var.green.length),
30510 ++ info->var.blue.length);
30511 ++ u32 col;
30512 ++
30513 ++ for (j = i = 0; i < 16; i++) {
30514 ++ k = color_table[i];
30515 ++
30516 ++ col = ((vc->vc_palette[j++] >> (8-minlen))
30517 ++ << info->var.red.offset);
30518 ++ col |= ((vc->vc_palette[j++] >> (8-minlen))
30519 ++ << info->var.green.offset);
30520 ++ col |= ((vc->vc_palette[j++] >> (8-minlen))
30521 ++ << info->var.blue.offset);
30522 ++ ((u32 *)info->pseudo_palette)[k] = col;
30523 ++ }
30524 ++}
30525 ++
30526 ++void fbcon_decor_renderc(struct fb_info *info, int ypos, int xpos, int height,
30527 ++ int width, u8* src, u32 fgx, u32 bgx, u8 transparent)
30528 ++{
30529 ++ unsigned int x, y;
30530 ++ u32 dd;
30531 ++ int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
30532 ++ unsigned int d = ypos * info->fix.line_length + xpos * bytespp;
30533 ++ unsigned int ds = (ypos * info->var.xres + xpos) * bytespp;
30534 ++ u16 dd2[4];
30535 ++
30536 ++ u8* decor_src = (u8 *)(info->bgdecor.data + ds);
30537 ++ u8* dst = (u8 *)(info->screen_base + d);
30538 ++
30539 ++ if ((ypos + height) > info->var.yres || (xpos + width) > info->var.xres)
30540 ++ return;
30541 ++
30542 ++ for (y = 0; y < height; y++) {
30543 ++ switch (info->var.bits_per_pixel) {
30544 ++
30545 ++ case 32:
30546 ++ for (x = 0; x < width; x++) {
30547 ++
30548 ++ if ((x & 7) == 0)
30549 ++ d = *src++;
30550 ++ if (d & 0x80)
30551 ++ dd = fgx;
30552 ++ else
30553 ++ dd = transparent ?
30554 ++ *(u32 *)decor_src : bgx;
30555 ++
30556 ++ d <<= 1;
30557 ++ decor_src += 4;
30558 ++ fb_writel(dd, dst);
30559 ++ dst += 4;
30560 ++ }
30561 ++ break;
30562 ++ case 24:
30563 ++ for (x = 0; x < width; x++) {
30564 ++
30565 ++ if ((x & 7) == 0)
30566 ++ d = *src++;
30567 ++ if (d & 0x80)
30568 ++ dd = fgx;
30569 ++ else
30570 ++ dd = transparent ?
30571 ++ (*(u32 *)decor_src & 0xffffff) : bgx;
30572 ++
30573 ++ d <<= 1;
30574 ++ decor_src += 3;
30575 ++#ifdef __LITTLE_ENDIAN
30576 ++ fb_writew(dd & 0xffff, dst);
30577 ++ dst += 2;
30578 ++ fb_writeb((dd >> 16), dst);
30579 ++#else
30580 ++ fb_writew(dd >> 8, dst);
30581 ++ dst += 2;
30582 ++ fb_writeb(dd & 0xff, dst);
30583 ++#endif
30584 ++ dst++;
30585 ++ }
30586 ++ break;
30587 ++ case 16:
30588 ++ for (x = 0; x < width; x += 2) {
30589 ++ if ((x & 7) == 0)
30590 ++ d = *src++;
30591 ++
30592 ++ parse_pixel(0, 2, u16);
30593 ++ parse_pixel(1, 2, u16);
30594 ++#ifdef __LITTLE_ENDIAN
30595 ++ dd = dd2[0] | (dd2[1] << 16);
30596 ++#else
30597 ++ dd = dd2[1] | (dd2[0] << 16);
30598 ++#endif
30599 ++ d <<= 2;
30600 ++ fb_writel(dd, dst);
30601 ++ dst += 4;
30602 ++ }
30603 ++ break;
30604 ++
30605 ++ case 8:
30606 ++ for (x = 0; x < width; x += 4) {
30607 ++ if ((x & 7) == 0)
30608 ++ d = *src++;
30609 ++
30610 ++ parse_pixel(0, 1, u8);
30611 ++ parse_pixel(1, 1, u8);
30612 ++ parse_pixel(2, 1, u8);
30613 ++ parse_pixel(3, 1, u8);
30614 ++
30615 ++#ifdef __LITTLE_ENDIAN
30616 ++ dd = dd2[0] | (dd2[1] << 8) | (dd2[2] << 16) | (dd2[3] << 24);
30617 ++#else
30618 ++ dd = dd2[3] | (dd2[2] << 8) | (dd2[1] << 16) | (dd2[0] << 24);
30619 ++#endif
30620 ++ d <<= 4;
30621 ++ fb_writel(dd, dst);
30622 ++ dst += 4;
30623 ++ }
30624 ++ }
30625 ++
30626 ++ dst += info->fix.line_length - width * bytespp;
30627 ++ decor_src += (info->var.xres - width) * bytespp;
30628 ++ }
30629 ++}
30630 ++
30631 ++#define cc2cx(a) \
30632 ++ ((info->fix.visual == FB_VISUAL_TRUECOLOR || \
30633 ++ info->fix.visual == FB_VISUAL_DIRECTCOLOR) ? \
30634 ++ ((u32*)info->pseudo_palette)[a] : a)
30635 ++
30636 ++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info,
30637 ++ const unsigned short *s, int count, int yy, int xx)
30638 ++{
30639 ++ unsigned short charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff;
30640 ++ struct fbcon_ops *ops = info->fbcon_par;
30641 ++ int fg_color, bg_color, transparent;
30642 ++ u8 *src;
30643 ++ u32 bgx, fgx;
30644 ++ u16 c = scr_readw(s);
30645 ++
30646 ++ fg_color = get_color(vc, info, c, 1);
30647 ++ bg_color = get_color(vc, info, c, 0);
30648 ++
30649 ++ /* Don't paint the background image if console is blanked */
30650 ++ transparent = ops->blank_state ? 0 :
30651 ++ (vc->vc_decor.bg_color == bg_color);
30652 ++
30653 ++ xx = xx * vc->vc_font.width + vc->vc_decor.tx;
30654 ++ yy = yy * vc->vc_font.height + vc->vc_decor.ty;
30655 ++
30656 ++ fgx = cc2cx(fg_color);
30657 ++ bgx = cc2cx(bg_color);
30658 ++
30659 ++ while (count--) {
30660 ++ c = scr_readw(s++);
30661 ++ src = vc->vc_font.data + (c & charmask) * vc->vc_font.height *
30662 ++ ((vc->vc_font.width + 7) >> 3);
30663 ++
30664 ++ fbcon_decor_renderc(info, yy, xx, vc->vc_font.height,
30665 ++ vc->vc_font.width, src, fgx, bgx, transparent);
30666 ++ xx += vc->vc_font.width;
30667 ++ }
30668 ++}
30669 ++
30670 ++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor)
30671 ++{
30672 ++ int i;
30673 ++ unsigned int dsize, s_pitch;
30674 ++ struct fbcon_ops *ops = info->fbcon_par;
30675 ++ struct vc_data* vc;
30676 ++ u8 *src;
30677 ++
30678 ++ /* we really don't need any cursors while the console is blanked */
30679 ++ if (info->state != FBINFO_STATE_RUNNING || ops->blank_state)
30680 ++ return;
30681 ++
30682 ++ vc = vc_cons[ops->currcon].d;
30683 ++
30684 ++ src = kmalloc(64 + sizeof(struct fb_image), GFP_ATOMIC);
30685 ++ if (!src)
30686 ++ return;
30687 ++
30688 ++ s_pitch = (cursor->image.width + 7) >> 3;
30689 ++ dsize = s_pitch * cursor->image.height;
30690 ++ if (cursor->enable) {
30691 ++ switch (cursor->rop) {
30692 ++ case ROP_XOR:
30693 ++ for (i = 0; i < dsize; i++)
30694 ++ src[i] = cursor->image.data[i] ^ cursor->mask[i];
30695 ++ break;
30696 ++ case ROP_COPY:
30697 ++ default:
30698 ++ for (i = 0; i < dsize; i++)
30699 ++ src[i] = cursor->image.data[i] & cursor->mask[i];
30700 ++ break;
30701 ++ }
30702 ++ } else
30703 ++ memcpy(src, cursor->image.data, dsize);
30704 ++
30705 ++ fbcon_decor_renderc(info,
30706 ++ cursor->image.dy + vc->vc_decor.ty,
30707 ++ cursor->image.dx + vc->vc_decor.tx,
30708 ++ cursor->image.height,
30709 ++ cursor->image.width,
30710 ++ (u8*)src,
30711 ++ cc2cx(cursor->image.fg_color),
30712 ++ cc2cx(cursor->image.bg_color),
30713 ++ cursor->image.bg_color == vc->vc_decor.bg_color);
30714 ++
30715 ++ kfree(src);
30716 ++}
30717 ++
30718 ++static void decorset(u8 *dst, int height, int width, int dstbytes,
30719 ++ u32 bgx, int bpp)
30720 ++{
30721 ++ int i;
30722 ++
30723 ++ if (bpp == 8)
30724 ++ bgx |= bgx << 8;
30725 ++ if (bpp == 16 || bpp == 8)
30726 ++ bgx |= bgx << 16;
30727 ++
30728 ++ while (height-- > 0) {
30729 ++ u8 *p = dst;
30730 ++
30731 ++ switch (bpp) {
30732 ++
30733 ++ case 32:
30734 ++ for (i=0; i < width; i++) {
30735 ++ fb_writel(bgx, p); p += 4;
30736 ++ }
30737 ++ break;
30738 ++ case 24:
30739 ++ for (i=0; i < width; i++) {
30740 ++#ifdef __LITTLE_ENDIAN
30741 ++ fb_writew((bgx & 0xffff),(u16*)p); p += 2;
30742 ++ fb_writeb((bgx >> 16),p++);
30743 ++#else
30744 ++ fb_writew((bgx >> 8),(u16*)p); p += 2;
30745 ++ fb_writeb((bgx & 0xff),p++);
30746 ++#endif
30747 ++ }
30748 ++ case 16:
30749 ++ for (i=0; i < width/4; i++) {
30750 ++ fb_writel(bgx,p); p += 4;
30751 ++ fb_writel(bgx,p); p += 4;
30752 ++ }
30753 ++ if (width & 2) {
30754 ++ fb_writel(bgx,p); p += 4;
30755 ++ }
30756 ++ if (width & 1)
30757 ++ fb_writew(bgx,(u16*)p);
30758 ++ break;
30759 ++ case 8:
30760 ++ for (i=0; i < width/4; i++) {
30761 ++ fb_writel(bgx,p); p += 4;
30762 ++ }
30763 ++
30764 ++ if (width & 2) {
30765 ++ fb_writew(bgx,p); p += 2;
30766 ++ }
30767 ++ if (width & 1)
30768 ++ fb_writeb(bgx,(u8*)p);
30769 ++ break;
30770 ++
30771 ++ }
30772 ++ dst += dstbytes;
30773 ++ }
30774 ++}
30775 ++
30776 ++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes,
30777 ++ int srclinebytes, int bpp)
30778 ++{
30779 ++ int i;
30780 ++
30781 ++ while (height-- > 0) {
30782 ++ u32 *p = (u32 *)dst;
30783 ++ u32 *q = (u32 *)src;
30784 ++
30785 ++ switch (bpp) {
30786 ++
30787 ++ case 32:
30788 ++ for (i=0; i < width; i++)
30789 ++ fb_writel(*q++, p++);
30790 ++ break;
30791 ++ case 24:
30792 ++ for (i=0; i < (width*3/4); i++)
30793 ++ fb_writel(*q++, p++);
30794 ++ if ((width*3) % 4) {
30795 ++ if (width & 2) {
30796 ++ fb_writeb(*(u8*)q, (u8*)p);
30797 ++ } else if (width & 1) {
30798 ++ fb_writew(*(u16*)q, (u16*)p);
30799 ++ fb_writeb(*(u8*)((u16*)q+1),(u8*)((u16*)p+2));
30800 ++ }
30801 ++ }
30802 ++ break;
30803 ++ case 16:
30804 ++ for (i=0; i < width/4; i++) {
30805 ++ fb_writel(*q++, p++);
30806 ++ fb_writel(*q++, p++);
30807 ++ }
30808 ++ if (width & 2)
30809 ++ fb_writel(*q++, p++);
30810 ++ if (width & 1)
30811 ++ fb_writew(*(u16*)q, (u16*)p);
30812 ++ break;
30813 ++ case 8:
30814 ++ for (i=0; i < width/4; i++)
30815 ++ fb_writel(*q++, p++);
30816 ++
30817 ++ if (width & 2) {
30818 ++ fb_writew(*(u16*)q, (u16*)p);
30819 ++ q = (u32*) ((u16*)q + 1);
30820 ++ p = (u32*) ((u16*)p + 1);
30821 ++ }
30822 ++ if (width & 1)
30823 ++ fb_writeb(*(u8*)q, (u8*)p);
30824 ++ break;
30825 ++ }
30826 ++
30827 ++ dst += linebytes;
30828 ++ src += srclinebytes;
30829 ++ }
30830 ++}
30831 ++
30832 ++static void decorfill(struct fb_info *info, int sy, int sx, int height,
30833 ++ int width)
30834 ++{
30835 ++ int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
30836 ++ int d = sy * info->fix.line_length + sx * bytespp;
30837 ++ int ds = (sy * info->var.xres + sx) * bytespp;
30838 ++
30839 ++ fbcon_decor_copy((u8 *)(info->screen_base + d), (u8 *)(info->bgdecor.data + ds),
30840 ++ height, width, info->fix.line_length, info->var.xres * bytespp,
30841 ++ info->var.bits_per_pixel);
30842 ++}
30843 ++
30844 ++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx,
30845 ++ int height, int width)
30846 ++{
30847 ++ int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
30848 ++ struct fbcon_ops *ops = info->fbcon_par;
30849 ++ u8 *dst;
30850 ++ int transparent, bg_color = attr_bgcol_ec(bgshift, vc, info);
30851 ++
30852 ++ transparent = (vc->vc_decor.bg_color == bg_color);
30853 ++ sy = sy * vc->vc_font.height + vc->vc_decor.ty;
30854 ++ sx = sx * vc->vc_font.width + vc->vc_decor.tx;
30855 ++ height *= vc->vc_font.height;
30856 ++ width *= vc->vc_font.width;
30857 ++
30858 ++ /* Don't paint the background image if console is blanked */
30859 ++ if (transparent && !ops->blank_state) {
30860 ++ decorfill(info, sy, sx, height, width);
30861 ++ } else {
30862 ++ dst = (u8 *)(info->screen_base + sy * info->fix.line_length +
30863 ++ sx * ((info->var.bits_per_pixel + 7) >> 3));
30864 ++ decorset(dst, height, width, info->fix.line_length, cc2cx(bg_color),
30865 ++ info->var.bits_per_pixel);
30866 ++ }
30867 ++}
30868 ++
30869 ++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info,
30870 ++ int bottom_only)
30871 ++{
30872 ++ unsigned int tw = vc->vc_cols*vc->vc_font.width;
30873 ++ unsigned int th = vc->vc_rows*vc->vc_font.height;
30874 ++
30875 ++ if (!bottom_only) {
30876 ++ /* top margin */
30877 ++ decorfill(info, 0, 0, vc->vc_decor.ty, info->var.xres);
30878 ++ /* left margin */
30879 ++ decorfill(info, vc->vc_decor.ty, 0, th, vc->vc_decor.tx);
30880 ++ /* right margin */
30881 ++ decorfill(info, vc->vc_decor.ty, vc->vc_decor.tx + tw, th,
30882 ++ info->var.xres - vc->vc_decor.tx - tw);
30883 ++ }
30884 ++ decorfill(info, vc->vc_decor.ty + th, 0,
30885 ++ info->var.yres - vc->vc_decor.ty - th, info->var.xres);
30886 ++}
30887 ++
30888 ++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y,
30889 ++ int sx, int dx, int width)
30890 ++{
30891 ++ u16 *d = (u16 *) (vc->vc_origin + vc->vc_size_row * y + dx * 2);
30892 ++ u16 *s = d + (dx - sx);
30893 ++ u16 *start = d;
30894 ++ u16 *ls = d;
30895 ++ u16 *le = d + width;
30896 ++ u16 c;
30897 ++ int x = dx;
30898 ++ u16 attr = 1;
30899 ++
30900 ++ do {
30901 ++ c = scr_readw(d);
30902 ++ if (attr != (c & 0xff00)) {
30903 ++ attr = c & 0xff00;
30904 ++ if (d > start) {
30905 ++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
30906 ++ x += d - start;
30907 ++ start = d;
30908 ++ }
30909 ++ }
30910 ++ if (s >= ls && s < le && c == scr_readw(s)) {
30911 ++ if (d > start) {
30912 ++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
30913 ++ x += d - start + 1;
30914 ++ start = d + 1;
30915 ++ } else {
30916 ++ x++;
30917 ++ start++;
30918 ++ }
30919 ++ }
30920 ++ s++;
30921 ++ d++;
30922 ++ } while (d < le);
30923 ++ if (d > start)
30924 ++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
30925 ++}
30926 ++
30927 ++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank)
30928 ++{
30929 ++ if (blank) {
30930 ++ decorset((u8 *)info->screen_base, info->var.yres, info->var.xres,
30931 ++ info->fix.line_length, 0, info->var.bits_per_pixel);
30932 ++ } else {
30933 ++ update_screen(vc);
30934 ++ fbcon_decor_clear_margins(vc, info, 0);
30935 ++ }
30936 ++}
30937 ++
30938 +diff --git a/drivers/video/console/fbcon.c b/drivers/video/console/fbcon.c
30939 +index 2e471c2..b9679a7e 100644
30940 +--- a/drivers/video/console/fbcon.c
30941 ++++ b/drivers/video/console/fbcon.c
30942 +@@ -26,7 +26,7 @@
30943 + *
30944 + * Hardware cursor support added by Emmanuel Marty (core@×××××××××××.org)
30945 + * Smart redraw scrolling, arbitrary font width support, 512char font support
30946 +- * and software scrollback added by
30947 ++ * and software scrollback added by
30948 + * Jakub Jelinek (jj@×××××××××××.cz)
30949 + *
30950 + * Random hacking by Martin Mares <mj@×××.cz>
30951 +@@ -79,6 +79,7 @@
30952 + #include <asm/irq.h>
30953 +
30954 + #include "fbcon.h"
30955 ++#include "fbcondecor.h"
30956 +
30957 + #ifdef FBCONDEBUG
30958 + # define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __func__ , ## args)
30959 +@@ -94,7 +95,7 @@ enum {
30960 +
30961 + static struct display fb_display[MAX_NR_CONSOLES];
30962 +
30963 +-static signed char con2fb_map[MAX_NR_CONSOLES];
30964 ++signed char con2fb_map[MAX_NR_CONSOLES];
30965 + static signed char con2fb_map_boot[MAX_NR_CONSOLES];
30966 +
30967 + static int logo_lines;
30968 +@@ -110,7 +111,7 @@ static int softback_lines;
30969 + /* console mappings */
30970 + static int first_fb_vc;
30971 + static int last_fb_vc = MAX_NR_CONSOLES - 1;
30972 +-static int fbcon_is_default = 1;
30973 ++static int fbcon_is_default = 1;
30974 + static int fbcon_has_exited;
30975 + static int primary_device = -1;
30976 + static int fbcon_has_console_bind;
30977 +@@ -286,7 +287,7 @@ static inline int fbcon_is_inactive(struct vc_data *vc, struct fb_info *info)
30978 + !vt_force_oops_output(vc);
30979 + }
30980 +
30981 +-static int get_color(struct vc_data *vc, struct fb_info *info,
30982 ++int get_color(struct vc_data *vc, struct fb_info *info,
30983 + u16 c, int is_fg)
30984 + {
30985 + int depth = fb_get_color_depth(&info->var, &info->fix);
30986 +@@ -465,7 +466,7 @@ static int __init fb_console_setup(char *this_opt)
30987 + } else
30988 + return 1;
30989 + }
30990 +-
30991 ++
30992 + if (!strncmp(options, "map:", 4)) {
30993 + options += 4;
30994 + if (*options) {
30995 +@@ -490,8 +491,8 @@ static int __init fb_console_setup(char *this_opt)
30996 + first_fb_vc = 0;
30997 + if (*options++ == '-')
30998 + last_fb_vc = simple_strtoul(options, &options, 10) - 1;
30999 +- fbcon_is_default = 0;
31000 +- }
31001 ++ fbcon_is_default = 0;
31002 ++ }
31003 +
31004 + if (!strncmp(options, "rotate:", 7)) {
31005 + options += 7;
31006 +@@ -552,6 +553,9 @@ static int fbcon_takeover(int show_logo)
31007 + info_idx = -1;
31008 + } else {
31009 + fbcon_has_console_bind = 1;
31010 ++#ifdef CONFIG_FB_CON_DECOR
31011 ++ fbcon_decor_init();
31012 ++#endif
31013 + }
31014 +
31015 + return err;
31016 +@@ -942,7 +946,7 @@ static const char *fbcon_startup(void)
31017 + info = registered_fb[info_idx];
31018 + if (!info)
31019 + return NULL;
31020 +-
31021 ++
31022 + owner = info->fbops->owner;
31023 + if (!try_module_get(owner))
31024 + return NULL;
31025 +@@ -1006,6 +1010,12 @@ static const char *fbcon_startup(void)
31026 + rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
31027 + cols /= vc->vc_font.width;
31028 + rows /= vc->vc_font.height;
31029 ++
31030 ++ if (fbcon_decor_active(info, vc)) {
31031 ++ cols = vc->vc_decor.twidth / vc->vc_font.width;
31032 ++ rows = vc->vc_decor.theight / vc->vc_font.height;
31033 ++ }
31034 ++
31035 + vc_resize(vc, cols, rows);
31036 +
31037 + DPRINTK("mode: %s\n", info->fix.id);
31038 +@@ -1035,7 +1045,7 @@ static void fbcon_init(struct vc_data *vc, int init)
31039 + cap = info->flags;
31040 +
31041 + if (vc != svc || logo_shown == FBCON_LOGO_DONTSHOW ||
31042 +- (info->fix.type == FB_TYPE_TEXT))
31043 ++ (info->fix.type == FB_TYPE_TEXT) || fbcon_decor_active(info, vc))
31044 + logo = 0;
31045 +
31046 + if (var_to_display(p, &info->var, info))
31047 +@@ -1245,6 +1255,11 @@ static void fbcon_clear(struct vc_data *vc, int sy, int sx, int height,
31048 + if (sy < vc->vc_top && vc->vc_top == logo_lines)
31049 + vc->vc_top = 0;
31050 +
31051 ++ if (fbcon_decor_active(info, vc)) {
31052 ++ fbcon_decor_clear(vc, info, sy, sx, height, width);
31053 ++ return;
31054 ++ }
31055 ++
31056 + /* Split blits that cross physical y_wrap boundary */
31057 +
31058 + y_break = p->vrows - p->yscroll;
31059 +@@ -1264,10 +1279,15 @@ static void fbcon_putcs(struct vc_data *vc, const unsigned short *s,
31060 + struct display *p = &fb_display[vc->vc_num];
31061 + struct fbcon_ops *ops = info->fbcon_par;
31062 +
31063 +- if (!fbcon_is_inactive(vc, info))
31064 +- ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
31065 +- get_color(vc, info, scr_readw(s), 1),
31066 +- get_color(vc, info, scr_readw(s), 0));
31067 ++ if (!fbcon_is_inactive(vc, info)) {
31068 ++
31069 ++ if (fbcon_decor_active(info, vc))
31070 ++ fbcon_decor_putcs(vc, info, s, count, ypos, xpos);
31071 ++ else
31072 ++ ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
31073 ++ get_color(vc, info, scr_readw(s), 1),
31074 ++ get_color(vc, info, scr_readw(s), 0));
31075 ++ }
31076 + }
31077 +
31078 + static void fbcon_putc(struct vc_data *vc, int c, int ypos, int xpos)
31079 +@@ -1283,8 +1303,13 @@ static void fbcon_clear_margins(struct vc_data *vc, int bottom_only)
31080 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
31081 + struct fbcon_ops *ops = info->fbcon_par;
31082 +
31083 +- if (!fbcon_is_inactive(vc, info))
31084 +- ops->clear_margins(vc, info, bottom_only);
31085 ++ if (!fbcon_is_inactive(vc, info)) {
31086 ++ if (fbcon_decor_active(info, vc)) {
31087 ++ fbcon_decor_clear_margins(vc, info, bottom_only);
31088 ++ } else {
31089 ++ ops->clear_margins(vc, info, bottom_only);
31090 ++ }
31091 ++ }
31092 + }
31093 +
31094 + static void fbcon_cursor(struct vc_data *vc, int mode)
31095 +@@ -1394,7 +1419,7 @@ static __inline__ void ywrap_up(struct vc_data *vc, int count)
31096 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
31097 + struct fbcon_ops *ops = info->fbcon_par;
31098 + struct display *p = &fb_display[vc->vc_num];
31099 +-
31100 ++
31101 + p->yscroll += count;
31102 + if (p->yscroll >= p->vrows) /* Deal with wrap */
31103 + p->yscroll -= p->vrows;
31104 +@@ -1413,7 +1438,7 @@ static __inline__ void ywrap_down(struct vc_data *vc, int count)
31105 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
31106 + struct fbcon_ops *ops = info->fbcon_par;
31107 + struct display *p = &fb_display[vc->vc_num];
31108 +-
31109 ++
31110 + p->yscroll -= count;
31111 + if (p->yscroll < 0) /* Deal with wrap */
31112 + p->yscroll += p->vrows;
31113 +@@ -1480,7 +1505,7 @@ static __inline__ void ypan_down(struct vc_data *vc, int count)
31114 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
31115 + struct display *p = &fb_display[vc->vc_num];
31116 + struct fbcon_ops *ops = info->fbcon_par;
31117 +-
31118 ++
31119 + p->yscroll -= count;
31120 + if (p->yscroll < 0) {
31121 + ops->bmove(vc, info, 0, 0, p->vrows - vc->vc_rows,
31122 +@@ -1804,7 +1829,7 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
31123 + count = vc->vc_rows;
31124 + if (softback_top)
31125 + fbcon_softback_note(vc, t, count);
31126 +- if (logo_shown >= 0)
31127 ++ if (logo_shown >= 0 || fbcon_decor_active(info, vc))
31128 + goto redraw_up;
31129 + switch (p->scrollmode) {
31130 + case SCROLL_MOVE:
31131 +@@ -1897,6 +1922,8 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
31132 + count = vc->vc_rows;
31133 + if (logo_shown >= 0)
31134 + goto redraw_down;
31135 ++ if (fbcon_decor_active(info, vc))
31136 ++ goto redraw_down;
31137 + switch (p->scrollmode) {
31138 + case SCROLL_MOVE:
31139 + fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
31140 +@@ -1989,7 +2016,7 @@ static void fbcon_bmove(struct vc_data *vc, int sy, int sx, int dy, int dx,
31141 + {
31142 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
31143 + struct display *p = &fb_display[vc->vc_num];
31144 +-
31145 ++
31146 + if (fbcon_is_inactive(vc, info))
31147 + return;
31148 +
31149 +@@ -2007,7 +2034,7 @@ static void fbcon_bmove(struct vc_data *vc, int sy, int sx, int dy, int dx,
31150 + p->vrows - p->yscroll);
31151 + }
31152 +
31153 +-static void fbcon_bmove_rec(struct vc_data *vc, struct display *p, int sy, int sx,
31154 ++static void fbcon_bmove_rec(struct vc_data *vc, struct display *p, int sy, int sx,
31155 + int dy, int dx, int height, int width, u_int y_break)
31156 + {
31157 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
31158 +@@ -2045,6 +2072,13 @@ static void fbcon_bmove_rec(struct vc_data *vc, struct display *p, int sy, int s
31159 + }
31160 + return;
31161 + }
31162 ++
31163 ++ if (fbcon_decor_active(info, vc) && sy == dy && height == 1) {
31164 ++ /* must use slower redraw bmove to keep background pic intact */
31165 ++ fbcon_decor_bmove_redraw(vc, info, sy, sx, dx, width);
31166 ++ return;
31167 ++ }
31168 ++
31169 + ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,
31170 + height, width);
31171 + }
31172 +@@ -2096,7 +2130,7 @@ static void updatescrollmode(struct display *p,
31173 + }
31174 + }
31175 +
31176 +-static int fbcon_resize(struct vc_data *vc, unsigned int width,
31177 ++static int fbcon_resize(struct vc_data *vc, unsigned int width,
31178 + unsigned int height, unsigned int user)
31179 + {
31180 + struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
31181 +@@ -2115,8 +2149,8 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
31182 + var.yres = virt_h * virt_fh;
31183 + x_diff = info->var.xres - var.xres;
31184 + y_diff = info->var.yres - var.yres;
31185 +- if (x_diff < 0 || x_diff > virt_fw ||
31186 +- y_diff < 0 || y_diff > virt_fh) {
31187 ++ if ((x_diff < 0 || x_diff > virt_fw ||
31188 ++ y_diff < 0 || y_diff > virt_fh) && !vc->vc_decor.state) {
31189 + const struct fb_videomode *mode;
31190 +
31191 + DPRINTK("attempting resize %ix%i\n", var.xres, var.yres);
31192 +@@ -2152,6 +2186,21 @@ static int fbcon_switch(struct vc_data *vc)
31193 +
31194 + info = registered_fb[con2fb_map[vc->vc_num]];
31195 + ops = info->fbcon_par;
31196 ++ prev_console = ops->currcon;
31197 ++ if (prev_console != -1)
31198 ++ old_info = registered_fb[con2fb_map[prev_console]];
31199 ++
31200 ++#ifdef CONFIG_FB_CON_DECOR
31201 ++ if (!fbcon_decor_active_vc(vc) && info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
31202 ++ struct vc_data *vc_curr = vc_cons[prev_console].d;
31203 ++ if (vc_curr && fbcon_decor_active_vc(vc_curr)) {
31204 ++ /* Clear the screen to avoid displaying funky colors during
31205 ++ * palette updates. */
31206 ++ memset((u8*)info->screen_base + info->fix.line_length * info->var.yoffset,
31207 ++ 0, info->var.yres * info->fix.line_length);
31208 ++ }
31209 ++ }
31210 ++#endif
31211 +
31212 + if (softback_top) {
31213 + if (softback_lines)
31214 +@@ -2170,9 +2219,6 @@ static int fbcon_switch(struct vc_data *vc)
31215 + logo_shown = FBCON_LOGO_CANSHOW;
31216 + }
31217 +
31218 +- prev_console = ops->currcon;
31219 +- if (prev_console != -1)
31220 +- old_info = registered_fb[con2fb_map[prev_console]];
31221 + /*
31222 + * FIXME: If we have multiple fbdev's loaded, we need to
31223 + * update all info->currcon. Perhaps, we can place this
31224 +@@ -2216,6 +2262,18 @@ static int fbcon_switch(struct vc_data *vc)
31225 + fbcon_del_cursor_timer(old_info);
31226 + }
31227 +
31228 ++ if (fbcon_decor_active_vc(vc)) {
31229 ++ struct vc_data *vc_curr = vc_cons[prev_console].d;
31230 ++
31231 ++ if (!vc_curr->vc_decor.theme ||
31232 ++ strcmp(vc->vc_decor.theme, vc_curr->vc_decor.theme) ||
31233 ++ (fbcon_decor_active_nores(info, vc_curr) &&
31234 ++ !fbcon_decor_active(info, vc_curr))) {
31235 ++ fbcon_decor_disable(vc, 0);
31236 ++ fbcon_decor_call_helper("modechange", vc->vc_num);
31237 ++ }
31238 ++ }
31239 ++
31240 + if (fbcon_is_inactive(vc, info) ||
31241 + ops->blank_state != FB_BLANK_UNBLANK)
31242 + fbcon_del_cursor_timer(info);
31243 +@@ -2264,11 +2322,10 @@ static int fbcon_switch(struct vc_data *vc)
31244 + ops->update_start(info);
31245 + }
31246 +
31247 +- fbcon_set_palette(vc, color_table);
31248 ++ fbcon_set_palette(vc, color_table);
31249 + fbcon_clear_margins(vc, 0);
31250 +
31251 + if (logo_shown == FBCON_LOGO_DRAW) {
31252 +-
31253 + logo_shown = fg_console;
31254 + /* This is protected above by initmem_freed */
31255 + fb_show_logo(info, ops->rotate);
31256 +@@ -2324,15 +2381,20 @@ static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch)
31257 + }
31258 + }
31259 +
31260 +- if (!fbcon_is_inactive(vc, info)) {
31261 ++ if (!fbcon_is_inactive(vc, info)) {
31262 + if (ops->blank_state != blank) {
31263 + ops->blank_state = blank;
31264 + fbcon_cursor(vc, blank ? CM_ERASE : CM_DRAW);
31265 + ops->cursor_flash = (!blank);
31266 +
31267 +- if (!(info->flags & FBINFO_MISC_USEREVENT))
31268 +- if (fb_blank(info, blank))
31269 +- fbcon_generic_blank(vc, info, blank);
31270 ++ if (!(info->flags & FBINFO_MISC_USEREVENT)) {
31271 ++ if (fb_blank(info, blank)) {
31272 ++ if (fbcon_decor_active(info, vc))
31273 ++ fbcon_decor_blank(vc, info, blank);
31274 ++ else
31275 ++ fbcon_generic_blank(vc, info, blank);
31276 ++ }
31277 ++ }
31278 + }
31279 +
31280 + if (!blank)
31281 +@@ -2454,7 +2516,7 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
31282 + vc->vc_complement_mask >>= 1;
31283 + vc->vc_s_complement_mask >>= 1;
31284 + }
31285 +-
31286 ++
31287 + /* ++Edmund: reorder the attribute bits */
31288 + if (vc->vc_can_do_color) {
31289 + unsigned short *cp =
31290 +@@ -2477,7 +2539,7 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
31291 + vc->vc_complement_mask <<= 1;
31292 + vc->vc_s_complement_mask <<= 1;
31293 + }
31294 +-
31295 ++
31296 + /* ++Edmund: reorder the attribute bits */
31297 + {
31298 + unsigned short *cp =
31299 +@@ -2507,13 +2569,22 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
31300 + }
31301 +
31302 + if (resize) {
31303 ++ /* reset wrap/pan */
31304 + int cols, rows;
31305 +
31306 + cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
31307 + rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
31308 ++
31309 ++ if (fbcon_decor_active(info, vc)) {
31310 ++ info->var.xoffset = info->var.yoffset = p->yscroll = 0;
31311 ++ cols = vc->vc_decor.twidth;
31312 ++ rows = vc->vc_decor.theight;
31313 ++ }
31314 + cols /= w;
31315 + rows /= h;
31316 ++
31317 + vc_resize(vc, cols, rows);
31318 ++
31319 + if (CON_IS_VISIBLE(vc) && softback_buf)
31320 + fbcon_update_softback(vc);
31321 + } else if (CON_IS_VISIBLE(vc)
31322 +@@ -2597,7 +2661,7 @@ static int fbcon_set_font(struct vc_data *vc, struct console_font *font, unsigne
31323 + /* Check if the same font is on some other console already */
31324 + for (i = first_fb_vc; i <= last_fb_vc; i++) {
31325 + struct vc_data *tmp = vc_cons[i].d;
31326 +-
31327 ++
31328 + if (fb_display[i].userfont &&
31329 + fb_display[i].fontdata &&
31330 + FNTSUM(fb_display[i].fontdata) == csum &&
31331 +@@ -2642,7 +2713,11 @@ static int fbcon_set_palette(struct vc_data *vc, unsigned char *table)
31332 + int i, j, k, depth;
31333 + u8 val;
31334 +
31335 +- if (fbcon_is_inactive(vc, info))
31336 ++ if (fbcon_is_inactive(vc, info)
31337 ++#ifdef CONFIG_FB_CON_DECOR
31338 ++ || vc->vc_num != fg_console
31339 ++#endif
31340 ++ )
31341 + return -EINVAL;
31342 +
31343 + if (!CON_IS_VISIBLE(vc))
31344 +@@ -2668,14 +2743,56 @@ static int fbcon_set_palette(struct vc_data *vc, unsigned char *table)
31345 + } else
31346 + fb_copy_cmap(fb_default_cmap(1 << depth), &palette_cmap);
31347 +
31348 +- return fb_set_cmap(&palette_cmap, info);
31349 ++ if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
31350 ++ info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
31351 ++
31352 ++ u16 *red, *green, *blue;
31353 ++ int minlen = min(min(info->var.red.length, info->var.green.length),
31354 ++ info->var.blue.length);
31355 ++ int h;
31356 ++
31357 ++ struct fb_cmap cmap = {
31358 ++ .start = 0,
31359 ++ .len = (1 << minlen),
31360 ++ .red = NULL,
31361 ++ .green = NULL,
31362 ++ .blue = NULL,
31363 ++ .transp = NULL
31364 ++ };
31365 ++
31366 ++ red = kmalloc(256 * sizeof(u16) * 3, GFP_KERNEL);
31367 ++
31368 ++ if (!red)
31369 ++ goto out;
31370 ++
31371 ++ green = red + 256;
31372 ++ blue = green + 256;
31373 ++ cmap.red = red;
31374 ++ cmap.green = green;
31375 ++ cmap.blue = blue;
31376 ++
31377 ++ for (i = 0; i < cmap.len; i++) {
31378 ++ red[i] = green[i] = blue[i] = (0xffff * i)/(cmap.len-1);
31379 ++ }
31380 ++
31381 ++ h = fb_set_cmap(&cmap, info);
31382 ++ fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
31383 ++ kfree(red);
31384 ++
31385 ++ return h;
31386 ++
31387 ++ } else if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
31388 ++ info->var.bits_per_pixel == 8 && info->bgdecor.cmap.red != NULL)
31389 ++ fb_set_cmap(&info->bgdecor.cmap, info);
31390 ++
31391 ++out: return fb_set_cmap(&palette_cmap, info);
31392 + }
31393 +
31394 + static u16 *fbcon_screen_pos(struct vc_data *vc, int offset)
31395 + {
31396 + unsigned long p;
31397 + int line;
31398 +-
31399 ++
31400 + if (vc->vc_num != fg_console || !softback_lines)
31401 + return (u16 *) (vc->vc_origin + offset);
31402 + line = offset / vc->vc_size_row;
31403 +@@ -2894,7 +3011,14 @@ static void fbcon_modechanged(struct fb_info *info)
31404 + rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
31405 + cols /= vc->vc_font.width;
31406 + rows /= vc->vc_font.height;
31407 +- vc_resize(vc, cols, rows);
31408 ++
31409 ++ if (!fbcon_decor_active_nores(info, vc)) {
31410 ++ vc_resize(vc, cols, rows);
31411 ++ } else {
31412 ++ fbcon_decor_disable(vc, 0);
31413 ++ fbcon_decor_call_helper("modechange", vc->vc_num);
31414 ++ }
31415 ++
31416 + updatescrollmode(p, info, vc);
31417 + scrollback_max = 0;
31418 + scrollback_current = 0;
31419 +@@ -2939,7 +3063,9 @@ static void fbcon_set_all_vcs(struct fb_info *info)
31420 + rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
31421 + cols /= vc->vc_font.width;
31422 + rows /= vc->vc_font.height;
31423 +- vc_resize(vc, cols, rows);
31424 ++ if (!fbcon_decor_active_nores(info, vc)) {
31425 ++ vc_resize(vc, cols, rows);
31426 ++ }
31427 + }
31428 +
31429 + if (fg != -1)
31430 +@@ -3549,6 +3675,7 @@ static void fbcon_exit(void)
31431 + }
31432 + }
31433 +
31434 ++ fbcon_decor_exit();
31435 + fbcon_has_exited = 1;
31436 + }
31437 +
31438 +@@ -3602,7 +3729,7 @@ static void __exit fb_console_exit(void)
31439 + fbcon_exit();
31440 + console_unlock();
31441 + unregister_con_driver(&fb_con);
31442 +-}
31443 ++}
31444 +
31445 + module_exit(fb_console_exit);
31446 +
31447 +diff --git a/drivers/video/console/fbcondecor.c b/drivers/video/console/fbcondecor.c
31448 +new file mode 100644
31449 +index 0000000..7189ce6
31450 +--- /dev/null
31451 ++++ b/drivers/video/console/fbcondecor.c
31452 +@@ -0,0 +1,555 @@
31453 ++/*
31454 ++ * linux/drivers/video/console/fbcondecor.c -- Framebuffer console decorations
31455 ++ *
31456 ++ * Copyright (C) 2004-2009 Michal Januszewski <spock@g.o>
31457 ++ *
31458 ++ * Code based upon "Bootsplash" (C) 2001-2003
31459 ++ * Volker Poplawski <volker@×××××××××.de>,
31460 ++ * Stefan Reinauer <stepan@××××.de>,
31461 ++ * Steffen Winterfeldt <snwint@××××.de>,
31462 ++ * Michael Schroeder <mls@××××.de>,
31463 ++ * Ken Wimer <wimer@××××.de>.
31464 ++ *
31465 ++ * Compat ioctl support by Thorsten Klein <TK@××××××××××××××.de>.
31466 ++ *
31467 ++ * This file is subject to the terms and conditions of the GNU General Public
31468 ++ * License. See the file COPYING in the main directory of this archive for
31469 ++ * more details.
31470 ++ *
31471 ++ */
31472 ++#include <linux/module.h>
31473 ++#include <linux/kernel.h>
31474 ++#include <linux/string.h>
31475 ++#include <linux/types.h>
31476 ++#include <linux/fb.h>
31477 ++#include <linux/vt_kern.h>
31478 ++#include <linux/vmalloc.h>
31479 ++#include <linux/unistd.h>
31480 ++#include <linux/syscalls.h>
31481 ++#include <linux/init.h>
31482 ++#include <linux/proc_fs.h>
31483 ++#include <linux/workqueue.h>
31484 ++#include <linux/kmod.h>
31485 ++#include <linux/miscdevice.h>
31486 ++#include <linux/device.h>
31487 ++#include <linux/fs.h>
31488 ++#include <linux/compat.h>
31489 ++#include <linux/console.h>
31490 ++
31491 ++#include <asm/uaccess.h>
31492 ++#include <asm/irq.h>
31493 ++
31494 ++#include "fbcon.h"
31495 ++#include "fbcondecor.h"
31496 ++
31497 ++extern signed char con2fb_map[];
31498 ++static int fbcon_decor_enable(struct vc_data *vc);
31499 ++char fbcon_decor_path[KMOD_PATH_LEN] = "/sbin/fbcondecor_helper";
31500 ++static int initialized = 0;
31501 ++
31502 ++int fbcon_decor_call_helper(char* cmd, unsigned short vc)
31503 ++{
31504 ++ char *envp[] = {
31505 ++ "HOME=/",
31506 ++ "PATH=/sbin:/bin",
31507 ++ NULL
31508 ++ };
31509 ++
31510 ++ char tfb[5];
31511 ++ char tcons[5];
31512 ++ unsigned char fb = (int) con2fb_map[vc];
31513 ++
31514 ++ char *argv[] = {
31515 ++ fbcon_decor_path,
31516 ++ "2",
31517 ++ cmd,
31518 ++ tcons,
31519 ++ tfb,
31520 ++ vc_cons[vc].d->vc_decor.theme,
31521 ++ NULL
31522 ++ };
31523 ++
31524 ++ snprintf(tfb,5,"%d",fb);
31525 ++ snprintf(tcons,5,"%d",vc);
31526 ++
31527 ++ return call_usermodehelper(fbcon_decor_path, argv, envp, UMH_WAIT_EXEC);
31528 ++}
31529 ++
31530 ++/* Disables fbcondecor on a virtual console; called with console sem held. */
31531 ++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw)
31532 ++{
31533 ++ struct fb_info* info;
31534 ++
31535 ++ if (!vc->vc_decor.state)
31536 ++ return -EINVAL;
31537 ++
31538 ++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
31539 ++
31540 ++ if (info == NULL)
31541 ++ return -EINVAL;
31542 ++
31543 ++ vc->vc_decor.state = 0;
31544 ++ vc_resize(vc, info->var.xres / vc->vc_font.width,
31545 ++ info->var.yres / vc->vc_font.height);
31546 ++
31547 ++ if (fg_console == vc->vc_num && redraw) {
31548 ++ redraw_screen(vc, 0);
31549 ++ update_region(vc, vc->vc_origin +
31550 ++ vc->vc_size_row * vc->vc_top,
31551 ++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
31552 ++ }
31553 ++
31554 ++ printk(KERN_INFO "fbcondecor: switched decor state to 'off' on console %d\n",
31555 ++ vc->vc_num);
31556 ++
31557 ++ return 0;
31558 ++}
31559 ++
31560 ++/* Enables fbcondecor on a virtual console; called with console sem held. */
31561 ++static int fbcon_decor_enable(struct vc_data *vc)
31562 ++{
31563 ++ struct fb_info* info;
31564 ++
31565 ++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
31566 ++
31567 ++ if (vc->vc_decor.twidth == 0 || vc->vc_decor.theight == 0 ||
31568 ++ info == NULL || vc->vc_decor.state || (!info->bgdecor.data &&
31569 ++ vc->vc_num == fg_console))
31570 ++ return -EINVAL;
31571 ++
31572 ++ vc->vc_decor.state = 1;
31573 ++ vc_resize(vc, vc->vc_decor.twidth / vc->vc_font.width,
31574 ++ vc->vc_decor.theight / vc->vc_font.height);
31575 ++
31576 ++ if (fg_console == vc->vc_num) {
31577 ++ redraw_screen(vc, 0);
31578 ++ update_region(vc, vc->vc_origin +
31579 ++ vc->vc_size_row * vc->vc_top,
31580 ++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
31581 ++ fbcon_decor_clear_margins(vc, info, 0);
31582 ++ }
31583 ++
31584 ++ printk(KERN_INFO "fbcondecor: switched decor state to 'on' on console %d\n",
31585 ++ vc->vc_num);
31586 ++
31587 ++ return 0;
31588 ++}
31589 ++
31590 ++static inline int fbcon_decor_ioctl_dosetstate(struct vc_data *vc, unsigned int state, unsigned char origin)
31591 ++{
31592 ++ int ret;
31593 ++
31594 ++// if (origin == FBCON_DECOR_IO_ORIG_USER)
31595 ++ console_lock();
31596 ++ if (!state)
31597 ++ ret = fbcon_decor_disable(vc, 1);
31598 ++ else
31599 ++ ret = fbcon_decor_enable(vc);
31600 ++// if (origin == FBCON_DECOR_IO_ORIG_USER)
31601 ++ console_unlock();
31602 ++
31603 ++ return ret;
31604 ++}
31605 ++
31606 ++static inline void fbcon_decor_ioctl_dogetstate(struct vc_data *vc, unsigned int *state)
31607 ++{
31608 ++ *state = vc->vc_decor.state;
31609 ++}
31610 ++
31611 ++static int fbcon_decor_ioctl_dosetcfg(struct vc_data *vc, struct vc_decor *cfg, unsigned char origin)
31612 ++{
31613 ++ struct fb_info *info;
31614 ++ int len;
31615 ++ char *tmp;
31616 ++
31617 ++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
31618 ++
31619 ++ if (info == NULL || !cfg->twidth || !cfg->theight ||
31620 ++ cfg->tx + cfg->twidth > info->var.xres ||
31621 ++ cfg->ty + cfg->theight > info->var.yres)
31622 ++ return -EINVAL;
31623 ++
31624 ++ len = strlen_user(cfg->theme);
31625 ++ if (!len || len > FBCON_DECOR_THEME_LEN)
31626 ++ return -EINVAL;
31627 ++ tmp = kmalloc(len, GFP_KERNEL);
31628 ++ if (!tmp)
31629 ++ return -ENOMEM;
31630 ++ if (copy_from_user(tmp, (void __user *)cfg->theme, len))
31631 ++ return -EFAULT;
31632 ++ cfg->theme = tmp;
31633 ++ cfg->state = 0;
31634 ++
31635 ++	/* If this ioctl is a response to a request from the kernel, the console
31636 ++	 * sem is already held; we also don't need to disable the decor, because
31637 ++	 * either the new config and background picture load successfully and the
31638 ++	 * decor stays on, or, on failure, it is turned off in fbcon. */
31639 ++// if (origin == FBCON_DECOR_IO_ORIG_USER) {
31640 ++ console_lock();
31641 ++ if (vc->vc_decor.state)
31642 ++ fbcon_decor_disable(vc, 1);
31643 ++// }
31644 ++
31645 ++ if (vc->vc_decor.theme)
31646 ++ kfree(vc->vc_decor.theme);
31647 ++
31648 ++ vc->vc_decor = *cfg;
31649 ++
31650 ++// if (origin == FBCON_DECOR_IO_ORIG_USER)
31651 ++ console_unlock();
31652 ++
31653 ++ printk(KERN_INFO "fbcondecor: console %d using theme '%s'\n",
31654 ++ vc->vc_num, vc->vc_decor.theme);
31655 ++ return 0;
31656 ++}
31657 ++
31658 ++static int fbcon_decor_ioctl_dogetcfg(struct vc_data *vc, struct vc_decor *decor)
31659 ++{
31660 ++ char __user *tmp;
31661 ++
31662 ++ tmp = decor->theme;
31663 ++ *decor = vc->vc_decor;
31664 ++ decor->theme = tmp;
31665 ++
31666 ++ if (vc->vc_decor.theme) {
31667 ++ if (copy_to_user(tmp, vc->vc_decor.theme, strlen(vc->vc_decor.theme) + 1))
31668 ++ return -EFAULT;
31669 ++ } else
31670 ++ if (put_user(0, tmp))
31671 ++ return -EFAULT;
31672 ++
31673 ++ return 0;
31674 ++}
31675 ++
31676 ++static int fbcon_decor_ioctl_dosetpic(struct vc_data *vc, struct fb_image *img, unsigned char origin)
31677 ++{
31678 ++ struct fb_info *info;
31679 ++ int len;
31680 ++ u8 *tmp;
31681 ++
31682 ++ if (vc->vc_num != fg_console)
31683 ++ return -EINVAL;
31684 ++
31685 ++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
31686 ++
31687 ++ if (info == NULL)
31688 ++ return -EINVAL;
31689 ++
31690 ++ if (img->width != info->var.xres || img->height != info->var.yres) {
31691 ++ printk(KERN_ERR "fbcondecor: picture dimensions mismatch\n");
31692 ++ printk(KERN_ERR "%dx%d vs %dx%d\n", img->width, img->height, info->var.xres, info->var.yres);
31693 ++ return -EINVAL;
31694 ++ }
31695 ++
31696 ++ if (img->depth != info->var.bits_per_pixel) {
31697 ++ printk(KERN_ERR "fbcondecor: picture depth mismatch\n");
31698 ++ return -EINVAL;
31699 ++ }
31700 ++
31701 ++ if (img->depth == 8) {
31702 ++ if (!img->cmap.len || !img->cmap.red || !img->cmap.green ||
31703 ++ !img->cmap.blue)
31704 ++ return -EINVAL;
31705 ++
31706 ++ tmp = vmalloc(img->cmap.len * 3 * 2);
31707 ++ if (!tmp)
31708 ++ return -ENOMEM;
31709 ++
31710 ++ if (copy_from_user(tmp,
31711 ++ (void __user*)img->cmap.red, (img->cmap.len << 1)) ||
31712 ++ copy_from_user(tmp + (img->cmap.len << 1),
31713 ++ (void __user*)img->cmap.green, (img->cmap.len << 1)) ||
31714 ++ copy_from_user(tmp + (img->cmap.len << 2),
31715 ++ (void __user*)img->cmap.blue, (img->cmap.len << 1))) {
31716 ++ vfree(tmp);
31717 ++ return -EFAULT;
31718 ++ }
31719 ++
31720 ++ img->cmap.transp = NULL;
31721 ++ img->cmap.red = (u16*)tmp;
31722 ++ img->cmap.green = img->cmap.red + img->cmap.len;
31723 ++ img->cmap.blue = img->cmap.green + img->cmap.len;
31724 ++ } else {
31725 ++ img->cmap.red = NULL;
31726 ++ }
31727 ++
31728 ++ len = ((img->depth + 7) >> 3) * img->width * img->height;
31729 ++
31730 ++ /*
31731 ++ * Allocate an additional byte so that we never go outside of the
31732 ++ * buffer boundaries in the rendering functions in a 24 bpp mode.
31733 ++ */
31734 ++ tmp = vmalloc(len + 1);
31735 ++
31736 ++ if (!tmp)
31737 ++ goto out;
31738 ++
31739 ++ if (copy_from_user(tmp, (void __user*)img->data, len))
31740 ++ goto out;
31741 ++
31742 ++ img->data = tmp;
31743 ++
31744 ++ /* If this ioctl is a response to a request from kernel, the console sem
31745 ++ * is already held. */
31746 ++// if (origin == FBCON_DECOR_IO_ORIG_USER)
31747 ++ console_lock();
31748 ++
31749 ++ if (info->bgdecor.data)
31750 ++ vfree((u8*)info->bgdecor.data);
31751 ++ if (info->bgdecor.cmap.red)
31752 ++ vfree(info->bgdecor.cmap.red);
31753 ++
31754 ++ info->bgdecor = *img;
31755 ++
31756 ++ if (fbcon_decor_active_vc(vc) && fg_console == vc->vc_num) {
31757 ++ redraw_screen(vc, 0);
31758 ++ update_region(vc, vc->vc_origin +
31759 ++ vc->vc_size_row * vc->vc_top,
31760 ++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
31761 ++ fbcon_decor_clear_margins(vc, info, 0);
31762 ++ }
31763 ++
31764 ++// if (origin == FBCON_DECOR_IO_ORIG_USER)
31765 ++ console_unlock();
31766 ++
31767 ++ return 0;
31768 ++
31769 ++out: if (img->cmap.red)
31770 ++ vfree(img->cmap.red);
31771 ++
31772 ++ if (tmp)
31773 ++ vfree(tmp);
31774 ++ return -ENOMEM;
31775 ++}
31776 ++
31777 ++static long fbcon_decor_ioctl(struct file *filp, u_int cmd, u_long arg)
31778 ++{
31779 ++ struct fbcon_decor_iowrapper __user *wrapper = (void __user*) arg;
31780 ++ struct vc_data *vc = NULL;
31781 ++ unsigned short vc_num = 0;
31782 ++ unsigned char origin = 0;
31783 ++ void __user *data = NULL;
31784 ++
31785 ++ if (!access_ok(VERIFY_READ, wrapper,
31786 ++ sizeof(struct fbcon_decor_iowrapper)))
31787 ++ return -EFAULT;
31788 ++
31789 ++ __get_user(vc_num, &wrapper->vc);
31790 ++ __get_user(origin, &wrapper->origin);
31791 ++ __get_user(data, &wrapper->data);
31792 ++
31793 ++ if (!vc_cons_allocated(vc_num))
31794 ++ return -EINVAL;
31795 ++
31796 ++ vc = vc_cons[vc_num].d;
31797 ++
31798 ++ switch (cmd) {
31799 ++ case FBIOCONDECOR_SETPIC:
31800 ++ {
31801 ++ struct fb_image img;
31802 ++ if (copy_from_user(&img, (struct fb_image __user *)data, sizeof(struct fb_image)))
31803 ++ return -EFAULT;
31804 ++
31805 ++ return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
31806 ++ }
31807 ++ case FBIOCONDECOR_SETCFG:
31808 ++ {
31809 ++ struct vc_decor cfg;
31810 ++ if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
31811 ++ return -EFAULT;
31812 ++
31813 ++ return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
31814 ++ }
31815 ++ case FBIOCONDECOR_GETCFG:
31816 ++ {
31817 ++ int rval;
31818 ++ struct vc_decor cfg;
31819 ++
31820 ++ if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
31821 ++ return -EFAULT;
31822 ++
31823 ++ rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
31824 ++
31825 ++ if (copy_to_user(data, &cfg, sizeof(struct vc_decor)))
31826 ++ return -EFAULT;
31827 ++ return rval;
31828 ++ }
31829 ++ case FBIOCONDECOR_SETSTATE:
31830 ++ {
31831 ++ unsigned int state = 0;
31832 ++ if (get_user(state, (unsigned int __user *)data))
31833 ++ return -EFAULT;
31834 ++ return fbcon_decor_ioctl_dosetstate(vc, state, origin);
31835 ++ }
31836 ++ case FBIOCONDECOR_GETSTATE:
31837 ++ {
31838 ++ unsigned int state = 0;
31839 ++ fbcon_decor_ioctl_dogetstate(vc, &state);
31840 ++ return put_user(state, (unsigned int __user *)data);
31841 ++ }
31842 ++
31843 ++ default:
31844 ++ return -ENOIOCTLCMD;
31845 ++ }
31846 ++}
31847 ++
31848 ++#ifdef CONFIG_COMPAT
31849 ++
31850 ++static long fbcon_decor_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) {
31851 ++
31852 ++ struct fbcon_decor_iowrapper32 __user *wrapper = (void __user *)arg;
31853 ++ struct vc_data *vc = NULL;
31854 ++ unsigned short vc_num = 0;
31855 ++ unsigned char origin = 0;
31856 ++ compat_uptr_t data_compat = 0;
31857 ++ void __user *data = NULL;
31858 ++
31859 ++ if (!access_ok(VERIFY_READ, wrapper,
31860 ++ sizeof(struct fbcon_decor_iowrapper32)))
31861 ++ return -EFAULT;
31862 ++
31863 ++ __get_user(vc_num, &wrapper->vc);
31864 ++ __get_user(origin, &wrapper->origin);
31865 ++ __get_user(data_compat, &wrapper->data);
31866 ++ data = compat_ptr(data_compat);
31867 ++
31868 ++ if (!vc_cons_allocated(vc_num))
31869 ++ return -EINVAL;
31870 ++
31871 ++ vc = vc_cons[vc_num].d;
31872 ++
31873 ++ switch (cmd) {
31874 ++ case FBIOCONDECOR_SETPIC32:
31875 ++ {
31876 ++ struct fb_image32 img_compat;
31877 ++ struct fb_image img;
31878 ++
31879 ++ if (copy_from_user(&img_compat, (struct fb_image32 __user *)data, sizeof(struct fb_image32)))
31880 ++ return -EFAULT;
31881 ++
31882 ++ fb_image_from_compat(img, img_compat);
31883 ++
31884 ++ return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
31885 ++ }
31886 ++
31887 ++ case FBIOCONDECOR_SETCFG32:
31888 ++ {
31889 ++ struct vc_decor32 cfg_compat;
31890 ++ struct vc_decor cfg;
31891 ++
31892 ++ if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
31893 ++ return -EFAULT;
31894 ++
31895 ++ vc_decor_from_compat(cfg, cfg_compat);
31896 ++
31897 ++ return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
31898 ++ }
31899 ++
31900 ++ case FBIOCONDECOR_GETCFG32:
31901 ++ {
31902 ++ int rval;
31903 ++ struct vc_decor32 cfg_compat;
31904 ++ struct vc_decor cfg;
31905 ++
31906 ++ if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
31907 ++ return -EFAULT;
31908 ++ cfg.theme = compat_ptr(cfg_compat.theme);
31909 ++
31910 ++ rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
31911 ++
31912 ++ vc_decor_to_compat(cfg_compat, cfg);
31913 ++
31914 ++ if (copy_to_user((struct vc_decor32 __user *)data, &cfg_compat, sizeof(struct vc_decor32)))
31915 ++ return -EFAULT;
31916 ++ return rval;
31917 ++ }
31918 ++
31919 ++ case FBIOCONDECOR_SETSTATE32:
31920 ++ {
31921 ++ compat_uint_t state_compat = 0;
31922 ++ unsigned int state = 0;
31923 ++
31924 ++ if (get_user(state_compat, (compat_uint_t __user *)data))
31925 ++ return -EFAULT;
31926 ++
31927 ++ state = (unsigned int)state_compat;
31928 ++
31929 ++ return fbcon_decor_ioctl_dosetstate(vc, state, origin);
31930 ++ }
31931 ++
31932 ++ case FBIOCONDECOR_GETSTATE32:
31933 ++ {
31934 ++ compat_uint_t state_compat = 0;
31935 ++ unsigned int state = 0;
31936 ++
31937 ++ fbcon_decor_ioctl_dogetstate(vc, &state);
31938 ++ state_compat = (compat_uint_t)state;
31939 ++
31940 ++ return put_user(state_compat, (compat_uint_t __user *)data);
31941 ++ }
31942 ++
31943 ++ default:
31944 ++ return -ENOIOCTLCMD;
31945 ++ }
31946 ++}
31947 ++#else
31948 ++ #define fbcon_decor_compat_ioctl NULL
31949 ++#endif
31950 ++
31951 ++static struct file_operations fbcon_decor_ops = {
31952 ++ .owner = THIS_MODULE,
31953 ++ .unlocked_ioctl = fbcon_decor_ioctl,
31954 ++ .compat_ioctl = fbcon_decor_compat_ioctl
31955 ++};
31956 ++
31957 ++static struct miscdevice fbcon_decor_dev = {
31958 ++ .minor = MISC_DYNAMIC_MINOR,
31959 ++ .name = "fbcondecor",
31960 ++ .fops = &fbcon_decor_ops
31961 ++};
31962 ++
31963 ++void fbcon_decor_reset()
31964 ++{
31965 ++ int i;
31966 ++
31967 ++ for (i = 0; i < num_registered_fb; i++) {
31968 ++ registered_fb[i]->bgdecor.data = NULL;
31969 ++ registered_fb[i]->bgdecor.cmap.red = NULL;
31970 ++ }
31971 ++
31972 ++ for (i = 0; i < MAX_NR_CONSOLES && vc_cons[i].d; i++) {
31973 ++ vc_cons[i].d->vc_decor.state = vc_cons[i].d->vc_decor.twidth =
31974 ++ vc_cons[i].d->vc_decor.theight = 0;
31975 ++ vc_cons[i].d->vc_decor.theme = NULL;
31976 ++ }
31977 ++
31978 ++ return;
31979 ++}
31980 ++
31981 ++int fbcon_decor_init()
31982 ++{
31983 ++ int i;
31984 ++
31985 ++ fbcon_decor_reset();
31986 ++
31987 ++ if (initialized)
31988 ++ return 0;
31989 ++
31990 ++ i = misc_register(&fbcon_decor_dev);
31991 ++ if (i) {
31992 ++ printk(KERN_ERR "fbcondecor: failed to register device\n");
31993 ++ return i;
31994 ++ }
31995 ++
31996 ++ fbcon_decor_call_helper("init", 0);
31997 ++ initialized = 1;
31998 ++ return 0;
31999 ++}
32000 ++
32001 ++int fbcon_decor_exit(void)
32002 ++{
32003 ++ fbcon_decor_reset();
32004 ++ return 0;
32005 ++}
32006 ++
32007 ++EXPORT_SYMBOL(fbcon_decor_path);
32008 +diff --git a/drivers/video/console/fbcondecor.h b/drivers/video/console/fbcondecor.h
32009 +new file mode 100644
32010 +index 0000000..1d852dd
32011 +--- /dev/null
32012 ++++ b/drivers/video/console/fbcondecor.h
32013 +@@ -0,0 +1,78 @@
32014 ++/*
32015 ++ * linux/drivers/video/console/fbcondecor.h -- Framebuffer Console Decoration headers
32016 ++ *
32017 ++ * Copyright (C) 2004 Michal Januszewski <spock@g.o>
32018 ++ *
32019 ++ */
32020 ++
32021 ++#ifndef __FBCON_DECOR_H
32022 ++#define __FBCON_DECOR_H
32023 ++
32024 ++#ifndef _LINUX_FB_H
32025 ++#include <linux/fb.h>
32026 ++#endif
32027 ++
32028 ++/* This is needed for vc_cons in fbcmap.c */
32029 ++#include <linux/vt_kern.h>
32030 ++
32031 ++struct fb_cursor;
32032 ++struct fb_info;
32033 ++struct vc_data;
32034 ++
32035 ++#ifdef CONFIG_FB_CON_DECOR
32036 ++/* fbcondecor.c */
32037 ++int fbcon_decor_init(void);
32038 ++int fbcon_decor_exit(void);
32039 ++int fbcon_decor_call_helper(char* cmd, unsigned short cons);
32040 ++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw);
32041 ++
32042 ++/* cfbcondecor.c */
32043 ++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx);
32044 ++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor);
32045 ++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width);
32046 ++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only);
32047 ++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank);
32048 ++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width);
32049 ++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes, int srclinesbytes, int bpp);
32050 ++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc);
32051 ++
32052 ++/* vt.c */
32053 ++void acquire_console_sem(void);
32054 ++void release_console_sem(void);
32055 ++void do_unblank_screen(int entering_gfx);
32056 ++
32057 ++/* struct vc_data *y */
32058 ++#define fbcon_decor_active_vc(y) (y->vc_decor.state && y->vc_decor.theme)
32059 ++
32060 ++/* struct fb_info *x, struct vc_data *y */
32061 ++#define fbcon_decor_active_nores(x,y) (x->bgdecor.data && fbcon_decor_active_vc(y))
32062 ++
32063 ++/* struct fb_info *x, struct vc_data *y */
32064 ++#define fbcon_decor_active(x,y) (fbcon_decor_active_nores(x,y) && \
32065 ++ x->bgdecor.width == x->var.xres && \
32066 ++ x->bgdecor.height == x->var.yres && \
32067 ++ x->bgdecor.depth == x->var.bits_per_pixel)
32068 ++
32069 ++
32070 ++#else /* CONFIG_FB_CON_DECOR */
32071 ++
32072 ++static inline void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx) {}
32073 ++static inline void fbcon_decor_putc(struct vc_data *vc, struct fb_info *info, int c, int ypos, int xpos) {}
32074 ++static inline void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor) {}
32075 ++static inline void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width) {}
32076 ++static inline void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only) {}
32077 ++static inline void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank) {}
32078 ++static inline void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width) {}
32079 ++static inline void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc) {}
32080 ++static inline int fbcon_decor_call_helper(char* cmd, unsigned short cons) { return 0; }
32081 ++static inline int fbcon_decor_init(void) { return 0; }
32082 ++static inline int fbcon_decor_exit(void) { return 0; }
32083 ++static inline int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw) { return 0; }
32084 ++
32085 ++#define fbcon_decor_active_vc(y) (0)
32086 ++#define fbcon_decor_active_nores(x,y) (0)
32087 ++#define fbcon_decor_active(x,y) (0)
32088 ++
32089 ++#endif /* CONFIG_FB_CON_DECOR */
32090 ++
32091 ++#endif /* __FBCON_DECOR_H */
32092 +diff --git a/drivers/video/fbcmap.c b/drivers/video/fbcmap.c
32093 +index 5c3960d..162b5f4 100644
32094 +--- a/drivers/video/fbcmap.c
32095 ++++ b/drivers/video/fbcmap.c
32096 +@@ -17,6 +17,8 @@
32097 + #include <linux/slab.h>
32098 + #include <linux/uaccess.h>
32099 +
32100 ++#include "console/fbcondecor.h"
32101 ++
32102 + static u16 red2[] __read_mostly = {
32103 + 0x0000, 0xaaaa
32104 + };
32105 +@@ -249,14 +251,17 @@ int fb_set_cmap(struct fb_cmap *cmap, struct fb_info *info)
32106 + if (transp)
32107 + htransp = *transp++;
32108 + if (info->fbops->fb_setcolreg(start++,
32109 +- hred, hgreen, hblue,
32110 ++ hred, hgreen, hblue,
32111 + htransp, info))
32112 + break;
32113 + }
32114 + }
32115 +- if (rc == 0)
32116 ++ if (rc == 0) {
32117 + fb_copy_cmap(cmap, &info->cmap);
32118 +-
32119 ++ if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
32120 ++ info->fix.visual == FB_VISUAL_DIRECTCOLOR)
32121 ++ fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
32122 ++ }
32123 + return rc;
32124 + }
32125 +
32126 +diff --git a/drivers/video/fbmem.c b/drivers/video/fbmem.c
32127 +index c6ce416..7ce6640 100644
32128 +--- a/drivers/video/fbmem.c
32129 ++++ b/drivers/video/fbmem.c
32130 +@@ -1231,15 +1231,6 @@ struct fb_fix_screeninfo32 {
32131 + u16 reserved[3];
32132 + };
32133 +
32134 +-struct fb_cmap32 {
32135 +- u32 start;
32136 +- u32 len;
32137 +- compat_caddr_t red;
32138 +- compat_caddr_t green;
32139 +- compat_caddr_t blue;
32140 +- compat_caddr_t transp;
32141 +-};
32142 +-
32143 + static int fb_getput_cmap(struct fb_info *info, unsigned int cmd,
32144 + unsigned long arg)
32145 + {
32146 +diff --git a/include/linux/console_decor.h b/include/linux/console_decor.h
32147 +new file mode 100644
32148 +index 0000000..04b8d80
32149 +--- /dev/null
32150 ++++ b/include/linux/console_decor.h
32151 +@@ -0,0 +1,46 @@
32152 ++#ifndef _LINUX_CONSOLE_DECOR_H_
32153 ++#define _LINUX_CONSOLE_DECOR_H_ 1
32154 ++
32155 ++/* A structure used by the framebuffer console decorations (drivers/video/console/fbcondecor.c) */
32156 ++struct vc_decor {
32157 ++ __u8 bg_color; /* The color that is to be treated as transparent */
32158 ++ __u8 state; /* Current decor state: 0 = off, 1 = on */
32159 ++ __u16 tx, ty; /* Top left corner coordinates of the text field */
32160 ++ __u16 twidth, theight; /* Width and height of the text field */
32161 ++ char* theme;
32162 ++};
32163 ++
32164 ++#ifdef __KERNEL__
32165 ++#ifdef CONFIG_COMPAT
32166 ++#include <linux/compat.h>
32167 ++
32168 ++struct vc_decor32 {
32169 ++ __u8 bg_color; /* The color that is to be treated as transparent */
32170 ++ __u8 state; /* Current decor state: 0 = off, 1 = on */
32171 ++ __u16 tx, ty; /* Top left corner coordinates of the text field */
32172 ++ __u16 twidth, theight; /* Width and height of the text field */
32173 ++ compat_uptr_t theme;
32174 ++};
32175 ++
32176 ++#define vc_decor_from_compat(to, from) \
32177 ++ (to).bg_color = (from).bg_color; \
32178 ++ (to).state = (from).state; \
32179 ++ (to).tx = (from).tx; \
32180 ++ (to).ty = (from).ty; \
32181 ++ (to).twidth = (from).twidth; \
32182 ++ (to).theight = (from).theight; \
32183 ++ (to).theme = compat_ptr((from).theme)
32184 ++
32185 ++#define vc_decor_to_compat(to, from) \
32186 ++ (to).bg_color = (from).bg_color; \
32187 ++ (to).state = (from).state; \
32188 ++ (to).tx = (from).tx; \
32189 ++ (to).ty = (from).ty; \
32190 ++ (to).twidth = (from).twidth; \
32191 ++ (to).theight = (from).theight; \
32192 ++ (to).theme = ptr_to_compat((from).theme)
32193 ++
32194 ++#endif /* CONFIG_COMPAT */
32195 ++#endif /* __KERNEL__ */
32196 ++
32197 ++#endif
32198 +diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
32199 +index 7f0c329..98f5d60 100644
32200 +--- a/include/linux/console_struct.h
32201 ++++ b/include/linux/console_struct.h
32202 +@@ -19,6 +19,7 @@
32203 + struct vt_struct;
32204 +
32205 + #define NPAR 16
32206 ++#include <linux/console_decor.h>
32207 +
32208 + struct vc_data {
32209 + struct tty_port port; /* Upper level data */
32210 +@@ -107,6 +108,8 @@ struct vc_data {
32211 + unsigned long vc_uni_pagedir;
32212 + unsigned long *vc_uni_pagedir_loc; /* [!] Location of uni_pagedir variable for this console */
32213 + bool vc_panic_force_write; /* when oops/panic this VC can accept forced output/blanking */
32214 ++
32215 ++ struct vc_decor vc_decor;
32216 + /* additional information is in vt_kern.h */
32217 + };
32218 +
32219 +diff --git a/include/uapi/linux/fb.h b/include/uapi/linux/fb.h
32220 +index d31cb68..ad161bb 100644
32221 +--- a/include/uapi/linux/fb.h
32222 ++++ b/include/uapi/linux/fb.h
32223 +@@ -8,6 +8,25 @@
32224 +
32225 + #define FB_MAX 32 /* sufficient for now */
32226 +
32227 ++struct fbcon_decor_iowrapper
32228 ++{
32229 ++ unsigned short vc; /* Virtual console */
32230 ++ unsigned char origin; /* Point of origin of the request */
32231 ++ void *data;
32232 ++};
32233 ++
32234 ++#ifdef __KERNEL__
32235 ++#ifdef CONFIG_COMPAT
32236 ++#include <linux/compat.h>
32237 ++struct fbcon_decor_iowrapper32
32238 ++{
32239 ++ unsigned short vc; /* Virtual console */
32240 ++ unsigned char origin; /* Point of origin of the request */
32241 ++ compat_uptr_t data;
32242 ++};
32243 ++#endif /* CONFIG_COMPAT */
32244 ++#endif /* __KERNEL__ */
32245 ++
32246 + /* ioctls
32247 + 0x46 is 'F' */
32248 + #define FBIOGET_VSCREENINFO 0x4600
32249 +@@ -34,6 +53,24 @@
32250 + #define FBIOPUT_MODEINFO 0x4617
32251 + #define FBIOGET_DISPINFO 0x4618
32252 + #define FBIO_WAITFORVSYNC _IOW('F', 0x20, __u32)
32253 ++#define FBIOCONDECOR_SETCFG _IOWR('F', 0x19, struct fbcon_decor_iowrapper)
32254 ++#define FBIOCONDECOR_GETCFG _IOR('F', 0x1A, struct fbcon_decor_iowrapper)
32255 ++#define FBIOCONDECOR_SETSTATE _IOWR('F', 0x1B, struct fbcon_decor_iowrapper)
32256 ++#define FBIOCONDECOR_GETSTATE _IOR('F', 0x1C, struct fbcon_decor_iowrapper)
32257 ++#define FBIOCONDECOR_SETPIC _IOWR('F', 0x1D, struct fbcon_decor_iowrapper)
32258 ++#ifdef __KERNEL__
32259 ++#ifdef CONFIG_COMPAT
32260 ++#define FBIOCONDECOR_SETCFG32 _IOWR('F', 0x19, struct fbcon_decor_iowrapper32)
32261 ++#define FBIOCONDECOR_GETCFG32 _IOR('F', 0x1A, struct fbcon_decor_iowrapper32)
32262 ++#define FBIOCONDECOR_SETSTATE32 _IOWR('F', 0x1B, struct fbcon_decor_iowrapper32)
32263 ++#define FBIOCONDECOR_GETSTATE32 _IOR('F', 0x1C, struct fbcon_decor_iowrapper32)
32264 ++#define FBIOCONDECOR_SETPIC32 _IOWR('F', 0x1D, struct fbcon_decor_iowrapper32)
32265 ++#endif /* CONFIG_COMPAT */
32266 ++#endif /* __KERNEL__ */
32267 ++
32268 ++#define FBCON_DECOR_THEME_LEN	128	/* Maximum length of a theme name */
32269 ++#define FBCON_DECOR_IO_ORIG_KERNEL 0 /* Kernel ioctl origin */
32270 ++#define FBCON_DECOR_IO_ORIG_USER 1 /* User ioctl origin */
32271 +
32272 + #define FB_TYPE_PACKED_PIXELS 0 /* Packed Pixels */
32273 + #define FB_TYPE_PLANES 1 /* Non interleaved planes */
32274 +@@ -286,6 +323,28 @@ struct fb_cmap {
32275 + __u16 *transp; /* transparency, can be NULL */
32276 + };
32277 +
32278 ++#ifdef __KERNEL__
32279 ++#ifdef CONFIG_COMPAT
32280 ++struct fb_cmap32 {
32281 ++ __u32 start;
32282 ++ __u32 len; /* Number of entries */
32283 ++ compat_uptr_t red; /* Red values */
32284 ++ compat_uptr_t green;
32285 ++ compat_uptr_t blue;
32286 ++ compat_uptr_t transp; /* transparency, can be NULL */
32287 ++};
32288 ++
32289 ++#define fb_cmap_from_compat(to, from) \
32290 ++ (to).start = (from).start; \
32291 ++ (to).len = (from).len; \
32292 ++ (to).red = compat_ptr((from).red); \
32293 ++ (to).green = compat_ptr((from).green); \
32294 ++ (to).blue = compat_ptr((from).blue); \
32295 ++ (to).transp = compat_ptr((from).transp)
32296 ++
32297 ++#endif /* CONFIG_COMPAT */
32298 ++#endif /* __KERNEL__ */
32299 ++
32300 + struct fb_con2fbmap {
32301 + __u32 console;
32302 + __u32 framebuffer;
32303 +@@ -367,6 +426,34 @@ struct fb_image {
32304 + struct fb_cmap cmap; /* color map info */
32305 + };
32306 +
32307 ++#ifdef __KERNEL__
32308 ++#ifdef CONFIG_COMPAT
32309 ++struct fb_image32 {
32310 ++ __u32 dx; /* Where to place image */
32311 ++ __u32 dy;
32312 ++ __u32 width; /* Size of image */
32313 ++ __u32 height;
32314 ++ __u32 fg_color; /* Only used when a mono bitmap */
32315 ++ __u32 bg_color;
32316 ++ __u8 depth; /* Depth of the image */
32317 ++ const compat_uptr_t data; /* Pointer to image data */
32318 ++ struct fb_cmap32 cmap; /* color map info */
32319 ++};
32320 ++
32321 ++#define fb_image_from_compat(to, from) \
32322 ++ (to).dx = (from).dx; \
32323 ++ (to).dy = (from).dy; \
32324 ++ (to).width = (from).width; \
32325 ++ (to).height = (from).height; \
32326 ++ (to).fg_color = (from).fg_color; \
32327 ++ (to).bg_color = (from).bg_color; \
32328 ++ (to).depth = (from).depth; \
32329 ++ (to).data = compat_ptr((from).data); \
32330 ++ fb_cmap_from_compat((to).cmap, (from).cmap)
32331 ++
32332 ++#endif /* CONFIG_COMPAT */
32333 ++#endif /* __KERNEL__ */
32334 ++
32335 + /*
32336 + * hardware cursor control
32337 + */
32338 +
32339 +diff --git a/include/linux/fb.h b/include/linux/fb.h
32340 +index d31cb68..ad161bb 100644
32341 +--- a/include/linux/fb.h
32342 ++++ b/include/linux/fb.h
32343 +@@ -488,5 +488,8 @@ #define FBINFO_STATE_SUSPENDED 1
32344 + u32 state; /* Hardware state i.e suspend */
32345 + void *fbcon_par; /* fbcon use-only private area */
32346 ++
32347 ++ struct fb_image bgdecor;
32348 ++
32349 + /* From here on everything is device dependent */
32350 + void *par;
32351 + /* we need the PCI or similar aperture base/size not
32352 +
32353 +diff --git a/kernel/sysctl.c b/kernel/sysctl.c
32354 +index 4ab1187..6561627 100644
32355 +--- a/kernel/sysctl.c
32356 ++++ b/kernel/sysctl.c
32357 +@@ -145,6 +145,10 @@ static int min_percpu_pagelist_fract = 8;
32358 + static int ngroups_max = NGROUPS_MAX;
32359 + static const int cap_last_cap = CAP_LAST_CAP;
32360 +
32361 ++#ifdef CONFIG_FB_CON_DECOR
32362 ++extern char fbcon_decor_path[];
32363 ++#endif
32364 ++
32365 + #ifdef CONFIG_INOTIFY_USER
32366 + #include <linux/inotify.h>
32367 + #endif
32368 +@@ -248,6 +252,15 @@ static struct ctl_table sysctl_base_table[] = {
32369 + .mode = 0555,
32370 + .child = dev_table,
32371 + },
32372 ++#ifdef CONFIG_FB_CON_DECOR
32373 ++ {
32374 ++ .procname = "fbcondecor",
32375 ++ .data = &fbcon_decor_path,
32376 ++ .maxlen = KMOD_PATH_LEN,
32377 ++ .mode = 0644,
32378 ++ .proc_handler = &proc_dostring,
32379 ++ },
32380 ++#endif
32381 + { }
32382 + };
32383 +
32384 +@@ -1091,7 +1104,7 @@ static struct ctl_table vm_table[] = {
32385 + .proc_handler = proc_dointvec,
32386 + },
32387 + {
32388 +- .procname = "page-cluster",
32389 ++ .procname = "page-cluster",
32390 + .data = &page_cluster,
32391 + .maxlen = sizeof(int),
32392 + .mode = 0644,
32393 +@@ -1535,7 +1548,7 @@ static struct ctl_table fs_table[] = {
32394 + .mode = 0555,
32395 + .child = inotify_table,
32396 + },
32397 +-#endif
32398 ++#endif
32399 + #ifdef CONFIG_EPOLL
32400 + {
32401 + .procname = "epoll",
32402 +@@ -1873,12 +1886,12 @@ static int __do_proc_dointvec(void *tbl_data, struct ctl_table *table,
32403 + unsigned long page = 0;
32404 + size_t left;
32405 + char *kbuf;
32406 +-
32407 ++
32408 + if (!tbl_data || !table->maxlen || !*lenp || (*ppos && !write)) {
32409 + *lenp = 0;
32410 + return 0;
32411 + }
32412 +-
32413 ++
32414 + i = (int *) tbl_data;
32415 + vleft = table->maxlen / sizeof(*i);
32416 + left = *lenp;
32417 +@@ -1967,7 +1980,7 @@ static int do_proc_dointvec(struct ctl_table *table, int write,
32418 + * @ppos: file position
32419 + *
32420 + * Reads/writes up to table->maxlen/sizeof(unsigned int) integer
32421 +- * values from/to the user buffer, treated as an ASCII string.
32422 ++ * values from/to the user buffer, treated as an ASCII string.
32423 + *
32424 + * Returns 0 on success.
32425 + */
32426 +@@ -2326,7 +2339,7 @@ static int do_proc_dointvec_ms_jiffies_conv(bool *negp, unsigned long *lvalp,
32427 + * @ppos: file position
32428 + *
32429 + * Reads/writes up to table->maxlen/sizeof(unsigned int) integer
32430 +- * values from/to the user buffer, treated as an ASCII string.
32431 ++ * values from/to the user buffer, treated as an ASCII string.
32432 + * The values read are assumed to be in seconds, and are converted into
32433 + * jiffies.
32434 + *
32435 +@@ -2348,8 +2361,8 @@ int proc_dointvec_jiffies(struct ctl_table *table, int write,
32436 + * @ppos: pointer to the file position
32437 + *
32438 + * Reads/writes up to table->maxlen/sizeof(unsigned int) integer
32439 +- * values from/to the user buffer, treated as an ASCII string.
32440 +- * The values read are assumed to be in 1/USER_HZ seconds, and
32441 ++ * values from/to the user buffer, treated as an ASCII string.
32442 ++ * The values read are assumed to be in 1/USER_HZ seconds, and
32443 + * are converted into jiffies.
32444 + *
32445 + * Returns 0 on success.
32446 +@@ -2371,8 +2384,8 @@ int proc_dointvec_userhz_jiffies(struct ctl_table *table, int write,
32447 + * @ppos: the current position in the file
32448 + *
32449 + * Reads/writes up to table->maxlen/sizeof(unsigned int) integer
32450 +- * values from/to the user buffer, treated as an ASCII string.
32451 +- * The values read are assumed to be in 1/1000 seconds, and
32452 ++ * values from/to the user buffer, treated as an ASCII string.
32453 ++ * The values read are assumed to be in 1/1000 seconds, and
32454 + * are converted into jiffies.
32455 + *
32456 + * Returns 0 on success.
32457 +--
32458 +1.7.10
32459 +
32460
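For reference, a minimal userspace sketch of the fbcondecor ioctl interface introduced by the patch above. It assumes the misc device registered as "fbcondecor" is exposed as /dev/fbcondecor, and it re-declares the wrapper struct and ioctl number locally so the sketch stays self-contained; on a tree carrying this patch both would normally come from the patched <linux/fb.h>.

/* Sketch: turn decorations on for a virtual console via FBIOCONDECOR_SETSTATE.
 * The struct layout and ioctl number mirror the uapi additions in the patch;
 * /dev/fbcondecor is assumed to be the node created for the misc device. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct fbcon_decor_iowrapper {
	unsigned short vc;	/* virtual console number */
	unsigned char origin;	/* 1 = FBCON_DECOR_IO_ORIG_USER */
	void *data;		/* here: pointer to an unsigned int state */
};

#define FBIOCONDECOR_SETSTATE _IOWR('F', 0x1B, struct fbcon_decor_iowrapper)

int main(void)
{
	unsigned int state = 1;	/* 1 = decor on, 0 = off */
	struct fbcon_decor_iowrapper w = { .vc = 1, .origin = 1, .data = &state };
	int fd = open("/dev/fbcondecor", O_RDWR);

	if (fd < 0 || ioctl(fd, FBIOCONDECOR_SETSTATE, &w) < 0) {
		perror("fbcondecor");
		return 1;
	}
	close(fd);
	return 0;
}

Note that enabling only succeeds once a theme and background picture have been configured (FBIOCONDECOR_SETCFG / FBIOCONDECOR_SETPIC), and that the helper path used by the kernel side can be changed at runtime through the sysctl entry the patch adds (proc_dostring on fbcon_decor_path).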
32461 Added: genpatches-2.6/trunk/3.10.7/4500_nouveau-video-output-control-Kconfig.patch
32462 ===================================================================
32463 --- genpatches-2.6/trunk/3.10.7/4500_nouveau-video-output-control-Kconfig.patch (rev 0)
32464 +++ genpatches-2.6/trunk/3.10.7/4500_nouveau-video-output-control-Kconfig.patch 2013-08-29 12:09:12 UTC (rev 2497)
32465 @@ -0,0 +1,10 @@
32466 +--- a/drivers/gpu/drm/nouveau/Kconfig
32467 ++++ b/drivers/gpu/drm/nouveau/Kconfig
32468 +@@ -15,6 +15,7 @@ config DRM_NOUVEAU
32469 + select ACPI_WMI if ACPI && X86
32470 + select MXM_WMI if ACPI && X86
32471 + select POWER_SUPPLY
32472 ++ select VIDEO_OUTPUT_CONTROL
32473 + help
32474 + Choose this option for open-source nVidia support.
32475 +
32476
32477 Added: genpatches-2.6/trunk/3.10.7/4567_distro-Gentoo-Kconfig.patch
32478 ===================================================================
32479 --- genpatches-2.6/trunk/3.10.7/4567_distro-Gentoo-Kconfig.patch (rev 0)
32480 +++ genpatches-2.6/trunk/3.10.7/4567_distro-Gentoo-Kconfig.patch 2013-08-29 12:09:12 UTC (rev 2497)
32481 @@ -0,0 +1,113 @@
32482 +--- a/Kconfig
32483 ++++ b/Kconfig
32484 +@@ -8,4 +8,6 @@ config SRCARCH
32485 + string
32486 + option env="SRCARCH"
32487 +
32488 ++source "distro/Kconfig"
32489 ++
32490 + source "arch/$SRCARCH/Kconfig"
32491 +--- /dev/null
32492 ++++ b/distro/Kconfig
32493 +@@ -0,0 +1,101 @@
32494 ++menu "Gentoo Linux"
32495 ++
32496 ++config GENTOO_LINUX
32497 ++ bool "Gentoo Linux support"
32498 ++
32499 ++ default y
32500 ++
32501 ++config GENTOO_LINUX_UDEV
32502 ++ bool "Linux dynamic and persistent device naming (userspace devfs) support"
32503 ++
32504 ++ depends on GENTOO_LINUX
32505 ++ default y if GENTOO_LINUX
32506 ++
32507 ++ select DEVTMPFS
32508 ++ select TMPFS
32509 ++
32510 ++ select MMU
32511 ++ select HOTPLUG
32512 ++ select SHMEM
32513 ++
32514 ++ help
32515 ++	  In order to boot Gentoo Linux, a minimal set of config settings needs to
32516 ++	  be enabled in the kernel; to save users from having to enable them
32517 ++	  manually as part of a Gentoo Linux installation or a new clean config,
32518 ++	  we enable these config settings by default for convenience.
32519 ++
32520 ++ Currently this only selects TMPFS, DEVTMPFS and their dependencies.
32521 ++ TMPFS is enabled to maintain a tmpfs file system at /dev/shm, /run and
32522 ++ /sys/fs/cgroup; DEVTMPFS to maintain a devtmpfs file system at /dev.
32523 ++
32524 ++	  Some of these provide critical files that need to be available early in
32525 ++	  the boot process; without them, sysfs and udev will malfunction.
32526 ++
32527 ++ To ensure Gentoo Linux boots, it is best to leave this option enabled;
32528 ++ if you run a custom setup, you could consider whether to disable this.
32529 ++
32530 ++menu "Support for init systems, system and service managers"
32531 ++ visible if GENTOO_LINUX
32532 ++
32533 ++config GENTOO_LINUX_INIT_SCRIPT
32534 ++ bool "OpenRC, runit and other script based systems and managers"
32535 ++
32536 ++ default y if GENTOO_LINUX
32537 ++
32538 ++ depends on GENTOO_LINUX
32539 ++
32540 ++ select BINFMT_SCRIPT
32541 ++
32542 ++ help
32543 ++	  The init system is the first thing that loads after the kernel has booted.
32544 ++
32545 ++	  These config settings allow you to select which init systems to support;
32546 ++	  instead of having to select all the individual settings all over the
32547 ++	  place, they let you select them all at once.
32548 ++
32549 ++ This particular setting enables all the known requirements for OpenRC,
32550 ++ runit and similar script based systems and managers.
32551 ++
32552 ++ If you are unsure about this, it is best to leave this option enabled.
32553 ++
32554 ++config GENTOO_LINUX_INIT_SYSTEMD
32555 ++ bool "systemd"
32556 ++
32557 ++ default n
32558 ++
32559 ++ depends on GENTOO_LINUX && GENTOO_LINUX_UDEV
32560 ++
32561 ++ select AUTOFS4_FS
32562 ++ select BLK_DEV_BSG
32563 ++ select CGROUPS
32564 ++ select EPOLL
32565 ++ select FANOTIFY
32566 ++ select HOTPLUG
32567 ++ select INOTIFY_USER
32568 ++ select IPV6
32569 ++ select NET
32570 ++ select PROC_FS
32571 ++ select SIGNALFD
32572 ++ select SYSFS
32573 ++ select TIMERFD
32574 ++
32575 ++ select ANON_INODES
32576 ++ select BLOCK
32577 ++ select EVENTFD
32578 ++ select FSNOTIFY
32579 ++ select INET
32580 ++ select NLATTR
32581 ++
32582 ++ help
32583 ++	  The init system is the first thing that loads after the kernel has booted.
32584 ++
32585 ++	  These config settings allow you to select which init systems to support;
32586 ++	  instead of having to select all the individual settings all over the
32587 ++	  place, they let you select them all at once.
32588 ++
32589 ++ This particular setting enables all the known requirements for systemd;
32590 ++	  it also enables the optional settings that the package suggests.
32591 ++
32592 ++endmenu
32593 ++
32594 ++endmenu