Author: mpagano
Date: 2014-06-09 17:53:50 +0000 (Mon, 09 Jun 2014)
New Revision: 2821

Added:
genpatches-2.6/trunk/3.4/1091_linux-3.4.92.patch
Removed:
genpatches-2.6/trunk/3.4/1501-futex-add-another-early-deadlock-detection-check.patch
genpatches-2.6/trunk/3.4/1502-futex-prevent-attaching-to-kernel-threads.patch
genpatches-2.6/trunk/3.4/1503-futex-prevent-requeue-pi-on-same-futex-patch-futex-forbid-uaddr-uaddr2-in-futex_requeue-requeue_pi-1.patch
genpatches-2.6/trunk/3.4/1504-futex-validate-atomic-acquisition-in-futex_lock_pi_atomic.patch
genpatches-2.6/trunk/3.4/1505-futex-always-cleanup-owner-tid-in-unlock_pi.patch
genpatches-2.6/trunk/3.4/1506-futex-make-lookup_pi_state-more-robust.patch
genpatches-2.6/trunk/3.4/2700_thinkpad-acpi_fix-issuing-duplicated-keyevents-for-brightness.patch
Modified:
genpatches-2.6/trunk/3.4/0000_README
Log:
Linux patch 3.4.92. Removal of redundant patches.

Modified: genpatches-2.6/trunk/3.4/0000_README
===================================================================
--- genpatches-2.6/trunk/3.4/0000_README	2014-06-09 12:35:00 UTC (rev 2820)
+++ genpatches-2.6/trunk/3.4/0000_README	2014-06-09 17:53:50 UTC (rev 2821)
@@ -403,6 +403,10 @@
 From:   http://www.kernel.org
 Desc:   Linux 3.4.91
 
+Patch:  1091_linux-3.4.92.patch
+From:   http://www.kernel.org
+Desc:   Linux 3.4.92
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
@@ -411,30 +415,6 @@
 From:   https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=6a96e15096da6e7491107321cfa660c7c2aa119d
 Desc:   selinux: add SOCK_DIAG_BY_FAMILY to the list of netlink message types
 
-Patch:  1501-futex-add-another-early-deadlock-detection-check.patch
-From:   https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=866293ee54227584ffcb4a42f69c1f365974ba7f
-Desc:   CVE-2014-3153
-
-Patch:  1502-futex-prevent-attaching-to-kernel-threads.patch
-From:   https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=f0d71b3dcb8332f7971b5f2363632573e6d9486a
-Desc:   CVE-2014-3153
-
-Patch:  1503-futex-prevent-requeue-pi-on-same-futex-patch-futex-forbid-uaddr-uaddr2-in-futex_requeue-requeue_pi-1.patch
-From:   https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=e9c243a5a6de0be8e584c604d353412584b592f8
-Desc:   CVE-2014-3153
-
-Patch:  1504-futex-validate-atomic-acquisition-in-futex_lock_pi_atomic.patch
-From:   https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=b3eaa9fc5cd0a4d74b18f6b8dc617aeaf1873270
-Desc:   CVE-2014-3153
-
-Patch:  1505-futex-always-cleanup-owner-tid-in-unlock_pi.patch
-From:   https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=13fbca4c6ecd96ec1a1cfa2e4f2ce191fe928a5e
-Desc:   CVE-2014-3153
-
-Patch:  1506-futex-make-lookup_pi_state-more-robust.patch
-From:   https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=54a217887a7b658e2650c3feff22756ab80c7339
-Desc:   CVE-2014-3153
-
 Patch:  1512_af_key-initialize-satype-in-key_notify_policy_flush.patch
 From:   https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=85dfb745ee40232876663ae206cba35f24ab2a40
 Desc:   af_key: initialize satype in key_notify_policy_flush()
@@ -459,10 +439,6 @@
 From:   Seth Forshee <seth.forshee@×××××××××.com>
 Desc:   ACPI: Disable Windows 8 compatibility for some Lenovo ThinkPads
 
-Patch:  2700_thinkpad-acpi_fix-issuing-duplicated-keyevents-for-brightness.patch
-From:   http://www.spinics.net/lists/ibm-acpi-devel/msg02805.html
-Desc:   thinkpad-acpi: fix issuing duplicated key events for brightness up/down
-
 Patch:  4200_fbcondecor-0.9.6.patch
 From:   http://dev.gentoo.org/~spock
 Desc:   Bootsplash successor by Michal Januszewski ported by Alexxy

Added: genpatches-2.6/trunk/3.4/1091_linux-3.4.92.patch
===================================================================
--- genpatches-2.6/trunk/3.4/1091_linux-3.4.92.patch	                        (rev 0)
+++ genpatches-2.6/trunk/3.4/1091_linux-3.4.92.patch	2014-06-09 17:53:50 UTC (rev 2821)
@@ -0,0 +1,7740 @@
+diff --git a/Documentation/i2c/busses/i2c-piix4 b/Documentation/i2c/busses/i2c-piix4
+index 475bb4ae0720..65da15796ed3 100644
+--- a/Documentation/i2c/busses/i2c-piix4
++++ b/Documentation/i2c/busses/i2c-piix4
+@@ -8,7 +8,7 @@ Supported adapters:
+     Datasheet: Only available via NDA from ServerWorks
+   * ATI IXP200, IXP300, IXP400, SB600, SB700 and SB800 southbridges
+     Datasheet: Not publicly available
+-  * AMD Hudson-2
++  * AMD Hudson-2, CZ
+     Datasheet: Not publicly available
+   * Standard Microsystems (SMSC) SLC90E66 (Victory66) southbridge
+     Datasheet: Publicly available at the SMSC website http://www.smsc.com
+diff --git a/Documentation/ja_JP/HOWTO b/Documentation/ja_JP/HOWTO
+index 050d37fe6d40..46ed73593465 100644
+--- a/Documentation/ja_JP/HOWTO
++++ b/Documentation/ja_JP/HOWTO
+@@ -315,7 +315,7 @@ Andrew Morton が Linux-kernel メーリングリストにカーネルリリー
+ もし、2.6.x.y カーネルが存在しない場合には、番号が一番大きい 2.6.x が
+ 最新の安定版カーネルです。
+ 
+-2.6.x.y は "stable" チーム <stable@××××××.org> でメンテされており、必
++2.6.x.y は "stable" チーム <stable@×××××××××××.org> でメンテされており、必
+ 要に応じてリリースされます。通常のリリース期間は 2週間毎ですが、差し迫っ
+ た問題がなければもう少し長くなることもあります。セキュリティ関連の問題
+ の場合はこれに対してだいたいの場合、すぐにリリースがされます。
+diff --git a/Documentation/ja_JP/stable_kernel_rules.txt b/Documentation/ja_JP/stable_kernel_rules.txt
+index 14265837c4ce..9dbda9b5d21e 100644
+--- a/Documentation/ja_JP/stable_kernel_rules.txt
++++ b/Documentation/ja_JP/stable_kernel_rules.txt
+@@ -50,16 +50,16 @@ linux-2.6.29/Documentation/stable_kernel_rules.txt
+ 
+ -stable ツリーにパッチを送付する手続き-
+ 
+- - 上記の規則に従っているかを確認した後に、stable@××××××.org にパッチ
++ - 上記の規則に従っているかを確認した後に、stable@×××××××××××.org にパッチ
+    を送る。
+  - 送信者はパッチがキューに受け付けられた際には ACK を、却下された場合
+    には NAK を受け取る。この反応は開発者たちのスケジュールによって、数
+    日かかる場合がある。
+  - もし受け取られたら、パッチは他の開発者たちと関連するサブシステムの
+    メンテナーによるレビューのために -stable キューに追加される。
+- - パッチに stable@××××××.org のアドレスが付加されているときには、それ
++ - パッチに stable@×××××××××××.org のアドレスが付加されているときには、それ
+    が Linus のツリーに入る時に自動的に stable チームに email される。
+- - セキュリティパッチはこのエイリアス (stable@××××××.org) に送られるべ
++ - セキュリティパッチはこのエイリアス (stable@×××××××××××.org) に送られるべ
+    きではなく、代わりに security@××××××.org のアドレスに送られる。
+ 
+ レビューサイクル-
+diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
+index 753d18ae0105..63740dae90a0 100644
+--- a/Documentation/kernel-parameters.txt
++++ b/Documentation/kernel-parameters.txt
+@@ -773,6 +773,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
+ 	edd=		[EDD]
+ 			Format: {"off" | "on" | "skip[mbr]"}
+ 
++	efi_no_storage_paranoia [EFI; X86]
++			Using this parameter you can use more than 50% of
++			your efi variable storage. Use this parameter only if
++			you are really sure that your UEFI does sane gc and
++			fulfills the spec otherwise your board may brick.
++
+ 	eisa_irq_edge=	[PARISC,HW]
+ 			See header of drivers/parisc/eisa.c.
+ 
+@@ -987,6 +993,20 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
+ 	i8k.restricted	[HW] Allow controlling fans only if SYS_ADMIN
+ 			capability is set.
+ 
++	i915.invert_brightness=
++			[DRM] Invert the sense of the variable that is used to
++			set the brightness of the panel backlight. Normally a
++			brightness value of 0 indicates backlight switched off,
++			and the maximum of the brightness value sets the backlight
++			to maximum brightness. If this parameter is set to 0
++			(default) and the machine requires it, or this parameter
++			is set to 1, a brightness value of 0 sets the backlight
++			to maximum brightness, and the maximum of the brightness
++			value switches the backlight off.
++			-1 -- never invert brightness
++			 0 -- machine default
++			 1 -- force brightness inversion
++
+ 	icn=		[HW,ISDN]
+ 			Format: <io>[,<membase>[,<icn_id>[,<icn_id2>]]]
+ 
+diff --git a/Documentation/zh_CN/HOWTO b/Documentation/zh_CN/HOWTO
+index 7fba5aab9ef9..7599eb38b764 100644
+--- a/Documentation/zh_CN/HOWTO
++++ b/Documentation/zh_CN/HOWTO
+@@ -237,7 +237,7 @@ kernel.org网站的pub/linux/kernel/v2.6/目录下找到它。它的开发遵循
+ 如果没有2.6.x.y版本内核存在,那么最新的2.6.x版本内核就相当于是当前的稳定
+ 版内核。
+ 
+-2.6.x.y版本由“稳定版”小组(邮件地址<stable@××××××.org>)维护,一般隔周发
++2.6.x.y版本由“稳定版”小组(邮件地址<stable@×××××××××××.org>)维护,一般隔周发
+ 布新版本。
+ 
+ 内核源码中的Documentation/stable_kernel_rules.txt文件具体描述了可被稳定
+diff --git a/Documentation/zh_CN/stable_kernel_rules.txt b/Documentation/zh_CN/stable_kernel_rules.txt
+index b5b9b0ab02fd..26ea5ed7cd9c 100644
+--- a/Documentation/zh_CN/stable_kernel_rules.txt
++++ b/Documentation/zh_CN/stable_kernel_rules.txt
+@@ -42,7 +42,7 @@ Documentation/stable_kernel_rules.txt 的中文翻译
+ 
+ 向稳定版代码树提交补丁的过程:
+ 
+-  - 在确认了补丁符合以上的规则后,将补丁发送到stable@××××××.org。
++  - 在确认了补丁符合以上的规则后,将补丁发送到stable@×××××××××××.org。
+   - 如果补丁被接受到队列里,发送者会收到一个ACK回复,如果没有被接受,收
+     到的是NAK回复。回复需要几天的时间,这取决于开发者的时间安排。
+   - 被接受的补丁会被加到稳定版本队列里,等待其他开发者的审查。
+diff --git a/Makefile b/Makefile
+index 16899b9ba84f..513b460b49f9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 3
+ PATCHLEVEL = 4
+-SUBLEVEL = 91
++SUBLEVEL = 92
+ EXTRAVERSION =
+ NAME = Saber-toothed Squirrel
+ 
+diff --git a/arch/arm/kernel/crash_dump.c b/arch/arm/kernel/crash_dump.c
+index 90c50d4b43f7..5d1286d51154 100644
+--- a/arch/arm/kernel/crash_dump.c
++++ b/arch/arm/kernel/crash_dump.c
+@@ -39,7 +39,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
+ 	if (!csize)
+ 		return 0;
+ 
+-	vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE);
++	vaddr = ioremap(__pfn_to_phys(pfn), PAGE_SIZE);
+ 	if (!vaddr)
+ 		return -ENOMEM;
+ 
+diff --git a/arch/parisc/kernel/syscall_table.S b/arch/parisc/kernel/syscall_table.S
+index 3735abd7f8f6..4014d9064be1 100644
+--- a/arch/parisc/kernel/syscall_table.S
++++ b/arch/parisc/kernel/syscall_table.S
+@@ -395,7 +395,7 @@
+ 	ENTRY_COMP(vmsplice)
+ 	ENTRY_COMP(move_pages)		/* 295 */
+ 	ENTRY_SAME(getcpu)
+-	ENTRY_SAME(epoll_pwait)
++	ENTRY_COMP(epoll_pwait)
+ 	ENTRY_COMP(statfs64)
+ 	ENTRY_COMP(fstatfs64)
+ 	ENTRY_COMP(kexec_load)		/* 300 */
+diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
+index a9ce135893f8..3ec8b394146d 100644
+--- a/arch/s390/crypto/aes_s390.c
++++ b/arch/s390/crypto/aes_s390.c
+@@ -35,7 +35,6 @@ static u8 *ctrblk;
+ static char keylen_flag;
+ 
+ struct s390_aes_ctx {
+-	u8 iv[AES_BLOCK_SIZE];
+ 	u8 key[AES_MAX_KEY_SIZE];
+ 	long enc;
+ 	long dec;
+@@ -56,8 +55,7 @@ struct pcc_param {
+ 
+ struct s390_xts_ctx {
+ 	u8 key[32];
+-	u8 xts_param[16];
+-	struct pcc_param pcc;
++	u8 pcc_key[32];
+ 	long enc;
+ 	long dec;
+ 	int key_len;
+@@ -442,29 +440,35 @@ static int cbc_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+ 	return aes_set_key(tfm, in_key, key_len);
+ }
+ 
+-static int cbc_aes_crypt(struct blkcipher_desc *desc, long func, void *param,
++static int cbc_aes_crypt(struct blkcipher_desc *desc, long func,
+ 			 struct blkcipher_walk *walk)
+ {
++	struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(desc->tfm);
+ 	int ret = blkcipher_walk_virt(desc, walk);
+ 	unsigned int nbytes = walk->nbytes;
++	struct {
++		u8 iv[AES_BLOCK_SIZE];
++		u8 key[AES_MAX_KEY_SIZE];
++	} param;
+ 
+ 	if (!nbytes)
+ 		goto out;
+ 
+-	memcpy(param, walk->iv, AES_BLOCK_SIZE);
++	memcpy(param.iv, walk->iv, AES_BLOCK_SIZE);
++	memcpy(param.key, sctx->key, sctx->key_len);
+ 	do {
+ 		/* only use complete blocks */
+ 		unsigned int n = nbytes & ~(AES_BLOCK_SIZE - 1);
+ 		u8 *out = walk->dst.virt.addr;
+ 		u8 *in = walk->src.virt.addr;
+ 
+-		ret = crypt_s390_kmc(func, param, out, in, n);
++		ret = crypt_s390_kmc(func, &param, out, in, n);
+ 		BUG_ON((ret < 0) || (ret != n));
+ 
+ 		nbytes &= AES_BLOCK_SIZE - 1;
+ 		ret = blkcipher_walk_done(desc, walk, nbytes);
+ 	} while ((nbytes = walk->nbytes));
+-	memcpy(walk->iv, param, AES_BLOCK_SIZE);
++	memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);
+ 
+ out:
+ 	return ret;
+@@ -481,7 +485,7 @@ static int cbc_aes_encrypt(struct blkcipher_desc *desc,
+ 		return fallback_blk_enc(desc, dst, src, nbytes);
+ 
+ 	blkcipher_walk_init(&walk, dst, src, nbytes);
+-	return cbc_aes_crypt(desc, sctx->enc, sctx->iv, &walk);
++	return cbc_aes_crypt(desc, sctx->enc, &walk);
+ }
+ 
+ static int cbc_aes_decrypt(struct blkcipher_desc *desc,
+@@ -495,7 +499,7 @@ static int cbc_aes_decrypt(struct blkcipher_desc *desc,
+ 		return fallback_blk_dec(desc, dst, src, nbytes);
+ 
+ 	blkcipher_walk_init(&walk, dst, src, nbytes);
+-	return cbc_aes_crypt(desc, sctx->dec, sctx->iv, &walk);
++	return cbc_aes_crypt(desc, sctx->dec, &walk);
+ }
+ 
+ static struct crypto_alg cbc_aes_alg = {
+@@ -587,7 +591,7 @@ static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+ 		xts_ctx->enc = KM_XTS_128_ENCRYPT;
+ 		xts_ctx->dec = KM_XTS_128_DECRYPT;
+ 		memcpy(xts_ctx->key + 16, in_key, 16);
+-		memcpy(xts_ctx->pcc.key + 16, in_key + 16, 16);
++		memcpy(xts_ctx->pcc_key + 16, in_key + 16, 16);
+ 		break;
+ 	case 48:
+ 		xts_ctx->enc = 0;
+@@ -598,7 +602,7 @@ static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+ 		xts_ctx->enc = KM_XTS_256_ENCRYPT;
+ 		xts_ctx->dec = KM_XTS_256_DECRYPT;
+ 		memcpy(xts_ctx->key, in_key, 32);
+-		memcpy(xts_ctx->pcc.key, in_key + 32, 32);
++		memcpy(xts_ctx->pcc_key, in_key + 32, 32);
+ 		break;
+ 	default:
+ 		*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+@@ -617,28 +621,32 @@ static int xts_aes_crypt(struct blkcipher_desc *desc, long func,
+ 	unsigned int nbytes = walk->nbytes;
+ 	unsigned int n;
+ 	u8 *in, *out;
+-	void *param;
++	struct pcc_param pcc_param;
++	struct {
++		u8 key[32];
++		u8 init[16];
++	} xts_param;
+ 
+ 	if (!nbytes)
+ 		goto out;
+ 
+-	memset(xts_ctx->pcc.block, 0, sizeof(xts_ctx->pcc.block));
+-	memset(xts_ctx->pcc.bit, 0, sizeof(xts_ctx->pcc.bit));
+-	memset(xts_ctx->pcc.xts, 0, sizeof(xts_ctx->pcc.xts));
+-	memcpy(xts_ctx->pcc.tweak, walk->iv, sizeof(xts_ctx->pcc.tweak));
+-	param = xts_ctx->pcc.key + offset;
+-	ret = crypt_s390_pcc(func, param);
++	memset(pcc_param.block, 0, sizeof(pcc_param.block));
++	memset(pcc_param.bit, 0, sizeof(pcc_param.bit));
++	memset(pcc_param.xts, 0, sizeof(pcc_param.xts));
++	memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
++	memcpy(pcc_param.key, xts_ctx->pcc_key, 32);
++	ret = crypt_s390_pcc(func, &pcc_param.key[offset]);
+ 	BUG_ON(ret < 0);
+ 
+-	memcpy(xts_ctx->xts_param, xts_ctx->pcc.xts, 16);
+-	param = xts_ctx->key + offset;
++	memcpy(xts_param.key, xts_ctx->key, 32);
++	memcpy(xts_param.init, pcc_param.xts, 16);
+ 	do {
+ 		/* only use complete blocks */
+ 		n = nbytes & ~(AES_BLOCK_SIZE - 1);
+ 		out = walk->dst.virt.addr;
+ 		in = walk->src.virt.addr;
+ 
+-		ret = crypt_s390_km(func, param, out, in, n);
++		ret = crypt_s390_km(func, &xts_param.key[offset], out, in, n);
+ 		BUG_ON(ret < 0 || ret != n);
+ 
+ 		nbytes &= AES_BLOCK_SIZE - 1;
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index b1478f407238..9df4ea1caaf1 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2157,6 +2157,7 @@ source "fs/Kconfig.binfmt"
+ config IA32_EMULATION
+ 	bool "IA32 Emulation"
+ 	depends on X86_64
++	select BINFMT_ELF
+ 	select COMPAT_BINFMT_ELF
+ 	---help---
+ 	  Include code to run legacy 32-bit programs under a
+diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
+index 5a747dd884db..8dfb1ff56e04 100644
+--- a/arch/x86/boot/Makefile
++++ b/arch/x86/boot/Makefile
+@@ -52,18 +52,18 @@ $(obj)/cpustr.h: $(obj)/mkcpustr FORCE
+ 
+ # How to compile the 16-bit code.  Note we always compile for -march=i386,
+ # that way we can complain to the user if the CPU is insufficient.
+-KBUILD_CFLAGS	:= $(LINUXINCLUDE) -g -Os -D_SETUP -D__KERNEL__ \
++KBUILD_CFLAGS	:= $(LINUXINCLUDE) -m32 -g -Os -D_SETUP -D__KERNEL__ \
+ 		   -DDISABLE_BRANCH_PROFILING \
+ 		   -Wall -Wstrict-prototypes \
+ 		   -march=i386 -mregparm=3 \
+ 		   -include $(srctree)/$(src)/code16gcc.h \
+ 		   -fno-strict-aliasing -fomit-frame-pointer \
++		   -mno-mmx -mno-sse \
+ 		   $(call cc-option, -ffreestanding) \
+ 		   $(call cc-option, -fno-toplevel-reorder,\
+-		   $(call cc-option, -fno-unit-at-a-time)) \
++			$(call cc-option, -fno-unit-at-a-time)) \
+ 		   $(call cc-option, -fno-stack-protector) \
+ 		   $(call cc-option, -mpreferred-stack-boundary=2)
+-KBUILD_CFLAGS	+= $(call cc-option, -m32)
+ KBUILD_AFLAGS	:= $(KBUILD_CFLAGS) -D__ASSEMBLY__
+ GCOV_PROFILE := n
+ 
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index 5ef205c5f37b..7194d9f094bc 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -12,6 +12,7 @@ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+ cflags-$(CONFIG_X86_32) := -march=i386
+ cflags-$(CONFIG_X86_64) := -mcmodel=small
+ KBUILD_CFLAGS += $(cflags-y)
++KBUILD_CFLAGS += -mno-mmx -mno-sse
+ KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
+ KBUILD_CFLAGS += $(call cc-option,-fno-stack-protector)
+ 
+diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
+index 439a9acc132d..48fa3915fd02 100644
+--- a/arch/x86/include/asm/hugetlb.h
++++ b/arch/x86/include/asm/hugetlb.h
+@@ -51,6 +51,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
+ 					 unsigned long addr, pte_t *ptep)
+ {
++	ptep_clear_flush(vma, addr, ptep);
+ }
+ 
+ static inline int huge_pte_none(pte_t pte)
+diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
+index 13ad89971d47..69e231b9d925 100644
+--- a/arch/x86/kernel/crash.c
++++ b/arch/x86/kernel/crash.c
+@@ -95,10 +95,10 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
+ 	cpu_emergency_vmxoff();
+ 	cpu_emergency_svm_disable();
+ 
+-	lapic_shutdown();
+ #if defined(CONFIG_X86_IO_APIC)
+ 	disable_IO_APIC();
+ #endif
++	lapic_shutdown();
+ #ifdef CONFIG_HPET_TIMER
+ 	hpet_disable();
+ #endif
+diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
+index af1d14a9ebda..dcbbaa165bde 100644
+--- a/arch/x86/kernel/ldt.c
++++ b/arch/x86/kernel/ldt.c
+@@ -20,6 +20,8 @@
+ #include <asm/mmu_context.h>
+ #include <asm/syscalls.h>
+ 
++int sysctl_ldt16 = 0;
++
+ #ifdef CONFIG_SMP
+ static void flush_ldt(void *current_mm)
+ {
+@@ -234,7 +236,7 @@ static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
+ 	 * IRET leaking the high bits of the kernel stack address.
+ 	 */
+ #ifdef CONFIG_X86_64
+-	if (!ldt_info.seg_32bit) {
++	if (!ldt_info.seg_32bit && !sysctl_ldt16) {
+ 		error = -EINVAL;
+ 		goto out_unlock;
+ 	}
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index bd70df6162f7..d398f317f5e0 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -668,6 +668,13 @@ void native_machine_shutdown(void)
+ 
+ /* The boot cpu is always logical cpu 0 */
+ int reboot_cpu_id = 0;
++#endif
++
++#ifdef CONFIG_X86_IO_APIC
++	disable_IO_APIC();
++#endif
++
++#ifdef CONFIG_SMP
+ 
+ #ifdef CONFIG_X86_32
+ /* See if there has been given a command line override */
+@@ -691,10 +698,6 @@ void native_machine_shutdown(void)
+ 
+ 	lapic_shutdown();
+ 
+-#ifdef CONFIG_X86_IO_APIC
+-	disable_IO_APIC();
+-#endif
+-
+ #ifdef CONFIG_HPET_TIMER
+ 	hpet_disable();
+ #endif
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 537dc033868d..46112cef59a7 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -625,7 +625,7 @@ static bool __init snb_gfx_workaround_needed(void)
+ #ifdef CONFIG_PCI
+ 	int i;
+ 	u16 vendor, devid;
+-	static const u16 snb_ids[] = {
++	static const __initconst u16 snb_ids[] = {
+ 		0x0102,
+ 		0x0112,
+ 		0x0122,
+@@ -658,7 +658,7 @@ static bool __init snb_gfx_workaround_needed(void)
+  */
+ static void __init trim_snb_memory(void)
+ {
+-	static const unsigned long bad_pages[] = {
++	static const __initconst unsigned long bad_pages[] = {
+ 		0x20050000,
+ 		0x20110000,
+ 		0x20130000,
+diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c
+index c346d1161488..f89cdc6ccd5b 100644
+--- a/arch/x86/kernel/step.c
++++ b/arch/x86/kernel/step.c
+@@ -157,6 +157,33 @@ static int enable_single_step(struct task_struct *child)
+ 	return 1;
+ }
+ 
++static void set_task_blockstep(struct task_struct *task, bool on)
++{
++	unsigned long debugctl;
++
++	/*
++	 * Ensure irq/preemption can't change debugctl in between.
++	 * Note also that both TIF_BLOCKSTEP and debugctl should
++	 * be changed atomically wrt preemption.
++	 * FIXME: this means that set/clear TIF_BLOCKSTEP is simply
++	 * wrong if task != current, SIGKILL can wakeup the stopped
++	 * tracee and set/clear can play with the running task, this
++	 * can confuse the next __switch_to_xtra().
++	 */
++	local_irq_disable();
++	debugctl = get_debugctlmsr();
++	if (on) {
++		debugctl |= DEBUGCTLMSR_BTF;
++		set_tsk_thread_flag(task, TIF_BLOCKSTEP);
++	} else {
++		debugctl &= ~DEBUGCTLMSR_BTF;
++		clear_tsk_thread_flag(task, TIF_BLOCKSTEP);
++	}
++	if (task == current)
++		update_debugctlmsr(debugctl);
++	local_irq_enable();
++}
++
+ /*
+  * Enable single or block step.
+  */
+@@ -169,19 +196,10 @@ static void enable_step(struct task_struct *child, bool block)
+ 	 * So no one should try to use debugger block stepping in a program
+ 	 * that uses user-mode single stepping itself.
+ 	 */
+-	if (enable_single_step(child) && block) {
+-		unsigned long debugctl = get_debugctlmsr();
+-
+-		debugctl |= DEBUGCTLMSR_BTF;
+-		update_debugctlmsr(debugctl);
+-		set_tsk_thread_flag(child, TIF_BLOCKSTEP);
+-	} else if (test_tsk_thread_flag(child, TIF_BLOCKSTEP)) {
+-		unsigned long debugctl = get_debugctlmsr();
+-
+-		debugctl &= ~DEBUGCTLMSR_BTF;
+-		update_debugctlmsr(debugctl);
+-		clear_tsk_thread_flag(child, TIF_BLOCKSTEP);
+-	}
++	if (enable_single_step(child) && block)
++		set_task_blockstep(child, true);
++	else if (test_tsk_thread_flag(child, TIF_BLOCKSTEP))
++		set_task_blockstep(child, false);
+ }
+ 
+ void user_enable_single_step(struct task_struct *child)
+@@ -199,13 +217,8 @@ void user_disable_single_step(struct task_struct *child)
+ 	/*
+ 	 * Make sure block stepping (BTF) is disabled.
+ 	 */
+-	if (test_tsk_thread_flag(child, TIF_BLOCKSTEP)) {
+-		unsigned long debugctl = get_debugctlmsr();
+-
+-		debugctl &= ~DEBUGCTLMSR_BTF;
+-		update_debugctlmsr(debugctl);
+-		clear_tsk_thread_flag(child, TIF_BLOCKSTEP);
+-	}
++	if (test_tsk_thread_flag(child, TIF_BLOCKSTEP))
++		set_task_blockstep(child, false);
+ 
+ 	/* Always clear TIF_SINGLESTEP... */
+ 	clear_tsk_thread_flag(child, TIF_SINGLESTEP);
+diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
+index b4d3c3927dd8..51534ef23c1c 100644
+--- a/arch/x86/kernel/sys_x86_64.c
++++ b/arch/x86/kernel/sys_x86_64.c
+@@ -115,7 +115,7 @@ static void find_start_end(unsigned long flags, unsigned long *begin,
+ 			*begin = new_begin;
+ 		}
+ 	} else {
+-		*begin = TASK_UNMAPPED_BASE;
++		*begin = current->mm->mmap_legacy_base;
+ 		*end = TASK_SIZE;
+ 	}
+ }
+diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
+index 845df6835f9f..5c1ae28825cd 100644
+--- a/arch/x86/mm/mmap.c
++++ b/arch/x86/mm/mmap.c
+@@ -112,12 +112,14 @@ static unsigned long mmap_legacy_base(void)
+  */
+ void arch_pick_mmap_layout(struct mm_struct *mm)
+ {
++	mm->mmap_legacy_base = mmap_legacy_base();
++	mm->mmap_base = mmap_base();
++
+ 	if (mmap_is_legacy()) {
+-		mm->mmap_base = mmap_legacy_base();
++		mm->mmap_base = mm->mmap_legacy_base;
+ 		mm->get_unmapped_area = arch_get_unmapped_area;
+ 		mm->unmap_area = arch_unmap_area;
+ 	} else {
+-		mm->mmap_base = mmap_base();
+ 		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+ 		mm->unmap_area = arch_unmap_area_topdown;
+ 	}
+diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
+index 8a67b7c019b8..46e5387f5782 100644
+--- a/arch/x86/platform/efi/efi.c
++++ b/arch/x86/platform/efi/efi.c
+@@ -50,6 +50,13 @@
+ 
+ #define EFI_DEBUG	1
+ 
++#define EFI_MIN_RESERVE 5120
++
++#define EFI_DUMMY_GUID \
++	EFI_GUID(0x4424ac57, 0xbe4b, 0x47dd, 0x9e, 0x97, 0xed, 0x50, 0xf0, 0x9f, 0x92, 0xa9)
++
++static efi_char16_t efi_dummy_name[6] = { 'D', 'U', 'M', 'M', 'Y', 0 };
++
+ struct efi __read_mostly efi = {
+ 	.mps        = EFI_INVALID_TABLE_ADDR,
+ 	.acpi       = EFI_INVALID_TABLE_ADDR,
+@@ -102,6 +109,15 @@ static int __init setup_add_efi_memmap(char *arg)
+ }
+ early_param("add_efi_memmap", setup_add_efi_memmap);
+ 
++static bool efi_no_storage_paranoia;
++
++static int __init setup_storage_paranoia(char *arg)
++{
++	efi_no_storage_paranoia = true;
++	return 0;
++}
++early_param("efi_no_storage_paranoia", setup_storage_paranoia);
++
+ 
+ static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)
+ {
+@@ -923,6 +939,13 @@ void __init efi_enter_virtual_mode(void)
+ 	runtime_code_page_mkexec();
+ 
+ 	kfree(new_memmap);
++
++	/* clean DUMMY object */
++	efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID,
++			 EFI_VARIABLE_NON_VOLATILE |
++			 EFI_VARIABLE_BOOTSERVICE_ACCESS |
++			 EFI_VARIABLE_RUNTIME_ACCESS,
++			 0, NULL);
+ }
+ 
+ /*
+@@ -960,3 +983,85 @@ u64 efi_mem_attributes(unsigned long phys_addr)
+ 	}
+ 	return 0;
+ }
++
++/*
++ * Some firmware has serious problems when using more than 50% of the EFI
++ * variable store, i.e. it triggers bugs that can brick machines. Ensure that
++ * we never use more than this safe limit.
++ *
++ * Return EFI_SUCCESS if it is safe to write 'size' bytes to the variable
++ * store.
++ */
++efi_status_t efi_query_variable_store(u32 attributes, unsigned long size)
++{
++	efi_status_t status;
++	u64 storage_size, remaining_size, max_size;
++
++	if (!(attributes & EFI_VARIABLE_NON_VOLATILE))
++		return 0;
++
++	status = efi.query_variable_info(attributes, &storage_size,
++					 &remaining_size, &max_size);
++	if (status != EFI_SUCCESS)
++		return status;
++
++	/*
++	 * Some firmware implementations refuse to boot if there's insufficient
++	 * space in the variable store. We account for that by refusing the
++	 * write if permitting it would reduce the available space to under
++	 * 5KB. This figure was provided by Samsung, so should be safe.
++	 */
++	if ((remaining_size - size < EFI_MIN_RESERVE) &&
++		!efi_no_storage_paranoia) {
++
++		/*
++		 * Triggering garbage collection may require that the firmware
++		 * generate a real EFI_OUT_OF_RESOURCES error. We can force
++		 * that by attempting to use more space than is available.
++		 */
++		unsigned long dummy_size = remaining_size + 1024;
++		void *dummy = kzalloc(dummy_size, GFP_ATOMIC);
++
++		if (!dummy)
++			return EFI_OUT_OF_RESOURCES;
++
++		status = efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID,
++					  EFI_VARIABLE_NON_VOLATILE |
++					  EFI_VARIABLE_BOOTSERVICE_ACCESS |
++					  EFI_VARIABLE_RUNTIME_ACCESS,
++					  dummy_size, dummy);
++
++		if (status == EFI_SUCCESS) {
++			/*
++			 * This should have failed, so if it didn't make sure
++			 * that we delete it...
++			 */
++			efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID,
++					 EFI_VARIABLE_NON_VOLATILE |
++					 EFI_VARIABLE_BOOTSERVICE_ACCESS |
++					 EFI_VARIABLE_RUNTIME_ACCESS,
++					 0, dummy);
++		}
750 |
++ |
751 |
++ kfree(dummy); |
752 |
++ |
753 |
++ /* |
754 |
++ * The runtime code may now have triggered a garbage collection |
755 |
++ * run, so check the variable info again |
756 |
++ */ |
757 |
++ status = efi.query_variable_info(attributes, &storage_size, |
758 |
++ &remaining_size, &max_size); |
759 |
++ |
760 |
++ if (status != EFI_SUCCESS) |
761 |
++ return status; |
762 |
++ |
763 |
++ /* |
764 |
++ * There still isn't enough room, so return an error |
765 |
++ */ |
766 |
++ if (remaining_size - size < EFI_MIN_RESERVE) |
767 |
++ return EFI_OUT_OF_RESOURCES; |
768 |
++ } |
769 |
++ |
770 |
++ return EFI_SUCCESS; |
771 |
++} |
772 |
++EXPORT_SYMBOL_GPL(efi_query_variable_store); |
773 |
+diff --git a/arch/x86/vdso/vdso32-setup.c b/arch/x86/vdso/vdso32-setup.c
+index 66e6d9359826..c734408d55e4 100644
+--- a/arch/x86/vdso/vdso32-setup.c
++++ b/arch/x86/vdso/vdso32-setup.c
+@@ -41,6 +41,7 @@ enum {
+ #ifdef CONFIG_X86_64
+ #define vdso_enabled			sysctl_vsyscall32
+ #define arch_setup_additional_pages	syscall32_setup_pages
++extern int sysctl_ldt16;
+ #endif
+ 
+ /*
+@@ -380,6 +381,13 @@ static ctl_table abi_table2[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec
+ 	},
++	{
++		.procname	= "ldt16",
++		.data		= &sysctl_ldt16,
++		.maxlen		= sizeof(int),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec
++	},
+ 	{}
+ };
+ 
+diff --git a/crypto/crypto_wq.c b/crypto/crypto_wq.c
+index adad92a44ba2..2f1b8d12952a 100644
+--- a/crypto/crypto_wq.c
++++ b/crypto/crypto_wq.c
+@@ -33,7 +33,7 @@ static void __exit crypto_wq_exit(void)
+ 	destroy_workqueue(kcrypto_wq);
+ }
+ 
+-module_init(crypto_wq_init);
++subsys_initcall(crypto_wq_init);
+ module_exit(crypto_wq_exit);
+ 
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/acpi/blacklist.c b/drivers/acpi/blacklist.c
+index cb9629638def..76da257cfc28 100644
+--- a/drivers/acpi/blacklist.c
++++ b/drivers/acpi/blacklist.c
+@@ -327,6 +327,19 @@ static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
+ 		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T500"),
+ 		},
+ 	},
++	/*
++	 * Without this this EEEpc exports a non working WMI interface, with
++	 * this it exports a working "good old" eeepc_laptop interface, fixing
++	 * both brightness control, and rfkill not working.
++	 */
++	{
++	.callback = dmi_enable_osi_linux,
++	.ident = "Asus EEE PC 1015PX",
++	.matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer INC."),
++		     DMI_MATCH(DMI_PRODUCT_NAME, "1015PX"),
++		},
++	},
+ 	{}
+ };
+ 
+diff --git a/drivers/ata/ata_piix.c b/drivers/ata/ata_piix.c
+index 9dbd3aee0870..9f165a81d0ea 100644
+--- a/drivers/ata/ata_piix.c
++++ b/drivers/ata/ata_piix.c
+@@ -331,6 +331,14 @@ static const struct pci_device_id piix_pci_tbl[] = {
+ 	{ 0x8086, 0x8c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata_snb },
+ 	/* SATA Controller IDE (Lynx Point) */
+ 	{ 0x8086, 0x8c09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
++	/* SATA Controller IDE (Lynx Point-LP) */
++	{ 0x8086, 0x9c00, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb },
++	/* SATA Controller IDE (Lynx Point-LP) */
++	{ 0x8086, 0x9c01, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb },
++	/* SATA Controller IDE (Lynx Point-LP) */
++	{ 0x8086, 0x9c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
++	/* SATA Controller IDE (Lynx Point-LP) */
++	{ 0x8086, 0x9c09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
+ 	/* SATA Controller IDE (DH89xxCC) */
+ 	{ 0x8086, 0x2326, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata },
+ 	/* SATA Controller IDE (Avoton) */
+diff --git a/drivers/ata/pata_at91.c b/drivers/ata/pata_at91.c
+index 53d3770a0b1b..47d5d58ad0eb 100644
+--- a/drivers/ata/pata_at91.c
++++ b/drivers/ata/pata_at91.c
+@@ -408,12 +408,13 @@ static int __devinit pata_at91_probe(struct platform_device *pdev)
+ 
+ 	host->private_data = info;
+ 
+-	return ata_host_activate(host, gpio_is_valid(irq) ? gpio_to_irq(irq) : 0,
+-			gpio_is_valid(irq) ? ata_sff_interrupt : NULL,
+-			irq_flags, &pata_at91_sht);
++	ret = ata_host_activate(host, gpio_is_valid(irq) ? gpio_to_irq(irq) : 0,
++			gpio_is_valid(irq) ? ata_sff_interrupt : NULL,
++			irq_flags, &pata_at91_sht);
++	if (ret)
++		goto err_put;
+ 
+-	if (!ret)
+-		return 0;
++	return 0;
+ 
+ err_put:
+ 	clk_put(info->mck);
+diff --git a/drivers/atm/ambassador.c b/drivers/atm/ambassador.c
+index f8f41e0e8a8c..89b30f32ba68 100644
+--- a/drivers/atm/ambassador.c
++++ b/drivers/atm/ambassador.c
+@@ -802,7 +802,7 @@ static void fill_rx_pool (amb_dev * dev, unsigned char pool,
+     }
+     // cast needed as there is no %? for pointer differences
+     PRINTD (DBG_SKB, "allocated skb at %p, head %p, area %li",
+-	    skb, skb->head, (long) (skb_end_pointer(skb) - skb->head));
++	    skb, skb->head, (long) skb_end_offset(skb));
+     rx.handle = virt_to_bus (skb);
+     rx.host_address = cpu_to_be32 (virt_to_bus (skb->data));
+     if (rx_give (dev, &rx, pool))
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index b0e75ce3c8fc..81845fa2a9cd 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -1258,7 +1258,7 @@ idt77252_rx_raw(struct idt77252_dev *card)
+ 	tail = readl(SAR_REG_RAWCT);
+ 
+ 	pci_dma_sync_single_for_cpu(card->pcidev, IDT77252_PRV_PADDR(queue),
+-				    skb_end_pointer(queue) - queue->head - 16,
++				    skb_end_offset(queue) - 16,
+ 				    PCI_DMA_FROMDEVICE);
+ 
+ 	while (head != tail) {
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 97fc774e6bc2..baa2f190f3fe 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -51,6 +51,7 @@ static DEFINE_MUTEX(deferred_probe_mutex);
+ static LIST_HEAD(deferred_probe_pending_list);
+ static LIST_HEAD(deferred_probe_active_list);
+ static struct workqueue_struct *deferred_wq;
++static atomic_t deferred_trigger_count = ATOMIC_INIT(0);
+ 
+ /**
+  * deferred_probe_work_func() - Retry probing devices in the active list.
+@@ -122,6 +123,17 @@ static bool driver_deferred_probe_enable = false;
+  * This functions moves all devices from the pending list to the active
+  * list and schedules the deferred probe workqueue to process them.  It
+  * should be called anytime a driver is successfully bound to a device.
++ *
++ * Note, there is a race condition in multi-threaded probe. In the case where
++ * more than one device is probing at the same time, it is possible for one
++ * probe to complete successfully while another is about to defer. If the second
++ * depends on the first, then it will get put on the pending list after the
++ * trigger event has already occured and will be stuck there.
++ *
++ * The atomic 'deferred_trigger_count' is used to determine if a successful
++ * trigger has occurred in the midst of probing a driver. If the trigger count
++ * changes in the midst of a probe, then deferred processing should be triggered
++ * again.
+  */
+ static void driver_deferred_probe_trigger(void)
+ {
+@@ -134,6 +146,7 @@ static void driver_deferred_probe_trigger(void)
+ 	 * into the active list so they can be retried by the workqueue
+ 	 */
+ 	mutex_lock(&deferred_probe_mutex);
++	atomic_inc(&deferred_trigger_count);
+ 	list_splice_tail_init(&deferred_probe_pending_list,
+ 			      &deferred_probe_active_list);
+ 	mutex_unlock(&deferred_probe_mutex);
+@@ -252,6 +265,7 @@ static DECLARE_WAIT_QUEUE_HEAD(probe_waitqueue);
+ static int really_probe(struct device *dev, struct device_driver *drv)
+ {
+ 	int ret = 0;
++	int local_trigger_count = atomic_read(&deferred_trigger_count);
+ 
+ 	atomic_inc(&probe_count);
+ 	pr_debug("bus: '%s': %s: probing driver %s with device %s\n",
+@@ -290,6 +304,9 @@ probe_failed:
+ 		/* Driver requested deferred probing */
+ 		dev_info(dev, "Driver %s requests probe deferral\n", drv->name);
+ 		driver_deferred_probe_add(dev);
++		/* Did a trigger occur while probing? Need to re-trigger if yes */
++		if (local_trigger_count != atomic_read(&deferred_trigger_count))
++			driver_deferred_probe_trigger();
+ 	} else if (ret != -ENODEV && ret != -ENXIO) {
+ 		/* driver matched but the probe failed */
+ 		printk(KERN_WARNING
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 2cac6e64b67d..bc99e5c68d76 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -4306,7 +4306,7 @@ static int __init floppy_init(void)
+ 
+ 		err = platform_device_register(&floppy_device[drive]);
+ 		if (err)
+-			goto out_flush_work;
++			goto out_remove_drives;
+ 
+ 		err = device_create_file(&floppy_device[drive].dev,
+ 					 &dev_attr_cmos);
+@@ -4324,6 +4324,15 @@ static int __init floppy_init(void)
+ 
+ out_unreg_platform_dev:
+ 	platform_device_unregister(&floppy_device[drive]);
++out_remove_drives:
++	while (drive--) {
++		if ((allowed_drive_mask & (1 << drive)) &&
++		    fdc_state[FDC(drive)].version != FDC_NONE) {
++			del_gendisk(disks[drive]);
++			device_remove_file(&floppy_device[drive].dev, &dev_attr_cmos);
++			platform_device_unregister(&floppy_device[drive]);
++		}
++	}
+ out_flush_work:
+ 	flush_work_sync(&floppy_work);
+ 	if (atomic_read(&usage_count))
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index d724da52153b..35fc56981875 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -584,10 +584,17 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
+ 		struct request sreq;
+ 
+ 		dev_info(disk_to_dev(nbd->disk), "NBD_DISCONNECT\n");
++		if (!nbd->sock)
++			return -EINVAL;
+ 
++		mutex_unlock(&nbd->tx_lock);
++		fsync_bdev(bdev);
++		mutex_lock(&nbd->tx_lock);
+ 		blk_rq_init(NULL, &sreq);
+ 		sreq.cmd_type = REQ_TYPE_SPECIAL;
+ 		nbd_cmd(&sreq) = NBD_CMD_DISC;
++
++		/* Check again after getting mutex back.  */
+ 		if (!nbd->sock)
+ 			return -EINVAL;
+ 
+@@ -606,6 +613,7 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
+ 		nbd_clear_que(nbd);
+ 		BUG_ON(!list_empty(&nbd->queue_head));
+ 		BUG_ON(!list_empty(&nbd->waiting_queue));
++		kill_bdev(bdev);
+ 		if (file)
+ 			fput(file);
+ 		return 0;
+@@ -688,6 +696,7 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
+ 		nbd->file = NULL;
+ 		nbd_clear_que(nbd);
+ 		dev_warn(disk_to_dev(nbd->disk), "queue cleared\n");
++		kill_bdev(bdev);
+ 		if (file)
+ 			fput(file);
+ 		nbd->bytesize = 0;
+diff --git a/drivers/char/ipmi/ipmi_kcs_sm.c b/drivers/char/ipmi/ipmi_kcs_sm.c
+index e53fc24c6af3..e1ddcf938519 100644
+--- a/drivers/char/ipmi/ipmi_kcs_sm.c
++++ b/drivers/char/ipmi/ipmi_kcs_sm.c
+@@ -251,8 +251,9 @@ static inline int check_obf(struct si_sm_data *kcs, unsigned char status,
+ 	if (!GET_STATUS_OBF(status)) {
+ 		kcs->obf_timeout -= time;
+ 		if (kcs->obf_timeout < 0) {
+-		    start_error_recovery(kcs, "OBF not ready in time");
+-		    return 1;
++			kcs->obf_timeout = OBF_RETRY_TIMEOUT;
++			start_error_recovery(kcs, "OBF not ready in time");
++			return 1;
+ 		}
+ 		return 0;
+ 	}
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 1e638fff40ea..bdecba5de3a4 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -244,6 +244,9 @@ struct smi_info {
+ 	/* The timer for this si. */
+ 	struct timer_list   si_timer;
+ 
++	/* This flag is set, if the timer is running (timer_pending() isn't enough) */
++	bool		    timer_running;
++
+ 	/* The time (in jiffies) the last timeout occurred at. */
+ 	unsigned long       last_timeout_jiffies;
+ 
+@@ -427,6 +430,13 @@ static void start_clear_flags(struct smi_info *smi_info)
+ 	smi_info->si_state = SI_CLEARING_FLAGS;
+ }
+ 
++static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val)
++{
++	smi_info->last_timeout_jiffies = jiffies;
++	mod_timer(&smi_info->si_timer, new_val);
++	smi_info->timer_running = true;
++}
++
+ /*
+  * When we have a situtaion where we run out of memory and cannot
+  * allocate messages, we just leave them in the BMC and run the system
+@@ -439,8 +449,7 @@ static inline void disable_si_irq(struct smi_info *smi_info)
+ 		start_disable_irq(smi_info);
+ 		smi_info->interrupt_disabled = 1;
+ 		if (!atomic_read(&smi_info->stop_operation))
+-			mod_timer(&smi_info->si_timer,
+-				  jiffies + SI_TIMEOUT_JIFFIES);
++			smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES);
+ 	}
+ }
+ 
+@@ -896,15 +905,7 @@ static void sender(void *send_info,
+ 	list_add_tail(&msg->link, &smi_info->xmit_msgs);
+ 
+ 	if (smi_info->si_state == SI_NORMAL && smi_info->curr_msg == NULL) {
+-		/*
+-		 * last_timeout_jiffies is updated here to avoid
+-		 * smi_timeout() handler passing very large time_diff
+-		 * value to smi_event_handler() that causes
+-		 * the send command to abort.
+-		 */
+-		smi_info->last_timeout_jiffies = jiffies;
+-
+-		mod_timer(&smi_info->si_timer, jiffies + SI_TIMEOUT_JIFFIES);
++		smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES);
+ 
+ 		if (smi_info->thread)
+ 			wake_up_process(smi_info->thread);
+@@ -993,6 +994,17 @@ static int ipmi_thread(void *data)
+ 
+ 		spin_lock_irqsave(&(smi_info->si_lock), flags);
+ 		smi_result = smi_event_handler(smi_info, 0);
++
++		/*
++		 * If the driver is doing something, there is a possible
++		 * race with the timer.  If the timer handler see idle,
++		 * and the thread here sees something else, the timer
++		 * handler won't restart the timer even though it is
++		 * required.  So start it here if necessary.
++		 */
++		if (smi_result != SI_SM_IDLE && !smi_info->timer_running)
++			smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES);
++
+ 		spin_unlock_irqrestore(&(smi_info->si_lock), flags);
+ 		busy_wait = ipmi_thread_busy_wait(smi_result, smi_info,
+ 						  &busy_until);
+@@ -1062,10 +1074,6 @@ static void smi_timeout(unsigned long data)
+ 		     * SI_USEC_PER_JIFFY);
+ 	smi_result = smi_event_handler(smi_info, time_diff);
+ 
+-	spin_unlock_irqrestore(&(smi_info->si_lock), flags);
+-
+-	smi_info->last_timeout_jiffies = jiffies_now;
+-
+ 	if ((smi_info->irq) && (!smi_info->interrupt_disabled)) {
+ 		/* Running with interrupts, only do long timeouts. */
+ 		timeout = jiffies + SI_TIMEOUT_JIFFIES;
+@@ -1087,7 +1095,10 @@ static void smi_timeout(unsigned long data)
+ 
+  do_mod_timer:
+ 	if (smi_result != SI_SM_IDLE)
+-		mod_timer(&(smi_info->si_timer), timeout);
++		smi_mod_timer(smi_info, timeout);
++	else
++		smi_info->timer_running = false;
++	spin_unlock_irqrestore(&(smi_info->si_lock), flags);
+ }
+ 
+ static irqreturn_t si_irq_handler(int irq, void *data)
+@@ -1135,8 +1146,7 @@ static int smi_start_processing(void *send_info,
+ 
+ 	/* Set up the timer that drives the interface. */
+ 	setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi);
+-	new_smi->last_timeout_jiffies = jiffies;
+-	mod_timer(&new_smi->si_timer, jiffies + SI_TIMEOUT_JIFFIES);
++	smi_mod_timer(new_smi, jiffies + SI_TIMEOUT_JIFFIES);
+ 
+ 	/*
+ 	 * Check if the user forcefully enabled the daemon.
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 817eeb642732..1052fc4cae66 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -867,16 +867,24 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min,
+ 	if (r->entropy_count / 8 < min + reserved) {
+ 		nbytes = 0;
+ 	} else {
++		int entropy_count, orig;
++retry:
++		entropy_count = orig = ACCESS_ONCE(r->entropy_count);
+ 		/* If limited, never pull more than available */
+-		if (r->limit && nbytes + reserved >= r->entropy_count / 8)
+-			nbytes = r->entropy_count/8 - reserved;
+-
+-		if (r->entropy_count / 8 >= nbytes + reserved)
+-			r->entropy_count -= nbytes*8;
+-		else
+-			r->entropy_count = reserved;
++		if (r->limit && nbytes + reserved >= entropy_count / 8)
++			nbytes = entropy_count/8 - reserved;
++
++		if (entropy_count / 8 >= nbytes + reserved) {
++			entropy_count -= nbytes*8;
++			if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
++				goto retry;
++		} else {
++			entropy_count = reserved;
++			if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
++				goto retry;
++		}
+ 
+-		if (r->entropy_count < random_write_wakeup_thresh) {
++		if (entropy_count < random_write_wakeup_thresh) {
+ 			wake_up_interruptible(&random_write_wait);
+ 			kill_fasync(&fasync, SIGIO, POLL_OUT);
+ 		}
+diff --git a/drivers/crypto/caam/error.c b/drivers/crypto/caam/error.c
+index 7e2d54bffad6..9b8d231b1da6 100644
+--- a/drivers/crypto/caam/error.c
++++ b/drivers/crypto/caam/error.c
+@@ -16,9 +16,13 @@
+ 	char *tmp;						\
+ 								\
+ 	tmp = kmalloc(sizeof(format) + max_alloc, GFP_ATOMIC);	\
+-	sprintf(tmp, format, param);				\
+-	strcat(str, tmp);					\
+-	kfree(tmp);						\
++	if (likely(tmp)) {					\
++		sprintf(tmp, format, param);			\
++		strcat(str, tmp);				\
++		kfree(tmp);					\
++	} else {						\
++		strcat(str, "kmalloc failure in SPRINTFCAT");	\
++	}							\
+ }
+ 
+ static void report_jump_idx(u32 status, char *outstr)
+diff --git a/drivers/edac/i82975x_edac.c b/drivers/edac/i82975x_edac.c
+index 0cd8368f88f8..182d82afd223 100644
+--- a/drivers/edac/i82975x_edac.c
++++ b/drivers/edac/i82975x_edac.c
+@@ -363,10 +363,6 @@ static enum dev_type i82975x_dram_type(void __iomem *mch_window, int rank)
+ static void i82975x_init_csrows(struct mem_ctl_info *mci,
+ 		struct pci_dev *pdev, void __iomem *mch_window)
+ {
+-	static const char *labels[4] = {
+-							"DIMM A1", "DIMM A2",
+-							"DIMM B1", "DIMM B2"
+-						};
+ 	struct csrow_info *csrow;
+ 	unsigned long last_cumul_size;
+ 	u8 value;
+@@ -407,9 +403,10 @@ static void i82975x_init_csrows(struct mem_ctl_info *mci,
+ 		 *   [0-3] for dual-channel; i.e. csrow->nr_channels = 2
+ 		 */
+ 		for (chan = 0; chan < csrow->nr_channels; chan++)
+-			strncpy(csrow->channels[chan].label,
+-					labels[(index >> 1) + (chan * 2)],
+-					EDAC_MC_LABEL_LEN);
++
++			snprintf(csrow->channels[chan].label, EDAC_MC_LABEL_LEN, "DIMM %c%d",
++				 (chan == 0) ? 'A' : 'B',
++				 index);
+ 
+ 		if (cumul_size == last_cumul_size)
+ 			continue;	/* not populated */
+diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
+index 9b00072a020f..42c759a4d047 100644
+--- a/drivers/firmware/Kconfig
++++ b/drivers/firmware/Kconfig
+@@ -53,6 +53,24 @@ config EFI_VARS
+ 	  Subsequent efibootmgr releases may be found at:
+ 	  <http://linux.dell.com/efibootmgr>
+ 
++config EFI_VARS_PSTORE
++	bool "Register efivars backend for pstore"
++	depends on EFI_VARS && PSTORE
++	default y
++	help
++	  Say Y here to enable use efivars as a backend to pstore. This
++	  will allow writing console messages, crash dumps, or anything
++	  else supported by pstore to EFI variables.
++
++config EFI_VARS_PSTORE_DEFAULT_DISABLE
++	bool "Disable using efivars as a pstore backend by default"
++	depends on EFI_VARS_PSTORE
++	default n
++	help
++	  Saying Y here will disable the use of efivars as a storage
++	  backend for pstore by default. This setting can be overridden
++	  using the efivars module's pstore_disable parameter.
++
+ config EFI_PCDP
+ 	bool "Console device selection via EFI PCDP or HCDP table"
+ 	depends on ACPI && EFI && IA64
+diff --git a/drivers/firmware/efivars.c b/drivers/firmware/efivars.c
+index 2cbb675a66cc..80c6667c8a3b 100644
+--- a/drivers/firmware/efivars.c
++++ b/drivers/firmware/efivars.c
+@@ -92,6 +92,11 @@ MODULE_VERSION(EFIVARS_VERSION);
+ 
+ #define DUMP_NAME_LEN 52
+ 
++static bool efivars_pstore_disable =
++	IS_ENABLED(CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE);
++
++module_param_named(pstore_disable, efivars_pstore_disable, bool, 0644);
++
+ /*
+  * The maximum size of VariableName + Data = 1024
+  * Therefore, it's reasonable to save that much
+@@ -149,6 +154,13 @@ efivar_create_sysfs_entry(struct efivars *efivars,
+ 			  efi_char16_t *variable_name,
+ 			  efi_guid_t *vendor_guid);
+ 
++/*
++ * Prototype for workqueue functions updating sysfs entry
++ */
++
++static void efivar_update_sysfs_entries(struct work_struct *);
++static DECLARE_WORK(efivar_work, efivar_update_sysfs_entries);
++
+ /* Return the number of unicode characters in data */
+ static unsigned long
+ utf16_strnlen(efi_char16_t *s, size_t maxlength)
+@@ -396,10 +408,11 @@ static efi_status_t
+ get_var_data(struct efivars *efivars, struct efi_variable *var)
+ {
+ 	efi_status_t status;
++	unsigned long flags;
+ 
+-	spin_lock(&efivars->lock);
++	spin_lock_irqsave(&efivars->lock, flags);
+ 	status = get_var_data_locked(efivars, var);
+-	spin_unlock(&efivars->lock);
++	spin_unlock_irqrestore(&efivars->lock, flags);
+ 
+ 	if (status != EFI_SUCCESS) {
+ 		printk(KERN_WARNING "efivars: get_variable() failed 0x%lx!\n",
+@@ -408,6 +421,18 @@ get_var_data(struct efivars *efivars, struct efi_variable *var)
+ 	return status;
+ }
+ 
++static efi_status_t
++check_var_size_locked(struct efivars *efivars, u32 attributes,
++			unsigned long size)
++{
++	const struct efivar_operations *fops = efivars->ops;
++
++	if (!efivars->ops->query_variable_store)
++		return EFI_UNSUPPORTED;
++
++	return fops->query_variable_store(attributes, size);
++}
++
+ static ssize_t
+ efivar_guid_read(struct efivar_entry *entry, char *buf)
+ {
+@@ -528,14 +553,19 @@ efivar_store_raw(struct efivar_entry *entry, const char *buf, size_t count)
+ 		return -EINVAL;
+ 	}
+ 
+-	spin_lock(&efivars->lock);
+-	status = efivars->ops->set_variable(new_var->VariableName,
+-					    &new_var->VendorGuid,
+-					    new_var->Attributes,
+-					    new_var->DataSize,
+-					    new_var->Data);
++	spin_lock_irq(&efivars->lock);
++
++	status = check_var_size_locked(efivars, new_var->Attributes,
++	       new_var->DataSize + utf16_strsize(new_var->VariableName, 1024));
+ 
+-	spin_unlock(&efivars->lock);
++	if (status == EFI_SUCCESS || status == EFI_UNSUPPORTED)
++		status = efivars->ops->set_variable(new_var->VariableName,
++						    &new_var->VendorGuid,
++						    new_var->Attributes,
++						    new_var->DataSize,
++						    new_var->Data);
++
++	spin_unlock_irq(&efivars->lock);
+ 
+ 	if (status != EFI_SUCCESS) {
+ 		printk(KERN_WARNING "efivars: set_variable() failed: status=%lx\n",
+@@ -632,21 +662,49 @@ static struct kobj_type efivar_ktype = {
+ 	.default_attrs = def_attrs,
+ };
+ 
+-static struct pstore_info efi_pstore_info;
+-
+ static inline void
+ efivar_unregister(struct efivar_entry *var)
+ {
+ 	kobject_put(&var->kobj);
+ }
+ 
+-#ifdef CONFIG_PSTORE
++static int efi_status_to_err(efi_status_t status)
++{
++	int err;
++
++	switch (status) {
++	case EFI_INVALID_PARAMETER:
++		err = -EINVAL;
++		break;
++	case EFI_OUT_OF_RESOURCES:
++		err = -ENOSPC;
++		break;
++	case EFI_DEVICE_ERROR:
++		err = -EIO;
++		break;
++	case EFI_WRITE_PROTECTED:
++		err = -EROFS;
++		break;
++	case EFI_SECURITY_VIOLATION:
++		err = -EACCES;
++		break;
++	case EFI_NOT_FOUND:
++		err = -ENOENT;
++		break;
++	default:
++		err = -EINVAL;
++	}
++
++	return err;
++}
++
++#ifdef CONFIG_EFI_VARS_PSTORE
+ 
+ static int efi_pstore_open(struct pstore_info *psi)
+ {
+ 	struct efivars *efivars = psi->data;
+ 
+-	spin_lock(&efivars->lock);
++	spin_lock_irq(&efivars->lock);
+ 	efivars->walk_entry = list_first_entry(&efivars->list,
+ 					       struct efivar_entry, list);
+ 	return 0;
+@@ -656,7 +714,7 @@ static int efi_pstore_close(struct pstore_info *psi)
+ {
+ 	struct efivars *efivars = psi->data;
+ 
+-	spin_unlock(&efivars->lock);
++	spin_unlock_irq(&efivars->lock);
+ 	return 0;
+ }
+ 
+@@ -710,11 +768,30 @@ static int efi_pstore_write(enum pstore_type_id type,
+ 	struct efivars *efivars = psi->data;
+ 	struct efivar_entry *entry, *found = NULL;
+ 	int i, ret = 0;
++	efi_status_t status = EFI_NOT_FOUND;
++	unsigned long flags;
+ 
+ 	sprintf(stub_name, "dump-type%u-%u-", type, part);
+ 	sprintf(name, "%s%lu", stub_name, get_seconds());
+ 
+-	spin_lock(&efivars->lock);
++	spin_lock_irqsave(&efivars->lock, flags);
++
++	if (size) {
++		/*
++		 * Check if there is a space enough to log.
++		 * size: a size of logging data
++		 * DUMP_NAME_LEN * 2: a maximum size of variable name
++		 */
++
++		status = check_var_size_locked(efivars, PSTORE_EFI_ATTRIBUTES,
++					       size + DUMP_NAME_LEN * 2);
++
++		if (status) {
++			spin_unlock_irqrestore(&efivars->lock, flags);
++			*id = part;
++			return -ENOSPC;
++		}
++	}
+ 
+ 	for (i = 0; i < DUMP_NAME_LEN; i++)
+ 		efi_name[i] = stub_name[i];
+@@ -752,16 +829,13 @@ static int efi_pstore_write(enum pstore_type_id type,
+ 	efivars->ops->set_variable(efi_name, &vendor, PSTORE_EFI_ATTRIBUTES,
+ 				   size, psi->buf);
+ 
+-	spin_unlock(&efivars->lock);
++	spin_unlock_irqrestore(&efivars->lock, flags);
+ 
+ 	if (found)
+ 		efivar_unregister(found);
+ 
+-	if (size)
+-		ret = efivar_create_sysfs_entry(efivars,
+-					  utf16_strsize(efi_name,
+-							DUMP_NAME_LEN * 2),
+-					  efi_name, &vendor);
++	if (reason == KMSG_DUMP_OOPS)
++		schedule_work(&efivar_work);
+ 
+ 	*id = part;
+ 	return ret;
+@@ -774,37 +848,6 @@ static int efi_pstore_erase(enum pstore_type_id type, u64 id,
+ 
+ 	return 0;
+ }
+-#else
+-static int efi_pstore_open(struct pstore_info *psi)
+-{
+-	return 0;
+-}
+-
+-static int efi_pstore_close(struct pstore_info *psi)
+-{
+-	return 0;
+-}
+-
+-static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type,
+-			       struct timespec *timespec,
+-			       char **buf, struct pstore_info *psi)
+-{
+-	return -1;
+-}
+-
+-static int efi_pstore_write(enum pstore_type_id type,
+-			    enum kmsg_dump_reason reason, u64 *id,
+-			    unsigned int part, size_t size, struct pstore_info *psi)
+-{
+-	return 0;
+-}
+-
+-static int efi_pstore_erase(enum pstore_type_id type, u64 id,
+-			    struct pstore_info *psi)
+-{
+-	return 0;
+-}
+-#endif
+ 
+ static struct pstore_info efi_pstore_info = {
+ 	.owner		= THIS_MODULE,
+@@ -816,6 +859,24 @@ static struct pstore_info efi_pstore_info = {
+ 	.erase		= efi_pstore_erase,
+ };
+ 
++static void efivar_pstore_register(struct efivars *efivars)
++{
++	efivars->efi_pstore_info = efi_pstore_info;
++	efivars->efi_pstore_info.buf = kmalloc(4096, GFP_KERNEL);
++	if (efivars->efi_pstore_info.buf) {
++		efivars->efi_pstore_info.bufsize = 1024;
++		efivars->efi_pstore_info.data = efivars;
++		spin_lock_init(&efivars->efi_pstore_info.buf_lock);
++		pstore_register(&efivars->efi_pstore_info);
++	}
++}
++#else
++static void efivar_pstore_register(struct efivars *efivars)
++{
++	return;
++}
++#endif
++
+ static ssize_t efivar_create(struct file *filp, struct kobject *kobj,
+ 			     struct bin_attribute *bin_attr,
+ 			     char *buf, loff_t pos, size_t count)
+@@ -836,7 +897,7 @@ static ssize_t efivar_create(struct file *filp, struct kobject *kobj,
+ 		return -EINVAL;
+ 	}
+ 
+-	spin_lock(&efivars->lock);
++	spin_lock_irq(&efivars->lock);
+ 
+ 	/*
+ 	 * Does this variable already exist?
+@@ -854,10 +915,18 @@ static ssize_t efivar_create(struct file *filp, struct kobject *kobj,
+ 		}
+ 	}
+ 	if (found) {
+-		spin_unlock(&efivars->lock);
++		spin_unlock_irq(&efivars->lock);
+ 		return -EINVAL;
+ 	}
+ 
++	status = check_var_size_locked(efivars, new_var->Attributes,
++	       new_var->DataSize + utf16_strsize(new_var->VariableName, 1024));
++
++	if (status && status != EFI_UNSUPPORTED) {
++		spin_unlock_irq(&efivars->lock);
++		return efi_status_to_err(status);
++	}
++
+ 	/* now *really* create the variable via EFI */
+ 	status = efivars->ops->set_variable(new_var->VariableName,
+ 					    &new_var->VendorGuid,
+@@ -868,10 +937,10 @@ static ssize_t efivar_create(struct file *filp, struct kobject *kobj,
+ 	if (status != EFI_SUCCESS) {
+ 		printk(KERN_WARNING "efivars: set_variable() failed: status=%lx\n",
+ 			status);
+-		spin_unlock(&efivars->lock);
++		spin_unlock_irq(&efivars->lock);
+ 		return -EIO;
+ 	}
+-	spin_unlock(&efivars->lock);
++	spin_unlock_irq(&efivars->lock);
+ 
+ 	/* Create the entry in sysfs.  Locking is not required here */
+ 	status = efivar_create_sysfs_entry(efivars,
+@@ -899,7 +968,7 @@ static ssize_t efivar_delete(struct file *filp, struct kobject *kobj,
+@@ -899,7 +968,7 @@ static ssize_t efivar_delete(struct file *filp, struct kobject *kobj, |
1576 |
+ if (!capable(CAP_SYS_ADMIN)) |
1577 |
+ return -EACCES; |
1578 |
+ |
1579 |
+- spin_lock(&efivars->lock); |
1580 |
++ spin_lock_irq(&efivars->lock); |
1581 |
+ |
1582 |
+ /* |
1583 |
+ * Does this variable already exist? |
1584 |
+@@ -917,7 +986,7 @@ static ssize_t efivar_delete(struct file *filp, struct kobject *kobj, |
1585 |
+ } |
1586 |
+ } |
1587 |
+ if (!found) { |
1588 |
+- spin_unlock(&efivars->lock); |
1589 |
++ spin_unlock_irq(&efivars->lock); |
1590 |
+ return -EINVAL; |
1591 |
+ } |
1592 |
+ /* force the Attributes/DataSize to 0 to ensure deletion */ |
1593 |
+@@ -933,12 +1002,12 @@ static ssize_t efivar_delete(struct file *filp, struct kobject *kobj, |
1594 |
+ if (status != EFI_SUCCESS) { |
1595 |
+ printk(KERN_WARNING "efivars: set_variable() failed: status=%lx\n", |
1596 |
+ status); |
1597 |
+- spin_unlock(&efivars->lock); |
1598 |
++ spin_unlock_irq(&efivars->lock); |
1599 |
+ return -EIO; |
1600 |
+ } |
1601 |
+ list_del(&search_efivar->list); |
1602 |
+ /* We need to release this lock before unregistering. */ |
1603 |
+- spin_unlock(&efivars->lock); |
1604 |
++ spin_unlock_irq(&efivars->lock); |
1605 |
+ efivar_unregister(search_efivar); |
1606 |
+ |
1607 |
+ /* It's dead Jim.... */ |
1608 |
+@@ -967,6 +1036,53 @@ static bool variable_is_present(efi_char16_t *variable_name, efi_guid_t *vendor) |
1609 |
+ return found; |
1610 |
+ } |
1611 |
+ |
1612 |
++static void efivar_update_sysfs_entries(struct work_struct *work)
++{
++	struct efivars *efivars = &__efivars;
++	efi_guid_t vendor;
++	efi_char16_t *variable_name;
++	unsigned long variable_name_size = 1024;
++	efi_status_t status = EFI_NOT_FOUND;
++	bool found;
++
++	/* Add new sysfs entries */
++	while (1) {
++		variable_name = kzalloc(variable_name_size, GFP_KERNEL);
++		if (!variable_name) {
++			pr_err("efivars: Memory allocation failed.\n");
++			return;
++		}
++
++		spin_lock_irq(&efivars->lock);
++		found = false;
++		while (1) {
++			variable_name_size = 1024;
++			status = efivars->ops->get_next_variable(
++					&variable_name_size,
++					variable_name,
++					&vendor);
++			if (status != EFI_SUCCESS) {
++				break;
++			} else {
++				if (!variable_is_present(variable_name,
++				    &vendor)) {
++					found = true;
++					break;
++				}
++			}
++		}
++		spin_unlock_irq(&efivars->lock);
++
++		if (!found) {
++			kfree(variable_name);
++			break;
++		} else
++			efivar_create_sysfs_entry(efivars,
++						  variable_name_size,
++						  variable_name, &vendor);
++	}
++}
++
+ /*
+  * Returns the size of variable_name, in bytes, including the
+  * terminating NULL character, or variable_name_size if no NULL
+@@ -1093,9 +1209,9 @@ efivar_create_sysfs_entry(struct efivars *efivars,
+ 	kfree(short_name);
+ 	short_name = NULL;
+ 
+-	spin_lock(&efivars->lock);
++	spin_lock_irq(&efivars->lock);
+ 	list_add(&new_efivar->list, &efivars->list);
+-	spin_unlock(&efivars->lock);
++	spin_unlock_irq(&efivars->lock);
+ 
+ 	return 0;
+ }
+@@ -1164,9 +1280,9 @@ void unregister_efivars(struct efivars *efivars)
+ 	struct efivar_entry *entry, *n;
+ 
+ 	list_for_each_entry_safe(entry, n, &efivars->list, list) {
+-		spin_lock(&efivars->lock);
++		spin_lock_irq(&efivars->lock);
+ 		list_del(&entry->list);
+-		spin_unlock(&efivars->lock);
++		spin_unlock_irq(&efivars->lock);
+ 		efivar_unregister(entry);
+ 	}
+ 	if (efivars->new_var)
+@@ -1278,15 +1394,8 @@ int register_efivars(struct efivars *efivars,
+ 	if (error)
+ 		unregister_efivars(efivars);
+ 
+-	efivars->efi_pstore_info = efi_pstore_info;
+-
+-	efivars->efi_pstore_info.buf = kmalloc(4096, GFP_KERNEL);
+-	if (efivars->efi_pstore_info.buf) {
+-		efivars->efi_pstore_info.bufsize = 1024;
+-		efivars->efi_pstore_info.data = efivars;
+-		spin_lock_init(&efivars->efi_pstore_info.buf_lock);
+-		pstore_register(&efivars->efi_pstore_info);
+-	}
++	if (!efivars_pstore_disable)
++		efivar_pstore_register(efivars);
+ 
+ out:
+ 	kfree(variable_name);
+@@ -1324,6 +1433,7 @@ efivars_init(void)
+ 	ops.get_variable = efi.get_variable;
+ 	ops.set_variable = efi.set_variable;
+ 	ops.get_next_variable = efi.get_next_variable;
++	ops.query_variable_store = efi_query_variable_store;
+ 	error = register_efivars(&__efivars, &ops, efi_kobj);
+ 	if (error)
+ 		goto err_put;
+diff --git a/drivers/gpu/drm/drm_crtc_helper.c b/drivers/gpu/drm/drm_crtc_helper.c
+index 81118893264c..b3abf7044718 100644
+--- a/drivers/gpu/drm/drm_crtc_helper.c
++++ b/drivers/gpu/drm/drm_crtc_helper.c
+@@ -328,8 +328,8 @@ drm_crtc_prepare_encoders(struct drm_device *dev)
+  * drm_crtc_set_mode - set a mode
+  * @crtc: CRTC to program
+  * @mode: mode to use
+- * @x: width of mode
+- * @y: height of mode
++ * @x: horizontal offset into the surface
++ * @y: vertical offset into the surface
+  *
+  * LOCKING:
+  * Caller must hold mode config lock.
+diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
+index 34791fb7aa78..39f81115fcf1 100644
+--- a/drivers/gpu/drm/i915/i915_debugfs.c
++++ b/drivers/gpu/drm/i915/i915_debugfs.c
+@@ -30,6 +30,7 @@
+ #include <linux/debugfs.h>
+ #include <linux/slab.h>
+ #include <linux/export.h>
++#include <generated/utsrelease.h>
+ #include "drmP.h"
+ #include "drm.h"
+ #include "intel_drv.h"
+@@ -340,7 +341,7 @@ static int i915_gem_pageflip_info(struct seq_file *m, void *data)
+ 			seq_printf(m, "No flip due on pipe %c (plane %c)\n",
+ 				   pipe, plane);
+ 		} else {
+-			if (!work->pending) {
++			if (atomic_read(&work->pending) < INTEL_FLIP_COMPLETE) {
+ 				seq_printf(m, "Flip queued on pipe %c (plane %c)\n",
+ 					   pipe, plane);
+ 			} else {
+@@ -351,7 +352,7 @@ static int i915_gem_pageflip_info(struct seq_file *m, void *data)
+ 				seq_printf(m, "Stall check enabled, ");
+ 			else
+ 				seq_printf(m, "Stall check waiting for page flip ioctl, ");
+-			seq_printf(m, "%d prepares\n", work->pending);
++			seq_printf(m, "%d prepares\n", atomic_read(&work->pending));
+ 
+ 			if (work->old_fb_obj) {
+ 				struct drm_i915_gem_object *obj = work->old_fb_obj;
+@@ -750,6 +751,7 @@ static int i915_error_state(struct seq_file *m, void *unused)
+ 
+ 	seq_printf(m, "Time: %ld s %ld us\n", error->time.tv_sec,
+ 		   error->time.tv_usec);
++	seq_printf(m, "Kernel: " UTS_RELEASE "\n");
+ 	seq_printf(m, "PCI ID: 0x%04x\n", dev->pci_device);
+ 	seq_printf(m, "EIR: 0x%08x\n", error->eir);
+ 	seq_printf(m, "PGTBL_ER: 0x%08x\n", error->pgtbl_er);
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 232119ac266f..a8f00d06a46f 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -296,7 +296,8 @@ enum intel_pch {
+ 
+ #define QUIRK_PIPEA_FORCE (1<<0)
+ #define QUIRK_LVDS_SSC_DISABLE (1<<1)
+-#define QUIRK_NO_PCH_PWM_ENABLE (1<<2)
++#define QUIRK_INVERT_BRIGHTNESS (1<<2)
++#define QUIRK_NO_PCH_PWM_ENABLE (1<<3)
+ 
+ struct intel_fbdev;
+ struct intel_fbc_work;
+@@ -1397,6 +1398,7 @@ static inline void intel_unregister_dsm_handler(void) { return; }
+ #endif /* CONFIG_ACPI */
+ 
+ /* modesetting */
++extern void i915_redisable_vga(struct drm_device *dev);
+ extern void intel_modeset_init(struct drm_device *dev);
+ extern void intel_modeset_gem_init(struct drm_device *dev);
+ extern void intel_modeset_cleanup(struct drm_device *dev);
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index eb339456c111..2ac4ded0de99 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -2468,6 +2468,11 @@ i915_find_fence_reg(struct drm_device *dev,
+ 	return avail;
+ }
+ 
++static void i915_gem_write_fence__ipi(void *data)
++{
++	wbinvd();
++}
++
+ /**
+  * i915_gem_object_get_fence - set up a fence reg for an object
+  * @obj: object to map through a fence reg
+@@ -2589,6 +2594,17 @@ update:
+ 	switch (INTEL_INFO(dev)->gen) {
+ 	case 7:
+ 	case 6:
++		/* In order to fully serialize access to the fenced region and
++		 * the update to the fence register we need to take extreme
++		 * measures on SNB+. In theory, the write to the fence register
++		 * flushes all memory transactions before, and coupled with the
++		 * mb() placed around the register write we serialise all memory
++		 * operations with respect to the changes in the tiler. Yet, on
++		 * SNB+ we need to take a step further and emit an explicit wbinvd()
++		 * on each processor in order to manually flush all memory
++		 * transactions before updating the fence register.
++		 */
++		on_each_cpu(i915_gem_write_fence__ipi, NULL, 1);
+ 		ret = sandybridge_write_fence_reg(obj, pipelined);
+ 		break;
+ 	case 5:
+@@ -3411,14 +3427,15 @@ i915_gem_pin_ioctl(struct drm_device *dev, void *data,
+ 		goto out;
+ 	}
+ 
+-	obj->user_pin_count++;
+-	obj->pin_filp = file;
+-	if (obj->user_pin_count == 1) {
++	if (obj->user_pin_count == 0) {
+ 		ret = i915_gem_object_pin(obj, args->alignment, true);
+ 		if (ret)
+ 			goto out;
+ 	}
+ 
++	obj->user_pin_count++;
++	obj->pin_filp = file;
++
+ 	/* XXX - flush the CPU caches for pinned objects
+ 	 * as the X server doesn't manage domains yet
+ 	 */
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index 8bca2d2d804d..fc6f32a579be 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -1251,7 +1251,9 @@ static void i915_pageflip_stall_check(struct drm_device *dev, int pipe)
+ 	spin_lock_irqsave(&dev->event_lock, flags);
+ 	work = intel_crtc->unpin_work;
+ 
+-	if (work == NULL || work->pending || !work->enable_stall_check) {
++	if (work == NULL ||
++	    atomic_read(&work->pending) >= INTEL_FLIP_COMPLETE ||
++	    !work->enable_stall_check) {
+ 		/* Either the pending flip IRQ arrived, or we're too early. Don't check */
+ 		spin_unlock_irqrestore(&dev->event_lock, flags);
+ 		return;
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 5647ce4b3ec7..c975c996ff5f 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -25,6 +25,7 @@
+  */
+ 
+ #include <linux/cpufreq.h>
++#include <linux/dmi.h>
+ #include <linux/module.h>
+ #include <linux/input.h>
+ #include <linux/i2c.h>
+@@ -7245,11 +7246,18 @@ static void do_intel_finish_page_flip(struct drm_device *dev,
+ 
+ 	spin_lock_irqsave(&dev->event_lock, flags);
+ 	work = intel_crtc->unpin_work;
+-	if (work == NULL || !work->pending) {
++
++	/* Ensure we don't miss a work->pending update ... */
++	smp_rmb();
++
++	if (work == NULL || atomic_read(&work->pending) < INTEL_FLIP_COMPLETE) {
+ 		spin_unlock_irqrestore(&dev->event_lock, flags);
+ 		return;
+ 	}
+ 
++	/* and that the unpin work is consistent wrt ->pending. */
++	smp_rmb();
++
+ 	intel_crtc->unpin_work = NULL;
+ 
+ 	if (work->event) {
+@@ -7321,16 +7329,25 @@ void intel_prepare_page_flip(struct drm_device *dev, int plane)
+ 		to_intel_crtc(dev_priv->plane_to_crtc_mapping[plane]);
+ 	unsigned long flags;
+ 
++	/* NB: An MMIO update of the plane base pointer will also
++	 * generate a page-flip completion irq, i.e. every modeset
++	 * is also accompanied by a spurious intel_prepare_page_flip().
++	 */
+ 	spin_lock_irqsave(&dev->event_lock, flags);
+-	if (intel_crtc->unpin_work) {
+-		if ((++intel_crtc->unpin_work->pending) > 1)
+-			DRM_ERROR("Prepared flip multiple times\n");
+-	} else {
+-		DRM_DEBUG_DRIVER("preparing flip with no unpin work?\n");
+-	}
++	if (intel_crtc->unpin_work)
++		atomic_inc_not_zero(&intel_crtc->unpin_work->pending);
+ 	spin_unlock_irqrestore(&dev->event_lock, flags);
+ }
+ 
+inline static void intel_mark_page_flip_active(struct intel_crtc *intel_crtc)
++{
++	/* Ensure that the work item is consistent when activating it ... */
++	smp_wmb();
++	atomic_set(&intel_crtc->unpin_work->pending, INTEL_FLIP_PENDING);
++	/* and that it is marked active as soon as the irq could fire. */
++	smp_wmb();
++}
++
+ static int intel_gen2_queue_flip(struct drm_device *dev,
+ 				 struct drm_crtc *crtc,
+ 				 struct drm_framebuffer *fb,
+@@ -7367,6 +7384,8 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
+ 	OUT_RING(fb->pitches[0]);
+ 	OUT_RING(obj->gtt_offset + offset);
+ 	OUT_RING(0); /* aux display base address, unused */
++
++	intel_mark_page_flip_active(intel_crtc);
+ 	ADVANCE_LP_RING();
+ 	return 0;
+ 
+@@ -7410,6 +7429,7 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
+ 	OUT_RING(obj->gtt_offset + offset);
+ 	OUT_RING(MI_NOOP);
+ 
++	intel_mark_page_flip_active(intel_crtc);
+ 	ADVANCE_LP_RING();
+ 	return 0;
+ 
+@@ -7453,6 +7473,8 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
+ 	pf = 0;
+ 	pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff;
+ 	OUT_RING(pf | pipesrc);
++
++	intel_mark_page_flip_active(intel_crtc);
+ 	ADVANCE_LP_RING();
+ 	return 0;
+ 
+@@ -7494,6 +7516,8 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
+ 	pf = 0;
+ 	pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff;
+ 	OUT_RING(pf | pipesrc);
++
++	intel_mark_page_flip_active(intel_crtc);
+ 	ADVANCE_LP_RING();
+ 	return 0;
+ 
+@@ -7548,6 +7572,8 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
+ 	intel_ring_emit(ring, (fb->pitches[0] | obj->tiling_mode));
+ 	intel_ring_emit(ring, (obj->gtt_offset));
+ 	intel_ring_emit(ring, (MI_NOOP));
++
++	intel_mark_page_flip_active(intel_crtc);
+ 	intel_ring_advance(ring);
+ 	return 0;
+ 
+@@ -9175,6 +9201,16 @@ static void quirk_no_pcm_pwm_enable(struct drm_device *dev)
+ 	DRM_INFO("applying no-PCH_PWM_ENABLE quirk\n");
+ }
+ 
++/*
++ * A machine (e.g. Acer Aspire 5734Z) may need to invert the panel backlight
++ * brightness value
++ */
++static void quirk_invert_brightness(struct drm_device *dev)
++{
++	struct drm_i915_private *dev_priv = dev->dev_private;
++	dev_priv->quirks |= QUIRK_INVERT_BRIGHTNESS;
++}
++
+ struct intel_quirk {
+ 	int device;
+ 	int subsystem_vendor;
+@@ -9182,6 +9218,34 @@ struct intel_quirk {
+ 	void (*hook)(struct drm_device *dev);
+ };
+ 
++/* For systems that don't have a meaningful PCI subdevice/subvendor ID */
++struct intel_dmi_quirk {
++	void (*hook)(struct drm_device *dev);
++	const struct dmi_system_id (*dmi_id_list)[];
++};
++
++static int intel_dmi_reverse_brightness(const struct dmi_system_id *id)
++{
++	DRM_INFO("Backlight polarity reversed on %s\n", id->ident);
++	return 1;
++}
++
++static const struct intel_dmi_quirk intel_dmi_quirks[] = {
++	{
++		.dmi_id_list = &(const struct dmi_system_id[]) {
++			{
++				.callback = intel_dmi_reverse_brightness,
++				.ident = "NCR Corporation",
++				.matches = {DMI_MATCH(DMI_SYS_VENDOR, "NCR Corporation"),
++					    DMI_MATCH(DMI_PRODUCT_NAME, ""),
++				},
++			},
++			{ }  /* terminating entry */
++		},
++		.hook = quirk_invert_brightness,
++	},
++};
++
+ struct intel_quirk intel_quirks[] = {
+ 	/* HP Mini needs pipe A force quirk (LP: #322104) */
+ 	{ 0x27ae, 0x103c, 0x361a, quirk_pipea_force },
+@@ -9208,6 +9272,18 @@ struct intel_quirk intel_quirks[] = {
+ 	/* Sony Vaio Y cannot use SSC on LVDS */
+ 	{ 0x0046, 0x104d, 0x9076, quirk_ssc_force_disable },
+ 
++	/* Acer Aspire 5734Z must invert backlight brightness */
++	{ 0x2a42, 0x1025, 0x0459, quirk_invert_brightness },
++
++	/* Acer/eMachines G725 */
++	{ 0x2a42, 0x1025, 0x0210, quirk_invert_brightness },
++
++	/* Acer/eMachines e725 */
++	{ 0x2a42, 0x1025, 0x0212, quirk_invert_brightness },
++
++	/* Acer/Packard Bell NCL20 */
++	{ 0x2a42, 0x1025, 0x034b, quirk_invert_brightness },
++
+ 	/* Dell XPS13 HD Sandy Bridge */
+ 	{ 0x0116, 0x1028, 0x052e, quirk_no_pcm_pwm_enable },
+ 	/* Dell XPS13 HD and XPS13 FHD Ivy Bridge */
+@@ -9229,6 +9305,10 @@ static void intel_init_quirks(struct drm_device *dev)
+ 		     q->subsystem_device == PCI_ANY_ID))
+ 			q->hook(dev);
+ 	}
++	for (i = 0; i < ARRAY_SIZE(intel_dmi_quirks); i++) {
++		if (dmi_check_system(*intel_dmi_quirks[i].dmi_id_list) != 0)
++			intel_dmi_quirks[i].hook(dev);
++	}
+ }
+ 
+ /* Disable the VGA plane that we never use */
+@@ -9254,6 +9334,23 @@ static void i915_disable_vga(struct drm_device *dev)
+ 	POSTING_READ(vga_reg);
+ }
+ 
++void i915_redisable_vga(struct drm_device *dev)
++{
++	struct drm_i915_private *dev_priv = dev->dev_private;
++	u32 vga_reg;
++
++	if (HAS_PCH_SPLIT(dev))
++		vga_reg = CPU_VGACNTRL;
++	else
++		vga_reg = VGACNTRL;
++
++	if (I915_READ(vga_reg) != VGA_DISP_DISABLE) {
++		DRM_DEBUG_KMS("Something enabled VGA plane, disabling it\n");
++		I915_WRITE(vga_reg, VGA_DISP_DISABLE);
++		POSTING_READ(vga_reg);
++	}
++}
++
+ void intel_modeset_init(struct drm_device *dev)
+ {
+ 	struct drm_i915_private *dev_priv = dev->dev_private;
+@@ -9374,6 +9471,9 @@ void intel_modeset_cleanup(struct drm_device *dev)
+ 	del_timer_sync(&dev_priv->idle_timer);
+ 	cancel_work_sync(&dev_priv->idle_work);
+ 
++	/* destroy backlight, if any, before the connectors */
++	intel_panel_destroy_backlight(dev);
++
+ 	drm_mode_config_cleanup(dev);
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index eee6cd391993..9a3ecd61de7c 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -2289,11 +2289,6 @@ done:
+ static void
+ intel_dp_destroy(struct drm_connector *connector)
+ {
+-	struct drm_device *dev = connector->dev;
+-
+-	if (intel_dpd_is_edp(dev))
+-		intel_panel_destroy_backlight(dev);
+-
+ 	drm_sysfs_connector_remove(connector);
+ 	drm_connector_cleanup(connector);
+ 	kfree(connector);
+diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
+index cd623e899f58..018dfbd776d9 100644
+--- a/drivers/gpu/drm/i915/intel_drv.h
++++ b/drivers/gpu/drm/i915/intel_drv.h
+@@ -277,7 +277,10 @@ struct intel_unpin_work {
+ 	struct drm_i915_gem_object *old_fb_obj;
+ 	struct drm_i915_gem_object *pending_flip_obj;
+ 	struct drm_pending_vblank_event *event;
+-	int pending;
++	atomic_t pending;
++#define INTEL_FLIP_INACTIVE	0
++#define INTEL_FLIP_PENDING	1
++#define INTEL_FLIP_COMPLETE	2
+ 	bool enable_stall_check;
+ };
+ 
+diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
+index dc7c5f6415a7..b695ab48ebf0 100644
+--- a/drivers/gpu/drm/i915/intel_lvds.c
++++ b/drivers/gpu/drm/i915/intel_lvds.c
+@@ -535,6 +535,7 @@ static int intel_lid_notify(struct notifier_block *nb, unsigned long val,
+ 
+ 	mutex_lock(&dev->mode_config.mutex);
+ 	drm_helper_resume_force_mode(dev);
++	i915_redisable_vga(dev);
+ 	mutex_unlock(&dev->mode_config.mutex);
+ 
+ 	return NOTIFY_OK;
+@@ -552,8 +553,6 @@ static void intel_lvds_destroy(struct drm_connector *connector)
+ 	struct drm_device *dev = connector->dev;
+ 	struct drm_i915_private *dev_priv = dev->dev_private;
+ 
+-	intel_panel_destroy_backlight(dev);
+-
+ 	if (dev_priv->lid_notifier.notifier_call)
+ 		acpi_lid_notifier_unregister(&dev_priv->lid_notifier);
+ 	drm_sysfs_connector_remove(connector);
+diff --git a/drivers/gpu/drm/i915/intel_opregion.c b/drivers/gpu/drm/i915/intel_opregion.c
+index cffb0071f877..356a252e6861 100644
+--- a/drivers/gpu/drm/i915/intel_opregion.c
++++ b/drivers/gpu/drm/i915/intel_opregion.c
+@@ -161,7 +161,7 @@ static u32 asle_set_backlight(struct drm_device *dev, u32 bclp)
+ 
+ 	max = intel_panel_get_max_backlight(dev);
+ 	intel_panel_set_backlight(dev, bclp * max / 255);
+-	asle->cblv = (bclp*0x64)/0xff | ASLE_CBLV_VALID;
++	asle->cblv = DIV_ROUND_UP(bclp * 100, 255) | ASLE_CBLV_VALID;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c
+index 48177ec4720e..0bae2bbe7b3c 100644
+--- a/drivers/gpu/drm/i915/intel_panel.c
++++ b/drivers/gpu/drm/i915/intel_panel.c
+@@ -28,6 +28,7 @@
+  *      Chris Wilson <chris@×××××××××××××××.uk>
+  */
+ 
++#include <linux/moduleparam.h>
+ #include "intel_drv.h"
+ 
+ #define PCI_LBPC 0xf4 /* legacy/combination backlight modes */
+@@ -189,6 +190,27 @@ u32 intel_panel_get_max_backlight(struct drm_device *dev)
+ 	return max;
+ }
+ 
++static int i915_panel_invert_brightness;
++MODULE_PARM_DESC(invert_brightness, "Invert backlight brightness "
++	"(-1 force normal, 0 machine defaults, 1 force inversion), please "
++	"report PCI device ID, subsystem vendor and subsystem device ID "
++	"to dri-devel@×××××××××××××××××.org, if your machine needs it. "
++	"It will then be included in an upcoming module version.");
++module_param_named(invert_brightness, i915_panel_invert_brightness, int, 0600);
++static u32 intel_panel_compute_brightness(struct drm_device *dev, u32 val)
++{
++	struct drm_i915_private *dev_priv = dev->dev_private;
++
++	if (i915_panel_invert_brightness < 0)
++		return val;
++
++	if (i915_panel_invert_brightness > 0 ||
++	    dev_priv->quirks & QUIRK_INVERT_BRIGHTNESS)
++		return intel_panel_get_max_backlight(dev) - val;
++
++	return val;
++}
++
+ u32 intel_panel_get_backlight(struct drm_device *dev)
+ {
+ 	struct drm_i915_private *dev_priv = dev->dev_private;
+@@ -209,6 +231,7 @@ u32 intel_panel_get_backlight(struct drm_device *dev)
+ 		}
+ 	}
+ 
++	val = intel_panel_compute_brightness(dev, val);
+ 	DRM_DEBUG_DRIVER("get backlight PWM = %d\n", val);
+ 	return val;
+ }
+@@ -226,6 +249,7 @@ static void intel_panel_actually_set_backlight(struct drm_device *dev, u32 level
+ 	u32 tmp;
+ 
+ 	DRM_DEBUG_DRIVER("set backlight PWM = %d\n", level);
++	level = intel_panel_compute_brightness(dev, level);
+ 
+ 	if (HAS_PCH_SPLIT(dev))
+ 		return intel_pch_panel_set_backlight(dev, level);
+@@ -335,6 +359,9 @@ int intel_panel_setup_backlight(struct drm_device *dev)
+ 
+ 	intel_panel_init_backlight(dev);
+ 
++	if (WARN_ON(dev_priv->backlight))
++		return -ENODEV;
++
+ 	if (dev_priv->int_lvds_connector)
+ 		connector = dev_priv->int_lvds_connector;
+ 	else if (dev_priv->int_edp_connector)
+@@ -362,8 +389,10 @@ int intel_panel_setup_backlight(struct drm_device *dev)
+ void intel_panel_destroy_backlight(struct drm_device *dev)
+ {
+ 	struct drm_i915_private *dev_priv = dev->dev_private;
+-	if (dev_priv->backlight)
++	if (dev_priv->backlight) {
+ 		backlight_device_unregister(dev_priv->backlight);
++		dev_priv->backlight = NULL;
++	}
+ }
+ #else
+ int intel_panel_setup_backlight(struct drm_device *dev)
+diff --git a/drivers/gpu/drm/i915/intel_sdvo.c b/drivers/gpu/drm/i915/intel_sdvo.c |
2224 |
+index c0ba260cc222..8d55a33d7226 100644 |
2225 |
+--- a/drivers/gpu/drm/i915/intel_sdvo.c |
2226 |
++++ b/drivers/gpu/drm/i915/intel_sdvo.c |
2227 |
+@@ -2265,6 +2265,18 @@ intel_sdvo_output_setup(struct intel_sdvo *intel_sdvo, uint16_t flags) |
2228 |
+ return true; |
2229 |
+ } |
2230 |
+ |
2231 |
++static void intel_sdvo_output_cleanup(struct intel_sdvo *intel_sdvo) |
2232 |
++{ |
2233 |
++ struct drm_device *dev = intel_sdvo->base.base.dev; |
2234 |
++ struct drm_connector *connector, *tmp; |
2235 |
++ |
2236 |
++ list_for_each_entry_safe(connector, tmp, |
2237 |
++ &dev->mode_config.connector_list, head) { |
2238 |
++ if (intel_attached_encoder(connector) == &intel_sdvo->base) |
2239 |
++ intel_sdvo_destroy(connector); |
2240 |
++ } |
2241 |
++} |
2242 |
++ |
2243 |
+ static bool intel_sdvo_tv_create_property(struct intel_sdvo *intel_sdvo, |
2244 |
+ struct intel_sdvo_connector *intel_sdvo_connector, |
2245 |
+ int type) |
2246 |
+@@ -2583,7 +2595,8 @@ bool intel_sdvo_init(struct drm_device *dev, int sdvo_reg) |
2247 |
+ intel_sdvo->caps.output_flags) != true) { |
2248 |
+ DRM_DEBUG_KMS("SDVO output failed to setup on SDVO%c\n", |
2249 |
+ IS_SDVOB(sdvo_reg) ? 'B' : 'C'); |
2250 |
+- goto err; |
2251 |
++ /* Output_setup can leave behind connectors! */ |
2252 |
++ goto err_output; |
2253 |
+ } |
2254 |
+ |
2255 |
+ /* Only enable the hotplug irq if we need it, to work around noisy |
2256 |
+@@ -2596,12 +2609,12 @@ bool intel_sdvo_init(struct drm_device *dev, int sdvo_reg) |
2257 |
+ |
2258 |
+ /* Set the input timing to the screen. Assume always input 0. */ |
2259 |
+ if (!intel_sdvo_set_target_input(intel_sdvo)) |
2260 |
+- goto err; |
2261 |
++ goto err_output; |
2262 |
+ |
2263 |
+ if (!intel_sdvo_get_input_pixel_clock_range(intel_sdvo, |
2264 |
+ &intel_sdvo->pixel_clock_min, |
2265 |
+ &intel_sdvo->pixel_clock_max)) |
2266 |
+- goto err; |
2267 |
++ goto err_output; |
2268 |
+ |
2269 |
+ DRM_DEBUG_KMS("%s device VID/DID: %02X:%02X.%02X, " |
2270 |
+ "clock range %dMHz - %dMHz, " |
2271 |
+@@ -2621,6 +2634,9 @@ bool intel_sdvo_init(struct drm_device *dev, int sdvo_reg) |
2272 |
+ (SDVO_OUTPUT_TMDS1 | SDVO_OUTPUT_RGB1) ? 'Y' : 'N'); |
2273 |
+ return true; |
2274 |
+ |
2275 |
++err_output: |
2276 |
++ intel_sdvo_output_cleanup(intel_sdvo); |
2277 |
++ |
2278 |
+ err: |
2279 |
+ drm_encoder_cleanup(&intel_encoder->base); |
2280 |
+ i2c_del_adapter(&intel_sdvo->ddc); |
2281 |
+diff --git a/drivers/gpu/drm/nouveau/nouveau_acpi.c b/drivers/gpu/drm/nouveau/nouveau_acpi.c |
+index 284bd25d5d21..4339694bd9f8 100644 |
+--- a/drivers/gpu/drm/nouveau/nouveau_acpi.c |
++++ b/drivers/gpu/drm/nouveau/nouveau_acpi.c |
+@@ -375,9 +375,6 @@ bool nouveau_acpi_rom_supported(struct pci_dev *pdev) |
+ acpi_status status; |
+ acpi_handle dhandle, rom_handle; |
+ |
+- if (!nouveau_dsm_priv.dsm_detected && !nouveau_dsm_priv.optimus_detected) |
+- return false; |
+- |
+ dhandle = DEVICE_ACPI_HANDLE(&pdev->dev); |
+ if (!dhandle) |
+ return false; |
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c |
+index 12ce044f12f5..2c3d5c8b7a3f 100644 |
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c |
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c |
+@@ -946,7 +946,7 @@ nouveau_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem) |
+ if (dev_priv->gart_info.type == NOUVEAU_GART_AGP) { |
+ mem->bus.offset = mem->start << PAGE_SHIFT; |
+ mem->bus.base = dev_priv->gart_info.aper_base; |
+- mem->bus.is_iomem = true; |
++ mem->bus.is_iomem = !dev->agp->cant_use_aperture; |
+ } |
+ #endif |
+ break; |
+diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c |
+index ebbfbd201739..dc612efbeb7a 100644 |
+--- a/drivers/gpu/drm/radeon/atombios_crtc.c |
++++ b/drivers/gpu/drm/radeon/atombios_crtc.c |
+@@ -573,6 +573,11 @@ static u32 atombios_adjust_pll(struct drm_crtc *crtc, |
+ /* use frac fb div on APUs */ |
+ if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE61(rdev)) |
+ pll->flags |= RADEON_PLL_USE_FRAC_FB_DIV; |
++ /* use frac fb div on RS780/RS880 */ |
++ if ((rdev->family == CHIP_RS780) || (rdev->family == CHIP_RS880)) |
++ pll->flags |= RADEON_PLL_USE_FRAC_FB_DIV; |
++ if (ASIC_IS_DCE32(rdev) && mode->clock > 165000) |
++ pll->flags |= RADEON_PLL_USE_FRAC_FB_DIV; |
+ } else { |
+ pll->flags |= RADEON_PLL_LEGACY; |
+ |
+diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c |
+index ad72295403a8..df62c393f2f5 100644 |
+--- a/drivers/gpu/drm/radeon/evergreen.c |
++++ b/drivers/gpu/drm/radeon/evergreen.c |
+@@ -1292,7 +1292,7 @@ void evergreen_mc_resume(struct radeon_device *rdev, struct evergreen_mc_save *s |
+ WREG32(BIF_FB_EN, FB_READ_EN | FB_WRITE_EN); |
+ |
+ for (i = 0; i < rdev->num_crtc; i++) { |
+- if (save->crtc_enabled) { |
++ if (save->crtc_enabled[i]) { |
+ if (ASIC_IS_DCE6(rdev)) { |
+ tmp = RREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i]); |
+ tmp |= EVERGREEN_CRTC_BLANK_DATA_EN; |
+@@ -1874,7 +1874,7 @@ static void evergreen_gpu_init(struct radeon_device *rdev) |
+ case CHIP_SUMO: |
+ rdev->config.evergreen.num_ses = 1; |
+ rdev->config.evergreen.max_pipes = 4; |
+- rdev->config.evergreen.max_tile_pipes = 2; |
++ rdev->config.evergreen.max_tile_pipes = 4; |
+ if (rdev->pdev->device == 0x9648) |
+ rdev->config.evergreen.max_simds = 3; |
+ else if ((rdev->pdev->device == 0x9647) || |
+@@ -1963,7 +1963,7 @@ static void evergreen_gpu_init(struct radeon_device *rdev) |
+ break; |
+ case CHIP_CAICOS: |
+ rdev->config.evergreen.num_ses = 1; |
+- rdev->config.evergreen.max_pipes = 4; |
++ rdev->config.evergreen.max_pipes = 2; |
+ rdev->config.evergreen.max_tile_pipes = 2; |
+ rdev->config.evergreen.max_simds = 2; |
+ rdev->config.evergreen.max_backends = 1 * rdev->config.evergreen.num_ses; |
+@@ -3219,6 +3219,8 @@ static int evergreen_startup(struct radeon_device *rdev) |
+ /* enable pcie gen2 link */ |
+ evergreen_pcie_gen2_enable(rdev); |
+ |
++ evergreen_mc_program(rdev); |
++ |
+ if (ASIC_IS_DCE5(rdev)) { |
+ if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw || !rdev->mc_fw) { |
+ r = ni_init_microcode(rdev); |
+@@ -3246,7 +3248,6 @@ static int evergreen_startup(struct radeon_device *rdev) |
+ if (r) |
+ return r; |
+ |
+- evergreen_mc_program(rdev); |
+ if (rdev->flags & RADEON_IS_AGP) { |
+ evergreen_agp_enable(rdev); |
+ } else { |
+diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c |
+index f481da3a4f29..461262eee79a 100644 |
+--- a/drivers/gpu/drm/radeon/ni.c |
++++ b/drivers/gpu/drm/radeon/ni.c |
+@@ -1552,6 +1552,8 @@ static int cayman_startup(struct radeon_device *rdev) |
+ /* enable pcie gen2 link */ |
+ evergreen_pcie_gen2_enable(rdev); |
+ |
++ evergreen_mc_program(rdev); |
++ |
+ if (rdev->flags & RADEON_IS_IGP) { |
+ if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw) { |
+ r = ni_init_microcode(rdev); |
+@@ -1580,7 +1582,6 @@ static int cayman_startup(struct radeon_device *rdev) |
+ if (r) |
+ return r; |
+ |
+- evergreen_mc_program(rdev); |
+ r = cayman_pcie_gart_enable(rdev); |
+ if (r) |
+ return r; |
+diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c |
+index 49b622904217..1555cd694111 100644 |
+--- a/drivers/gpu/drm/radeon/r600.c |
++++ b/drivers/gpu/drm/radeon/r600.c |
+@@ -2431,6 +2431,8 @@ int r600_startup(struct radeon_device *rdev) |
+ /* enable pcie gen2 link */ |
+ r600_pcie_gen2_enable(rdev); |
+ |
++ r600_mc_program(rdev); |
++ |
+ if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw) { |
+ r = r600_init_microcode(rdev); |
+ if (r) { |
+@@ -2443,7 +2445,6 @@ int r600_startup(struct radeon_device *rdev) |
+ if (r) |
+ return r; |
+ |
+- r600_mc_program(rdev); |
+ if (rdev->flags & RADEON_IS_AGP) { |
+ r600_agp_enable(rdev); |
+ } else { |
+diff --git a/drivers/gpu/drm/radeon/r600_hdmi.c b/drivers/gpu/drm/radeon/r600_hdmi.c |
+index 0b5920671450..61ffe3c76474 100644 |
+--- a/drivers/gpu/drm/radeon/r600_hdmi.c |
++++ b/drivers/gpu/drm/radeon/r600_hdmi.c |
+@@ -530,7 +530,7 @@ void r600_hdmi_enable(struct drm_encoder *encoder) |
+ WREG32_P(radeon_encoder->hdmi_config_offset + 0x4, 0x1, ~0x1); |
+ } else if (ASIC_IS_DCE3(rdev)) { |
+ /* TODO */ |
+- } else if (rdev->family >= CHIP_R600) { |
++ } else if (ASIC_IS_DCE2(rdev)) { |
+ switch (radeon_encoder->encoder_id) { |
+ case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: |
+ WREG32_P(AVIVO_TMDSA_CNTL, AVIVO_TMDSA_CNTL_HDMI_EN, |
+@@ -602,7 +602,7 @@ void r600_hdmi_disable(struct drm_encoder *encoder) |
+ WREG32_P(radeon_encoder->hdmi_config_offset + 0xc, 0, ~0x1); |
+ } else if (ASIC_IS_DCE32(rdev)) { |
+ WREG32_P(radeon_encoder->hdmi_config_offset + 0x4, 0, ~0x1); |
+- } else if (rdev->family >= CHIP_R600 && !ASIC_IS_DCE3(rdev)) { |
++ } else if (ASIC_IS_DCE2(rdev) && !ASIC_IS_DCE3(rdev)) { |
+ switch (radeon_encoder->encoder_id) { |
+ case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: |
+ WREG32_P(AVIVO_TMDSA_CNTL, 0, |
+diff --git a/drivers/gpu/drm/radeon/radeon_atpx_handler.c b/drivers/gpu/drm/radeon/radeon_atpx_handler.c |
+index 2a2cf0b88a28..428bce6cb4f0 100644 |
+--- a/drivers/gpu/drm/radeon/radeon_atpx_handler.c |
++++ b/drivers/gpu/drm/radeon/radeon_atpx_handler.c |
+@@ -202,6 +202,13 @@ static bool radeon_atpx_detect(void) |
+ has_atpx |= (radeon_atpx_pci_probe_handle(pdev) == true); |
+ } |
+ |
++ /* some newer PX laptops mark the dGPU as a non-VGA display device */ |
++ while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) { |
++ vga_count++; |
++ |
++ has_atpx |= (radeon_atpx_pci_probe_handle(pdev) == true); |
++ } |
++ |
+ if (has_atpx && vga_count == 2) { |
+ acpi_get_name(radeon_atpx_priv.atpx_handle, ACPI_FULL_PATHNAME, &buffer); |
+ printk(KERN_INFO "VGA switcheroo: detected switching method %s handle\n", |
+diff --git a/drivers/gpu/drm/radeon/radeon_combios.c b/drivers/gpu/drm/radeon/radeon_combios.c |
+index 07d0bcd718a1..cf5dd63a95c3 100644 |
+--- a/drivers/gpu/drm/radeon/radeon_combios.c |
++++ b/drivers/gpu/drm/radeon/radeon_combios.c |
+@@ -898,10 +898,14 @@ struct radeon_encoder_primary_dac *radeon_combios_get_primary_dac_info(struct |
+ } |
+ |
+ /* quirks */ |
++ /* Radeon 7000 (RV100) */ |
++ if (((dev->pdev->device == 0x5159) && |
++ (dev->pdev->subsystem_vendor == 0x174B) && |
++ (dev->pdev->subsystem_device == 0x7c28)) || |
+ /* Radeon 9100 (R200) */ |
+- if ((dev->pdev->device == 0x514D) && |
++ ((dev->pdev->device == 0x514D) && |
+ (dev->pdev->subsystem_vendor == 0x174B) && |
+- (dev->pdev->subsystem_device == 0x7149)) { |
++ (dev->pdev->subsystem_device == 0x7149))) { |
+ /* vbios value is bad, use the default */ |
+ found = 0; |
+ } |
+@@ -1484,6 +1488,9 @@ bool radeon_get_legacy_connector_info_from_table(struct drm_device *dev) |
+ of_machine_is_compatible("PowerBook6,7")) { |
+ /* ibook */ |
+ rdev->mode_info.connector_table = CT_IBOOK; |
++ } else if (of_machine_is_compatible("PowerMac3,5")) { |
++ /* PowerMac G4 Silver radeon 7500 */ |
++ rdev->mode_info.connector_table = CT_MAC_G4_SILVER; |
+ } else if (of_machine_is_compatible("PowerMac4,4")) { |
+ /* emac */ |
+ rdev->mode_info.connector_table = CT_EMAC; |
+@@ -1509,6 +1516,11 @@ bool radeon_get_legacy_connector_info_from_table(struct drm_device *dev) |
+ (rdev->pdev->subsystem_device == 0x4150)) { |
+ /* Mac G5 tower 9600 */ |
+ rdev->mode_info.connector_table = CT_MAC_G5_9600; |
++ } else if ((rdev->pdev->device == 0x4c66) && |
++ (rdev->pdev->subsystem_vendor == 0x1002) && |
++ (rdev->pdev->subsystem_device == 0x4c66)) { |
++ /* SAM440ep RV250 embedded board */ |
++ rdev->mode_info.connector_table = CT_SAM440EP; |
+ } else |
+ #endif /* CONFIG_PPC_PMAC */ |
+ #ifdef CONFIG_PPC64 |
+@@ -2082,6 +2094,115 @@ bool radeon_get_legacy_connector_info_from_table(struct drm_device *dev) |
+ CONNECTOR_OBJECT_ID_SVIDEO, |
+ &hpd); |
+ break; |
++ case CT_SAM440EP: |
++ DRM_INFO("Connector Table: %d (SAM440ep embedded board)\n", |
++ rdev->mode_info.connector_table); |
++ /* LVDS */ |
++ ddc_i2c = combios_setup_i2c_bus(rdev, DDC_NONE_DETECTED, 0, 0); |
++ hpd.hpd = RADEON_HPD_NONE; |
++ radeon_add_legacy_encoder(dev, |
++ radeon_get_encoder_enum(dev, |
++ ATOM_DEVICE_LCD1_SUPPORT, |
++ 0), |
++ ATOM_DEVICE_LCD1_SUPPORT); |
++ radeon_add_legacy_connector(dev, 0, ATOM_DEVICE_LCD1_SUPPORT, |
++ DRM_MODE_CONNECTOR_LVDS, &ddc_i2c, |
++ CONNECTOR_OBJECT_ID_LVDS, |
++ &hpd); |
++ /* DVI-I - secondary dac, int tmds */ |
++ ddc_i2c = combios_setup_i2c_bus(rdev, DDC_DVI, 0, 0); |
++ hpd.hpd = RADEON_HPD_1; /* ??? */ |
++ radeon_add_legacy_encoder(dev, |
++ radeon_get_encoder_enum(dev, |
++ ATOM_DEVICE_DFP1_SUPPORT, |
++ 0), |
++ ATOM_DEVICE_DFP1_SUPPORT); |
++ radeon_add_legacy_encoder(dev, |
++ radeon_get_encoder_enum(dev, |
++ ATOM_DEVICE_CRT2_SUPPORT, |
++ 2), |
++ ATOM_DEVICE_CRT2_SUPPORT); |
++ radeon_add_legacy_connector(dev, 1, |
++ ATOM_DEVICE_DFP1_SUPPORT | |
++ ATOM_DEVICE_CRT2_SUPPORT, |
++ DRM_MODE_CONNECTOR_DVII, &ddc_i2c, |
++ CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_I, |
++ &hpd); |
++ /* VGA - primary dac */ |
++ ddc_i2c = combios_setup_i2c_bus(rdev, DDC_VGA, 0, 0); |
++ hpd.hpd = RADEON_HPD_NONE; |
++ radeon_add_legacy_encoder(dev, |
++ radeon_get_encoder_enum(dev, |
++ ATOM_DEVICE_CRT1_SUPPORT, |
++ 1), |
++ ATOM_DEVICE_CRT1_SUPPORT); |
++ radeon_add_legacy_connector(dev, 2, |
++ ATOM_DEVICE_CRT1_SUPPORT, |
++ DRM_MODE_CONNECTOR_VGA, &ddc_i2c, |
++ CONNECTOR_OBJECT_ID_VGA, |
++ &hpd); |
++ /* TV - TV DAC */ |
++ ddc_i2c.valid = false; |
++ hpd.hpd = RADEON_HPD_NONE; |
++ radeon_add_legacy_encoder(dev, |
++ radeon_get_encoder_enum(dev, |
++ ATOM_DEVICE_TV1_SUPPORT, |
++ 2), |
++ ATOM_DEVICE_TV1_SUPPORT); |
++ radeon_add_legacy_connector(dev, 3, ATOM_DEVICE_TV1_SUPPORT, |
++ DRM_MODE_CONNECTOR_SVIDEO, |
++ &ddc_i2c, |
++ CONNECTOR_OBJECT_ID_SVIDEO, |
++ &hpd); |
++ break; |
++ case CT_MAC_G4_SILVER: |
++ DRM_INFO("Connector Table: %d (mac g4 silver)\n", |
++ rdev->mode_info.connector_table); |
++ /* DVI-I - tv dac, int tmds */ |
++ ddc_i2c = combios_setup_i2c_bus(rdev, DDC_DVI, 0, 0); |
++ hpd.hpd = RADEON_HPD_1; /* ??? */ |
++ radeon_add_legacy_encoder(dev, |
++ radeon_get_encoder_enum(dev, |
++ ATOM_DEVICE_DFP1_SUPPORT, |
++ 0), |
++ ATOM_DEVICE_DFP1_SUPPORT); |
++ radeon_add_legacy_encoder(dev, |
++ radeon_get_encoder_enum(dev, |
++ ATOM_DEVICE_CRT2_SUPPORT, |
++ 2), |
++ ATOM_DEVICE_CRT2_SUPPORT); |
++ radeon_add_legacy_connector(dev, 0, |
++ ATOM_DEVICE_DFP1_SUPPORT | |
++ ATOM_DEVICE_CRT2_SUPPORT, |
++ DRM_MODE_CONNECTOR_DVII, &ddc_i2c, |
++ CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_I, |
++ &hpd); |
++ /* VGA - primary dac */ |
++ ddc_i2c = combios_setup_i2c_bus(rdev, DDC_VGA, 0, 0); |
++ hpd.hpd = RADEON_HPD_NONE; |
++ radeon_add_legacy_encoder(dev, |
++ radeon_get_encoder_enum(dev, |
++ ATOM_DEVICE_CRT1_SUPPORT, |
++ 1), |
++ ATOM_DEVICE_CRT1_SUPPORT); |
++ radeon_add_legacy_connector(dev, 1, ATOM_DEVICE_CRT1_SUPPORT, |
++ DRM_MODE_CONNECTOR_VGA, &ddc_i2c, |
++ CONNECTOR_OBJECT_ID_VGA, |
++ &hpd); |
++ /* TV - TV DAC */ |
++ ddc_i2c.valid = false; |
++ hpd.hpd = RADEON_HPD_NONE; |
++ radeon_add_legacy_encoder(dev, |
++ radeon_get_encoder_enum(dev, |
++ ATOM_DEVICE_TV1_SUPPORT, |
++ 2), |
++ ATOM_DEVICE_TV1_SUPPORT); |
++ radeon_add_legacy_connector(dev, 2, ATOM_DEVICE_TV1_SUPPORT, |
++ DRM_MODE_CONNECTOR_SVIDEO, |
++ &ddc_i2c, |
++ CONNECTOR_OBJECT_ID_SVIDEO, |
++ &hpd); |
++ break; |
+ default: |
+ DRM_INFO("Connector table: %d (invalid)\n", |
+ rdev->mode_info.connector_table); |
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c |
+index ab63bcd17277..1334dbd15c1b 100644 |
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c |
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c |
+@@ -1423,6 +1423,24 @@ struct drm_connector_funcs radeon_dp_connector_funcs = { |
+ .force = radeon_dvi_force, |
+ }; |
+ |
++static const struct drm_connector_funcs radeon_edp_connector_funcs = { |
++ .dpms = drm_helper_connector_dpms, |
++ .detect = radeon_dp_detect, |
++ .fill_modes = drm_helper_probe_single_connector_modes, |
++ .set_property = radeon_lvds_set_property, |
++ .destroy = radeon_dp_connector_destroy, |
++ .force = radeon_dvi_force, |
++}; |
++ |
++static const struct drm_connector_funcs radeon_lvds_bridge_connector_funcs = { |
++ .dpms = drm_helper_connector_dpms, |
++ .detect = radeon_dp_detect, |
++ .fill_modes = drm_helper_probe_single_connector_modes, |
++ .set_property = radeon_lvds_set_property, |
++ .destroy = radeon_dp_connector_destroy, |
++ .force = radeon_dvi_force, |
++}; |
++ |
+ void |
+ radeon_add_atom_connector(struct drm_device *dev, |
+ uint32_t connector_id, |
+@@ -1514,8 +1532,6 @@ radeon_add_atom_connector(struct drm_device *dev, |
+ goto failed; |
+ radeon_dig_connector->igp_lane_info = igp_lane_info; |
+ radeon_connector->con_priv = radeon_dig_connector; |
+- drm_connector_init(dev, &radeon_connector->base, &radeon_dp_connector_funcs, connector_type); |
+- drm_connector_helper_add(&radeon_connector->base, &radeon_dp_connector_helper_funcs); |
+ if (i2c_bus->valid) { |
+ /* add DP i2c bus */ |
+ if (connector_type == DRM_MODE_CONNECTOR_eDP) |
+@@ -1532,6 +1548,10 @@ radeon_add_atom_connector(struct drm_device *dev, |
+ case DRM_MODE_CONNECTOR_VGA: |
+ case DRM_MODE_CONNECTOR_DVIA: |
+ default: |
++ drm_connector_init(dev, &radeon_connector->base, |
++ &radeon_dp_connector_funcs, connector_type); |
++ drm_connector_helper_add(&radeon_connector->base, |
++ &radeon_dp_connector_helper_funcs); |
+ connector->interlace_allowed = true; |
+ connector->doublescan_allowed = true; |
+ radeon_connector->dac_load_detect = true; |
+@@ -1544,6 +1564,10 @@ radeon_add_atom_connector(struct drm_device *dev, |
+ case DRM_MODE_CONNECTOR_HDMIA: |
+ case DRM_MODE_CONNECTOR_HDMIB: |
+ case DRM_MODE_CONNECTOR_DisplayPort: |
++ drm_connector_init(dev, &radeon_connector->base, |
++ &radeon_dp_connector_funcs, connector_type); |
++ drm_connector_helper_add(&radeon_connector->base, |
++ &radeon_dp_connector_helper_funcs); |
+ drm_connector_attach_property(&radeon_connector->base, |
+ rdev->mode_info.underscan_property, |
+ UNDERSCAN_OFF); |
+@@ -1568,6 +1592,10 @@ radeon_add_atom_connector(struct drm_device *dev, |
+ break; |
+ case DRM_MODE_CONNECTOR_LVDS: |
+ case DRM_MODE_CONNECTOR_eDP: |
++ drm_connector_init(dev, &radeon_connector->base, |
++ &radeon_lvds_bridge_connector_funcs, connector_type); |
++ drm_connector_helper_add(&radeon_connector->base, |
++ &radeon_dp_connector_helper_funcs); |
+ drm_connector_attach_property(&radeon_connector->base, |
+ dev->mode_config.scaling_mode_property, |
+ DRM_MODE_SCALE_FULLSCREEN); |
+@@ -1731,7 +1759,7 @@ radeon_add_atom_connector(struct drm_device *dev, |
+ goto failed; |
+ radeon_dig_connector->igp_lane_info = igp_lane_info; |
+ radeon_connector->con_priv = radeon_dig_connector; |
+- drm_connector_init(dev, &radeon_connector->base, &radeon_dp_connector_funcs, connector_type); |
++ drm_connector_init(dev, &radeon_connector->base, &radeon_edp_connector_funcs, connector_type); |
+ drm_connector_helper_add(&radeon_connector->base, &radeon_dp_connector_helper_funcs); |
+ if (i2c_bus->valid) { |
+ /* add DP i2c bus */ |
+diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c |
+index 00d9cac69f93..60404f4b2446 100644 |
+--- a/drivers/gpu/drm/radeon/radeon_display.c |
++++ b/drivers/gpu/drm/radeon/radeon_display.c |
+@@ -750,6 +750,7 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector) |
+ if (radeon_connector->edid) { |
+ drm_mode_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid); |
+ ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid); |
++ drm_edid_to_eld(&radeon_connector->base, radeon_connector->edid); |
+ return ret; |
+ } |
+ drm_mode_connector_update_edid_property(&radeon_connector->base, NULL); |
+diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c |
+index 3c2628b14d56..9b46238b9692 100644 |
+--- a/drivers/gpu/drm/radeon/radeon_kms.c |
++++ b/drivers/gpu/drm/radeon/radeon_kms.c |
+@@ -39,8 +39,12 @@ int radeon_driver_unload_kms(struct drm_device *dev) |
+ |
+ if (rdev == NULL) |
+ return 0; |
++ if (rdev->rmmio == NULL) |
++ goto done_free; |
+ radeon_modeset_fini(rdev); |
+ radeon_device_fini(rdev); |
++ |
++done_free: |
+ kfree(rdev); |
+ dev->dev_private = NULL; |
+ return 0; |
+diff --git a/drivers/gpu/drm/radeon/radeon_mode.h b/drivers/gpu/drm/radeon/radeon_mode.h |
+index dabfefda8f55..65da706bc7be 100644 |
+--- a/drivers/gpu/drm/radeon/radeon_mode.h |
++++ b/drivers/gpu/drm/radeon/radeon_mode.h |
+@@ -210,6 +210,8 @@ enum radeon_connector_table { |
+ CT_RN50_POWER, |
+ CT_MAC_X800, |
+ CT_MAC_G5_9600, |
++ CT_SAM440EP, |
++ CT_MAC_G4_SILVER |
+ }; |
+ |
+ enum radeon_dvo_chip { |
+diff --git a/drivers/gpu/drm/radeon/rv770.c b/drivers/gpu/drm/radeon/rv770.c |
+index 4a3937fa2dc7..1ec1255520ad 100644 |
+--- a/drivers/gpu/drm/radeon/rv770.c |
++++ b/drivers/gpu/drm/radeon/rv770.c |
+@@ -1058,6 +1058,8 @@ static int rv770_startup(struct radeon_device *rdev) |
+ /* enable pcie gen2 link */ |
+ rv770_pcie_gen2_enable(rdev); |
+ |
++ rv770_mc_program(rdev); |
++ |
+ if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw) { |
+ r = r600_init_microcode(rdev); |
+ if (r) { |
+@@ -1070,7 +1072,6 @@ static int rv770_startup(struct radeon_device *rdev) |
+ if (r) |
+ return r; |
+ |
+- rv770_mc_program(rdev); |
+ if (rdev->flags & RADEON_IS_AGP) { |
+ rv770_agp_enable(rdev); |
+ } else { |
+diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c |
+index c22b5e7849f8..e710073923e9 100644 |
+--- a/drivers/gpu/drm/radeon/si.c |
++++ b/drivers/gpu/drm/radeon/si.c |
+@@ -3834,6 +3834,8 @@ static int si_startup(struct radeon_device *rdev) |
+ struct radeon_ring *ring; |
+ int r; |
+ |
++ si_mc_program(rdev); |
++ |
+ if (!rdev->me_fw || !rdev->pfp_fw || !rdev->ce_fw || |
+ !rdev->rlc_fw || !rdev->mc_fw) { |
+ r = si_init_microcode(rdev); |
+@@ -3853,7 +3855,6 @@ static int si_startup(struct radeon_device *rdev) |
+ if (r) |
+ return r; |
+ |
+- si_mc_program(rdev); |
+ r = si_pcie_gart_enable(rdev); |
+ if (r) |
+ return r; |
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c |
+index ceb70dde21ab..a67e61be38f2 100644 |
+--- a/drivers/gpu/drm/ttm/ttm_bo.c |
++++ b/drivers/gpu/drm/ttm/ttm_bo.c |
+@@ -1091,24 +1091,32 @@ out_unlock: |
+ return ret; |
+ } |
+ |
+-static int ttm_bo_mem_compat(struct ttm_placement *placement, |
+- struct ttm_mem_reg *mem) |
++static bool ttm_bo_mem_compat(struct ttm_placement *placement, |
++ struct ttm_mem_reg *mem, |
++ uint32_t *new_flags) |
+ { |
+ int i; |
+ |
+ if (mem->mm_node && placement->lpfn != 0 && |
+ (mem->start < placement->fpfn || |
+ mem->start + mem->num_pages > placement->lpfn)) |
+- return -1; |
++ return false; |
+ |
+ for (i = 0; i < placement->num_placement; i++) { |
+- if ((placement->placement[i] & mem->placement & |
+- TTM_PL_MASK_CACHING) && |
+- (placement->placement[i] & mem->placement & |
+- TTM_PL_MASK_MEM)) |
+- return i; |
++ *new_flags = placement->placement[i]; |
++ if ((*new_flags & mem->placement & TTM_PL_MASK_CACHING) && |
++ (*new_flags & mem->placement & TTM_PL_MASK_MEM)) |
++ return true; |
++ } |
++ |
++ for (i = 0; i < placement->num_busy_placement; i++) { |
++ *new_flags = placement->busy_placement[i]; |
++ if ((*new_flags & mem->placement & TTM_PL_MASK_CACHING) && |
++ (*new_flags & mem->placement & TTM_PL_MASK_MEM)) |
++ return true; |
+ } |
+- return -1; |
++ |
++ return false; |
+ } |
+ |
+ int ttm_bo_validate(struct ttm_buffer_object *bo, |
+@@ -1117,6 +1125,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo, |
+ bool no_wait_gpu) |
+ { |
+ int ret; |
++ uint32_t new_flags; |
+ |
+ BUG_ON(!atomic_read(&bo->reserved)); |
+ /* Check that range is valid */ |
+@@ -1127,8 +1136,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo, |
+ /* |
+ * Check whether we need to move buffer. |
+ */ |
+- ret = ttm_bo_mem_compat(placement, &bo->mem); |
+- if (ret < 0) { |
++ if (!ttm_bo_mem_compat(placement, &bo->mem, &new_flags)) { |
+ ret = ttm_bo_move_buffer(bo, placement, interruptible, no_wait_reserve, no_wait_gpu); |
+ if (ret) |
+ return ret; |
+@@ -1137,7 +1145,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo, |
+ * Use the access and other non-mapping-related flag bits from |
+ * the compatible memory placement flags to the active flags |
+ */ |
+- ttm_flag_masked(&bo->mem.placement, placement->placement[ret], |
++ ttm_flag_masked(&bo->mem.placement, new_flags, |
+ ~TTM_PL_MASK_MEMTYPE); |
+ } |
+ /* |
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c |
+index 3c447bf317cb..6651cb328598 100644 |
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c |
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c |
+@@ -147,7 +147,7 @@ static int vmw_fb_check_var(struct fb_var_screeninfo *var, |
+ } |
+ |
+ if (!vmw_kms_validate_mode_vram(vmw_priv, |
+- info->fix.line_length, |
++ var->xres * var->bits_per_pixel/8, |
+ var->yoffset + var->yres)) { |
+ DRM_ERROR("Requested geom can not fit in framebuffer\n"); |
+ return -EINVAL; |
+@@ -162,6 +162,8 @@ static int vmw_fb_set_par(struct fb_info *info) |
+ struct vmw_private *vmw_priv = par->vmw_priv; |
+ int ret; |
+ |
++ info->fix.line_length = info->var.xres * info->var.bits_per_pixel/8; |
++ |
+ ret = vmw_kms_write_svga(vmw_priv, info->var.xres, info->var.yres, |
+ info->fix.line_length, |
+ par->bpp, par->depth); |
+@@ -177,6 +179,7 @@ static int vmw_fb_set_par(struct fb_info *info) |
+ vmw_write(vmw_priv, SVGA_REG_DISPLAY_POSITION_Y, info->var.yoffset); |
+ vmw_write(vmw_priv, SVGA_REG_DISPLAY_WIDTH, info->var.xres); |
+ vmw_write(vmw_priv, SVGA_REG_DISPLAY_HEIGHT, info->var.yres); |
++ vmw_write(vmw_priv, SVGA_REG_BYTES_PER_LINE, info->fix.line_length); |
+ vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, SVGA_ID_INVALID); |
+ } |
+ |
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c |
+index 265284620980..3bfd74f1ad49 100644 |
+--- a/drivers/hid/hid-logitech-dj.c |
++++ b/drivers/hid/hid-logitech-dj.c |
+@@ -474,28 +474,38 @@ static int logi_dj_recv_send_report(struct dj_receiver_dev *djrcv_dev, |
+ |
+ static int logi_dj_recv_query_paired_devices(struct dj_receiver_dev *djrcv_dev) |
+ { |
+- struct dj_report dj_report; |
++ struct dj_report *dj_report; |
++ int retval; |
+ |
+- memset(&dj_report, 0, sizeof(dj_report)); |
+- dj_report.report_id = REPORT_ID_DJ_SHORT; |
+- dj_report.device_index = 0xFF; |
+- dj_report.report_type = REPORT_TYPE_CMD_GET_PAIRED_DEVICES; |
+- return logi_dj_recv_send_report(djrcv_dev, &dj_report); |
++ dj_report = kzalloc(sizeof(struct dj_report), GFP_KERNEL); |
++ if (!dj_report) |
++ return -ENOMEM; |
++ dj_report->report_id = REPORT_ID_DJ_SHORT; |
++ dj_report->device_index = 0xFF; |
++ dj_report->report_type = REPORT_TYPE_CMD_GET_PAIRED_DEVICES; |
++ retval = logi_dj_recv_send_report(djrcv_dev, dj_report); |
++ kfree(dj_report); |
++ return retval; |
+ } |
+ |
+ |
+ static int logi_dj_recv_switch_to_dj_mode(struct dj_receiver_dev *djrcv_dev, |
+ unsigned timeout) |
+ { |
+- struct dj_report dj_report; |
++ struct dj_report *dj_report; |
++ int retval; |
+ |
+- memset(&dj_report, 0, sizeof(dj_report)); |
+- dj_report.report_id = REPORT_ID_DJ_SHORT; |
+- dj_report.device_index = 0xFF; |
+- dj_report.report_type = REPORT_TYPE_CMD_SWITCH; |
+- dj_report.report_params[CMD_SWITCH_PARAM_DEVBITFIELD] = 0x3F; |
+- dj_report.report_params[CMD_SWITCH_PARAM_TIMEOUT_SECONDS] = (u8)timeout; |
+- return logi_dj_recv_send_report(djrcv_dev, &dj_report); |
++ dj_report = kzalloc(sizeof(struct dj_report), GFP_KERNEL); |
++ if (!dj_report) |
++ return -ENOMEM; |
++ dj_report->report_id = REPORT_ID_DJ_SHORT; |
++ dj_report->device_index = 0xFF; |
++ dj_report->report_type = REPORT_TYPE_CMD_SWITCH; |
++ dj_report->report_params[CMD_SWITCH_PARAM_DEVBITFIELD] = 0x3F; |
++ dj_report->report_params[CMD_SWITCH_PARAM_TIMEOUT_SECONDS] = (u8)timeout; |
++ retval = logi_dj_recv_send_report(djrcv_dev, dj_report); |
++ kfree(dj_report); |
++ return retval; |
+ } |
+ |
+ |
+diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c |
+index 8af25a097d75..d01edf3c7b28 100644 |
+--- a/drivers/hv/ring_buffer.c |
++++ b/drivers/hv/ring_buffer.c |
+@@ -383,7 +383,7 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info, |
+ sizeof(u64)); |
+ |
+ /* Make sure we flush all writes before updating the writeIndex */ |
+- smp_wmb(); |
++ wmb(); |
+ |
+ /* Now, update the write location */ |
+ hv_set_next_write_location(outring_info, next_write_location); |
+@@ -485,7 +485,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info, void *buffer, |
+ /* Make sure all reads are done before we update the read index since */ |
+ /* the writer may start writing to the read area once the read index */ |
+ /*is updated */ |
+- smp_mb(); |
++ mb(); |
+ |
+ /* Update the read index */ |
+ hv_set_next_read_location(inring_info, next_read_location); |
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c |
+index a220e5746d67..10619b34bd31 100644 |
+--- a/drivers/hv/vmbus_drv.c |
++++ b/drivers/hv/vmbus_drv.c |
+@@ -466,7 +466,7 @@ static void vmbus_on_msg_dpc(unsigned long data) |
+ * will not deliver any more messages since there is |
+ * no empty slot |
+ */ |
+- smp_mb(); |
++ mb(); |
+ |
+ if (msg->header.message_flags.msg_pending) { |
+ /* |
+diff --git a/drivers/hwmon/emc1403.c b/drivers/hwmon/emc1403.c |
+index 149dcb0e148f..d5c33d5c0389 100644 |
+--- a/drivers/hwmon/emc1403.c |
++++ b/drivers/hwmon/emc1403.c |
+@@ -161,7 +161,7 @@ static ssize_t store_hyst(struct device *dev, |
+ if (retval < 0) |
+ goto fail; |
+ |
+- hyst = val - retval * 1000; |
++ hyst = retval * 1000 - val; |
+ hyst = DIV_ROUND_CLOSEST(hyst, 1000); |
+ if (hyst < 0 || hyst > 255) { |
+ retval = -ERANGE; |
+@@ -294,7 +294,7 @@ static int emc1403_detect(struct i2c_client *client, |
+ } |
+ |
+ id = i2c_smbus_read_byte_data(client, THERMAL_REVISION_REG); |
+- if (id != 0x01) |
++ if (id < 0x01 || id > 0x04) |
+ return -ENODEV; |
+ |
+ return 0; |
+diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig |
+index bc625f6c5b4c..9494910b330c 100644 |
+--- a/drivers/i2c/busses/Kconfig |
++++ b/drivers/i2c/busses/Kconfig |
+@@ -138,6 +138,7 @@ config I2C_PIIX4 |
+ ATI SB700 |
+ ATI SB800 |
+ AMD Hudson-2 |
++ AMD CZ |
+ Serverworks OSB4 |
+ Serverworks CSB5 |
+ Serverworks CSB6 |
+diff --git a/drivers/i2c/busses/i2c-designware-core.c b/drivers/i2c/busses/i2c-designware-core.c |
+index 3c2812f13d96..aadb3984f0d3 100644 |
+--- a/drivers/i2c/busses/i2c-designware-core.c |
++++ b/drivers/i2c/busses/i2c-designware-core.c |
+@@ -346,6 +346,9 @@ static void i2c_dw_xfer_init(struct dw_i2c_dev *dev) |
+ ic_con &= ~DW_IC_CON_10BITADDR_MASTER; |
+ dw_writel(dev, ic_con, DW_IC_CON); |
+ |
++ /* enforce disabled interrupts (due to HW issues) */ |
++ i2c_dw_disable_int(dev); |
++ |
+ /* Enable the adapter */ |
+ dw_writel(dev, 1, DW_IC_ENABLE); |
+ |
+diff --git a/drivers/i2c/busses/i2c-piix4.c b/drivers/i2c/busses/i2c-piix4.c |
+index c14d48dd601a..a35697490681 100644 |
+--- a/drivers/i2c/busses/i2c-piix4.c |
++++ b/drivers/i2c/busses/i2c-piix4.c |
+@@ -22,7 +22,7 @@ |
+ Intel PIIX4, 440MX |
+ Serverworks OSB4, CSB5, CSB6, HT-1000, HT-1100 |
+ ATI IXP200, IXP300, IXP400, SB600, SB700, SB800 |
+- AMD Hudson-2 |
++ AMD Hudson-2, CZ |
+ SMSC Victory66 |
+ |
+ Note: we assume there can only be one device, with one SMBus interface. |
+@@ -481,6 +481,7 @@ static DEFINE_PCI_DEVICE_TABLE(piix4_ids) = { |
+ { PCI_DEVICE(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_IXP400_SMBUS) }, |
+ { PCI_DEVICE(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_SBX00_SMBUS) }, |
3036 |
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_HUDSON2_SMBUS) }, |
3037 |
++ { PCI_DEVICE(PCI_VENDOR_ID_AMD, 0x790b) }, |
3038 |
+ { PCI_DEVICE(PCI_VENDOR_ID_SERVERWORKS, |
3039 |
+ PCI_DEVICE_ID_SERVERWORKS_OSB4) }, |
3040 |
+ { PCI_DEVICE(PCI_VENDOR_ID_SERVERWORKS, |
3041 |
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c |
3042 |
+index df19f3d588cd..d47ca36b4dee 100644 |
3043 |
+--- a/drivers/i2c/busses/i2c-tegra.c |
3044 |
++++ b/drivers/i2c/busses/i2c-tegra.c |
3045 |
+@@ -341,7 +341,11 @@ static int tegra_i2c_init(struct tegra_i2c_dev *i2c_dev) |
3046 |
+ u32 val; |
3047 |
+ int err = 0; |
3048 |
+ |
3049 |
+- clk_enable(i2c_dev->clk); |
3050 |
++ err = clk_enable(i2c_dev->clk); |
3051 |
++ if (err < 0) { |
3052 |
++ dev_err(i2c_dev->dev, "Clock enable failed %d\n", err); |
3053 |
++ return err; |
3054 |
++ } |
3055 |
+ |
3056 |
+ tegra_periph_reset_assert(i2c_dev->clk); |
3057 |
+ udelay(2); |
3058 |
+@@ -543,7 +547,12 @@ static int tegra_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], |
3059 |
+ if (i2c_dev->is_suspended) |
3060 |
+ return -EBUSY; |
3061 |
+ |
3062 |
+- clk_enable(i2c_dev->clk); |
3063 |
++ ret = clk_enable(i2c_dev->clk); |
3064 |
++ if (ret < 0) { |
3065 |
++ dev_err(i2c_dev->dev, "Clock enable failed %d\n", ret); |
3066 |
++ return ret; |
3067 |
++ } |
3068 |
++ |
3069 |
+ for (i = 0; i < num; i++) { |
3070 |
+ int stop = (i == (num - 1)) ? 1 : 0; |
3071 |
+ ret = tegra_i2c_xfer_msg(i2c_dev, &msgs[i], stop); |
3072 |
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c |
3073 |
+index 014364ab689b..9b09185683f8 100644 |
3074 |
+--- a/drivers/idle/intel_idle.c |
3075 |
++++ b/drivers/idle/intel_idle.c |
3076 |
+@@ -443,10 +443,8 @@ static int intel_idle_probe(void) |
3077 |
+ |
3078 |
+ if (boot_cpu_has(X86_FEATURE_ARAT)) /* Always Reliable APIC Timer */ |
3079 |
+ lapic_timer_reliable_states = LAPIC_TIMER_ALWAYS_RELIABLE; |
3080 |
+- else { |
3081 |
++ else |
3082 |
+ on_each_cpu(__setup_broadcast_timer, (void *)true, 1); |
3083 |
+- register_cpu_notifier(&setup_broadcast_notifier); |
3084 |
+- } |
3085 |
+ |
3086 |
+ pr_debug(PREFIX "v" INTEL_IDLE_VERSION |
3087 |
+ " model 0x%X\n", boot_cpu_data.x86_model); |
3088 |
+@@ -612,6 +610,9 @@ static int __init intel_idle_init(void) |
3089 |
+ } |
3090 |
+ } |
3091 |
+ |
3092 |
++ if (lapic_timer_reliable_states != LAPIC_TIMER_ALWAYS_RELIABLE) |
3093 |
++ register_cpu_notifier(&setup_broadcast_notifier); |
3094 |
++ |
3095 |
+ return 0; |
3096 |
+ } |
3097 |
+ |
3098 |
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c |
3099 |
+index c1f7e9b65088..326e19459ee6 100644 |
3100 |
+--- a/drivers/input/mouse/synaptics.c |
3101 |
++++ b/drivers/input/mouse/synaptics.c |
3102 |
+@@ -53,14 +53,19 @@ |
3103 |
+ #define ABS_POS_BITS 13 |
3104 |
+ |
3105 |
+ /* |
3106 |
+- * Any position values from the hardware above the following limits are |
3107 |
+- * treated as "wrapped around negative" values that have been truncated to |
3108 |
+- * the 13-bit reporting range of the hardware. These are just reasonable |
3109 |
+- * guesses and can be adjusted if hardware is found that operates outside |
3110 |
+- * of these parameters. |
3111 |
++ * These values should represent the absolute maximum value that will |
3112 |
++ * be reported for a positive position value. Some Synaptics firmware |
3113 |
++ * uses this value to indicate a finger near the edge of the touchpad |
3114 |
++ * whose precise position cannot be determined. |
3115 |
++ * |
3116 |
++ * At least one touchpad is known to report positions in excess of this |
3117 |
++ * value which are actually negative values truncated to the 13-bit |
3118 |
++ * reporting range. These values have never been observed to be lower |
3119 |
++ * than 8184 (i.e. -8), so we treat all values greater than 8176 as |
3120 |
++ * negative and any other value as positive. |
3121 |
+ */ |
3122 |
+-#define X_MAX_POSITIVE (((1 << ABS_POS_BITS) + XMAX) / 2) |
3123 |
+-#define Y_MAX_POSITIVE (((1 << ABS_POS_BITS) + YMAX) / 2) |
3124 |
++#define X_MAX_POSITIVE 8176 |
3125 |
++#define Y_MAX_POSITIVE 8176 |
3126 |
+ |
3127 |
+ /* |
3128 |
+ * Synaptics touchpads report the y coordinate from bottom to top, which is |
3129 |
+@@ -583,11 +588,21 @@ static int synaptics_parse_hw_state(const unsigned char buf[], |
3130 |
+ hw->right = (buf[0] & 0x02) ? 1 : 0; |
3131 |
+ } |
3132 |
+ |
3133 |
+- /* Convert wrap-around values to negative */ |
3134 |
++ /* |
3135 |
++ * Convert wrap-around values to negative. (X|Y)_MAX_POSITIVE |
3136 |
++ * is used by some firmware to indicate a finger at the edge of |
3137 |
++ * the touchpad whose precise position cannot be determined, so |
3138 |
++ * convert these values to the maximum axis value. |
3139 |
++ */ |
3140 |
+ if (hw->x > X_MAX_POSITIVE) |
3141 |
+ hw->x -= 1 << ABS_POS_BITS; |
3142 |
++ else if (hw->x == X_MAX_POSITIVE) |
3143 |
++ hw->x = XMAX; |
3144 |
++ |
3145 |
+ if (hw->y > Y_MAX_POSITIVE) |
3146 |
+ hw->y -= 1 << ABS_POS_BITS; |
3147 |
++ else if (hw->y == Y_MAX_POSITIVE) |
3148 |
++ hw->y = YMAX; |
3149 |
+ |
3150 |
+ return 0; |
3151 |
+ } |
3152 |
+diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c |
3153 |
+index 4c260dc7910e..6f99500790b3 100644 |
3154 |
+--- a/drivers/md/dm-bufio.c |
3155 |
++++ b/drivers/md/dm-bufio.c |
3156 |
+@@ -321,6 +321,9 @@ static void __cache_size_refresh(void) |
3157 |
+ static void *alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mask, |
3158 |
+ enum data_mode *data_mode) |
3159 |
+ { |
3160 |
++ unsigned noio_flag; |
3161 |
++ void *ptr; |
3162 |
++ |
3163 |
+ if (c->block_size <= DM_BUFIO_BLOCK_SIZE_SLAB_LIMIT) { |
3164 |
+ *data_mode = DATA_MODE_SLAB; |
3165 |
+ return kmem_cache_alloc(DM_BUFIO_CACHE(c), gfp_mask); |
3166 |
+@@ -334,7 +337,28 @@ static void *alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mask, |
3167 |
+ } |
3168 |
+ |
3169 |
+ *data_mode = DATA_MODE_VMALLOC; |
3170 |
+- return __vmalloc(c->block_size, gfp_mask, PAGE_KERNEL); |
3171 |
++ |
3172 |
++ /* |
3173 |
++ * __vmalloc allocates the data pages and auxiliary structures with |
3174 |
++ * gfp_flags that were specified, but pagetables are always allocated |
3175 |
++ * with GFP_KERNEL, no matter what was specified as gfp_mask. |
3176 |
++ * |
3177 |
++ * Consequently, we must set per-process flag PF_MEMALLOC_NOIO so that |
3178 |
++ * all allocations done by this process (including pagetables) are done |
3179 |
++ * as if GFP_NOIO was specified. |
3180 |
++ */ |
3181 |
++ |
3182 |
++ if (gfp_mask & __GFP_NORETRY) { |
3183 |
++ noio_flag = current->flags & PF_MEMALLOC; |
3184 |
++ current->flags |= PF_MEMALLOC; |
3185 |
++ } |
3186 |
++ |
3187 |
++ ptr = __vmalloc(c->block_size, gfp_mask, PAGE_KERNEL); |
3188 |
++ |
3189 |
++ if (gfp_mask & __GFP_NORETRY) |
3190 |
++ current->flags = (current->flags & ~PF_MEMALLOC) | noio_flag; |
3191 |
++ |
3192 |
++ return ptr; |
3193 |
+ } |
3194 |
+ |
3195 |
+ /* |
3196 |
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c |
3197 |
+index 9147569545e2..d5fc3ec3639e 100644 |
3198 |
+--- a/drivers/md/dm-mpath.c |
3199 |
++++ b/drivers/md/dm-mpath.c |
3200 |
+@@ -84,6 +84,7 @@ struct multipath { |
3201 |
+ unsigned queue_io; /* Must we queue all I/O? */ |
3202 |
+ unsigned queue_if_no_path; /* Queue I/O if last path fails? */ |
3203 |
+ unsigned saved_queue_if_no_path;/* Saved state during suspension */ |
3204 |
++ unsigned pg_init_disabled:1; /* pg_init is not currently allowed */ |
3205 |
+ unsigned pg_init_retries; /* Number of times to retry pg_init */ |
3206 |
+ unsigned pg_init_count; /* Number of times pg_init called */ |
3207 |
+ unsigned pg_init_delay_msecs; /* Number of msecs before pg_init retry */ |
3208 |
+@@ -493,7 +494,8 @@ static void process_queued_ios(struct work_struct *work) |
3209 |
+ (!pgpath && !m->queue_if_no_path)) |
3210 |
+ must_queue = 0; |
3211 |
+ |
3212 |
+- if (m->pg_init_required && !m->pg_init_in_progress && pgpath) |
3213 |
++ if (m->pg_init_required && !m->pg_init_in_progress && pgpath && |
3214 |
++ !m->pg_init_disabled) |
3215 |
+ __pg_init_all_paths(m); |
3216 |
+ |
3217 |
+ out: |
3218 |
+@@ -907,10 +909,20 @@ static void multipath_wait_for_pg_init_completion(struct multipath *m) |
3219 |
+ |
3220 |
+ static void flush_multipath_work(struct multipath *m) |
3221 |
+ { |
3222 |
++ unsigned long flags; |
3223 |
++ |
3224 |
++ spin_lock_irqsave(&m->lock, flags); |
3225 |
++ m->pg_init_disabled = 1; |
3226 |
++ spin_unlock_irqrestore(&m->lock, flags); |
3227 |
++ |
3228 |
+ flush_workqueue(kmpath_handlerd); |
3229 |
+ multipath_wait_for_pg_init_completion(m); |
3230 |
+ flush_workqueue(kmultipathd); |
3231 |
+ flush_work_sync(&m->trigger_event); |
3232 |
++ |
3233 |
++ spin_lock_irqsave(&m->lock, flags); |
3234 |
++ m->pg_init_disabled = 0; |
3235 |
++ spin_unlock_irqrestore(&m->lock, flags); |
3236 |
+ } |
3237 |
+ |
3238 |
+ static void multipath_dtr(struct dm_target *ti) |
3239 |
+@@ -1129,7 +1141,7 @@ static int pg_init_limit_reached(struct multipath *m, struct pgpath *pgpath) |
3240 |
+ |
3241 |
+ spin_lock_irqsave(&m->lock, flags); |
3242 |
+ |
3243 |
+- if (m->pg_init_count <= m->pg_init_retries) |
3244 |
++ if (m->pg_init_count <= m->pg_init_retries && !m->pg_init_disabled) |
3245 |
+ m->pg_init_required = 1; |
3246 |
+ else |
3247 |
+ limit_reached = 1; |
3248 |
+@@ -1644,7 +1656,7 @@ out: |
3249 |
+ *---------------------------------------------------------------*/ |
3250 |
+ static struct target_type multipath_target = { |
3251 |
+ .name = "multipath", |
3252 |
+- .version = {1, 3, 0}, |
3253 |
++ .version = {1, 3, 2}, |
3254 |
+ .module = THIS_MODULE, |
3255 |
+ .ctr = multipath_ctr, |
3256 |
+ .dtr = multipath_dtr, |
3257 |
+diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c |
3258 |
+index db77ac693323..b092338d5966 100644 |
3259 |
+--- a/drivers/md/dm-snap.c |
3260 |
++++ b/drivers/md/dm-snap.c |
3261 |
+@@ -66,6 +66,18 @@ struct dm_snapshot { |
3262 |
+ |
3263 |
+ atomic_t pending_exceptions_count; |
3264 |
+ |
3265 |
++ /* Protected by "lock" */ |
3266 |
++ sector_t exception_start_sequence; |
3267 |
++ |
3268 |
++ /* Protected by kcopyd single-threaded callback */ |
3269 |
++ sector_t exception_complete_sequence; |
3270 |
++ |
3271 |
++ /* |
3272 |
++ * A list of pending exceptions that completed out of order. |
3273 |
++ * Protected by kcopyd single-threaded callback. |
3274 |
++ */ |
3275 |
++ struct list_head out_of_order_list; |
3276 |
++ |
3277 |
+ mempool_t *pending_pool; |
3278 |
+ |
3279 |
+ struct dm_exception_table pending; |
3280 |
+@@ -171,6 +183,14 @@ struct dm_snap_pending_exception { |
3281 |
+ */ |
3282 |
+ int started; |
3283 |
+ |
3284 |
++ /* There was copying error. */ |
3285 |
++ int copy_error; |
3286 |
++ |
3287 |
++ /* A sequence number, it is used for in-order completion. */ |
3288 |
++ sector_t exception_sequence; |
3289 |
++ |
3290 |
++ struct list_head out_of_order_entry; |
3291 |
++ |
3292 |
+ /* |
3293 |
+ * For writing a complete chunk, bypassing the copy. |
3294 |
+ */ |
3295 |
+@@ -1090,6 +1110,9 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv) |
3296 |
+ s->valid = 1; |
3297 |
+ s->active = 0; |
3298 |
+ atomic_set(&s->pending_exceptions_count, 0); |
3299 |
++ s->exception_start_sequence = 0; |
3300 |
++ s->exception_complete_sequence = 0; |
3301 |
++ INIT_LIST_HEAD(&s->out_of_order_list); |
3302 |
+ init_rwsem(&s->lock); |
3303 |
+ INIT_LIST_HEAD(&s->list); |
3304 |
+ spin_lock_init(&s->pe_lock); |
3305 |
+@@ -1448,6 +1471,19 @@ static void commit_callback(void *context, int success) |
3306 |
+ pending_complete(pe, success); |
3307 |
+ } |
3308 |
+ |
3309 |
++static void complete_exception(struct dm_snap_pending_exception *pe) |
3310 |
++{ |
3311 |
++ struct dm_snapshot *s = pe->snap; |
3312 |
++ |
3313 |
++ if (unlikely(pe->copy_error)) |
3314 |
++ pending_complete(pe, 0); |
3315 |
++ |
3316 |
++ else |
3317 |
++ /* Update the metadata if we are persistent */ |
3318 |
++ s->store->type->commit_exception(s->store, &pe->e, |
3319 |
++ commit_callback, pe); |
3320 |
++} |
3321 |
++ |
3322 |
+ /* |
3323 |
+ * Called when the copy I/O has finished. kcopyd actually runs |
3324 |
+ * this code so don't block. |
3325 |
+@@ -1457,13 +1493,32 @@ static void copy_callback(int read_err, unsigned long write_err, void *context) |
3326 |
+ struct dm_snap_pending_exception *pe = context; |
3327 |
+ struct dm_snapshot *s = pe->snap; |
3328 |
+ |
3329 |
+- if (read_err || write_err) |
3330 |
+- pending_complete(pe, 0); |
3331 |
++ pe->copy_error = read_err || write_err; |
3332 |
+ |
3333 |
+- else |
3334 |
+- /* Update the metadata if we are persistent */ |
3335 |
+- s->store->type->commit_exception(s->store, &pe->e, |
3336 |
+- commit_callback, pe); |
3337 |
++ if (pe->exception_sequence == s->exception_complete_sequence) { |
3338 |
++ s->exception_complete_sequence++; |
3339 |
++ complete_exception(pe); |
3340 |
++ |
3341 |
++ while (!list_empty(&s->out_of_order_list)) { |
3342 |
++ pe = list_entry(s->out_of_order_list.next, |
3343 |
++ struct dm_snap_pending_exception, out_of_order_entry); |
3344 |
++ if (pe->exception_sequence != s->exception_complete_sequence) |
3345 |
++ break; |
3346 |
++ s->exception_complete_sequence++; |
3347 |
++ list_del(&pe->out_of_order_entry); |
3348 |
++ complete_exception(pe); |
3349 |
++ } |
3350 |
++ } else { |
3351 |
++ struct list_head *lh; |
3352 |
++ struct dm_snap_pending_exception *pe2; |
3353 |
++ |
3354 |
++ list_for_each_prev(lh, &s->out_of_order_list) { |
3355 |
++ pe2 = list_entry(lh, struct dm_snap_pending_exception, out_of_order_entry); |
3356 |
++ if (pe2->exception_sequence < pe->exception_sequence) |
3357 |
++ break; |
3358 |
++ } |
3359 |
++ list_add(&pe->out_of_order_entry, lh); |
3360 |
++ } |
3361 |
+ } |
3362 |
+ |
3363 |
+ /* |
3364 |
+@@ -1558,6 +1613,8 @@ __find_pending_exception(struct dm_snapshot *s, |
3365 |
+ return NULL; |
3366 |
+ } |
3367 |
+ |
3368 |
++ pe->exception_sequence = s->exception_start_sequence++; |
3369 |
++ |
3370 |
+ dm_insert_exception(&s->pending, &pe->e); |
3371 |
+ |
3372 |
+ return pe; |
3373 |
+@@ -2200,7 +2257,7 @@ static struct target_type origin_target = { |
3374 |
+ |
3375 |
+ static struct target_type snapshot_target = { |
3376 |
+ .name = "snapshot", |
3377 |
+- .version = {1, 10, 0}, |
3378 |
++ .version = {1, 10, 2}, |
3379 |
+ .module = THIS_MODULE, |
3380 |
+ .ctr = snapshot_ctr, |
3381 |
+ .dtr = snapshot_dtr, |
3382 |
+@@ -2323,3 +2380,5 @@ module_exit(dm_snapshot_exit); |
3383 |
+ MODULE_DESCRIPTION(DM_NAME " snapshot target"); |
3384 |
+ MODULE_AUTHOR("Joe Thornber"); |
3385 |
+ MODULE_LICENSE("GPL"); |
3386 |
++MODULE_ALIAS("dm-snapshot-origin"); |
3387 |
++MODULE_ALIAS("dm-snapshot-merge"); |
3388 |
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c |
3389 |
+index b253744fc3c8..e811e44dfcf7 100644 |
3390 |
+--- a/drivers/md/dm-thin.c |
3391 |
++++ b/drivers/md/dm-thin.c |
3392 |
+@@ -2472,7 +2472,7 @@ static struct target_type pool_target = { |
3393 |
+ .name = "thin-pool", |
3394 |
+ .features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE | |
3395 |
+ DM_TARGET_IMMUTABLE, |
3396 |
+- .version = {1, 1, 0}, |
3397 |
++ .version = {1, 1, 1}, |
3398 |
+ .module = THIS_MODULE, |
3399 |
+ .ctr = pool_ctr, |
3400 |
+ .dtr = pool_dtr, |
3401 |
+@@ -2752,7 +2752,7 @@ static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits) |
3402 |
+ |
3403 |
+ static struct target_type thin_target = { |
3404 |
+ .name = "thin", |
3405 |
+- .version = {1, 1, 0}, |
3406 |
++ .version = {1, 1, 1}, |
3407 |
+ .module = THIS_MODULE, |
3408 |
+ .ctr = thin_ctr, |
3409 |
+ .dtr = thin_dtr, |
3410 |
+diff --git a/drivers/md/md.c b/drivers/md/md.c |
3411 |
+index e63ca864b35a..8590d2c256a6 100644 |
3412 |
+--- a/drivers/md/md.c |
3413 |
++++ b/drivers/md/md.c |
3414 |
+@@ -8167,7 +8167,8 @@ static int md_notify_reboot(struct notifier_block *this, |
3415 |
+ if (mddev_trylock(mddev)) { |
3416 |
+ if (mddev->pers) |
3417 |
+ __md_stop_writes(mddev); |
3418 |
+- mddev->safemode = 2; |
3419 |
++ if (mddev->persistent) |
3420 |
++ mddev->safemode = 2; |
3421 |
+ mddev_unlock(mddev); |
3422 |
+ } |
3423 |
+ need_delay = 1; |
3424 |
+diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c |
3425 |
+index c4f28133ef82..b88757cd0d1d 100644 |
3426 |
+--- a/drivers/md/persistent-data/dm-btree-remove.c |
3427 |
++++ b/drivers/md/persistent-data/dm-btree-remove.c |
3428 |
+@@ -139,15 +139,8 @@ struct child { |
3429 |
+ struct btree_node *n; |
3430 |
+ }; |
3431 |
+ |
3432 |
+-static struct dm_btree_value_type le64_type = { |
3433 |
+- .context = NULL, |
3434 |
+- .size = sizeof(__le64), |
3435 |
+- .inc = NULL, |
3436 |
+- .dec = NULL, |
3437 |
+- .equal = NULL |
3438 |
+-}; |
3439 |
+- |
3440 |
+-static int init_child(struct dm_btree_info *info, struct btree_node *parent, |
3441 |
++static int init_child(struct dm_btree_info *info, struct dm_btree_value_type *vt, |
3442 |
++ struct btree_node *parent, |
3443 |
+ unsigned index, struct child *result) |
3444 |
+ { |
3445 |
+ int r, inc; |
3446 |
+@@ -164,7 +157,7 @@ static int init_child(struct dm_btree_info *info, struct btree_node *parent, |
3447 |
+ result->n = dm_block_data(result->block); |
3448 |
+ |
3449 |
+ if (inc) |
3450 |
+- inc_children(info->tm, result->n, &le64_type); |
3451 |
++ inc_children(info->tm, result->n, vt); |
3452 |
+ |
3453 |
+ *((__le64 *) value_ptr(parent, index)) = |
3454 |
+ cpu_to_le64(dm_block_location(result->block)); |
3455 |
+@@ -236,7 +229,7 @@ static void __rebalance2(struct dm_btree_info *info, struct btree_node *parent, |
3456 |
+ } |
3457 |
+ |
3458 |
+ static int rebalance2(struct shadow_spine *s, struct dm_btree_info *info, |
3459 |
+- unsigned left_index) |
3460 |
++ struct dm_btree_value_type *vt, unsigned left_index) |
3461 |
+ { |
3462 |
+ int r; |
3463 |
+ struct btree_node *parent; |
3464 |
+@@ -244,11 +237,11 @@ static int rebalance2(struct shadow_spine *s, struct dm_btree_info *info, |
3465 |
+ |
3466 |
+ parent = dm_block_data(shadow_current(s)); |
3467 |
+ |
3468 |
+- r = init_child(info, parent, left_index, &left); |
3469 |
++ r = init_child(info, vt, parent, left_index, &left); |
3470 |
+ if (r) |
3471 |
+ return r; |
3472 |
+ |
3473 |
+- r = init_child(info, parent, left_index + 1, &right); |
3474 |
++ r = init_child(info, vt, parent, left_index + 1, &right); |
3475 |
+ if (r) { |
3476 |
+ exit_child(info, &left); |
3477 |
+ return r; |
3478 |
+@@ -368,7 +361,7 @@ static void __rebalance3(struct dm_btree_info *info, struct btree_node *parent, |
3479 |
+ } |
3480 |
+ |
3481 |
+ static int rebalance3(struct shadow_spine *s, struct dm_btree_info *info, |
3482 |
+- unsigned left_index) |
3483 |
++ struct dm_btree_value_type *vt, unsigned left_index) |
3484 |
+ { |
3485 |
+ int r; |
3486 |
+ struct btree_node *parent = dm_block_data(shadow_current(s)); |
3487 |
+@@ -377,17 +370,17 @@ static int rebalance3(struct shadow_spine *s, struct dm_btree_info *info, |
3488 |
+ /* |
3489 |
+ * FIXME: fill out an array? |
3490 |
+ */ |
3491 |
+- r = init_child(info, parent, left_index, &left); |
3492 |
++ r = init_child(info, vt, parent, left_index, &left); |
3493 |
+ if (r) |
3494 |
+ return r; |
3495 |
+ |
3496 |
+- r = init_child(info, parent, left_index + 1, &center); |
3497 |
++ r = init_child(info, vt, parent, left_index + 1, &center); |
3498 |
+ if (r) { |
3499 |
+ exit_child(info, &left); |
3500 |
+ return r; |
3501 |
+ } |
3502 |
+ |
3503 |
+- r = init_child(info, parent, left_index + 2, &right); |
3504 |
++ r = init_child(info, vt, parent, left_index + 2, &right); |
3505 |
+ if (r) { |
3506 |
+ exit_child(info, &left); |
3507 |
+ exit_child(info, &center); |
3508 |
+@@ -434,7 +427,8 @@ static int get_nr_entries(struct dm_transaction_manager *tm, |
3509 |
+ } |
3510 |
+ |
3511 |
+ static int rebalance_children(struct shadow_spine *s, |
3512 |
+- struct dm_btree_info *info, uint64_t key) |
3513 |
++ struct dm_btree_info *info, |
3514 |
++ struct dm_btree_value_type *vt, uint64_t key) |
3515 |
+ { |
3516 |
+ int i, r, has_left_sibling, has_right_sibling; |
3517 |
+ uint32_t child_entries; |
3518 |
+@@ -472,13 +466,13 @@ static int rebalance_children(struct shadow_spine *s, |
3519 |
+ has_right_sibling = i < (le32_to_cpu(n->header.nr_entries) - 1); |
3520 |
+ |
3521 |
+ if (!has_left_sibling) |
3522 |
+- r = rebalance2(s, info, i); |
3523 |
++ r = rebalance2(s, info, vt, i); |
3524 |
+ |
3525 |
+ else if (!has_right_sibling) |
3526 |
+- r = rebalance2(s, info, i - 1); |
3527 |
++ r = rebalance2(s, info, vt, i - 1); |
3528 |
+ |
3529 |
+ else |
3530 |
+- r = rebalance3(s, info, i - 1); |
3531 |
++ r = rebalance3(s, info, vt, i - 1); |
3532 |
+ |
3533 |
+ return r; |
3534 |
+ } |
3535 |
+@@ -529,7 +523,7 @@ static int remove_raw(struct shadow_spine *s, struct dm_btree_info *info, |
3536 |
+ if (le32_to_cpu(n->header.flags) & LEAF_NODE) |
3537 |
+ return do_leaf(n, key, index); |
3538 |
+ |
3539 |
+- r = rebalance_children(s, info, key); |
3540 |
++ r = rebalance_children(s, info, vt, key); |
3541 |
+ if (r) |
3542 |
+ break; |
3543 |
+ |
3544 |
+@@ -550,6 +544,14 @@ static int remove_raw(struct shadow_spine *s, struct dm_btree_info *info, |
3545 |
+ return r; |
3546 |
+ } |
3547 |
+ |
3548 |
++static struct dm_btree_value_type le64_type = { |
3549 |
++ .context = NULL, |
3550 |
++ .size = sizeof(__le64), |
3551 |
++ .inc = NULL, |
3552 |
++ .dec = NULL, |
3553 |
++ .equal = NULL |
3554 |
++}; |
3555 |
++ |
3556 |
+ int dm_btree_remove(struct dm_btree_info *info, dm_block_t root, |
3557 |
+ uint64_t *keys, dm_block_t *new_root) |
3558 |
+ { |
3559 |
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c |
3560 |
+index 67a8393e3f86..149426cd1e84 100644 |
3561 |
+--- a/drivers/md/raid10.c |
3562 |
++++ b/drivers/md/raid10.c |
3563 |
+@@ -1419,14 +1419,16 @@ static int enough(struct r10conf *conf, int ignore) |
3564 |
+ do { |
3565 |
+ int n = conf->copies; |
3566 |
+ int cnt = 0; |
3567 |
++ int this = first; |
3568 |
+ while (n--) { |
3569 |
+- if (conf->mirrors[first].rdev && |
3570 |
+- first != ignore) |
3571 |
++ if (conf->mirrors[this].rdev && |
3572 |
++ this != ignore) |
3573 |
+ cnt++; |
3574 |
+- first = (first+1) % conf->raid_disks; |
3575 |
++ this = (this+1) % conf->raid_disks; |
3576 |
+ } |
3577 |
+ if (cnt == 0) |
3578 |
+ return 0; |
3579 |
++ first = (first + conf->near_copies) % conf->raid_disks; |
3580 |
+ } while (first != 0); |
3581 |
+ return 1; |
3582 |
+ } |
3583 |
+diff --git a/drivers/media/media-device.c b/drivers/media/media-device.c |
3584 |
+index 6f9eb94e85b3..25968dcfa422 100644 |
3585 |
+--- a/drivers/media/media-device.c |
3586 |
++++ b/drivers/media/media-device.c |
3587 |
+@@ -90,6 +90,7 @@ static long media_device_enum_entities(struct media_device *mdev, |
3588 |
+ struct media_entity *ent; |
3589 |
+ struct media_entity_desc u_ent; |
3590 |
+ |
3591 |
++ memset(&u_ent, 0, sizeof(u_ent)); |
3592 |
+ if (copy_from_user(&u_ent.id, &uent->id, sizeof(u_ent.id))) |
3593 |
+ return -EFAULT; |
3594 |
+ |
3595 |
+diff --git a/drivers/misc/hpilo.c b/drivers/misc/hpilo.c |
3596 |
+index fffc227181b0..9c99680f645a 100644 |
3597 |
+--- a/drivers/misc/hpilo.c |
3598 |
++++ b/drivers/misc/hpilo.c |
3599 |
+@@ -735,7 +735,14 @@ static void ilo_remove(struct pci_dev *pdev) |
3600 |
+ free_irq(pdev->irq, ilo_hw); |
3601 |
+ ilo_unmap_device(pdev, ilo_hw); |
3602 |
+ pci_release_regions(pdev); |
3603 |
+- pci_disable_device(pdev); |
3604 |
++ /* |
3605 |
++ * pci_disable_device(pdev) used to be here. But this PCI device has |
3606 |
++ * two functions with interrupt lines connected to a single pin. The |
3607 |
++ * other one is a USB host controller. So when we disable the PIN here |
3608 |
++ * e.g. by rmmod hpilo, the controller stops working. It is because |
3609 |
++ * the interrupt link is disabled in ACPI since it is not refcounted |
3610 |
++ * yet. See acpi_pci_link_free_irq called from acpi_pci_irq_disable. |
3611 |
++ */ |
3612 |
+ kfree(ilo_hw); |
3613 |
+ ilo_hwdev[(minor / MAX_CCB)] = 0; |
3614 |
+ } |
3615 |
+@@ -820,7 +827,7 @@ unmap: |
3616 |
+ free_regions: |
3617 |
+ pci_release_regions(pdev); |
3618 |
+ disable: |
3619 |
+- pci_disable_device(pdev); |
3620 |
++/* pci_disable_device(pdev); see comment in ilo_remove */ |
3621 |
+ free: |
3622 |
+ kfree(ilo_hw); |
3623 |
+ out: |
3624 |
+diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c |
3625 |
+index 1924d247c1cb..797860ea3e04 100644 |
3626 |
+--- a/drivers/mtd/devices/m25p80.c |
3627 |
++++ b/drivers/mtd/devices/m25p80.c |
3628 |
+@@ -71,7 +71,7 @@ |
3629 |
+ |
3630 |
+ /* Define max times to check status register before we give up. */ |
3631 |
+ #define MAX_READY_WAIT_JIFFIES (40 * HZ) /* M25P16 specs 40s max chip erase */ |
3632 |
+-#define MAX_CMD_SIZE 5 |
3633 |
++#define MAX_CMD_SIZE 6 |
3634 |
+ |
3635 |
+ #ifdef CONFIG_M25PXX_USE_FAST_READ |
3636 |
+ #define OPCODE_READ OPCODE_FAST_READ |
3637 |
+@@ -843,14 +843,13 @@ static int __devinit m25p_probe(struct spi_device *spi) |
3638 |
+ } |
3639 |
+ } |
3640 |
+ |
3641 |
+- flash = kzalloc(sizeof *flash, GFP_KERNEL); |
3642 |
++ flash = devm_kzalloc(&spi->dev, sizeof(*flash), GFP_KERNEL); |
3643 |
+ if (!flash) |
3644 |
+ return -ENOMEM; |
3645 |
+- flash->command = kmalloc(MAX_CMD_SIZE + FAST_READ_DUMMY_BYTE, GFP_KERNEL); |
3646 |
+- if (!flash->command) { |
3647 |
+- kfree(flash); |
3648 |
++ |
3649 |
++ flash->command = devm_kzalloc(&spi->dev, MAX_CMD_SIZE, GFP_KERNEL); |
3650 |
++ if (!flash->command) |
3651 |
+ return -ENOMEM; |
3652 |
+- } |
3653 |
+ |
3654 |
+ flash->spi = spi; |
3655 |
+ mutex_init(&flash->lock); |
3656 |
+@@ -947,14 +946,10 @@ static int __devinit m25p_probe(struct spi_device *spi) |
3657 |
+ static int __devexit m25p_remove(struct spi_device *spi) |
3658 |
+ { |
3659 |
+ struct m25p *flash = dev_get_drvdata(&spi->dev); |
3660 |
+- int status; |
3661 |
+ |
3662 |
+ /* Clean up MTD stuff. */ |
3663 |
+- status = mtd_device_unregister(&flash->mtd); |
3664 |
+- if (status == 0) { |
3665 |
+- kfree(flash->command); |
3666 |
+- kfree(flash); |
3667 |
+- } |
3668 |
++ mtd_device_unregister(&flash->mtd); |
3669 |
++ |
3670 |
+ return 0; |
3671 |
+ } |
3672 |
+ |
3673 |
+diff --git a/drivers/mtd/ubi/scan.c b/drivers/mtd/ubi/scan.c |
3674 |
+index 12c43b44f815..4f71793f5505 100644 |
3675 |
+--- a/drivers/mtd/ubi/scan.c |
3676 |
++++ b/drivers/mtd/ubi/scan.c |
3677 |
+@@ -997,7 +997,7 @@ static int process_eb(struct ubi_device *ubi, struct ubi_scan_info *si, |
3678 |
+ return err; |
3679 |
+ goto adjust_mean_ec; |
3680 |
+ case UBI_IO_FF: |
3681 |
+- if (ec_err) |
3682 |
++ if (ec_err || bitflips) |
3683 |
+ err = add_to_list(si, pnum, ec, 1, &si->erase); |
3684 |
+ else |
3685 |
+ err = add_to_list(si, pnum, ec, 0, &si->free); |
3686 |
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c |
3687 |
+index 098581afdd9f..2402af322de6 100644 |
3688 |
+--- a/drivers/net/bonding/bond_main.c |
3689 |
++++ b/drivers/net/bonding/bond_main.c |
3690 |
+@@ -4930,6 +4930,7 @@ static int __init bonding_init(void) |
3691 |
+ out: |
3692 |
+ return res; |
3693 |
+ err: |
3694 |
++ bond_destroy_debugfs(); |
3695 |
+ rtnl_link_unregister(&bond_link_ops); |
3696 |
+ err_link: |
3697 |
+ unregister_pernet_subsys(&bond_net_ops); |
3698 |
+diff --git a/drivers/net/can/c_can/c_can.c b/drivers/net/can/c_can/c_can.c |
3699 |
+index 77405b4e8636..91d1b5af982b 100644 |
3700 |
+--- a/drivers/net/can/c_can/c_can.c |
3701 |
++++ b/drivers/net/can/c_can/c_can.c |
3702 |
+@@ -446,8 +446,12 @@ static void c_can_setup_receive_object(struct net_device *dev, int iface, |
3703 |
+ |
3704 |
+ priv->write_reg(priv, &priv->regs->ifregs[iface].mask1, |
3705 |
+ IFX_WRITE_LOW_16BIT(mask)); |
3706 |
++ |
3707 |
++ /* According to C_CAN documentation, the reserved bit |
3708 |
++ * in IFx_MASK2 register is fixed 1 |
3709 |
++ */ |
3710 |
+ priv->write_reg(priv, &priv->regs->ifregs[iface].mask2, |
+- IFX_WRITE_HIGH_16BIT(mask)); |
++ IFX_WRITE_HIGH_16BIT(mask) | BIT(13)); |
+ |
+ priv->write_reg(priv, &priv->regs->ifregs[iface].arb1, |
+ IFX_WRITE_LOW_16BIT(id)); |
+diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c |
+index c2309ec71369..2d3ad72958ff 100644 |
+--- a/drivers/net/can/sja1000/sja1000.c |
++++ b/drivers/net/can/sja1000/sja1000.c |
+@@ -487,19 +487,19 @@ irqreturn_t sja1000_interrupt(int irq, void *dev_id) |
+ uint8_t isrc, status; |
+ int n = 0; |
+ |
+- /* Shared interrupts and IRQ off? */ |
+- if (priv->read_reg(priv, REG_IER) == IRQ_OFF) |
+- return IRQ_NONE; |
+- |
+ if (priv->pre_irq) |
+ priv->pre_irq(priv); |
+ |
++ /* Shared interrupts and IRQ off? */ |
++ if (priv->read_reg(priv, REG_IER) == IRQ_OFF) |
++ goto out; |
++ |
+ while ((isrc = priv->read_reg(priv, REG_IR)) && (n < SJA1000_MAX_IRQ)) { |
+- n++; |
++ |
+ status = priv->read_reg(priv, SJA1000_REG_SR); |
+ /* check for absent controller due to hw unplug */ |
+ if (status == 0xFF && sja1000_is_absent(priv)) |
+- return IRQ_NONE; |
++ goto out; |
+ |
+ if (isrc & IRQ_WUI) |
+ netdev_warn(dev, "wakeup interrupt\n"); |
+@@ -518,7 +518,7 @@ irqreturn_t sja1000_interrupt(int irq, void *dev_id) |
+ status = priv->read_reg(priv, SJA1000_REG_SR); |
+ /* check for absent controller */ |
+ if (status == 0xFF && sja1000_is_absent(priv)) |
+- return IRQ_NONE; |
++ goto out; |
+ } |
+ } |
+ if (isrc & (IRQ_DOI | IRQ_EI | IRQ_BEI | IRQ_EPI | IRQ_ALI)) { |
+@@ -526,8 +526,9 @@ irqreturn_t sja1000_interrupt(int irq, void *dev_id) |
+ if (sja1000_err(dev, isrc, status)) |
+ break; |
+ } |
++ n++; |
+ } |
+- |
++out: |
+ if (priv->post_irq) |
+ priv->post_irq(priv); |
+ |
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c |
+index 513573321625..05ec7f1ed3f5 100644 |
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c |
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c |
+@@ -1030,9 +1030,6 @@ static void bnx2x_set_one_vlan_mac_e1h(struct bnx2x *bp, |
+ ETH_VLAN_FILTER_CLASSIFY, config); |
+ } |
+ |
+-#define list_next_entry(pos, member) \ |
+- list_entry((pos)->member.next, typeof(*(pos)), member) |
+- |
+ /** |
+ * bnx2x_vlan_mac_restore - reconfigure next MAC/VLAN/VLAN-MAC element |
+ * |
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c |
+index 237e2a4f58c0..cbc6a62084e1 100644 |
+--- a/drivers/net/ethernet/broadcom/tg3.c |
++++ b/drivers/net/ethernet/broadcom/tg3.c |
+@@ -10861,7 +10861,9 @@ static int tg3_set_ringparam(struct net_device *dev, struct ethtool_ringparam *e |
+ if (tg3_flag(tp, MAX_RXPEND_64) && |
+ tp->rx_pending > 63) |
+ tp->rx_pending = 63; |
+- tp->rx_jumbo_pending = ering->rx_jumbo_pending; |
++ |
++ if (tg3_flag(tp, JUMBO_RING_ENABLE)) |
++ tp->rx_jumbo_pending = ering->rx_jumbo_pending; |
+ |
+ for (i = 0; i < tp->irq_max; i++) |
+ tp->napi[i].tx_pending = ering->tx_pending; |
+diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h |
+index 1ab8067b028b..21c058bdc1f8 100644 |
+--- a/drivers/net/ethernet/intel/e1000e/e1000.h |
++++ b/drivers/net/ethernet/intel/e1000e/e1000.h |
+@@ -309,6 +309,7 @@ struct e1000_adapter { |
+ */ |
+ struct e1000_ring *tx_ring /* One per active queue */ |
+ ____cacheline_aligned_in_smp; |
++ u32 tx_fifo_limit; |
+ |
+ struct napi_struct napi; |
+ |
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c |
+index c80b4b4e657d..e65f529b430d 100644 |
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c |
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c |
+@@ -3498,6 +3498,15 @@ void e1000e_reset(struct e1000_adapter *adapter) |
+ } |
+ |
+ /* |
++ * Alignment of Tx data is on an arbitrary byte boundary with the |
++ * maximum size per Tx descriptor limited only to the transmit |
++ * allocation of the packet buffer minus 96 bytes with an upper |
++ * limit of 24KB due to receive synchronization limitations. |
++ */ |
++ adapter->tx_fifo_limit = min_t(u32, ((er32(PBA) >> 16) << 10) - 96, |
++ 24 << 10); |
++ |
++ /* |
+ * Disable Adaptive Interrupt Moderation if 2 full packets cannot |
+ * fit in receive buffer. |
+ */ |
+@@ -4766,12 +4775,9 @@ static bool e1000_tx_csum(struct e1000_ring *tx_ring, struct sk_buff *skb) |
+ return 1; |
+ } |
+ |
+-#define E1000_MAX_PER_TXD 8192 |
+-#define E1000_MAX_TXD_PWR 12 |
+- |
+ static int e1000_tx_map(struct e1000_ring *tx_ring, struct sk_buff *skb, |
+ unsigned int first, unsigned int max_per_txd, |
+- unsigned int nr_frags, unsigned int mss) |
++ unsigned int nr_frags) |
+ { |
+ struct e1000_adapter *adapter = tx_ring->adapter; |
+ struct pci_dev *pdev = adapter->pdev; |
+@@ -5004,20 +5010,19 @@ static int __e1000_maybe_stop_tx(struct e1000_ring *tx_ring, int size) |
+ |
+ static int e1000_maybe_stop_tx(struct e1000_ring *tx_ring, int size) |
+ { |
++ BUG_ON(size > tx_ring->count); |
++ |
+ if (e1000_desc_unused(tx_ring) >= size) |
+ return 0; |
+ return __e1000_maybe_stop_tx(tx_ring, size); |
+ } |
+ |
+-#define TXD_USE_COUNT(S, X) (((S) >> (X)) + 1) |
+ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb, |
+ struct net_device *netdev) |
+ { |
+ struct e1000_adapter *adapter = netdev_priv(netdev); |
+ struct e1000_ring *tx_ring = adapter->tx_ring; |
+ unsigned int first; |
+- unsigned int max_per_txd = E1000_MAX_PER_TXD; |
+- unsigned int max_txd_pwr = E1000_MAX_TXD_PWR; |
+ unsigned int tx_flags = 0; |
+ unsigned int len = skb_headlen(skb); |
+ unsigned int nr_frags; |
+@@ -5037,18 +5042,8 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb, |
+ } |
+ |
+ mss = skb_shinfo(skb)->gso_size; |
+- /* |
+- * The controller does a simple calculation to |
+- * make sure there is enough room in the FIFO before |
+- * initiating the DMA for each buffer. The calc is: |
+- * 4 = ceil(buffer len/mss). To make sure we don't |
+- * overrun the FIFO, adjust the max buffer len if mss |
+- * drops. |
+- */ |
+ if (mss) { |
+ u8 hdr_len; |
+- max_per_txd = min(mss << 2, max_per_txd); |
+- max_txd_pwr = fls(max_per_txd) - 1; |
+ |
+ /* |
+ * TSO Workaround for 82571/2/3 Controllers -- if skb->data |
+@@ -5078,12 +5073,12 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb, |
+ count++; |
+ count++; |
+ |
+- count += TXD_USE_COUNT(len, max_txd_pwr); |
++ count += DIV_ROUND_UP(len, adapter->tx_fifo_limit); |
+ |
+ nr_frags = skb_shinfo(skb)->nr_frags; |
+ for (f = 0; f < nr_frags; f++) |
+- count += TXD_USE_COUNT(skb_frag_size(&skb_shinfo(skb)->frags[f]), |
+- max_txd_pwr); |
++ count += DIV_ROUND_UP(skb_frag_size(&skb_shinfo(skb)->frags[f]), |
++ adapter->tx_fifo_limit); |
+ |
+ if (adapter->hw.mac.tx_pkt_filtering) |
+ e1000_transfer_dhcp_info(adapter, skb); |
+@@ -5125,13 +5120,16 @@ static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb, |
+ tx_flags |= E1000_TX_FLAGS_NO_FCS; |
+ |
+ /* if count is 0 then mapping error has occurred */ |
+- count = e1000_tx_map(tx_ring, skb, first, max_per_txd, nr_frags, mss); |
++ count = e1000_tx_map(tx_ring, skb, first, adapter->tx_fifo_limit, |
++ nr_frags); |
+ if (count) { |
+ netdev_sent_queue(netdev, skb->len); |
+ e1000_tx_queue(tx_ring, tx_flags, count); |
+ /* Make sure there is space in the ring for the next send. */ |
+- e1000_maybe_stop_tx(tx_ring, MAX_SKB_FRAGS + 2); |
+- |
++ e1000_maybe_stop_tx(tx_ring, |
++ (MAX_SKB_FRAGS * |
++ DIV_ROUND_UP(PAGE_SIZE, |
++ adapter->tx_fifo_limit) + 2)); |
+ } else { |
+ dev_kfree_skb_any(skb); |
+ tx_ring->buffer_info[first].time_stamp = 0; |
+@@ -6303,8 +6301,8 @@ static int __devinit e1000_probe(struct pci_dev *pdev, |
+ adapter->hw.phy.autoneg_advertised = 0x2f; |
+ |
+ /* ring size defaults */ |
+- adapter->rx_ring->count = 256; |
+- adapter->tx_ring->count = 256; |
++ adapter->rx_ring->count = E1000_DEFAULT_RXD; |
++ adapter->tx_ring->count = E1000_DEFAULT_TXD; |
+ |
+ /* |
+ * Initial Wake on LAN setting - If APM wake is enabled in |
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |
+index 8f9554596c1e..861140975f08 100644 |
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |
+@@ -7464,12 +7464,15 @@ static int __init ixgbe_init_module(void) |
+ pr_info("%s - version %s\n", ixgbe_driver_string, ixgbe_driver_version); |
+ pr_info("%s\n", ixgbe_copyright); |
+ |
++ ret = pci_register_driver(&ixgbe_driver); |
++ if (ret) |
++ return ret; |
++ |
+ #ifdef CONFIG_IXGBE_DCA |
+ dca_register_notify(&dca_notifier); |
+ #endif |
+ |
+- ret = pci_register_driver(&ixgbe_driver); |
+- return ret; |
++ return 0; |
+ } |
+ |
+ module_init(ixgbe_init_module); |
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c |
+index 71605231bcb2..c4bb95b051f3 100644 |
+--- a/drivers/net/macvlan.c |
++++ b/drivers/net/macvlan.c |
+@@ -237,11 +237,9 @@ static int macvlan_queue_xmit(struct sk_buff *skb, struct net_device *dev) |
+ const struct macvlan_dev *vlan = netdev_priv(dev); |
+ const struct macvlan_port *port = vlan->port; |
+ const struct macvlan_dev *dest; |
+- __u8 ip_summed = skb->ip_summed; |
+ |
+ if (vlan->mode == MACVLAN_MODE_BRIDGE) { |
+ const struct ethhdr *eth = (void *)skb->data; |
+- skb->ip_summed = CHECKSUM_UNNECESSARY; |
+ |
+ /* send to other bridge ports directly */ |
+ if (is_multicast_ether_addr(eth->h_dest)) { |
+@@ -259,7 +257,6 @@ static int macvlan_queue_xmit(struct sk_buff *skb, struct net_device *dev) |
+ } |
+ |
+ xmit_world: |
+- skb->ip_summed = ip_summed; |
+ skb->dev = vlan->lowerdev; |
+ return dev_queue_xmit(skb); |
+ } |
+diff --git a/drivers/net/wimax/i2400m/usb-rx.c b/drivers/net/wimax/i2400m/usb-rx.c |
+index e3257681e360..b78ee676e102 100644 |
+--- a/drivers/net/wimax/i2400m/usb-rx.c |
++++ b/drivers/net/wimax/i2400m/usb-rx.c |
+@@ -277,7 +277,7 @@ retry: |
+ d_printf(1, dev, "RX: size changed to %d, received %d, " |
+ "copied %d, capacity %ld\n", |
+ rx_size, read_size, rx_skb->len, |
+- (long) (skb_end_pointer(new_skb) - new_skb->head)); |
++ (long) skb_end_offset(new_skb)); |
+ goto retry; |
+ } |
+ /* In most cases, it happens due to the hardware scheduling a |
+diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c |
+index 3d0aa4723ff9..1a5f275f8d6b 100644 |
+--- a/drivers/net/wireless/ath/ath9k/xmit.c |
++++ b/drivers/net/wireless/ath/ath9k/xmit.c |
+@@ -1242,14 +1242,16 @@ void ath_tx_aggr_sleep(struct ieee80211_sta *sta, struct ath_softc *sc, |
+ for (tidno = 0, tid = &an->tid[tidno]; |
+ tidno < WME_NUM_TID; tidno++, tid++) { |
+ |
+- if (!tid->sched) |
+- continue; |
+- |
+ ac = tid->ac; |
+ txq = ac->txq; |
+ |
+ ath_txq_lock(sc, txq); |
+ |
++ if (!tid->sched) { |
++ ath_txq_unlock(sc, txq); |
++ continue; |
++ } |
++ |
+ buffered = !skb_queue_empty(&tid->buf_q); |
+ |
+ tid->sched = false; |
+diff --git a/drivers/net/wireless/b43/Kconfig b/drivers/net/wireless/b43/Kconfig |
+index 3876c7ea54f4..af40211cc9dd 100644 |
+--- a/drivers/net/wireless/b43/Kconfig |
++++ b/drivers/net/wireless/b43/Kconfig |
+@@ -28,7 +28,7 @@ config B43 |
+ |
+ config B43_BCMA |
+ bool "Support for BCMA bus" |
+- depends on B43 && BCMA |
++ depends on B43 && (BCMA = y || BCMA = B43) |
+ default y |
+ |
+ config B43_BCMA_EXTRA |
+@@ -39,7 +39,7 @@ config B43_BCMA_EXTRA |
+ |
+ config B43_SSB |
+ bool |
+- depends on B43 && SSB |
++ depends on B43 && (SSB = y || SSB = B43) |
+ default y |
+ |
+ # Auto-select SSB PCI-HOST support, if possible |
+diff --git a/drivers/net/wireless/rt2x00/rt2500usb.c b/drivers/net/wireless/rt2x00/rt2500usb.c |
+index e0a7efccb73b..9e55debc3e52 100644 |
+--- a/drivers/net/wireless/rt2x00/rt2500usb.c |
++++ b/drivers/net/wireless/rt2x00/rt2500usb.c |
+@@ -1921,7 +1921,7 @@ static struct usb_device_id rt2500usb_device_table[] = { |
+ { USB_DEVICE(0x0b05, 0x1706) }, |
+ { USB_DEVICE(0x0b05, 0x1707) }, |
+ /* Belkin */ |
+- { USB_DEVICE(0x050d, 0x7050) }, |
++ { USB_DEVICE(0x050d, 0x7050) }, /* FCC ID: K7SF5D7050A ver. 2.x */ |
+ { USB_DEVICE(0x050d, 0x7051) }, |
+ /* Cisco Systems */ |
+ { USB_DEVICE(0x13b1, 0x000d) }, |
+diff --git a/drivers/net/wireless/rt2x00/rt2800usb.c b/drivers/net/wireless/rt2x00/rt2800usb.c |
+index 6eec862fea28..664e93d2a682 100644 |
+--- a/drivers/net/wireless/rt2x00/rt2800usb.c |
++++ b/drivers/net/wireless/rt2x00/rt2800usb.c |
+@@ -1009,6 +1009,7 @@ static struct usb_device_id rt2800usb_device_table[] = { |
+ { USB_DEVICE(0x07d1, 0x3c15) }, |
+ { USB_DEVICE(0x07d1, 0x3c16) }, |
+ { USB_DEVICE(0x2001, 0x3c1b) }, |
++ { USB_DEVICE(0x2001, 0x3c1e) }, |
+ /* Draytek */ |
+ { USB_DEVICE(0x07fa, 0x7712) }, |
+ /* DVICO */ |
+@@ -1140,6 +1141,7 @@ static struct usb_device_id rt2800usb_device_table[] = { |
+ { USB_DEVICE(0x177f, 0x0153) }, |
+ { USB_DEVICE(0x177f, 0x0302) }, |
+ { USB_DEVICE(0x177f, 0x0313) }, |
++ { USB_DEVICE(0x177f, 0x0323) }, |
+ /* U-Media */ |
+ { USB_DEVICE(0x157e, 0x300e) }, |
+ { USB_DEVICE(0x157e, 0x3013) }, |
+diff --git a/drivers/net/wireless/rt2x00/rt2x00mac.c b/drivers/net/wireless/rt2x00/rt2x00mac.c |
+index 6701f2d71274..af247b06c842 100644 |
+--- a/drivers/net/wireless/rt2x00/rt2x00mac.c |
++++ b/drivers/net/wireless/rt2x00/rt2x00mac.c |
+@@ -651,20 +651,18 @@ void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw, |
+ bss_conf->bssid); |
+ |
+ /* |
+- * Update the beacon. This is only required on USB devices. PCI |
+- * devices fetch beacons periodically. |
+- */ |
+- if (changes & BSS_CHANGED_BEACON && rt2x00_is_usb(rt2x00dev)) |
+- rt2x00queue_update_beacon(rt2x00dev, vif); |
+- |
+- /* |
+ * Start/stop beaconing. |
+ */ |
+ if (changes & BSS_CHANGED_BEACON_ENABLED) { |
+ if (!bss_conf->enable_beacon && intf->enable_beacon) { |
+- rt2x00queue_clear_beacon(rt2x00dev, vif); |
+ rt2x00dev->intf_beaconing--; |
+ intf->enable_beacon = false; |
++ /* |
++ * Clear beacon in the H/W for this vif. This is needed |
++ * to disable beaconing on this particular interface |
++ * and keep it running on other interfaces. |
++ */ |
++ rt2x00queue_clear_beacon(rt2x00dev, vif); |
+ |
+ if (rt2x00dev->intf_beaconing == 0) { |
+ /* |
+@@ -675,11 +673,15 @@ void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw, |
+ rt2x00queue_stop_queue(rt2x00dev->bcn); |
+ mutex_unlock(&intf->beacon_skb_mutex); |
+ } |
+- |
+- |
+ } else if (bss_conf->enable_beacon && !intf->enable_beacon) { |
+ rt2x00dev->intf_beaconing++; |
+ intf->enable_beacon = true; |
++ /* |
++ * Upload beacon to the H/W. This is only required on |
++ * USB devices. PCI devices fetch beacons periodically. |
++ */ |
++ if (rt2x00_is_usb(rt2x00dev)) |
++ rt2x00queue_update_beacon(rt2x00dev, vif); |
+ |
+ if (rt2x00dev->intf_beaconing == 1) { |
+ /* |
+diff --git a/drivers/net/wireless/rt2x00/rt73usb.c b/drivers/net/wireless/rt2x00/rt73usb.c |
+index ffdd32e74b0b..ef4cd40aef39 100644 |
+--- a/drivers/net/wireless/rt2x00/rt73usb.c |
++++ b/drivers/net/wireless/rt2x00/rt73usb.c |
+@@ -2422,6 +2422,7 @@ static struct usb_device_id rt73usb_device_table[] = { |
+ { USB_DEVICE(0x0b05, 0x1723) }, |
+ { USB_DEVICE(0x0b05, 0x1724) }, |
+ /* Belkin */ |
++ { USB_DEVICE(0x050d, 0x7050) }, /* FCC ID: K7SF5D7050B ver. 3.x */ |
+ { USB_DEVICE(0x050d, 0x705a) }, |
+ { USB_DEVICE(0x050d, 0x905b) }, |
+ { USB_DEVICE(0x050d, 0x905c) }, |
+diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c |
+index a4387acbf220..0908a3bf2f68 100644 |
+--- a/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c |
++++ b/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c |
+@@ -1001,7 +1001,7 @@ int rtl92cu_hw_init(struct ieee80211_hw *hw) |
+ err = _rtl92cu_init_mac(hw); |
+ if (err) { |
+ RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "init mac failed!\n"); |
+- return err; |
++ goto exit; |
+ } |
+ err = rtl92c_download_fw(hw); |
+ if (err) { |
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c |
+index 796afbf13eb4..fe50f14fba4e 100644 |
+--- a/drivers/net/xen-netfront.c |
++++ b/drivers/net/xen-netfront.c |
+@@ -36,7 +36,7 @@ |
+ #include <linux/skbuff.h> |
+ #include <linux/ethtool.h> |
+ #include <linux/if_ether.h> |
+-#include <linux/tcp.h> |
++#include <net/tcp.h> |
+ #include <linux/udp.h> |
+ #include <linux/moduleparam.h> |
+ #include <linux/mm.h> |
+@@ -492,6 +492,16 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev) |
+ unsigned int len = skb_headlen(skb); |
+ unsigned long flags; |
+ |
++ /* If skb->len is too big for wire format, drop skb and alert |
++ * user about misconfiguration. |
++ */ |
++ if (unlikely(skb->len > XEN_NETIF_MAX_TX_SIZE)) { |
++ net_alert_ratelimited( |
++ "xennet: skb->len = %u, too big for wire format\n", |
++ skb->len); |
++ goto drop; |
++ } |
++ |
+ frags += DIV_ROUND_UP(offset + len, PAGE_SIZE); |
+ if (unlikely(frags > MAX_SKB_FRAGS + 1)) { |
+ printk(KERN_ALERT "xennet: skb rides the rocket: %d frags\n", |
+@@ -1045,7 +1055,8 @@ err: |
+ |
+ static int xennet_change_mtu(struct net_device *dev, int mtu) |
+ { |
+- int max = xennet_can_sg(dev) ? 65535 - ETH_HLEN : ETH_DATA_LEN; |
++ int max = xennet_can_sg(dev) ? |
++ XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER : ETH_DATA_LEN; |
+ |
+ if (mtu > max) |
+ return -EINVAL; |
+@@ -1349,6 +1360,8 @@ static struct net_device * __devinit xennet_create_dev(struct xenbus_device *dev |
+ SET_ETHTOOL_OPS(netdev, &xennet_ethtool_ops); |
+ SET_NETDEV_DEV(netdev, &dev->dev); |
+ |
++ netif_set_gso_max_size(netdev, XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER); |
++ |
+ np->netdev = netdev; |
+ |
+ netif_carrier_off(netdev); |
+diff --git a/drivers/pci/hotplug/shpchp.h b/drivers/pci/hotplug/shpchp.h |
+index 1b69d955a31f..b849f995075a 100644 |
+--- a/drivers/pci/hotplug/shpchp.h |
++++ b/drivers/pci/hotplug/shpchp.h |
+@@ -46,7 +46,6 @@ |
+ extern bool shpchp_poll_mode; |
+ extern int shpchp_poll_time; |
+ extern bool shpchp_debug; |
+-extern struct workqueue_struct *shpchp_wq; |
+ |
+ #define dbg(format, arg...) \ |
+ do { \ |
+@@ -90,6 +89,7 @@ struct slot { |
+ struct list_head slot_list; |
+ struct delayed_work work; /* work for button event */ |
+ struct mutex lock; |
++ struct workqueue_struct *wq; |
+ u8 hp_slot; |
+ }; |
+ |
+diff --git a/drivers/pci/hotplug/shpchp_core.c b/drivers/pci/hotplug/shpchp_core.c |
+index 5f1f0d93dc13..dd37452cca5e 100644 |
+--- a/drivers/pci/hotplug/shpchp_core.c |
++++ b/drivers/pci/hotplug/shpchp_core.c |
+@@ -39,7 +39,6 @@ |
+ bool shpchp_debug; |
+ bool shpchp_poll_mode; |
+ int shpchp_poll_time; |
+-struct workqueue_struct *shpchp_wq; |
+ |
+ #define DRIVER_VERSION "0.4" |
+ #define DRIVER_AUTHOR "Dan Zink <dan.zink@××××××.com>, Greg Kroah-Hartman <greg@×××××.com>, Dely Sy <dely.l.sy@×××××.com>" |
+@@ -122,6 +121,14 @@ static int init_slots(struct controller *ctrl) |
+ slot->device = ctrl->slot_device_offset + i; |
+ slot->hpc_ops = ctrl->hpc_ops; |
+ slot->number = ctrl->first_slot + (ctrl->slot_num_inc * i); |
++ |
++ snprintf(name, sizeof(name), "shpchp-%d", slot->number); |
++ slot->wq = alloc_workqueue(name, 0, 0); |
++ if (!slot->wq) { |
++ retval = -ENOMEM; |
++ goto error_info; |
++ } |
++ |
+ mutex_init(&slot->lock); |
+ INIT_DELAYED_WORK(&slot->work, shpchp_queue_pushbutton_work); |
+ |
+@@ -141,7 +148,7 @@ static int init_slots(struct controller *ctrl) |
+ if (retval) { |
+ ctrl_err(ctrl, "pci_hp_register failed with error %d\n", |
+ retval); |
+- goto error_info; |
++ goto error_slotwq; |
+ } |
+ |
+ get_power_status(hotplug_slot, &info->power_status); |
+@@ -153,6 +160,8 @@ static int init_slots(struct controller *ctrl) |
+ } |
+ |
+ return 0; |
++error_slotwq: |
++ destroy_workqueue(slot->wq); |
+ error_info: |
+ kfree(info); |
+ error_hpslot: |
+@@ -173,7 +182,7 @@ void cleanup_slots(struct controller *ctrl) |
+ slot = list_entry(tmp, struct slot, slot_list); |
+ list_del(&slot->slot_list); |
+ cancel_delayed_work(&slot->work); |
+- flush_workqueue(shpchp_wq); |
++ destroy_workqueue(slot->wq); |
+ pci_hp_deregister(slot->hotplug_slot); |
+ } |
+ } |
+@@ -356,18 +365,12 @@ static struct pci_driver shpc_driver = { |
+ |
+ static int __init shpcd_init(void) |
+ { |
+- int retval = 0; |
+- |
+- shpchp_wq = alloc_ordered_workqueue("shpchp", 0); |
+- if (!shpchp_wq) |
+- return -ENOMEM; |
++ int retval; |
+ |
+ retval = pci_register_driver(&shpc_driver); |
+ dbg("%s: pci_register_driver = %d\n", __func__, retval); |
+ info(DRIVER_DESC " version: " DRIVER_VERSION "\n"); |
+- if (retval) { |
+- destroy_workqueue(shpchp_wq); |
+- } |
++ |
+ return retval; |
+ } |
+ |
+@@ -375,7 +378,6 @@ static void __exit shpcd_cleanup(void) |
+ { |
+ dbg("unload_shpchpd()\n"); |
+ pci_unregister_driver(&shpc_driver); |
+- destroy_workqueue(shpchp_wq); |
+ info(DRIVER_DESC " version: " DRIVER_VERSION " unloaded\n"); |
+ } |
+ |
+diff --git a/drivers/pci/hotplug/shpchp_ctrl.c b/drivers/pci/hotplug/shpchp_ctrl.c |
+index bba5b3e0bf8a..b888675a228a 100644 |
+--- a/drivers/pci/hotplug/shpchp_ctrl.c |
++++ b/drivers/pci/hotplug/shpchp_ctrl.c |
+@@ -51,7 +51,7 @@ static int queue_interrupt_event(struct slot *p_slot, u32 event_type) |
+ info->p_slot = p_slot; |
+ INIT_WORK(&info->work, interrupt_event_handler); |
+ |
+- queue_work(shpchp_wq, &info->work); |
++ queue_work(p_slot->wq, &info->work); |
+ |
+ return 0; |
+ } |
+@@ -285,8 +285,8 @@ static int board_added(struct slot *p_slot) |
+ return WRONG_BUS_FREQUENCY; |
+ } |
+ |
+- bsp = ctrl->pci_dev->bus->cur_bus_speed; |
+- msp = ctrl->pci_dev->bus->max_bus_speed; |
++ bsp = ctrl->pci_dev->subordinate->cur_bus_speed; |
++ msp = ctrl->pci_dev->subordinate->max_bus_speed; |
+ |
+ /* Check if there are other slots or devices on the same bus */ |
+ if (!list_empty(&ctrl->pci_dev->subordinate->devices)) |
+@@ -456,7 +456,7 @@ void shpchp_queue_pushbutton_work(struct work_struct *work) |
+ kfree(info); |
+ goto out; |
+ } |
+- queue_work(shpchp_wq, &info->work); |
++ queue_work(p_slot->wq, &info->work); |
+ out: |
+ mutex_unlock(&p_slot->lock); |
+ } |
+@@ -504,7 +504,7 @@ static void handle_button_press_event(struct slot *p_slot) |
+ p_slot->hpc_ops->green_led_blink(p_slot); |
+ p_slot->hpc_ops->set_attention_status(p_slot, 0); |
+ |
+- queue_delayed_work(shpchp_wq, &p_slot->work, 5*HZ); |
++ queue_delayed_work(p_slot->wq, &p_slot->work, 5*HZ); |
+ break; |
+ case BLINKINGOFF_STATE: |
+ case BLINKINGON_STATE: |
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c |
+index 474f22f304e4..c9ce611d46bd 100644 |
+--- a/drivers/pci/pcie/aspm.c |
++++ b/drivers/pci/pcie/aspm.c |
+@@ -583,6 +583,9 @@ void pcie_aspm_init_link_state(struct pci_dev *pdev) |
+ struct pcie_link_state *link; |
+ int blacklist = !!pcie_aspm_sanity_check(pdev); |
+ |
++ if (!aspm_support_enabled) |
++ return; |
++ |
+ if (!pci_is_pcie(pdev) || pdev->link_state) |
+ return; |
+ if (pdev->pcie_type != PCI_EXP_TYPE_ROOT_PORT && |
+diff --git a/drivers/platform/x86/msi-wmi.c b/drivers/platform/x86/msi-wmi.c |
+index 2264331bd48e..b96766b61ea3 100644 |
+--- a/drivers/platform/x86/msi-wmi.c |
++++ b/drivers/platform/x86/msi-wmi.c |
+@@ -176,7 +176,7 @@ static void msi_wmi_notify(u32 value, void *context) |
+ pr_debug("Suppressed key event 0x%X - " |
+ "Last press was %lld us ago\n", |
+ key->code, ktime_to_us(diff)); |
+- return; |
++ goto msi_wmi_notify_exit; |
+ } |
+ last_pressed[key->code - SCANCODE_BASE] = cur; |
+ |
+@@ -195,6 +195,8 @@ static void msi_wmi_notify(u32 value, void *context) |
+ pr_info("Unknown key pressed - %x\n", eventcode); |
+ } else |
+ pr_info("Unknown event received\n"); |
++ |
++msi_wmi_notify_exit: |
+ kfree(response.pointer); |
+ } |
+ |
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c |
+index f08aee680f0a..aa232de6d48d 100644 |
+--- a/drivers/platform/x86/thinkpad_acpi.c |
++++ b/drivers/platform/x86/thinkpad_acpi.c |
+@@ -3402,7 +3402,7 @@ static int __init hotkey_init(struct ibm_init_struct *iibm) |
+ /* Do not issue duplicate brightness change events to |
+ * userspace. tpacpi_detect_brightness_capabilities() must have |
+ * been called before this point */ |
+- if (tp_features.bright_acpimode && acpi_video_backlight_support()) { |
++ if (acpi_video_backlight_support()) { |
+ pr_info("This ThinkPad has standard ACPI backlight " |
+ "brightness control, supported by the ACPI " |
+ "video driver\n"); |
+diff --git a/drivers/rapidio/devices/tsi721.c b/drivers/rapidio/devices/tsi721.c |
+index 33471e11a738..84eab3fa9bca 100644 |
+--- a/drivers/rapidio/devices/tsi721.c |
++++ b/drivers/rapidio/devices/tsi721.c |
+@@ -475,6 +475,10 @@ static irqreturn_t tsi721_irqhandler(int irq, void *ptr) |
+ u32 intval; |
+ u32 ch_inte; |
+ |
++ /* For MSI mode disable all device-level interrupts */ |
++ if (priv->flags & TSI721_USING_MSI) |
++ iowrite32(0, priv->regs + TSI721_DEV_INTE); |
++ |
+ dev_int = ioread32(priv->regs + TSI721_DEV_INT); |
+ if (!dev_int) |
+ return IRQ_NONE; |
+@@ -548,6 +552,13 @@ static irqreturn_t tsi721_irqhandler(int irq, void *ptr) |
+ tsi721_pw_handler(mport); |
+ } |
+ |
++ /* For MSI mode re-enable device-level interrupts */ |
++ if (priv->flags & TSI721_USING_MSI) { |
++ dev_int = TSI721_DEV_INT_SR2PC_CH | TSI721_DEV_INT_SRIO | |
++ TSI721_DEV_INT_SMSG_CH; |
++ iowrite32(dev_int, priv->regs + TSI721_DEV_INTE); |
++ } |
++ |
+ return IRQ_HANDLED; |
+ } |
+ |
+diff --git a/drivers/regulator/max8997.c b/drivers/regulator/max8997.c |
4415 |
+index 17a58c56eebf..8350f50bb062 100644 |
4416 |
+--- a/drivers/regulator/max8997.c |
4417 |
++++ b/drivers/regulator/max8997.c |
4418 |
+@@ -71,26 +71,26 @@ struct voltage_map_desc { |
4419 |
+ unsigned int n_bits; |
4420 |
+ }; |
4421 |
+ |
4422 |
+-/* Voltage maps in mV */ |
4423 |
++/* Voltage maps in uV */ |
4424 |
+ static const struct voltage_map_desc ldo_voltage_map_desc = { |
4425 |
+- .min = 800, .max = 3950, .step = 50, .n_bits = 6, |
4426 |
++ .min = 800000, .max = 3950000, .step = 50000, .n_bits = 6, |
4427 |
+ }; /* LDO1 ~ 18, 21 all */ |
4428 |
+ |
4429 |
+ static const struct voltage_map_desc buck1245_voltage_map_desc = { |
4430 |
+- .min = 650, .max = 2225, .step = 25, .n_bits = 6, |
4431 |
++ .min = 650000, .max = 2225000, .step = 25000, .n_bits = 6, |
4432 |
+ }; /* Buck1, 2, 4, 5 */ |
4433 |
+ |
4434 |
+ static const struct voltage_map_desc buck37_voltage_map_desc = { |
4435 |
+- .min = 750, .max = 3900, .step = 50, .n_bits = 6, |
4436 |
++ .min = 750000, .max = 3900000, .step = 50000, .n_bits = 6, |
4437 |
+ }; /* Buck3, 7 */ |
4438 |
+ |
4439 |
+-/* current map in mA */ |
4440 |
++/* current map in uA */ |
4441 |
+ static const struct voltage_map_desc charger_current_map_desc = { |
4442 |
+- .min = 200, .max = 950, .step = 50, .n_bits = 4, |
4443 |
++ .min = 200000, .max = 950000, .step = 50000, .n_bits = 4, |
4444 |
+ }; |
4445 |
+ |
4446 |
+ static const struct voltage_map_desc topoff_current_map_desc = { |
4447 |
+- .min = 50, .max = 200, .step = 10, .n_bits = 4, |
4448 |
++ .min = 50000, .max = 200000, .step = 10000, .n_bits = 4, |
4449 |
+ }; |
4450 |
+ |
4451 |
+ static const struct voltage_map_desc *reg_voltage_map[] = { |
4452 |
+@@ -194,7 +194,7 @@ static int max8997_list_voltage(struct regulator_dev *rdev,
+ 	if (val > desc->max)
+ 		return -EINVAL;
+ 
+-	return val * 1000;
++	return val;
+ }
+ 
+ static int max8997_get_enable_register(struct regulator_dev *rdev,
+@@ -496,7 +496,6 @@ static int max8997_set_voltage_ldobuck(struct regulator_dev *rdev,
+ {
+ 	struct max8997_data *max8997 = rdev_get_drvdata(rdev);
+ 	struct i2c_client *i2c = max8997->iodev->i2c;
+-	int min_vol = min_uV / 1000, max_vol = max_uV / 1000;
+ 	const struct voltage_map_desc *desc;
+ 	int rid = rdev_get_id(rdev);
+ 	int reg, shift = 0, mask, ret;
+@@ -522,7 +521,7 @@ static int max8997_set_voltage_ldobuck(struct regulator_dev *rdev,
+ 
+ 	desc = reg_voltage_map[rid];
+ 
+-	i = max8997_get_voltage_proper_val(desc, min_vol, max_vol);
++	i = max8997_get_voltage_proper_val(desc, min_uV, max_uV);
+ 	if (i < 0)
+ 		return i;
+ 
+@@ -541,7 +540,7 @@ static int max8997_set_voltage_ldobuck(struct regulator_dev *rdev,
+ 	/* If the voltage is increasing */
+ 	if (org < i)
+ 		udelay(DIV_ROUND_UP(desc->step * (i - org),
+-					max8997->ramp_delay));
++					max8997->ramp_delay * 1000));
+ 	}
+ 
+ 	return ret;
+@@ -640,7 +639,6 @@ static int max8997_set_voltage_buck(struct regulator_dev *rdev,
+ 	const struct voltage_map_desc *desc;
+ 	int new_val, new_idx, damage, tmp_val, tmp_idx, tmp_dmg;
+ 	bool gpio_dvs_mode = false;
+-	int min_vol = min_uV / 1000, max_vol = max_uV / 1000;
+ 
+ 	if (rid < MAX8997_BUCK1 || rid > MAX8997_BUCK7)
+ 		return -EINVAL;
+@@ -665,7 +663,7 @@ static int max8997_set_voltage_buck(struct regulator_dev *rdev,
+ 				selector);
+ 
+ 	desc = reg_voltage_map[rid];
+-	new_val = max8997_get_voltage_proper_val(desc, min_vol, max_vol);
++	new_val = max8997_get_voltage_proper_val(desc, min_uV, max_uV);
+ 	if (new_val < 0)
+ 		return new_val;
+ 
+@@ -997,8 +995,8 @@ static __devinit int max8997_pmic_probe(struct platform_device *pdev)
+ 			max8997->buck1_vol[i] = ret =
+ 				max8997_get_voltage_proper_val(
+ 						&buck1245_voltage_map_desc,
+-						pdata->buck1_voltage[i] / 1000,
+-						pdata->buck1_voltage[i] / 1000 +
++						pdata->buck1_voltage[i],
++						pdata->buck1_voltage[i] +
+ 						buck1245_voltage_map_desc.step);
+ 			if (ret < 0)
+ 				goto err_alloc;
+@@ -1006,8 +1004,8 @@ static __devinit int max8997_pmic_probe(struct platform_device *pdev)
+ 			max8997->buck2_vol[i] = ret =
+ 				max8997_get_voltage_proper_val(
+ 						&buck1245_voltage_map_desc,
+-						pdata->buck2_voltage[i] / 1000,
+-						pdata->buck2_voltage[i] / 1000 +
++						pdata->buck2_voltage[i],
++						pdata->buck2_voltage[i] +
+ 						buck1245_voltage_map_desc.step);
+ 			if (ret < 0)
+ 				goto err_alloc;
+@@ -1015,8 +1013,8 @@ static __devinit int max8997_pmic_probe(struct platform_device *pdev)
+ 			max8997->buck5_vol[i] = ret =
+ 				max8997_get_voltage_proper_val(
+ 						&buck1245_voltage_map_desc,
+-						pdata->buck5_voltage[i] / 1000,
+-						pdata->buck5_voltage[i] / 1000 +
++						pdata->buck5_voltage[i],
++						pdata->buck5_voltage[i] +
+ 						buck1245_voltage_map_desc.step);
+ 			if (ret < 0)
+ 				goto err_alloc;
+diff --git a/drivers/regulator/max8998.c b/drivers/regulator/max8998.c
+index 5890265eeacc..130038368b73 100644
+--- a/drivers/regulator/max8998.c
++++ b/drivers/regulator/max8998.c
+@@ -492,7 +492,7 @@ buck2_exit:
+ 
+ 	difference = desc->min + desc->step*i - previous_vol/1000;
+ 	if (difference > 0)
+-		udelay(difference / ((val & 0x0f) + 1));
++		udelay(DIV_ROUND_UP(difference, (val & 0x0f) + 1));
+ 
+ 	return ret;
+ }
+diff --git a/drivers/rtc/rtc-pl031.c b/drivers/rtc/rtc-pl031.c
+index f027c063fb20..65ef56fb9a66 100644
+--- a/drivers/rtc/rtc-pl031.c
++++ b/drivers/rtc/rtc-pl031.c
+@@ -44,6 +44,7 @@
+ #define	RTC_YMR		0x34	/* Year match register */
+ #define	RTC_YLR		0x38	/* Year data load register */
+ 
++#define RTC_CR_EN	(1 << 0)	/* counter enable bit */
+ #define RTC_CR_CWEN	(1 << 26)	/* Clockwatch enable bit */
+ 
+ #define RTC_TCR_EN	(1 << 1) /* Periodic timer enable bit */
+@@ -312,7 +313,7 @@ static int pl031_probe(struct amba_device *adev, const struct amba_id *id)
+ 	int ret;
+ 	struct pl031_local *ldata;
+ 	struct rtc_class_ops *ops = id->data;
+-	unsigned long time;
++	unsigned long time, data;
+ 
+ 	ret = amba_request_regions(adev, NULL);
+ 	if (ret)
+@@ -339,10 +340,13 @@ static int pl031_probe(struct amba_device *adev, const struct amba_id *id)
+ 	dev_dbg(&adev->dev, "designer ID = 0x%02x\n", ldata->hw_designer);
+ 	dev_dbg(&adev->dev, "revision = 0x%01x\n", ldata->hw_revision);
+ 
++	data = readl(ldata->base + RTC_CR);
+ 	/* Enable the clockwatch on ST Variants */
+ 	if (ldata->hw_designer == AMBA_VENDOR_ST)
+-		writel(readl(ldata->base + RTC_CR) | RTC_CR_CWEN,
+-		       ldata->base + RTC_CR);
++		data |= RTC_CR_CWEN;
++	else
++		data |= RTC_CR_EN;
++	writel(data, ldata->base + RTC_CR);
+ 
+ 	/*
+ 	 * On ST PL031 variants, the RTC reset value does not provide correct
+diff --git a/drivers/staging/octeon/ethernet-tx.c b/drivers/staging/octeon/ethernet-tx.c
+index 91a97b3e45c6..5877b2c64e2a 100644
+--- a/drivers/staging/octeon/ethernet-tx.c
++++ b/drivers/staging/octeon/ethernet-tx.c
+@@ -345,7 +345,7 @@ int cvm_oct_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	}
+ 	if (unlikely
+ 	    (skb->truesize !=
+-	     sizeof(*skb) + skb_end_pointer(skb) - skb->head)) {
++	     sizeof(*skb) + skb_end_offset(skb))) {
+ 		/*
+ 		   printk("TX buffer truesize has been changed\n");
+ 		 */
+diff --git a/drivers/staging/speakup/speakup_soft.c b/drivers/staging/speakup/speakup_soft.c
+index b5130c8bcb64..e2f5c81e7548 100644
+--- a/drivers/staging/speakup/speakup_soft.c
++++ b/drivers/staging/speakup/speakup_soft.c
+@@ -46,7 +46,7 @@ static int misc_registered;
+ static struct var_t vars[] = {
+ 	{ CAPS_START, .u.s = {"\x01+3p" } },
+ 	{ CAPS_STOP, .u.s = {"\x01-3p" } },
+-	{ RATE, .u.n = {"\x01%ds", 5, 0, 9, 0, 0, NULL } },
++	{ RATE, .u.n = {"\x01%ds", 2, 0, 9, 0, 0, NULL } },
+ 	{ PITCH, .u.n = {"\x01%dp", 5, 0, 9, 0, 0, NULL } },
+ 	{ VOL, .u.n = {"\x01%dv", 5, 0, 9, 0, 0, NULL } },
+ 	{ TONE, .u.n = {"\x01%dx", 1, 0, 2, 0, 0, NULL } },
+diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
+index 685d612a627b..cdac567dc4eb 100644
+--- a/drivers/staging/zram/zram_drv.c
++++ b/drivers/staging/zram/zram_drv.c
+@@ -235,7 +235,7 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
+ 
+ 	if (is_partial_io(bvec)) {
+ 		/* Use  a temporary buffer to decompress the page */
+-		uncmem = kmalloc(PAGE_SIZE, GFP_KERNEL);
++		uncmem = kmalloc(PAGE_SIZE, GFP_NOIO);
+ 		if (!uncmem) {
+ 			pr_info("Error allocating temp memory!\n");
+ 			return -ENOMEM;
+@@ -330,7 +330,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
+ 		 * This is a partial IO. We need to read the full page
+ 		 * before to write the changes.
+ 		 */
+-		uncmem = kmalloc(PAGE_SIZE, GFP_KERNEL);
++		uncmem = kmalloc(PAGE_SIZE, GFP_NOIO);
+ 		if (!uncmem) {
+ 			pr_info("Error allocating temp memory!\n");
+ 			ret = -ENOMEM;
+@@ -535,13 +535,20 @@ out:
+  */
+ static inline int valid_io_request(struct zram *zram, struct bio *bio)
+ {
+-	if (unlikely(
+-		(bio->bi_sector >= (zram->disksize >> SECTOR_SHIFT)) ||
+-		(bio->bi_sector & (ZRAM_SECTOR_PER_LOGICAL_BLOCK - 1)) ||
+-		(bio->bi_size & (ZRAM_LOGICAL_BLOCK_SIZE - 1)))) {
++	u64 start, end, bound;
+ 
++	/* unaligned request */
++	if (unlikely(bio->bi_sector & (ZRAM_SECTOR_PER_LOGICAL_BLOCK - 1)))
++		return 0;
++	if (unlikely(bio->bi_size & (ZRAM_LOGICAL_BLOCK_SIZE - 1)))
++		return 0;
++
++	start = bio->bi_sector;
++	end = start + (bio->bi_size >> SECTOR_SHIFT);
++	bound = zram->disksize >> SECTOR_SHIFT;
++	/* out of range range */
++	if (unlikely(start >= bound || end > bound || start > end))
+ 		return 0;
+-	}
+ 
+ 	/* I/O request is valid */
+ 	return 1;
+@@ -703,7 +710,7 @@ static const struct block_device_operations zram_devops = {
+ 
+ static int create_device(struct zram *zram, int device_id)
+ {
+-	int ret = 0;
++	int ret = -ENOMEM;
+ 
+ 	init_rwsem(&zram->lock);
+ 	init_rwsem(&zram->init_lock);
+@@ -713,7 +720,6 @@ static int create_device(struct zram *zram, int device_id)
+ 	if (!zram->queue) {
+ 		pr_err("Error allocating disk queue for device %d\n",
+ 			device_id);
+-		ret = -ENOMEM;
+ 		goto out;
+ 	}
+ 
+@@ -723,11 +729,9 @@ static int create_device(struct zram *zram, int device_id)
+ 	/* gendisk structure */
+ 	zram->disk = alloc_disk(1);
+ 	if (!zram->disk) {
+-		blk_cleanup_queue(zram->queue);
+ 		pr_warning("Error allocating disk structure for device %d\n",
+ 			device_id);
+-		ret = -ENOMEM;
+-		goto out;
++		goto out_free_queue;
+ 	}
+ 
+ 	zram->disk->major = zram_major;
+@@ -756,11 +760,17 @@ static int create_device(struct zram *zram, int device_id)
+ 				&zram_disk_attr_group);
+ 	if (ret < 0) {
+ 		pr_warning("Error creating sysfs group");
+-		goto out;
++		goto out_free_disk;
+ 	}
+ 
+ 	zram->init_done = 0;
++	return 0;
+ 
++out_free_disk:
++	del_gendisk(zram->disk);
++	put_disk(zram->disk);
++out_free_queue:
++	blk_cleanup_queue(zram->queue);
+ out:
+ 	return ret;
+ }
+@@ -841,9 +851,11 @@ static void __exit zram_exit(void)
+ 	for (i = 0; i < num_devices; i++) {
+ 		zram = &zram_devices[i];
+ 
++		get_disk(zram->disk);
+ 		destroy_device(zram);
+ 		if (zram->init_done)
+ 			zram_reset_device(zram);
++		put_disk(zram->disk);
+ 	}
+ 
+ 	unregister_blkdev(zram_major, "zram");
+diff --git a/drivers/staging/zram/zram_sysfs.c b/drivers/staging/zram/zram_sysfs.c
+index a7f377175525..826653fff70e 100644
+--- a/drivers/staging/zram/zram_sysfs.c
++++ b/drivers/staging/zram/zram_sysfs.c
+@@ -95,6 +95,9 @@ static ssize_t reset_store(struct device *dev,
+ 	zram = dev_to_zram(dev);
+ 	bdev = bdget_disk(zram->disk, 0);
+ 
++	if (!bdev)
++		return -ENOMEM;
++
+ 	/* Do not reset an active device! */
+ 	if (bdev->bd_holders)
+ 		return -EBUSY;
+@@ -107,8 +110,7 @@ static ssize_t reset_store(struct device *dev,
+ 		return -EINVAL;
+ 
+ 	/* Make sure all pending I/O is finished */
+-	if (bdev)
+-		fsync_bdev(bdev);
++	fsync_bdev(bdev);
+ 
+ 	down_write(&zram->init_lock);
+ 	if (zram->init_done)
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index acc0eab58468..53ff37bb17d0 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -108,7 +108,7 @@ struct gsm_mux_net {
+  */
+ 
+ struct gsm_msg {
+-	struct gsm_msg *next;
++	struct list_head list;
+ 	u8 addr;		/* DLCI address + flags */
+ 	u8 ctrl;		/* Control byte + flags */
+ 	unsigned int len;	/* Length of data block (can be zero) */
+@@ -245,8 +245,7 @@ struct gsm_mux {
+ 	unsigned int tx_bytes;		/* TX data outstanding */
+ #define TX_THRESH_HI		8192
+ #define TX_THRESH_LO		2048
+-	struct gsm_msg *tx_head;	/* Pending data packets */
+-	struct gsm_msg *tx_tail;
++	struct list_head tx_list;	/* Pending data packets */
+ 
+ 	/* Control messages */
+ 	struct timer_list t2_timer;	/* Retransmit timer for commands */
+@@ -663,7 +662,7 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
+ 	m->len = len;
+ 	m->addr = addr;
+ 	m->ctrl = ctrl;
+-	m->next = NULL;
++	INIT_LIST_HEAD(&m->list);
+ 	return m;
+ }
+ 
+@@ -673,22 +672,21 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
+  *
+  *	The tty device has called us to indicate that room has appeared in
+  *	the transmit queue. Ram more data into the pipe if we have any
++ *	If we have been flow-stopped by a CMD_FCOFF, then we can only
++ *	send messages on DLCI0 until CMD_FCON
+  *
+  *	FIXME: lock against link layer control transmissions
+  */
+ 
+ static void gsm_data_kick(struct gsm_mux *gsm)
+ {
+-	struct gsm_msg *msg = gsm->tx_head;
++	struct gsm_msg *msg, *nmsg;
+ 	int len;
+ 	int skip_sof = 0;
+ 
+-	/* FIXME: We need to apply this solely to data messages */
+-	if (gsm->constipated)
+-		return;
+-
+-	while (gsm->tx_head != NULL) {
+-		msg = gsm->tx_head;
++	list_for_each_entry_safe(msg, nmsg, &gsm->tx_list, list) {
++		if (gsm->constipated && msg->addr)
++			continue;
+ 		if (gsm->encoding != 0) {
+ 			gsm->txframe[0] = GSM1_SOF;
+ 			len = gsm_stuff_frame(msg->data,
+@@ -711,14 +709,13 @@ static void gsm_data_kick(struct gsm_mux *gsm)
+ 						len - skip_sof) < 0)
+ 			break;
+ 		/* FIXME: Can eliminate one SOF in many more cases */
+-		gsm->tx_head = msg->next;
+-		if (gsm->tx_head == NULL)
+-			gsm->tx_tail = NULL;
+ 		gsm->tx_bytes -= msg->len;
+-		kfree(msg);
+ 		/* For a burst of frames skip the extra SOF within the
+ 		   burst */
+ 		skip_sof = 1;
++
++		list_del(&msg->list);
++		kfree(msg);
+ 	}
+ }
+ 
+@@ -768,11 +765,7 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
+ 	msg->data = dp;
+ 
+ 	/* Add to the actual output queue */
+-	if (gsm->tx_tail)
+-		gsm->tx_tail->next = msg;
+-	else
+-		gsm->tx_head = msg;
+-	gsm->tx_tail = msg;
++	list_add_tail(&msg->list, &gsm->tx_list);
+ 	gsm->tx_bytes += msg->len;
+ 	gsm_data_kick(gsm);
+ }
+@@ -886,7 +879,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
+ 	if (len > gsm->mtu) {
+ 		if (dlci->adaption == 3) {
+ 			/* Over long frame, bin it */
+-			kfree_skb(dlci->skb);
++			dev_kfree_skb_any(dlci->skb);
+ 			dlci->skb = NULL;
+ 			return 0;
+ 		}
+@@ -915,7 +908,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
+ 	skb_pull(dlci->skb, len);
+ 	__gsm_data_queue(dlci, msg);
+ 	if (last) {
+-		kfree_skb(dlci->skb);
++		dev_kfree_skb_any(dlci->skb);
+ 		dlci->skb = NULL;
+ 	}
+ 	return size;
+@@ -976,6 +969,9 @@ static void gsm_dlci_data_kick(struct gsm_dlci *dlci)
+ 	unsigned long flags;
+ 	int sweep;
+ 
++	if (dlci->constipated)
++		return;
++
+ 	spin_lock_irqsave(&dlci->gsm->tx_lock, flags);
+ 	/* If we have nothing running then we need to fire up */
+ 	sweep = (dlci->gsm->tx_bytes < TX_THRESH_LO);
+@@ -1033,6 +1029,7 @@ static void gsm_process_modem(struct tty_struct *tty, struct gsm_dlci *dlci,
+ {
+ 	int  mlines = 0;
+ 	u8 brk = 0;
++	int fc;
+ 
+ 	/* The modem status command can either contain one octet (v.24 signals)
+ 	   or two octets (v.24 signals + break signals). The length field will
+@@ -1044,19 +1041,21 @@ static void gsm_process_modem(struct tty_struct *tty, struct gsm_dlci *dlci,
+ 	else {
+ 		brk = modem & 0x7f;
+ 		modem = (modem >> 7) & 0x7f;
+-	};
++	}
+ 
+ 	/* Flow control/ready to communicate */
+-	if (modem & MDM_FC) {
++	fc = (modem & MDM_FC) || !(modem & MDM_RTR);
++	if (fc && !dlci->constipated) {
+ 		/* Need to throttle our output on this device */
+ 		dlci->constipated = 1;
+-	}
+-	if (modem & MDM_RTC) {
+-		mlines |= TIOCM_DSR | TIOCM_DTR;
++	} else if (!fc && dlci->constipated) {
+ 		dlci->constipated = 0;
+ 		gsm_dlci_data_kick(dlci);
+ 	}
++
+ 	/* Map modem bits */
++	if (modem & MDM_RTC)
++		mlines |= TIOCM_DSR | TIOCM_DTR;
+ 	if (modem & MDM_RTR)
+ 		mlines |= TIOCM_RTS | TIOCM_CTS;
+ 	if (modem & MDM_IC)
+@@ -1225,19 +1224,19 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
+ 		gsm_control_reply(gsm, CMD_TEST, data, clen);
+ 		break;
+ 	case CMD_FCON:
+-		/* Modem wants us to STFU */
+-		gsm->constipated = 1;
+-		gsm_control_reply(gsm, CMD_FCON, NULL, 0);
+-		break;
+-	case CMD_FCOFF:
+ 		/* Modem can accept data again */
+ 		gsm->constipated = 0;
+-		gsm_control_reply(gsm, CMD_FCOFF, NULL, 0);
++		gsm_control_reply(gsm, CMD_FCON, NULL, 0);
+ 		/* Kick the link in case it is idling */
+ 		spin_lock_irqsave(&gsm->tx_lock, flags);
+ 		gsm_data_kick(gsm);
+ 		spin_unlock_irqrestore(&gsm->tx_lock, flags);
+ 		break;
++	case CMD_FCOFF:
++		/* Modem wants us to STFU */
++		gsm->constipated = 1;
++		gsm_control_reply(gsm, CMD_FCOFF, NULL, 0);
++		break;
+ 	case CMD_MSC:
+ 		/* Out of band modem line change indicator for a DLCI */
+ 		gsm_control_modem(gsm, data, clen);
+@@ -1689,7 +1688,7 @@ static void gsm_dlci_free(struct kref *ref)
+ 	dlci->gsm->dlci[dlci->addr] = NULL;
+ 	kfifo_free(dlci->fifo);
+ 	while ((dlci->skb = skb_dequeue(&dlci->skb_list)))
+-		kfree_skb(dlci->skb);
++		dev_kfree_skb(dlci->skb);
+ 	kfree(dlci);
+ }
+ 
+@@ -2040,7 +2039,7 @@ void gsm_cleanup_mux(struct gsm_mux *gsm)
+ {
+ 	int i;
+ 	struct gsm_dlci *dlci = gsm->dlci[0];
+-	struct gsm_msg *txq;
++	struct gsm_msg *txq, *ntxq;
+ 	struct gsm_control *gc;
+ 
+ 	gsm->dead = 1;
+@@ -2075,11 +2074,9 @@ void gsm_cleanup_mux(struct gsm_mux *gsm)
+ 		if (gsm->dlci[i])
+ 			gsm_dlci_release(gsm->dlci[i]);
+ 	/* Now wipe the queues */
+-	for (txq = gsm->tx_head; txq != NULL; txq = gsm->tx_head) {
+-		gsm->tx_head = txq->next;
++	list_for_each_entry_safe(txq, ntxq, &gsm->tx_list, list)
+ 		kfree(txq);
+-	}
+-	gsm->tx_tail = NULL;
++	INIT_LIST_HEAD(&gsm->tx_list);
+ }
+ EXPORT_SYMBOL_GPL(gsm_cleanup_mux);
+ 
+@@ -2190,6 +2187,7 @@ struct gsm_mux *gsm_alloc_mux(void)
+ 	}
+ 	spin_lock_init(&gsm->lock);
+ 	kref_init(&gsm->ref);
++	INIT_LIST_HEAD(&gsm->tx_list);
+ 
+ 	gsm->t1 = T1;
+ 	gsm->t2 = T2;
+@@ -2306,7 +2304,7 @@ static void gsmld_receive_buf(struct tty_struct *tty, const unsigned char *cp,
+ 			gsm->error(gsm, *dp, flags);
+ 			break;
+ 		default:
+-			WARN_ONCE("%s: unknown flag %d\n",
++			WARN_ONCE(1, "%s: unknown flag %d\n",
+ 			       tty_name(tty, buf), flags);
+ 			break;
+ 		}
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 0de7ed788631..88620e1ad5fc 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -131,6 +131,7 @@
+ #define  UCR4_OREN  	 (1<<1)  /* Receiver overrun interrupt enable */
+ #define  UCR4_DREN  	 (1<<0)  /* Recv data ready interrupt enable */
+ #define  UFCR_RXTL_SHF   0       /* Receiver trigger level shift */
++#define  UFCR_DCEDTE	 (1<<6)  /* DCE/DTE mode select */
+ #define  UFCR_RFDIV      (7<<7)  /* Reference freq divider mask */
+ #define  UFCR_RFDIV_REG(x)	(((x) < 7 ? 6 - (x) : 6) << 7)
+ #define  UFCR_TXTL_SHF   10      /* Transmitter trigger level shift */
+@@ -666,22 +667,11 @@ static void imx_break_ctl(struct uart_port *port, int break_state)
+ static int imx_setup_ufcr(struct imx_port *sport, unsigned int mode)
+ {
+ 	unsigned int val;
+-	unsigned int ufcr_rfdiv;
+-
+-	/* set receiver / transmitter trigger level.
+-	 * RFDIV is set such way to satisfy requested uartclk value
+-	 */
+-	val = TXTL << 10 | RXTL;
+-	ufcr_rfdiv = (clk_get_rate(sport->clk) + sport->port.uartclk / 2)
+-			/ sport->port.uartclk;
+-
+-	if(!ufcr_rfdiv)
+-		ufcr_rfdiv = 1;
+-
+-	val |= UFCR_RFDIV_REG(ufcr_rfdiv);
+ 
++	/* set receiver / transmitter trigger level */
++	val = readl(sport->port.membase + UFCR) & (UFCR_RFDIV | UFCR_DCEDTE);
++	val |= TXTL << UFCR_TXTL_SHF | RXTL;
+ 	writel(val, sport->port.membase + UFCR);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 4a9dd86bce8e..f509888b7ad0 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1587,13 +1587,27 @@ static const struct usb_device_id acm_ids[] = {
+ 	},
+ 	/* Motorola H24 HSPA module: */
+ 	{ USB_DEVICE(0x22b8, 0x2d91) }, /* modem                                */
+-	{ USB_DEVICE(0x22b8, 0x2d92) }, /* modem           + diagnostics        */
+-	{ USB_DEVICE(0x22b8, 0x2d93) }, /* modem + AT port                      */
+-	{ USB_DEVICE(0x22b8, 0x2d95) }, /* modem + AT port + diagnostics        */
+-	{ USB_DEVICE(0x22b8, 0x2d96) }, /* modem                         + NMEA */
+-	{ USB_DEVICE(0x22b8, 0x2d97) }, /* modem           + diagnostics + NMEA */
+-	{ USB_DEVICE(0x22b8, 0x2d99) }, /* modem + AT port               + NMEA */
+-	{ USB_DEVICE(0x22b8, 0x2d9a) }, /* modem + AT port + diagnostics + NMEA */
++	{ USB_DEVICE(0x22b8, 0x2d92),   /* modem           + diagnostics        */
++	.driver_info = NO_UNION_NORMAL, /* handle only modem interface          */
++	},
++	{ USB_DEVICE(0x22b8, 0x2d93),   /* modem + AT port                      */
++	.driver_info = NO_UNION_NORMAL, /* handle only modem interface          */
++	},
++	{ USB_DEVICE(0x22b8, 0x2d95),   /* modem + AT port + diagnostics        */
++	.driver_info = NO_UNION_NORMAL, /* handle only modem interface          */
++	},
++	{ USB_DEVICE(0x22b8, 0x2d96),   /* modem                         + NMEA */
++	.driver_info = NO_UNION_NORMAL, /* handle only modem interface          */
++	},
++	{ USB_DEVICE(0x22b8, 0x2d97),   /* modem           + diagnostics + NMEA */
++	.driver_info = NO_UNION_NORMAL, /* handle only modem interface          */
++	},
++	{ USB_DEVICE(0x22b8, 0x2d99),   /* modem + AT port               + NMEA */
++	.driver_info = NO_UNION_NORMAL, /* handle only modem interface          */
++	},
++	{ USB_DEVICE(0x22b8, 0x2d9a),   /* modem + AT port + diagnostics + NMEA */
++	.driver_info = NO_UNION_NORMAL, /* handle only modem interface          */
++	},
+ 
+ 	{ USB_DEVICE(0x0572, 0x1329), /* Hummingbird huc56s (Conexant) */
+ 	.driver_info = NO_UNION_NORMAL, /* union descriptor misplaced on
+diff --git a/drivers/usb/gadget/at91_udc.c b/drivers/usb/gadget/at91_udc.c
+index be6952e2fc5a..bf5671c74607 100644
+--- a/drivers/usb/gadget/at91_udc.c
++++ b/drivers/usb/gadget/at91_udc.c
+@@ -1741,16 +1741,6 @@ static int __devinit at91udc_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	if (pdev->num_resources != 2) {
+-		DBG("invalid num_resources\n");
+-		return -ENODEV;
+-	}
+-	if ((pdev->resource[0].flags != IORESOURCE_MEM)
+-			|| (pdev->resource[1].flags != IORESOURCE_IRQ)) {
+-		DBG("invalid resource type\n");
+-		return -ENODEV;
+-	}
+-
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (!res)
+ 		return -ENXIO;
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index b53065b53fc9..3bd9bd49ed9a 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -110,6 +110,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x8218) }, /* Lipowsky Industrie Elektronik GmbH, HARP-1 */
+ 	{ USB_DEVICE(0x10C4, 0x822B) }, /* Modem EDGE(GSM) Comander 2 */
+ 	{ USB_DEVICE(0x10C4, 0x826B) }, /* Cygnal Integrated Products, Inc., Fasttrax GPS demonstration module */
++	{ USB_DEVICE(0x10C4, 0x8281) }, /* Nanotec Plug & Drive */
+ 	{ USB_DEVICE(0x10C4, 0x8293) }, /* Telegesis ETRX2USB */
+ 	{ USB_DEVICE(0x10C4, 0x82F9) }, /* Procyon AVS */
+ 	{ USB_DEVICE(0x10C4, 0x8341) }, /* Siemens MC35PU GPRS Modem */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 156cf593d84f..3b2dfe9c5212 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -920,6 +920,39 @@ static struct usb_device_id id_table_combined [] = {
+ 	{ USB_DEVICE(FTDI_VID, FTDI_Z3X_PID) },
+ 	/* Cressi Devices */
+ 	{ USB_DEVICE(FTDI_VID, FTDI_CRESSI_PID) },
++	/* Brainboxes Devices */
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_001_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_012_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_023_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_VX_034_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_101_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_1_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_2_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_3_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_4_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_5_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_6_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_7_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_160_8_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_257_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_1_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_2_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_3_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_279_4_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_313_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_324_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_346_1_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_346_2_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_357_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_606_1_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_606_2_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_606_3_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_701_1_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_701_2_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_1_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_2_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_3_PID) },
++	{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_4_PID) },
+ 	{ },					/* Optional parameter entry */
+ 	{ }					/* Terminating entry */
+ };
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index e599fbfcde5f..993c93df6874 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1326,3 +1326,40 @@
+  * Manufacturer: Cressi
+  */
+ #define FTDI_CRESSI_PID		0x87d0
++
++/*
++ * Brainboxes devices
++ */
++#define BRAINBOXES_VID			0x05d1
++#define BRAINBOXES_VX_001_PID		0x1001 /* VX-001 ExpressCard 1 Port RS232 */
++#define BRAINBOXES_VX_012_PID		0x1002 /* VX-012 ExpressCard 2 Port RS232 */
++#define BRAINBOXES_VX_023_PID		0x1003 /* VX-023 ExpressCard 1 Port RS422/485 */
++#define BRAINBOXES_VX_034_PID		0x1004 /* VX-034 ExpressCard 2 Port RS422/485 */
++#define BRAINBOXES_US_101_PID		0x1011 /* US-101 1xRS232 */
++#define BRAINBOXES_US_324_PID		0x1013 /* US-324 1xRS422/485 1Mbaud */
++#define BRAINBOXES_US_606_1_PID		0x2001 /* US-606 6 Port RS232 Serial Port 1 and 2 */
++#define BRAINBOXES_US_606_2_PID		0x2002 /* US-606 6 Port RS232 Serial Port 3 and 4 */
++#define BRAINBOXES_US_606_3_PID		0x2003 /* US-606 6 Port RS232 Serial Port 4 and 6 */
++#define BRAINBOXES_US_701_1_PID		0x2011 /* US-701 4xRS232 1Mbaud Port 1 and 2 */
++#define BRAINBOXES_US_701_2_PID		0x2012 /* US-701 4xRS422 1Mbaud Port 3 and 4 */
++#define BRAINBOXES_US_279_1_PID		0x2021 /* US-279 8xRS422 1Mbaud Port 1 and 2 */
++#define BRAINBOXES_US_279_2_PID		0x2022 /* US-279 8xRS422 1Mbaud Port 3 and 4 */
++#define BRAINBOXES_US_279_3_PID		0x2023 /* US-279 8xRS422 1Mbaud Port 5 and 6 */
++#define BRAINBOXES_US_279_4_PID		0x2024 /* US-279 8xRS422 1Mbaud Port 7 and 8 */
++#define BRAINBOXES_US_346_1_PID		0x3011 /* US-346 4xRS422/485 1Mbaud Port 1 and 2 */
++#define BRAINBOXES_US_346_2_PID		0x3012 /* US-346 4xRS422/485 1Mbaud Port 3 and 4 */
++#define BRAINBOXES_US_257_PID		0x5001 /* US-257 2xRS232 1Mbaud */
++#define BRAINBOXES_US_313_PID		0x6001 /* US-313 2xRS422/485 1Mbaud */
++#define BRAINBOXES_US_357_PID		0x7001 /* US_357 1xRS232/422/485 */
++#define BRAINBOXES_US_842_1_PID		0x8001 /* US-842 8xRS422/485 1Mbaud Port 1 and 2 */
++#define BRAINBOXES_US_842_2_PID		0x8002 /* US-842 8xRS422/485 1Mbaud Port 3 and 4 */
++#define BRAINBOXES_US_842_3_PID		0x8003 /* US-842 8xRS422/485 1Mbaud Port 5 and 6 */
++#define BRAINBOXES_US_842_4_PID		0x8004 /* US-842 8xRS422/485 1Mbaud Port 7 and 8 */
++#define BRAINBOXES_US_160_1_PID		0x9001 /* US-160 16xRS232 1Mbaud Port 1 and 2 */
++#define BRAINBOXES_US_160_2_PID		0x9002 /* US-160 16xRS232 1Mbaud Port 3 and 4 */
++#define BRAINBOXES_US_160_3_PID		0x9003 /* US-160 16xRS232 1Mbaud Port 5 and 6 */
++#define BRAINBOXES_US_160_4_PID		0x9004 /* US-160 16xRS232 1Mbaud Port 7 and 8 */
++#define BRAINBOXES_US_160_5_PID		0x9005 /* US-160 16xRS232 1Mbaud Port 9 and 10 */
++#define BRAINBOXES_US_160_6_PID		0x9006 /* US-160 16xRS232 1Mbaud Port 11 and 12 */
++#define BRAINBOXES_US_160_7_PID		0x9007 /* US-160 16xRS232 1Mbaud Port 13 and 14 */
++#define BRAINBOXES_US_160_8_PID		0x9008 /* US-160 16xRS232 1Mbaud Port 15 and 16 */
+diff --git a/drivers/usb/serial/io_ti.c b/drivers/usb/serial/io_ti.c
+index 64eb8799f1ed..5cca1b3cb083 100644
+--- a/drivers/usb/serial/io_ti.c
++++ b/drivers/usb/serial/io_ti.c
+@@ -29,6 +29,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/mutex.h>
+ #include <linux/serial.h>
++#include <linux/swab.h>
+ #include <linux/kfifo.h>
+ #include <linux/ioctl.h>
+ #include <linux/firmware.h>
+@@ -298,7 +299,7 @@ static int read_download_mem(struct usb_device *dev, int start_address,
+ {
+ 	int status = 0;
+ 	__u8 read_length;
+-	__be16 be_start_address;
++	u16 be_start_address;
+ 
+ 	dbg("%s - @ %x for %d", __func__, start_address, length);
+ 
+@@ -315,10 +316,14 @@ static int read_download_mem(struct usb_device *dev, int start_address,
+ 			dbg("%s - @ %x for %d", __func__,
+ 			     start_address, read_length);
+ 		}
+-		be_start_address = cpu_to_be16(start_address);
++		/*
++		 * NOTE: Must use swab as wIndex is sent in little-endian
++		 * byte order regardless of host byte order.
++		 */
++		be_start_address = swab16((u16)start_address);
+ 		status = ti_vread_sync(dev, UMPC_MEMORY_READ,
+ 					(__u16)address_type,
+-					(__force __u16)be_start_address,
++					be_start_address,
+ 					buffer, read_length);
+ 
+ 		if (status) {
+@@ -418,7 +423,7 @@ static int write_i2c_mem(struct edgeport_serial *serial,
+ {
+ 	int status = 0;
+ 	int write_length;
+-	__be16 be_start_address;
++	u16 be_start_address;
+ 
+ 	/* We can only send a maximum of 1 aligned byte page at a time */
+ 
+@@ -434,11 +439,16 @@ static int write_i2c_mem(struct edgeport_serial *serial,
+ 	usb_serial_debug_data(debug, &serial->serial->dev->dev,
+ 						__func__, write_length, buffer);
+ 
+-	/* Write first page */
+-	be_start_address = cpu_to_be16(start_address);
++	/*
++	 * Write first page.
++	 *
++	 * NOTE: Must use swab as wIndex is sent in little-endian byte order
++	 * regardless of host byte order.
++	 */
++	be_start_address = swab16((u16)start_address);
+ 	status = ti_vsend_sync(serial->serial->dev,
+ 				UMPC_MEMORY_WRITE, (__u16)address_type,
+-				(__force __u16)be_start_address,
++				be_start_address,
+ 				buffer, write_length);
+ 	if (status) {
+ 		dbg("%s - ERROR %d", __func__, status);
+@@ -462,11 +472,16 @@ static int write_i2c_mem(struct edgeport_serial *serial,
+ 		usb_serial_debug_data(debug, &serial->serial->dev->dev,
+ 					__func__, write_length, buffer);
+ 
5245 |
+ |
5246 |
+- /* Write next page */ |
5247 |
+- be_start_address = cpu_to_be16(start_address); |
5248 |
++ /* |
5249 |
++ * Write next page. |
5250 |
++ * |
5251 |
++ * NOTE: Must use swab as wIndex is sent in little-endian byte |
5252 |
++ * order regardless of host byte order. |
5253 |
++ */ |
5254 |
++ be_start_address = swab16((u16)start_address); |
5255 |
+ status = ti_vsend_sync(serial->serial->dev, UMPC_MEMORY_WRITE, |
5256 |
+ (__u16)address_type, |
5257 |
+- (__force __u16)be_start_address, |
5258 |
++ be_start_address, |
5259 |
+ buffer, write_length); |
5260 |
+ if (status) { |
5261 |
+ dev_err(&serial->serial->dev->dev, "%s - ERROR %d\n", |
5262 |
+@@ -673,8 +688,8 @@ static int get_descriptor_addr(struct edgeport_serial *serial, |
5263 |
+ if (rom_desc->Type == desc_type) |
5264 |
+ return start_address; |
5265 |
+ |
5266 |
+- start_address = start_address + sizeof(struct ti_i2c_desc) |
5267 |
+- + rom_desc->Size; |
5268 |
++ start_address = start_address + sizeof(struct ti_i2c_desc) + |
5269 |
++ le16_to_cpu(rom_desc->Size); |
5270 |
+ |
5271 |
+ } while ((start_address < TI_MAX_I2C_SIZE) && rom_desc->Type); |
5272 |
+ |
5273 |
+@@ -687,7 +702,7 @@ static int valid_csum(struct ti_i2c_desc *rom_desc, __u8 *buffer) |
5274 |
+ __u16 i; |
5275 |
+ __u8 cs = 0; |
5276 |
+ |
5277 |
+- for (i = 0; i < rom_desc->Size; i++) |
5278 |
++ for (i = 0; i < le16_to_cpu(rom_desc->Size); i++) |
5279 |
+ cs = (__u8)(cs + buffer[i]); |
5280 |
+ |
5281 |
+ if (cs != rom_desc->CheckSum) { |
5282 |
+@@ -741,7 +756,7 @@ static int check_i2c_image(struct edgeport_serial *serial) |
5283 |
+ break; |
5284 |
+ |
5285 |
+ if ((start_address + sizeof(struct ti_i2c_desc) + |
5286 |
+- rom_desc->Size) > TI_MAX_I2C_SIZE) { |
5287 |
++ le16_to_cpu(rom_desc->Size)) > TI_MAX_I2C_SIZE) { |
5288 |
+ status = -ENODEV; |
5289 |
+ dbg("%s - structure too big, erroring out.", __func__); |
5290 |
+ break; |
5291 |
+@@ -756,7 +771,8 @@ static int check_i2c_image(struct edgeport_serial *serial) |
5292 |
+ /* Read the descriptor data */ |
5293 |
+ status = read_rom(serial, start_address + |
5294 |
+ sizeof(struct ti_i2c_desc), |
5295 |
+- rom_desc->Size, buffer); |
5296 |
++ le16_to_cpu(rom_desc->Size), |
5297 |
++ buffer); |
5298 |
+ if (status) |
5299 |
+ break; |
5300 |
+ |
5301 |
+@@ -765,7 +781,7 @@ static int check_i2c_image(struct edgeport_serial *serial) |
5302 |
+ break; |
5303 |
+ } |
5304 |
+ start_address = start_address + sizeof(struct ti_i2c_desc) + |
5305 |
+- rom_desc->Size; |
5306 |
++ le16_to_cpu(rom_desc->Size); |
5307 |
+ |
5308 |
+ } while ((rom_desc->Type != I2C_DESC_TYPE_ION) && |
5309 |
+ (start_address < TI_MAX_I2C_SIZE)); |
5310 |
+@@ -804,7 +820,7 @@ static int get_manuf_info(struct edgeport_serial *serial, __u8 *buffer) |
5311 |
+ |
5312 |
+ /* Read the descriptor data */ |
5313 |
+ status = read_rom(serial, start_address+sizeof(struct ti_i2c_desc), |
5314 |
+- rom_desc->Size, buffer); |
5315 |
++ le16_to_cpu(rom_desc->Size), buffer); |
5316 |
+ if (status) |
5317 |
+ goto exit; |
5318 |
+ |
5319 |
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 8b5c8e5d78d8..e9c113dab5c4 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -234,8 +234,31 @@ static void option_instat_callback(struct urb *urb);
+ #define QUALCOMM_VENDOR_ID 0x05C6
+
+ #define CMOTECH_VENDOR_ID 0x16d8
+-#define CMOTECH_PRODUCT_6008 0x6008
+-#define CMOTECH_PRODUCT_6280 0x6280
++#define CMOTECH_PRODUCT_6001 0x6001
++#define CMOTECH_PRODUCT_CMU_300 0x6002
++#define CMOTECH_PRODUCT_6003 0x6003
++#define CMOTECH_PRODUCT_6004 0x6004
++#define CMOTECH_PRODUCT_6005 0x6005
++#define CMOTECH_PRODUCT_CGU_628A 0x6006
++#define CMOTECH_PRODUCT_CHE_628S 0x6007
++#define CMOTECH_PRODUCT_CMU_301 0x6008
++#define CMOTECH_PRODUCT_CHU_628 0x6280
++#define CMOTECH_PRODUCT_CHU_628S 0x6281
++#define CMOTECH_PRODUCT_CDU_680 0x6803
++#define CMOTECH_PRODUCT_CDU_685A 0x6804
++#define CMOTECH_PRODUCT_CHU_720S 0x7001
++#define CMOTECH_PRODUCT_7002 0x7002
++#define CMOTECH_PRODUCT_CHU_629K 0x7003
++#define CMOTECH_PRODUCT_7004 0x7004
++#define CMOTECH_PRODUCT_7005 0x7005
++#define CMOTECH_PRODUCT_CGU_629 0x7006
++#define CMOTECH_PRODUCT_CHU_629S 0x700a
++#define CMOTECH_PRODUCT_CHU_720I 0x7211
++#define CMOTECH_PRODUCT_7212 0x7212
++#define CMOTECH_PRODUCT_7213 0x7213
++#define CMOTECH_PRODUCT_7251 0x7251
++#define CMOTECH_PRODUCT_7252 0x7252
++#define CMOTECH_PRODUCT_7253 0x7253
+
+ #define TELIT_VENDOR_ID 0x1bc7
+ #define TELIT_PRODUCT_UC864E 0x1003
+@@ -243,6 +266,7 @@ static void option_instat_callback(struct urb *urb);
+ #define TELIT_PRODUCT_CC864_DUAL 0x1005
+ #define TELIT_PRODUCT_CC864_SINGLE 0x1006
+ #define TELIT_PRODUCT_DE910_DUAL 0x1010
++#define TELIT_PRODUCT_UE910_V2 0x1012
+ #define TELIT_PRODUCT_LE920 0x1200
+
+ /* ZTE PRODUCTS */
+@@ -291,6 +315,7 @@ static void option_instat_callback(struct urb *urb);
+ #define ALCATEL_PRODUCT_X060S_X200 0x0000
+ #define ALCATEL_PRODUCT_X220_X500D 0x0017
+ #define ALCATEL_PRODUCT_L100V 0x011e
++#define ALCATEL_PRODUCT_L800MA 0x0203
+
+ #define PIRELLI_VENDOR_ID 0x1266
+ #define PIRELLI_PRODUCT_C100_1 0x1002
+@@ -353,6 +378,7 @@ static void option_instat_callback(struct urb *urb);
+ #define OLIVETTI_PRODUCT_OLICARD100 0xc000
+ #define OLIVETTI_PRODUCT_OLICARD145 0xc003
+ #define OLIVETTI_PRODUCT_OLICARD200 0xc005
++#define OLIVETTI_PRODUCT_OLICARD500 0xc00b
+
+ /* Celot products */
+ #define CELOT_VENDOR_ID 0x211f
+@@ -514,6 +540,10 @@ static const struct option_blacklist_info huawei_cdc12_blacklist = {
+ .reserved = BIT(1) | BIT(2),
+ };
+
++static const struct option_blacklist_info net_intf0_blacklist = {
++ .reserved = BIT(0),
++};
++
+ static const struct option_blacklist_info net_intf1_blacklist = {
+ .reserved = BIT(1),
+ };
+@@ -1048,13 +1078,53 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)}, /* ONYX 3G device */
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */
+- { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6280) }, /* BP3-USB & BP3-EXT HSDPA */
+- { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6008) },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6003),
++ .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6004) },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6005) },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CGU_628A) },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHE_628S),
++ .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_301),
++ .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_628),
++ .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_628S) },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CDU_680) },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CDU_685A) },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_720S),
++ .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7002),
++ .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_629K),
++ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7004),
++ .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7005) },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CGU_629),
++ .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_629S),
++ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_720I),
++ .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7212),
++ .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7213),
++ .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7251),
++ .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7252),
++ .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
++ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7253),
++ .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UC864E) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UC864G) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_DUAL) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) },
++ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920),
+ .driver_info = (kernel_ulong_t)&telit_le920_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
+@@ -1518,6 +1588,8 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
+ { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_L100V),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
++ { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_L800MA),
++ .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
+ { USB_DEVICE(AIRPLUS_VENDOR_ID, AIRPLUS_PRODUCT_MCD650) },
+ { USB_DEVICE(TLAYTECH_VENDOR_ID, TLAYTECH_PRODUCT_TEU800) },
+ { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14),
+@@ -1563,6 +1635,9 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200),
+ .driver_info = (kernel_ulong_t)&net_intf6_blacklist
+ },
++ { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD500),
++ .driver_info = (kernel_ulong_t)&net_intf4_blacklist
++ },
+ { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */
+ { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730 LTE USB modem.*/
+ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM600) },
+diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
+index 8ec15c2540b8..3f5e4a73ddd5 100644
+--- a/drivers/usb/serial/sierra.c
++++ b/drivers/usb/serial/sierra.c
+@@ -305,7 +305,6 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x0f3d, 0x68A3), /* Airprime/Sierra Wireless Direct IP modems */
+ .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist
+ },
+- { USB_DEVICE(0x413C, 0x08133) }, /* Dell Computer Corp. Wireless 5720 VZW Mobile Broadband (EVDO Rev-A) Minicard GPS Port */
+
+ { }
+ };
+diff --git a/drivers/usb/storage/shuttle_usbat.c b/drivers/usb/storage/shuttle_usbat.c
+index fa1ceebc465c..f3248fbd0b3d 100644
+--- a/drivers/usb/storage/shuttle_usbat.c
++++ b/drivers/usb/storage/shuttle_usbat.c
+@@ -1846,7 +1846,7 @@ static int usbat_probe(struct usb_interface *intf,
+ us->transport_name = "Shuttle USBAT";
+ us->transport = usbat_flash_transport;
+ us->transport_reset = usb_stor_CB_reset;
+- us->max_lun = 1;
++ us->max_lun = 0;
+
+ result = usb_stor_probe2(us);
+ return result;
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index bbe9adb0eb66..1d9fc30f3ac3 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -226,6 +226,20 @@ UNUSUAL_DEV( 0x0421, 0x0495, 0x0370, 0x0370,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_MAX_SECTORS_64 ),
+
++/* Reported by Daniele Forsi <dforsi@×××××.com> */
++UNUSUAL_DEV( 0x0421, 0x04b9, 0x0350, 0x0350,
++ "Nokia",
++ "5300",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_MAX_SECTORS_64 ),
++
++/* Patch submitted by Victor A. Santos <victoraur.santos@×××××.com> */
++UNUSUAL_DEV( 0x0421, 0x05af, 0x0742, 0x0742,
++ "Nokia",
++ "305",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_MAX_SECTORS_64),
++
+ /* Patch submitted by Mikhail Zolotaryov <lebon@×××××××××.ua> */
+ UNUSUAL_DEV( 0x0421, 0x06aa, 0x1110, 0x1110,
+ "Nokia",
+diff --git a/drivers/video/backlight/atmel-pwm-bl.c b/drivers/video/backlight/atmel-pwm-bl.c
+index 4d2bbd893e88..dab3a0c9f480 100644
+--- a/drivers/video/backlight/atmel-pwm-bl.c
++++ b/drivers/video/backlight/atmel-pwm-bl.c
+@@ -211,7 +211,8 @@ static int __exit atmel_pwm_bl_remove(struct platform_device *pdev)
+ struct atmel_pwm_bl *pwmbl = platform_get_drvdata(pdev);
+
+ if (pwmbl->gpio_on != -1) {
+- gpio_set_value(pwmbl->gpio_on, 0);
++ gpio_set_value(pwmbl->gpio_on,
++ 0 ^ pwmbl->pdata->on_active_low);
+ gpio_free(pwmbl->gpio_on);
+ }
+ pwm_channel_disable(&pwmbl->pwmc);
+diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
+index 5855d17d19ac..9d8feac67637 100644
+--- a/drivers/video/console/vgacon.c
++++ b/drivers/video/console/vgacon.c
+@@ -42,6 +42,7 @@
+ #include <linux/kd.h>
+ #include <linux/slab.h>
+ #include <linux/vt_kern.h>
++#include <linux/sched.h>
+ #include <linux/selection.h>
+ #include <linux/spinlock.h>
+ #include <linux/ioport.h>
+@@ -1124,11 +1125,15 @@ static int vgacon_do_font_op(struct vgastate *state,char *arg,int set,int ch512)
+
+ if (arg) {
+ if (set)
+- for (i = 0; i < cmapsz; i++)
++ for (i = 0; i < cmapsz; i++) {
+ vga_writeb(arg[i], charmap + i);
++ cond_resched();
++ }
+ else
+- for (i = 0; i < cmapsz; i++)
++ for (i = 0; i < cmapsz; i++) {
+ arg[i] = vga_readb(charmap + i);
++ cond_resched();
++ }
+
+ /*
+ * In 512-character mode, the character map is not contiguous if
+@@ -1139,11 +1144,15 @@ static int vgacon_do_font_op(struct vgastate *state,char *arg,int set,int ch512)
+ charmap += 2 * cmapsz;
+ arg += cmapsz;
+ if (set)
+- for (i = 0; i < cmapsz; i++)
++ for (i = 0; i < cmapsz; i++) {
+ vga_writeb(arg[i], charmap + i);
++ cond_resched();
++ }
+ else
+- for (i = 0; i < cmapsz; i++)
++ for (i = 0; i < cmapsz; i++) {
+ arg[i] = vga_readb(charmap + i);
++ cond_resched();
++ }
+ }
+ }
+
+diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
+index 0e3c0924cc3a..b4d2438da9a5 100644
+--- a/fs/cachefiles/rdwr.c
++++ b/fs/cachefiles/rdwr.c
+@@ -918,7 +918,7 @@ int cachefiles_write_page(struct fscache_storage *op, struct page *page)
+ * own time */
+ dget(object->backer);
+ mntget(cache->mnt);
+- file = dentry_open(object->backer, cache->mnt, O_RDWR,
++ file = dentry_open(object->backer, cache->mnt, O_RDWR | O_LARGEFILE,
+ cache->cache_cred);
+ if (IS_ERR(file)) {
+ ret = PTR_ERR(file);
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index d7561e03870c..c0f65e84873e 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -87,6 +87,30 @@ extern mempool_t *cifs_mid_poolp;
+
+ struct workqueue_struct *cifsiod_wq;
+
++/*
++ * Bumps refcount for cifs super block.
++ * Note that it should be only called if a referece to VFS super block is
++ * already held, e.g. in open-type syscalls context. Otherwise it can race with
++ * atomic_dec_and_test in deactivate_locked_super.
++ */
++void
++cifs_sb_active(struct super_block *sb)
++{
++ struct cifs_sb_info *server = CIFS_SB(sb);
++
++ if (atomic_inc_return(&server->active) == 1)
++ atomic_inc(&sb->s_active);
++}
++
++void
++cifs_sb_deactive(struct super_block *sb)
++{
++ struct cifs_sb_info *server = CIFS_SB(sb);
++
++ if (atomic_dec_and_test(&server->active))
++ deactivate_super(sb);
++}
++
+ static int
+ cifs_read_super(struct super_block *sb)
+ {
+diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h
+index 65365358c976..f71176578278 100644
+--- a/fs/cifs/cifsfs.h
++++ b/fs/cifs/cifsfs.h
+@@ -41,6 +41,10 @@ extern struct file_system_type cifs_fs_type;
+ extern const struct address_space_operations cifs_addr_ops;
+ extern const struct address_space_operations cifs_addr_ops_smallbuf;
+
++/* Functions related to super block operations */
++extern void cifs_sb_active(struct super_block *sb);
++extern void cifs_sb_deactive(struct super_block *sb);
++
+ /* Functions related to inodes */
+ extern const struct inode_operations cifs_dir_inode_ops;
+ extern struct inode *cifs_root_iget(struct super_block *);
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 3a75ee5a6b33..6e609819c62a 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -3454,11 +3454,13 @@ static __u16 ACL_to_cifs_posix(char *parm_data, const char *pACL,
+ return 0;
+ }
+ cifs_acl->version = cpu_to_le16(1);
+- if (acl_type == ACL_TYPE_ACCESS)
++ if (acl_type == ACL_TYPE_ACCESS) {
+ cifs_acl->access_entry_count = cpu_to_le16(count);
+- else if (acl_type == ACL_TYPE_DEFAULT)
++ cifs_acl->default_entry_count = __constant_cpu_to_le16(0xFFFF);
++ } else if (acl_type == ACL_TYPE_DEFAULT) {
+ cifs_acl->default_entry_count = cpu_to_le16(count);
+- else {
++ cifs_acl->access_entry_count = __constant_cpu_to_le16(0xFFFF);
++ } else {
+ cFYI(1, "unknown ACL type %d", acl_type);
+ return 0;
+ }
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 9ace37521510..0898d99b5f7b 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -265,6 +265,8 @@ cifs_new_fileinfo(__u16 fileHandle, struct file *file,
+ mutex_init(&pCifsFile->fh_mutex);
+ INIT_WORK(&pCifsFile->oplock_break, cifs_oplock_break);
+
++ cifs_sb_active(inode->i_sb);
++
+ spin_lock(&cifs_file_list_lock);
+ list_add(&pCifsFile->tlist, &(tlink_tcon(tlink)->openFileList));
+ /* if readable file instance put first in list*/
+@@ -293,7 +295,8 @@ void cifsFileInfo_put(struct cifsFileInfo *cifs_file)
+ struct inode *inode = cifs_file->dentry->d_inode;
+ struct cifs_tcon *tcon = tlink_tcon(cifs_file->tlink);
+ struct cifsInodeInfo *cifsi = CIFS_I(inode);
+- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
++ struct super_block *sb = inode->i_sb;
++ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+ struct cifsLockInfo *li, *tmp;
+
+ spin_lock(&cifs_file_list_lock);
+@@ -345,6 +348,7 @@ void cifsFileInfo_put(struct cifsFileInfo *cifs_file)
+
+ cifs_put_tlink(cifs_file->tlink);
+ dput(cifs_file->dentry);
++ cifs_sb_deactive(sb);
+ kfree(cifs_file);
+ }
+
+@@ -882,7 +886,7 @@ cifs_push_mandatory_locks(struct cifsFileInfo *cfile)
+ if (!buf) {
+ mutex_unlock(&cinode->lock_mutex);
+ FreeXid(xid);
+- return rc;
++ return -ENOMEM;
+ }
+
+ for (i = 0; i < 2; i++) {
+diff --git a/fs/ecryptfs/keystore.c b/fs/ecryptfs/keystore.c
+index 2333203a120b..d28fc348afe7 100644
+--- a/fs/ecryptfs/keystore.c
++++ b/fs/ecryptfs/keystore.c
+@@ -1149,7 +1149,7 @@ decrypt_pki_encrypted_session_key(struct ecryptfs_auth_tok *auth_tok,
+ struct ecryptfs_msg_ctx *msg_ctx;
+ struct ecryptfs_message *msg = NULL;
+ char *auth_tok_sig;
+- char *payload;
++ char *payload = NULL;
+ size_t payload_len;
+ int rc;
+
+@@ -1204,6 +1204,7 @@ decrypt_pki_encrypted_session_key(struct ecryptfs_auth_tok *auth_tok,
+ out:
+ if (msg)
+ kfree(msg);
++ kfree(payload);
+ return rc;
+ }
+
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index 95bfc243992c..27c2969a9d02 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -455,9 +455,9 @@ static __be32 decode_cb_sequence_args(struct svc_rqst *rqstp,
+ args->csa_nrclists = ntohl(*p++);
+ args->csa_rclists = NULL;
+ if (args->csa_nrclists) {
+- args->csa_rclists = kmalloc(args->csa_nrclists *
+- sizeof(*args->csa_rclists),
+- GFP_KERNEL);
++ args->csa_rclists = kmalloc_array(args->csa_nrclists,
++ sizeof(*args->csa_rclists),
++ GFP_KERNEL);
+ if (unlikely(args->csa_rclists == NULL))
+ goto out;
+
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index d5faa264ecc2..934bb1ca8335 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -3724,7 +3724,8 @@ static ssize_t __nfs4_get_acl_uncached(struct inode *inode, void *buf, size_t bu
+ .rpc_argp = &args,
+ .rpc_resp = &res,
+ };
+- int ret = -ENOMEM, npages, i, acl_len = 0;
++ int ret = -ENOMEM, npages, i;
++ size_t acl_len = 0;
+
+ npages = (buflen + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ /* As long as we're doing a round trip to the server anyway,
+@@ -3910,8 +3911,7 @@ nfs4_async_handle_error(struct rpc_task *task, const struct nfs_server *server,
+ dprintk("%s ERROR %d, Reset session\n", __func__,
+ task->tk_status);
+ nfs4_schedule_session_recovery(clp->cl_session);
+- task->tk_status = 0;
+- return -EAGAIN;
++ goto wait_on_recovery;
+ #endif /* CONFIG_NFS_V4_1 */
+ case -NFS4ERR_DELAY:
+ nfs_inc_server_stats(server, NFSIOS_DELAY);
+@@ -6084,7 +6084,8 @@ int nfs4_proc_layoutget(struct nfs4_layoutget *lgp, gfp_t gfp_flags)
+ status = nfs4_wait_for_completion_rpc_task(task);
+ if (status == 0)
+ status = task->tk_status;
+- if (status == 0)
++ /* if layoutp->len is 0, nfs4_layoutget_prepare called rpc_exit */
++ if (status == 0 && lgp->res.layoutp->len)
+ status = pnfs_layout_process(lgp);
+ rpc_put_task(task);
+ dprintk("<-- %s status=%d\n", __func__, status);
+@@ -6297,22 +6298,8 @@ nfs4_layoutcommit_done(struct rpc_task *task, void *calldata)
+ static void nfs4_layoutcommit_release(void *calldata)
+ {
+ struct nfs4_layoutcommit_data *data = calldata;
+- struct pnfs_layout_segment *lseg, *tmp;
+- unsigned long *bitlock = &NFS_I(data->args.inode)->flags;
+
+ pnfs_cleanup_layoutcommit(data);
+- /* Matched by references in pnfs_set_layoutcommit */
+- list_for_each_entry_safe(lseg, tmp, &data->lseg_list, pls_lc_list) {
+- list_del_init(&lseg->pls_lc_list);
+- if (test_and_clear_bit(NFS_LSEG_LAYOUTCOMMIT,
+- &lseg->pls_flags))
+- put_lseg(lseg);
+- }
+-
+- clear_bit_unlock(NFS_INO_LAYOUTCOMMITTING, bitlock);
+- smp_mb__after_clear_bit();
+- wake_up_bit(bitlock, NFS_INO_LAYOUTCOMMITTING);
+-
+ put_rpccred(data->cred);
+ kfree(data);
+ }
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index e46579471ccc..461816beff13 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1651,8 +1651,18 @@ static int nfs4_reset_session(struct nfs_client *clp)
+
+ nfs4_begin_drain_session(clp);
+ status = nfs4_proc_destroy_session(clp->cl_session);
+- if (status && status != -NFS4ERR_BADSESSION &&
+- status != -NFS4ERR_DEADSESSION) {
++ switch (status) {
++ case 0:
++ case -NFS4ERR_BADSESSION:
++ case -NFS4ERR_DEADSESSION:
++ break;
++ case -NFS4ERR_BACK_CHAN_BUSY:
++ case -NFS4ERR_DELAY:
++ set_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state);
++ status = 0;
++ ssleep(1);
++ goto out;
++ default:
+ status = nfs4_recovery_handle_error(clp, status);
+ goto out;
+ }
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 059e2c350ad7..ae3118212bd6 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1381,11 +1381,27 @@ static void pnfs_list_write_lseg(struct inode *inode, struct list_head *listp)
+
+ list_for_each_entry(lseg, &NFS_I(inode)->layout->plh_segs, pls_list) {
+ if (lseg->pls_range.iomode == IOMODE_RW &&
+- test_bit(NFS_LSEG_LAYOUTCOMMIT, &lseg->pls_flags))
++ test_and_clear_bit(NFS_LSEG_LAYOUTCOMMIT, &lseg->pls_flags))
+ list_add(&lseg->pls_lc_list, listp);
+ }
+ }
+
++static void pnfs_list_write_lseg_done(struct inode *inode, struct list_head *listp)
++{
++ struct pnfs_layout_segment *lseg, *tmp;
++ unsigned long *bitlock = &NFS_I(inode)->flags;
++
++ /* Matched by references in pnfs_set_layoutcommit */
++ list_for_each_entry_safe(lseg, tmp, listp, pls_lc_list) {
++ list_del_init(&lseg->pls_lc_list);
++ put_lseg(lseg);
++ }
++
++ clear_bit_unlock(NFS_INO_LAYOUTCOMMITTING, bitlock);
++ smp_mb__after_clear_bit();
++ wake_up_bit(bitlock, NFS_INO_LAYOUTCOMMITTING);
++}
++
+ void pnfs_set_lo_fail(struct pnfs_layout_segment *lseg)
+ {
+ if (lseg->pls_range.iomode == IOMODE_RW) {
+@@ -1434,6 +1450,7 @@ void pnfs_cleanup_layoutcommit(struct nfs4_layoutcommit_data *data)
+
+ if (nfss->pnfs_curr_ld->cleanup_layoutcommit)
+ nfss->pnfs_curr_ld->cleanup_layoutcommit(data);
++ pnfs_list_write_lseg_done(data->args.inode, &data->lseg_list);
+ }
+
+ /*
+diff --git a/fs/nfsd/nfs4acl.c b/fs/nfsd/nfs4acl.c
+index 9c51aff02ae2..435a9be1265e 100644
+--- a/fs/nfsd/nfs4acl.c
++++ b/fs/nfsd/nfs4acl.c
+@@ -373,8 +373,10 @@ sort_pacl(struct posix_acl *pacl)
+ * by uid/gid. */
+ int i, j;
+
+- if (pacl->a_count <= 4)
+- return; /* no users or groups */
++ /* no users or groups */
++ if (!pacl || pacl->a_count <= 4)
++ return;
++
+ i = 1;
+ while (pacl->a_entries[i].e_tag == ACL_USER)
+ i++;
+@@ -498,13 +500,12 @@ posix_state_to_acl(struct posix_acl_state *state, unsigned int flags)
+
+ /*
+ * ACLs with no ACEs are treated differently in the inheritable
+- * and effective cases: when there are no inheritable ACEs, we
+- * set a zero-length default posix acl:
++ * and effective cases: when there are no inheritable ACEs,
++ * calls ->set_acl with a NULL ACL structure.
+ */
+- if (state->empty && (flags & NFS4_ACL_TYPE_DEFAULT)) {
+- pacl = posix_acl_alloc(0, GFP_KERNEL);
+- return pacl ? pacl : ERR_PTR(-ENOMEM);
+- }
++ if (state->empty && (flags & NFS4_ACL_TYPE_DEFAULT))
++ return NULL;
++
+ /*
+ * When there are no effective ACEs, the following will end
+ * up setting a 3-element effective posix ACL with all
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index e5404945f854..97a142cde23b 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -904,14 +904,14 @@ nfsd4_write(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+
+ nfs4_lock_state();
+ status = nfs4_preprocess_stateid_op(cstate, stateid, WR_STATE, &filp);
+- if (filp)
+- get_file(filp);
+- nfs4_unlock_state();
+-
+ if (status) {
++ nfs4_unlock_state();
+ dprintk("NFSD: nfsd4_write: couldn't process stateid!\n");
+ return status;
+ }
++ if (filp)
++ get_file(filp);
++ nfs4_unlock_state();
+
+ cnt = write->wr_buflen;
+ write->wr_how_written = write->wr_stable_how;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index f90b197ceffa..28e5648c9cc4 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -3476,9 +3476,16 @@ out:
+ static __be32
+ nfsd4_free_lock_stateid(struct nfs4_ol_stateid *stp)
+ {
+- if (check_for_locks(stp->st_file, lockowner(stp->st_stateowner)))
++ struct nfs4_lockowner *lo = lockowner(stp->st_stateowner);
++
++ if (check_for_locks(stp->st_file, lo))
+ return nfserr_locks_held;
+- release_lock_stateid(stp);
++ /*
++ * Currently there's a 1-1 lock stateid<->lockowner
++ * correspondance, and we have to delete the lockowner when we
++ * delete the lock stateid:
++ */
++ unhash_lockowner(lo);
+ return nfs_ok;
+ }
5945 |
+ |
5946 |
+@@ -3918,6 +3925,10 @@ static bool same_lockowner_ino(struct nfs4_lockowner *lo, struct inode *inode, c |
5947 |
+ |
5948 |
+ if (!same_owner_str(&lo->lo_owner, owner, clid)) |
5949 |
+ return false; |
5950 |
++ if (list_empty(&lo->lo_owner.so_stateids)) { |
5951 |
++ WARN_ON_ONCE(1); |
5952 |
++ return false; |
5953 |
++ } |
5954 |
+ lst = list_first_entry(&lo->lo_owner.so_stateids, |
5955 |
+ struct nfs4_ol_stateid, st_perstateowner); |
5956 |
+ return lst->st_file->fi_inode == inode; |
5957 |
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c |
5958 |
+index a0f92ec4b38e..6eaa2e2335dc 100644 |
5959 |
+--- a/fs/nfsd/nfs4xdr.c |
5960 |
++++ b/fs/nfsd/nfs4xdr.c |
5961 |
+@@ -161,8 +161,8 @@ static __be32 *read_buf(struct nfsd4_compoundargs *argp, u32 nbytes) |
5962 |
+ */ |
5963 |
+ memcpy(p, argp->p, avail); |
5964 |
+ /* step to next page */ |
5965 |
+- argp->pagelist++; |
5966 |
+ argp->p = page_address(argp->pagelist[0]); |
5967 |
++ argp->pagelist++; |
5968 |
+ if (argp->pagelen < PAGE_SIZE) { |
5969 |
+ argp->end = argp->p + (argp->pagelen>>2); |
5970 |
+ argp->pagelen = 0; |
5971 |
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c |
5972 |
+index 638306219ae8..e3abfde27863 100644 |
5973 |
+--- a/fs/nfsd/vfs.c |
5974 |
++++ b/fs/nfsd/vfs.c |
5975 |
+@@ -828,9 +828,10 @@ nfsd_open(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type, |
5976 |
+ } |
5977 |
+ *filp = dentry_open(dget(dentry), mntget(fhp->fh_export->ex_path.mnt), |
5978 |
+ flags, current_cred()); |
5979 |
+- if (IS_ERR(*filp)) |
5980 |
++ if (IS_ERR(*filp)) { |
5981 |
+ host_err = PTR_ERR(*filp); |
5982 |
+- else { |
5983 |
++ *filp = NULL; |
5984 |
++ } else { |
5985 |
+ host_err = ima_file_check(*filp, may_flags); |
5986 |
+ |
5987 |
+ if (may_flags & NFSD_MAY_64BIT_COOKIE) |
5988 |
+diff --git a/fs/posix_acl.c b/fs/posix_acl.c |
5989 |
+index 5e325a42e33d..64496020bf19 100644 |
5990 |
+--- a/fs/posix_acl.c |
5991 |
++++ b/fs/posix_acl.c |
5992 |
+@@ -155,6 +155,12 @@ posix_acl_equiv_mode(const struct posix_acl *acl, umode_t *mode_p) |
5993 |
+ umode_t mode = 0; |
5994 |
+ int not_equiv = 0; |
5995 |
+ |
5996 |
++ /* |
5997 |
++ * A null ACL can always be presented as mode bits. |
5998 |
++ */ |
5999 |
++ if (!acl) |
6000 |
++ return 0; |
6001 |
++ |
6002 |
+ FOREACH_ACL_ENTRY(pa, acl, pe) { |
6003 |
+ switch (pa->e_tag) { |
6004 |
+ case ACL_USER_OBJ: |
6005 |
+diff --git a/fs/stat.c b/fs/stat.c |
6006 |
+index dc6d0be300ca..88b36c770762 100644 |
6007 |
+--- a/fs/stat.c |
6008 |
++++ b/fs/stat.c |
6009 |
+@@ -57,12 +57,13 @@ EXPORT_SYMBOL(vfs_getattr); |
6010 |
+ |
6011 |
+ int vfs_fstat(unsigned int fd, struct kstat *stat) |
6012 |
+ { |
6013 |
+- struct file *f = fget_raw(fd); |
6014 |
++ int fput_needed; |
6015 |
++ struct file *f = fget_light(fd, &fput_needed); |
6016 |
+ int error = -EBADF; |
6017 |
+ |
6018 |
+ if (f) { |
6019 |
+ error = vfs_getattr(f->f_path.mnt, f->f_path.dentry, stat); |
6020 |
+- fput(f); |
6021 |
++ fput_light(f, fput_needed); |
6022 |
+ } |
6023 |
+ return error; |
6024 |
+ } |
6025 |
+diff --git a/include/drm/drm_mode.h b/include/drm/drm_mode.h |
6026 |
+index 9242310b47cd..cbf2d9ada2ea 100644 |
6027 |
+--- a/include/drm/drm_mode.h |
6028 |
++++ b/include/drm/drm_mode.h |
6029 |
+@@ -223,6 +223,8 @@ struct drm_mode_get_connector { |
6030 |
+ __u32 connection; |
6031 |
+ __u32 mm_width, mm_height; /**< HxW in millimeters */ |
6032 |
+ __u32 subpixel; |
6033 |
++ |
6034 |
++ __u32 pad; |
6035 |
+ }; |
6036 |
+ |
6037 |
+ #define DRM_MODE_PROP_PENDING (1<<0) |
6038 |
+diff --git a/include/linux/compiler-intel.h b/include/linux/compiler-intel.h |
6039 |
+index d8e636e5607d..cba9593c4047 100644 |
6040 |
+--- a/include/linux/compiler-intel.h |
6041 |
++++ b/include/linux/compiler-intel.h |
6042 |
+@@ -27,5 +27,3 @@ |
6043 |
+ #define __must_be_array(a) 0 |
6044 |
+ |
6045 |
+ #endif |
6046 |
+- |
6047 |
+-#define uninitialized_var(x) x |
6048 |
+diff --git a/include/linux/efi.h b/include/linux/efi.h |
6049 |
+index eee8b0b190ea..6bf839418dd4 100644 |
6050 |
+--- a/include/linux/efi.h |
6051 |
++++ b/include/linux/efi.h |
6052 |
+@@ -29,7 +29,12 @@ |
6053 |
+ #define EFI_UNSUPPORTED ( 3 | (1UL << (BITS_PER_LONG-1))) |
6054 |
+ #define EFI_BAD_BUFFER_SIZE ( 4 | (1UL << (BITS_PER_LONG-1))) |
6055 |
+ #define EFI_BUFFER_TOO_SMALL ( 5 | (1UL << (BITS_PER_LONG-1))) |
6056 |
++#define EFI_NOT_READY ( 6 | (1UL << (BITS_PER_LONG-1))) |
6057 |
++#define EFI_DEVICE_ERROR ( 7 | (1UL << (BITS_PER_LONG-1))) |
6058 |
++#define EFI_WRITE_PROTECTED ( 8 | (1UL << (BITS_PER_LONG-1))) |
6059 |
++#define EFI_OUT_OF_RESOURCES ( 9 | (1UL << (BITS_PER_LONG-1))) |
6060 |
+ #define EFI_NOT_FOUND (14 | (1UL << (BITS_PER_LONG-1))) |
6061 |
++#define EFI_SECURITY_VIOLATION (26 | (1UL << (BITS_PER_LONG-1))) |
6062 |
+ |
6063 |
+ typedef unsigned long efi_status_t; |
6064 |
+ typedef u8 efi_bool_t; |
6065 |
+@@ -257,6 +262,7 @@ typedef efi_status_t efi_query_capsule_caps_t(efi_capsule_header_t **capsules, |
6066 |
+ unsigned long count, |
6067 |
+ u64 *max_size, |
6068 |
+ int *reset_type); |
6069 |
++typedef efi_status_t efi_query_variable_store_t(u32 attributes, unsigned long size); |
6070 |
+ |
6071 |
+ /* |
6072 |
+ * EFI Configuration Table and GUID definitions |
6073 |
+@@ -498,8 +504,14 @@ extern void efi_gettimeofday (struct timespec *ts); |
6074 |
+ extern void efi_enter_virtual_mode (void); /* switch EFI to virtual mode, if possible */ |
6075 |
+ #ifdef CONFIG_X86 |
6076 |
+ extern void efi_free_boot_services(void); |
6077 |
++extern efi_status_t efi_query_variable_store(u32 attributes, unsigned long size); |
6078 |
+ #else |
6079 |
+ static inline void efi_free_boot_services(void) {} |
6080 |
++ |
6081 |
++static inline efi_status_t efi_query_variable_store(u32 attributes, unsigned long size) |
6082 |
++{ |
6083 |
++ return EFI_SUCCESS; |
6084 |
++} |
6085 |
+ #endif |
6086 |
+ extern u64 efi_get_iobase (void); |
6087 |
+ extern u32 efi_mem_type (unsigned long phys_addr); |
6088 |
+@@ -652,6 +664,7 @@ struct efivar_operations { |
6089 |
+ efi_get_variable_t *get_variable; |
6090 |
+ efi_get_next_variable_t *get_next_variable; |
6091 |
+ efi_set_variable_t *set_variable; |
6092 |
++ efi_query_variable_store_t *query_variable_store; |
6093 |
+ }; |
6094 |
+ |
6095 |
+ struct efivars { |
6096 |
+@@ -660,7 +673,8 @@ struct efivars { |
6097 |
+ * 1) ->list - adds, removals, reads, writes |
6098 |
+ * 2) ops.[gs]et_variable() calls. |
6099 |
+ * It must not be held when creating sysfs entries or calling kmalloc. |
6100 |
+- * ops.get_next_variable() is only called from register_efivars(), |
6101 |
++ * ops.get_next_variable() is only called from register_efivars() |
6102 |
++ * or efivar_update_sysfs_entries(), |
6103 |
+ * which is protected by the BKL, so that path is safe. |
6104 |
+ */ |
6105 |
+ spinlock_t lock; |
6106 |
+diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h |
6107 |
+index f80ca4a6ed96..bfbcd439e2c6 100644 |
6108 |
+--- a/include/linux/ftrace.h |
6109 |
++++ b/include/linux/ftrace.h |
6110 |
+@@ -374,6 +374,7 @@ extern int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr); |
6111 |
+ extern int ftrace_arch_read_dyn_info(char *buf, int size); |
6112 |
+ |
6113 |
+ extern int skip_trace(unsigned long ip); |
6114 |
++extern void ftrace_module_init(struct module *mod); |
6115 |
+ |
6116 |
+ extern void ftrace_disable_daemon(void); |
6117 |
+ extern void ftrace_enable_daemon(void); |
6118 |
+@@ -383,6 +384,7 @@ static inline int ftrace_force_update(void) { return 0; } |
6119 |
+ static inline void ftrace_disable_daemon(void) { } |
6120 |
+ static inline void ftrace_enable_daemon(void) { } |
6121 |
+ static inline void ftrace_release_mod(struct module *mod) {} |
6122 |
++static inline void ftrace_module_init(struct module *mod) {} |
6123 |
+ static inline int register_ftrace_command(struct ftrace_func_command *cmd) |
6124 |
+ { |
6125 |
+ return -EINVAL; |
6126 |
+diff --git a/include/linux/list.h b/include/linux/list.h |
6127 |
+index cc6d2aa6b415..1712c7e1f56b 100644 |
6128 |
+--- a/include/linux/list.h |
6129 |
++++ b/include/linux/list.h |
6130 |
+@@ -362,6 +362,22 @@ static inline void list_splice_tail_init(struct list_head *list, |
6131 |
+ list_entry((ptr)->next, type, member) |
6132 |
+ |
6133 |
+ /** |
6134 |
++ * list_next_entry - get the next element in list |
6135 |
++ * @pos: the type * to cursor |
6136 |
++ * @member: the name of the list_struct within the struct. |
6137 |
++ */ |
6138 |
++#define list_next_entry(pos, member) \ |
6139 |
++ list_entry((pos)->member.next, typeof(*(pos)), member) |
6140 |
++ |
6141 |
++/** |
6142 |
++ * list_prev_entry - get the prev element in list |
6143 |
++ * @pos: the type * to cursor |
6144 |
++ * @member: the name of the list_struct within the struct. |
6145 |
++ */ |
6146 |
++#define list_prev_entry(pos, member) \ |
6147 |
++ list_entry((pos)->member.prev, typeof(*(pos)), member) |
6148 |
++ |
6149 |
++/** |
6150 |
+ * list_for_each - iterate over a list |
6151 |
+ * @pos: the &struct list_head to use as a loop cursor. |
6152 |
+ * @head: the head for your list. |
6153 |
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h |
6154 |
+index b35752fb2ad8..16cb641bb89f 100644 |
6155 |
+--- a/include/linux/mm_types.h |
6156 |
++++ b/include/linux/mm_types.h |
6157 |
+@@ -306,6 +306,7 @@ struct mm_struct { |
6158 |
+ void (*unmap_area) (struct mm_struct *mm, unsigned long addr); |
6159 |
+ #endif |
6160 |
+ unsigned long mmap_base; /* base of mmap area */ |
6161 |
++ unsigned long mmap_legacy_base; /* base of mmap area in bottom-up allocations */ |
6162 |
+ unsigned long task_size; /* size of task vm space */ |
6163 |
+ unsigned long cached_hole_size; /* if non-zero, the largest hole below free_area_cache */ |
6164 |
+ unsigned long free_area_cache; /* first hole of size cached_hole_size or larger */ |
6165 |
+diff --git a/include/linux/net.h b/include/linux/net.h |
6166 |
+index d40ccb796e8d..16b499620a0a 100644 |
6167 |
+--- a/include/linux/net.h |
6168 |
++++ b/include/linux/net.h |
6169 |
+@@ -282,6 +282,29 @@ do { \ |
6170 |
+ #define net_dbg_ratelimited(fmt, ...) \ |
6171 |
+ net_ratelimited_function(pr_debug, fmt, ##__VA_ARGS__) |
6172 |
+ |
6173 |
++#define net_ratelimited_function(function, ...) \ |
6174 |
++do { \ |
6175 |
++ if (net_ratelimit()) \ |
6176 |
++ function(__VA_ARGS__); \ |
6177 |
++} while (0) |
6178 |
++ |
6179 |
++#define net_emerg_ratelimited(fmt, ...) \ |
6180 |
++ net_ratelimited_function(pr_emerg, fmt, ##__VA_ARGS__) |
6181 |
++#define net_alert_ratelimited(fmt, ...) \ |
6182 |
++ net_ratelimited_function(pr_alert, fmt, ##__VA_ARGS__) |
6183 |
++#define net_crit_ratelimited(fmt, ...) \ |
6184 |
++ net_ratelimited_function(pr_crit, fmt, ##__VA_ARGS__) |
6185 |
++#define net_err_ratelimited(fmt, ...) \ |
6186 |
++ net_ratelimited_function(pr_err, fmt, ##__VA_ARGS__) |
6187 |
++#define net_notice_ratelimited(fmt, ...) \ |
6188 |
++ net_ratelimited_function(pr_notice, fmt, ##__VA_ARGS__) |
6189 |
++#define net_warn_ratelimited(fmt, ...) \ |
6190 |
++ net_ratelimited_function(pr_warn, fmt, ##__VA_ARGS__) |
6191 |
++#define net_info_ratelimited(fmt, ...) \ |
6192 |
++ net_ratelimited_function(pr_info, fmt, ##__VA_ARGS__) |
6193 |
++#define net_dbg_ratelimited(fmt, ...) \ |
6194 |
++ net_ratelimited_function(pr_debug, fmt, ##__VA_ARGS__) |
6195 |
++ |
6196 |
+ #define net_random() random32() |
6197 |
+ #define net_srandom(seed) srandom32((__force u32)seed) |
6198 |
+ |
6199 |
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h |
6200 |
+index e45ffad543be..47c3d045c7fd 100644 |
6201 |
+--- a/include/linux/perf_event.h |
6202 |
++++ b/include/linux/perf_event.h |
6203 |
+@@ -391,13 +391,15 @@ struct perf_event_mmap_page { |
6204 |
+ /* |
6205 |
+ * Control data for the mmap() data buffer. |
6206 |
+ * |
6207 |
+- * User-space reading the @data_head value should issue an rmb(), on |
6208 |
+- * SMP capable platforms, after reading this value -- see |
6209 |
+- * perf_event_wakeup(). |
6210 |
++ * User-space reading the @data_head value should issue an smp_rmb(), |
6211 |
++ * after reading this value. |
6212 |
+ * |
6213 |
+ * When the mapping is PROT_WRITE the @data_tail value should be |
6214 |
+- * written by userspace to reflect the last read data. In this case |
6215 |
+- * the kernel will not over-write unread data. |
6216 |
++ * written by userspace to reflect the last read data, after issueing |
6217 |
++ * an smp_mb() to separate the data read from the ->data_tail store. |
6218 |
++ * In this case the kernel will not over-write unread data. |
6219 |
++ * |
6220 |
++ * See perf_output_put_handle() for the data ordering. |
6221 |
+ */ |
6222 |
+ __u64 data_head; /* head in the data section */ |
6223 |
+ __u64 data_tail; /* user-space written tail */ |
6224 |
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h |
6225 |
+index 3a7b87e1fd89..0884db3d315e 100644 |
6226 |
+--- a/include/linux/skbuff.h |
6227 |
++++ b/include/linux/skbuff.h |
6228 |
+@@ -640,11 +640,21 @@ static inline unsigned char *skb_end_pointer(const struct sk_buff *skb) |
6229 |
+ { |
6230 |
+ return skb->head + skb->end; |
6231 |
+ } |
6232 |
++ |
6233 |
++static inline unsigned int skb_end_offset(const struct sk_buff *skb) |
6234 |
++{ |
6235 |
++ return skb->end; |
6236 |
++} |
6237 |
+ #else |
6238 |
+ static inline unsigned char *skb_end_pointer(const struct sk_buff *skb) |
6239 |
+ { |
6240 |
+ return skb->end; |
6241 |
+ } |
6242 |
++ |
6243 |
++static inline unsigned int skb_end_offset(const struct sk_buff *skb) |
6244 |
++{ |
6245 |
++ return skb->end - skb->head; |
6246 |
++} |
6247 |
+ #endif |
6248 |
+ |
6249 |
+ /* Internal */ |
6250 |
+@@ -2574,7 +2584,7 @@ static inline bool skb_is_recycleable(const struct sk_buff *skb, int skb_size) |
6251 |
+ return false; |
6252 |
+ |
6253 |
+ skb_size = SKB_DATA_ALIGN(skb_size + NET_SKB_PAD); |
6254 |
+- if (skb_end_pointer(skb) - skb->head < skb_size) |
6255 |
++ if (skb_end_offset(skb) < skb_size) |
6256 |
+ return false; |
6257 |
+ |
6258 |
+ if (skb_shared(skb) || skb_cloned(skb)) |
6259 |
+diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h |
6260 |
+index 2ad92ca4e6f3..7cd3caab0394 100644 |
6261 |
+--- a/include/net/ip6_route.h |
6262 |
++++ b/include/net/ip6_route.h |
6263 |
+@@ -34,6 +34,11 @@ struct route_info { |
6264 |
+ #define RT6_LOOKUP_F_SRCPREF_PUBLIC 0x00000010 |
6265 |
+ #define RT6_LOOKUP_F_SRCPREF_COA 0x00000020 |
6266 |
+ |
6267 |
++/* We do not (yet ?) support IPv6 jumbograms (RFC 2675) |
6268 |
++ * Unlike IPv4, hdr->seg_len doesn't include the IPv6 header |
6269 |
++ */ |
6270 |
++#define IP6_MAX_MTU (0xFFFF + sizeof(struct ipv6hdr)) |
6271 |
++ |
6272 |
+ /* |
6273 |
+ * rt6_srcprefs2flags() and rt6_flags2srcprefs() translate |
6274 |
+ * between IPV6_ADDR_PREFERENCES socket option values |
6275 |
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h |
6276 |
+index 9210bdc7bd8d..aa12b395b2b7 100644 |
6277 |
+--- a/include/net/mac80211.h |
6278 |
++++ b/include/net/mac80211.h |
6279 |
+@@ -1174,6 +1174,10 @@ enum sta_notify_cmd { |
6280 |
+ * @IEEE80211_HW_SCAN_WHILE_IDLE: The device can do hw scan while |
6281 |
+ * being idle (i.e. mac80211 doesn't have to go idle-off during the |
6282 |
+ * the scan). |
6283 |
++ * |
6284 |
++ * @IEEE80211_HW_TEARDOWN_AGGR_ON_BAR_FAIL: On this hardware TX BA session |
6285 |
++ * should be tear down once BAR frame will not be acked. |
6286 |
++ * |
6287 |
+ */ |
6288 |
+ enum ieee80211_hw_flags { |
6289 |
+ IEEE80211_HW_HAS_RATE_CONTROL = 1<<0, |
6290 |
+@@ -1201,6 +1205,7 @@ enum ieee80211_hw_flags { |
6291 |
+ IEEE80211_HW_AP_LINK_PS = 1<<22, |
6292 |
+ IEEE80211_HW_TX_AMPDU_SETUP_IN_HW = 1<<23, |
6293 |
+ IEEE80211_HW_SCAN_WHILE_IDLE = 1<<24, |
6294 |
++ IEEE80211_HW_TEARDOWN_AGGR_ON_BAR_FAIL = 1<<26, |
6295 |
+ }; |
6296 |
+ |
6297 |
+ /** |
6298 |
+diff --git a/include/trace/events/module.h b/include/trace/events/module.h |
6299 |
+index 161932737416..ca298c7157ae 100644 |
6300 |
+--- a/include/trace/events/module.h |
6301 |
++++ b/include/trace/events/module.h |
6302 |
+@@ -78,7 +78,7 @@ DECLARE_EVENT_CLASS(module_refcnt, |
6303 |
+ |
6304 |
+ TP_fast_assign( |
6305 |
+ __entry->ip = ip; |
6306 |
+- __entry->refcnt = __this_cpu_read(mod->refptr->incs) + __this_cpu_read(mod->refptr->decs); |
6307 |
++ __entry->refcnt = __this_cpu_read(mod->refptr->incs) - __this_cpu_read(mod->refptr->decs); |
6308 |
+ __assign_str(name, mod->name); |
6309 |
+ ), |
6310 |
+ |
6311 |
+diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h |
6312 |
+index a36c87a6623d..d4635cd786b4 100644 |
6313 |
+--- a/include/xen/interface/io/netif.h |
6314 |
++++ b/include/xen/interface/io/netif.h |
6315 |
+@@ -65,6 +65,7 @@ |
6316 |
+ #define _XEN_NETTXF_extra_info (3) |
6317 |
+ #define XEN_NETTXF_extra_info (1U<<_XEN_NETTXF_extra_info) |
6318 |
+ |
6319 |
++#define XEN_NETIF_MAX_TX_SIZE 0xFFFF |
6320 |
+ struct xen_netif_tx_request { |
6321 |
+ grant_ref_t gref; /* Reference to buffer page */ |
6322 |
+ uint16_t offset; /* Offset within buffer page */ |
6323 |
+diff --git a/kernel/events/core.c b/kernel/events/core.c |
6324 |
+index eba82e2d34e9..e39346fb2e91 100644 |
6325 |
+--- a/kernel/events/core.c |
6326 |
++++ b/kernel/events/core.c |
6327 |
+@@ -1973,9 +1973,6 @@ static void __perf_event_sync_stat(struct perf_event *event, |
6328 |
+ perf_event_update_userpage(next_event); |
6329 |
+ } |
6330 |
+ |
6331 |
+-#define list_next_entry(pos, member) \ |
6332 |
+- list_entry(pos->member.next, typeof(*pos), member) |
6333 |
+- |
6334 |
+ static void perf_event_sync_stat(struct perf_event_context *ctx, |
6335 |
+ struct perf_event_context *next_ctx) |
6336 |
+ { |
6337 |
+@@ -5874,6 +5871,7 @@ skip_type: |
6338 |
+ if (pmu->pmu_cpu_context) |
6339 |
+ goto got_cpu_context; |
6340 |
+ |
6341 |
++ ret = -ENOMEM; |
6342 |
+ pmu->pmu_cpu_context = alloc_percpu(struct perf_cpu_context); |
6343 |
+ if (!pmu->pmu_cpu_context) |
6344 |
+ goto free_dev; |
6345 |
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c |
6346 |
+index 6ddaba43fb7a..4636ecc26d75 100644 |
6347 |
+--- a/kernel/events/ring_buffer.c |
6348 |
++++ b/kernel/events/ring_buffer.c |
6349 |
+@@ -75,10 +75,31 @@ again: |
6350 |
+ goto out; |
6351 |
+ |
6352 |
+ /* |
6353 |
+- * Publish the known good head. Rely on the full barrier implied |
6354 |
+- * by atomic_dec_and_test() order the rb->head read and this |
6355 |
+- * write. |
6356 |
++ * Since the mmap() consumer (userspace) can run on a different CPU: |
6357 |
++ * |
6358 |
++ * kernel user |
6359 |
++ * |
6360 |
++ * READ ->data_tail READ ->data_head |
6361 |
++ * smp_mb() (A) smp_rmb() (C) |
6362 |
++ * WRITE $data READ $data |
6363 |
++ * smp_wmb() (B) smp_mb() (D) |
6364 |
++ * STORE ->data_head WRITE ->data_tail |
6365 |
++ * |
6366 |
++ * Where A pairs with D, and B pairs with C. |
6367 |
++ * |
6368 |
++ * I don't think A needs to be a full barrier because we won't in fact |
6369 |
++ * write data until we see the store from userspace. So we simply don't |
6370 |
++ * issue the data WRITE until we observe it. Be conservative for now. |
6371 |
++ * |
6372 |
++ * OTOH, D needs to be a full barrier since it separates the data READ |
6373 |
++ * from the tail WRITE. |
6374 |
++ * |
6375 |
++ * For B a WMB is sufficient since it separates two WRITEs, and for C |
6376 |
++ * an RMB is sufficient since it separates two READs. |
6377 |
++ * |
6378 |
++ * See perf_output_begin(). |
6379 |
+ */ |
6380 |
++ smp_wmb(); |
6381 |
+ rb->user_page->data_head = head; |
6382 |
+ |
6383 |
+ /* |
6384 |
+@@ -142,9 +163,11 @@ int perf_output_begin(struct perf_output_handle *handle, |
6385 |
+ * Userspace could choose to issue a mb() before updating the |
6386 |
+ * tail pointer. So that all reads will be completed before the |
6387 |
+ * write is issued. |
6388 |
++ * |
6389 |
++ * See perf_output_put_handle(). |
6390 |
+ */ |
6391 |
+ tail = ACCESS_ONCE(rb->user_page->data_tail); |
6392 |
+- smp_rmb(); |
6393 |
++ smp_mb(); |
6394 |
+ offset = head = local_read(&rb->head); |
6395 |
+ head += size; |
6396 |
+ if (unlikely(!perf_output_space(rb, tail, offset, head))) |
6397 |
+diff --git a/kernel/futex.c b/kernel/futex.c |
6398 |
+index e564a9a3ea2a..9396b7b853f7 100644 |
6399 |
+--- a/kernel/futex.c |
6400 |
++++ b/kernel/futex.c |
6401 |
+@@ -588,6 +588,55 @@ void exit_pi_state_list(struct task_struct *curr) |
6402 |
+ raw_spin_unlock_irq(&curr->pi_lock); |
6403 |
+ } |
6404 |
+ |
6405 |
++/* |
6406 |
++ * We need to check the following states: |
6407 |
++ * |
6408 |
++ * Waiter | pi_state | pi->owner | uTID | uODIED | ? |
6409 |
++ * |
6410 |
++ * [1] NULL | --- | --- | 0 | 0/1 | Valid |
6411 |
++ * [2] NULL | --- | --- | >0 | 0/1 | Valid |
6412 |
++ * |
6413 |
++ * [3] Found | NULL | -- | Any | 0/1 | Invalid |
6414 |
++ * |
6415 |
++ * [4] Found | Found | NULL | 0 | 1 | Valid |
6416 |
++ * [5] Found | Found | NULL | >0 | 1 | Invalid |
6417 |
++ * |
6418 |
++ * [6] Found | Found | task | 0 | 1 | Valid |
6419 |
++ * |
6420 |
++ * [7] Found | Found | NULL | Any | 0 | Invalid |
6421 |
++ * |
6422 |
++ * [8] Found | Found | task | ==taskTID | 0/1 | Valid |
6423 |
++ * [9] Found | Found | task | 0 | 0 | Invalid |
6424 |
++ * [10] Found | Found | task | !=taskTID | 0/1 | Invalid |
6425 |
++ * |
6426 |
++ * [1] Indicates that the kernel can acquire the futex atomically. We |
6427 |
++ * came came here due to a stale FUTEX_WAITERS/FUTEX_OWNER_DIED bit. |
6428 |
++ * |
6429 |
++ * [2] Valid, if TID does not belong to a kernel thread. If no matching |
6430 |
++ * thread is found then it indicates that the owner TID has died. |
6431 |
++ * |
6432 |
++ * [3] Invalid. The waiter is queued on a non PI futex |
6433 |
++ * |
6434 |
++ * [4] Valid state after exit_robust_list(), which sets the user space |
6435 |
++ * value to FUTEX_WAITERS | FUTEX_OWNER_DIED. |
6436 |
++ * |
6437 |
++ * [5] The user space value got manipulated between exit_robust_list() |
6438 |
++ * and exit_pi_state_list() |
6439 |
++ * |
6440 |
++ * [6] Valid state after exit_pi_state_list() which sets the new owner in |
6441 |
++ * the pi_state but cannot access the user space value. |
6442 |
++ * |
6443 |
++ * [7] pi_state->owner can only be NULL when the OWNER_DIED bit is set. |
6444 |
++ * |
6445 |
++ * [8] Owner and user space value match |
6446 |
++ * |
6447 |
++ * [9] There is no transient state which sets the user space TID to 0 |
6448 |
++ * except exit_robust_list(), but this is indicated by the |
6449 |
++ * FUTEX_OWNER_DIED bit. See [4] |
6450 |
++ * |
6451 |
++ * [10] There is no transient state which leaves owner and user space |
6452 |
++ * TID out of sync. |
6453 |
++ */ |
6454 |
+ static int |
6455 |
+ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, |
6456 |
+ union futex_key *key, struct futex_pi_state **ps) |
6457 |
+@@ -603,12 +652,13 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, |
6458 |
+ plist_for_each_entry_safe(this, next, head, list) { |
6459 |
+ if (match_futex(&this->key, key)) { |
6460 |
+ /* |
6461 |
+- * Another waiter already exists - bump up |
6462 |
+- * the refcount and return its pi_state: |
6463 |
++ * Sanity check the waiter before increasing |
6464 |
++ * the refcount and attaching to it. |
6465 |
+ */ |
6466 |
+ pi_state = this->pi_state; |
6467 |
+ /* |
6468 |
+- * Userspace might have messed up non-PI and PI futexes |
6469 |
++ * Userspace might have messed up non-PI and |
6470 |
++ * PI futexes [3] |
6471 |
+ */ |
6472 |
+ if (unlikely(!pi_state)) |
6473 |
+ return -EINVAL; |
6474 |
+@@ -616,34 +666,70 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, |
6475 |
+ WARN_ON(!atomic_read(&pi_state->refcount)); |
6476 |
+ |
6477 |
+ /* |
6478 |
+- * When pi_state->owner is NULL then the owner died |
6479 |
+- * and another waiter is on the fly. pi_state->owner |
6480 |
+- * is fixed up by the task which acquires |
6481 |
+- * pi_state->rt_mutex. |
6482 |
+- * |
6483 |
+- * We do not check for pid == 0 which can happen when |
6484 |
+- * the owner died and robust_list_exit() cleared the |
6485 |
+- * TID. |
6486 |
++ * Handle the owner died case: |
6487 |
+ */ |
6488 |
+- if (pid && pi_state->owner) { |
6489 |
++ if (uval & FUTEX_OWNER_DIED) { |
6490 |
++ /* |
6491 |
++ * exit_pi_state_list sets owner to NULL and |
6492 |
++ * wakes the topmost waiter. The task which |
6493 |
++ * acquires the pi_state->rt_mutex will fixup |
6494 |
++ * owner. |
6495 |
++ */ |
6496 |
++ if (!pi_state->owner) { |
6497 |
++ /* |
6498 |
++ * No pi state owner, but the user |
6499 |
++ * space TID is not 0. Inconsistent |
6500 |
++ * state. [5] |
6501 |
++ */ |
6502 |
++ if (pid) |
6503 |
++ return -EINVAL; |
6504 |
++ /* |
6505 |
++ * Take a ref on the state and |
6506 |
++ * return. [4] |
6507 |
++ */ |
6508 |
++ goto out_state; |
6509 |
++ } |
6510 |
++ |
6511 |
+ /* |
6512 |
+- * Bail out if user space manipulated the |
6513 |
+- * futex value. |
6514 |
++ * If TID is 0, then either the dying owner |
6515 |
++ * has not yet executed exit_pi_state_list() |
6516 |
++ * or some waiter acquired the rtmutex in the |
6517 |
++ * pi state, but did not yet fixup the TID in |
6518 |
++ * user space. |
6519 |
++ * |
6520 |
++ * Take a ref on the state and return. [6] |
6521 |
+ */ |
6522 |
+- if (pid != task_pid_vnr(pi_state->owner)) |
6523 |
++ if (!pid) |
6524 |
++ goto out_state; |
6525 |
++ } else { |
6526 |
++ /* |
6527 |
++ * If the owner died bit is not set, |
6528 |
++ * then the pi_state must have an |
6529 |
++ * owner. [7] |
6530 |
++ */ |
6531 |
++ if (!pi_state->owner) |
6532 |
+ return -EINVAL; |
6533 |
+ } |
6534 |
+ |
6535 |
++ /* |
6536 |
++ * Bail out if user space manipulated the |
6537 |
++ * futex value. If pi state exists then the |
6538 |
++ * owner TID must be the same as the user |
6539 |
++ * space TID. [9/10] |
6540 |
++ */ |
6541 |
++ if (pid != task_pid_vnr(pi_state->owner)) |
6542 |
++ return -EINVAL; |
6543 |
++ |
6544 |
++ out_state: |
6545 |
+ atomic_inc(&pi_state->refcount); |
6546 |
+ *ps = pi_state; |
6547 |
+- |
6548 |
+ return 0; |
6549 |
+ } |
6550 |
+ } |
6551 |
+ |
6552 |
+ /* |
6553 |
+ * We are the first waiter - try to look up the real owner and attach |
6554 |
+- * the new pi_state to it, but bail out when TID = 0 |
6555 |
++ * the new pi_state to it, but bail out when TID = 0 [1] |
6556 |
+ */ |
6557 |
+ if (!pid) |
6558 |
+ return -ESRCH; |
6559 |
+@@ -651,6 +737,11 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, |
6560 |
+ if (!p) |
6561 |
+ return -ESRCH; |
6562 |
+ |
6563 |
++ if (!p->mm) { |
6564 |
++ put_task_struct(p); |
6565 |
++ return -EPERM; |
6566 |
++ } |
6567 |
++ |
6568 |
+ /* |
6569 |
+ * We need to look at the task state flags to figure out, |
6570 |
+ * whether the task is exiting. To protect against the do_exit |
6571 |
+@@ -671,6 +762,9 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, |
6572 |
+ return ret; |
6573 |
+ } |
6574 |
+ |
6575 |
++ /* |
6576 |
++ * No existing pi state. First waiter. [2] |
6577 |
++ */ |
6578 |
+ pi_state = alloc_pi_state(); |
6579 |
+ |
6580 |
+ /* |
6581 |
+@@ -742,10 +836,18 @@ retry: |
6582 |
+ return -EDEADLK; |
6583 |
+ |
6584 |
+ /* |
6585 |
+- * Surprise - we got the lock. Just return to userspace: |
6586 |
++ * Surprise - we got the lock, but we do not trust user space at all. |
6587 |
+ */ |
6588 |
+- if (unlikely(!curval)) |
6589 |
+- return 1; |
6590 |
++ if (unlikely(!curval)) { |
6591 |
++ /* |
6592 |
++ * We verify whether there is kernel state for this |
6593 |
++ * futex. If not, we can safely assume, that the 0 -> |
6594 |
++ * TID transition is correct. If state exists, we do |
6595 |
++ * not bother to fixup the user space state as it was |
6596 |
++ * corrupted already. |
6597 |
++ */ |
6598 |
++ return futex_top_waiter(hb, key) ? -EINVAL : 1; |
6599 |
++ } |
6600 |
+ |
6601 |
+ uval = curval; |
6602 |
+ |
6603 |
+@@ -875,6 +977,7 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this)
+ struct task_struct *new_owner;
+ struct futex_pi_state *pi_state = this->pi_state;
+ u32 uninitialized_var(curval), newval;
++ int ret = 0;
+
+ if (!pi_state)
+ return -EINVAL;
+@@ -898,23 +1001,19 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this)
+ new_owner = this->task;
+
+ /*
+- * We pass it to the next owner. (The WAITERS bit is always
+- * kept enabled while there is PI state around. We must also
+- * preserve the owner died bit.)
++ * We pass it to the next owner. The WAITERS bit is always
++ * kept enabled while there is PI state around. We cleanup the
++ * owner died bit, because we are the owner.
+ */
+- if (!(uval & FUTEX_OWNER_DIED)) {
+- int ret = 0;
++ newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
+
+- newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
+-
+- if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
+- ret = -EFAULT;
+- else if (curval != uval)
+- ret = -EINVAL;
+- if (ret) {
+- raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
+- return ret;
+- }
++ if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
++ ret = -EFAULT;
++ else if (curval != uval)
++ ret = -EINVAL;
++ if (ret) {
++ raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
++ return ret;
+ }
+
+ raw_spin_lock_irq(&pi_state->owner->pi_lock);
+@@ -1193,7 +1292,7 @@ void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
+ *
+ * Returns:
+ * 0 - failed to acquire the lock atomicly
+- * 1 - acquired the lock
++ * >0 - acquired the lock, return value is vpid of the top_waiter
+ * <0 - error
+ */
+ static int futex_proxy_trylock_atomic(u32 __user *pifutex,
+@@ -1204,7 +1303,7 @@ static int futex_proxy_trylock_atomic(u32 __user *pifutex,
+ {
+ struct futex_q *top_waiter = NULL;
+ u32 curval;
+- int ret;
++ int ret, vpid;
+
+ if (get_futex_value_locked(&curval, pifutex))
+ return -EFAULT;
+@@ -1232,11 +1331,13 @@ static int futex_proxy_trylock_atomic(u32 __user *pifutex,
+ * the contended case or if set_waiters is 1. The pi_state is returned
+ * in ps in contended cases.
+ */
++ vpid = task_pid_vnr(top_waiter->task);
+ ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task,
+ set_waiters);
+- if (ret == 1)
++ if (ret == 1) {
+ requeue_pi_wake_futex(top_waiter, key2, hb2);
+-
++ return vpid;
++ }
+ return ret;
+ }
+
+@@ -1268,10 +1369,16 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
+ struct futex_hash_bucket *hb1, *hb2;
+ struct plist_head *head1;
+ struct futex_q *this, *next;
+- u32 curval2;
+
+ if (requeue_pi) {
+ /*
++ * Requeue PI only works on two distinct uaddrs. This
++ * check is only valid for private futexes. See below.
++ */
++ if (uaddr1 == uaddr2)
++ return -EINVAL;
++
++ /*
+ * requeue_pi requires a pi_state, try to allocate it now
+ * without any locks in case it fails.
+ */
+@@ -1309,6 +1416,15 @@ retry:
+ if (unlikely(ret != 0))
+ goto out_put_key1;
+
++ /*
++ * The check above which compares uaddrs is not sufficient for
++ * shared futexes. We need to compare the keys:
++ */
++ if (requeue_pi && match_futex(&key1, &key2)) {
++ ret = -EINVAL;
++ goto out_put_keys;
++ }
++
+ hb1 = hash_futex(&key1);
+ hb2 = hash_futex(&key2);
+
+@@ -1354,16 +1470,25 @@ retry_private:
+ * At this point the top_waiter has either taken uaddr2 or is
+ * waiting on it. If the former, then the pi_state will not
+ * exist yet, look it up one more time to ensure we have a
+- * reference to it.
++ * reference to it. If the lock was taken, ret contains the
++ * vpid of the top waiter task.
+ */
+- if (ret == 1) {
++ if (ret > 0) {
+ WARN_ON(pi_state);
+ drop_count++;
+ task_count++;
+- ret = get_futex_value_locked(&curval2, uaddr2);
+- if (!ret)
+- ret = lookup_pi_state(curval2, hb2, &key2,
+- &pi_state);
++ /*
++ * If we acquired the lock, then the user
++ * space value of uaddr2 should be vpid. It
++ * cannot be changed by the top waiter as it
++ * is blocked on hb2 lock if it tries to do
++ * so. If something fiddled with it behind our
++ * back the pi state lookup might unearth
++ * it. So we rather use the known value than
++ * rereading and handing potential crap to
++ * lookup_pi_state.
++ */
++ ret = lookup_pi_state(ret, hb2, &key2, &pi_state);
+ }
+
+ switch (ret) {
+@@ -2133,9 +2258,10 @@ retry:
+ /*
+ * To avoid races, try to do the TID -> 0 atomic transition
+ * again. If it succeeds then we can return without waking
+- * anyone else up:
++ * anyone else up. We only try this if neither the waiters nor
++ * the owner died bit are set.
+ */
+- if (!(uval & FUTEX_OWNER_DIED) &&
++ if (!(uval & ~FUTEX_TID_MASK) &&
+ cmpxchg_futex_value_locked(&uval, uaddr, vpid, 0))
+ goto pi_faulted;
+ /*
+@@ -2167,11 +2293,9 @@ retry:
+ /*
+ * No waiters - kernel unlocks the futex:
+ */
+- if (!(uval & FUTEX_OWNER_DIED)) {
+- ret = unlock_futex_pi(uaddr, uval);
+- if (ret == -EFAULT)
+- goto pi_faulted;
+- }
++ ret = unlock_futex_pi(uaddr, uval);
++ if (ret == -EFAULT)
++ goto pi_faulted;
+
+ out_unlock:
+ spin_unlock(&hb->lock);
+@@ -2331,6 +2455,15 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ if (ret)
+ goto out_key2;
+
++ /*
++ * The check above which compares uaddrs is not sufficient for
++ * shared futexes. We need to compare the keys:
++ */
++ if (match_futex(&q.key, &key2)) {
++ ret = -EINVAL;
++ goto out_put_keys;
++ }
++
+ /* Queue the futex_q, drop the hb lock, wait for wakeup. */
+ futex_wait_queue_me(hb, &q, to);
+
+diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
+index a57ef25867e3..434f2b673d5b 100644
+--- a/kernel/hrtimer.c
++++ b/kernel/hrtimer.c
+@@ -232,6 +232,11 @@ again:
+ goto again;
+ }
+ timer->base = new_base;
++ } else {
++ if (cpu != this_cpu && hrtimer_check_target(timer, new_base)) {
++ cpu = this_cpu;
++ goto again;
++ }
+ }
+ return new_base;
+ }
+@@ -567,6 +572,23 @@ hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal)
+
+ cpu_base->expires_next.tv64 = expires_next.tv64;
+
++ /*
++ * If a hang was detected in the last timer interrupt then we
++ * leave the hang delay active in the hardware. We want the
++ * system to make progress. That also prevents the following
++ * scenario:
++ * T1 expires 50ms from now
++ * T2 expires 5s from now
++ *
++ * T1 is removed, so this code is called and would reprogram
++ * the hardware to 5s from now. Any hrtimer_start after that
++ * will not reprogram the hardware due to hang_detected being
++ * set. So we'd effectivly block all timers until the T2 event
++ * fires.
++ */
++ if (cpu_base->hang_detected)
++ return;
++
+ if (cpu_base->expires_next.tv64 != KTIME_MAX)
+ tick_program_event(cpu_base->expires_next, 1);
+ }
+@@ -963,11 +985,8 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ /* Remove an active timer from the queue: */
+ ret = remove_hrtimer(timer, base);
+
+- /* Switch the timer base, if necessary: */
+- new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
+-
+ if (mode & HRTIMER_MODE_REL) {
+- tim = ktime_add_safe(tim, new_base->get_time());
++ tim = ktime_add_safe(tim, base->get_time());
+ /*
+ * CONFIG_TIME_LOW_RES is a temporary way for architectures
+ * to signal that they simply return xtime in
+@@ -982,6 +1001,9 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+
+ hrtimer_set_expires_range_ns(timer, tim, delta_ns);
+
++ /* Switch the timer base, if necessary: */
++ new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
++
+ timer_stats_hrtimer_set_start_info(timer);
+
+ leftmost = enqueue_hrtimer(timer, new_base);
+diff --git a/kernel/module.c b/kernel/module.c
+index 85972171ecd3..5e398961b7b5 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -2951,6 +2951,9 @@ static struct module *load_module(void __user *umod,
+ /* This has to be done once we're sure module name is unique. */
+ dynamic_debug_setup(info.debug, info.num_debug);
+
++ /* Ftrace init must be called in the MODULE_STATE_UNFORMED state */
++ ftrace_module_init(mod);
++
+ /* Find duplicate symbols */
+ err = verify_export_symbols(mod);
+ if (err < 0)
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 94f132775d05..e01398f0a52e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5433,16 +5433,25 @@ static void sd_free_ctl_entry(struct ctl_table **tablep)
+ *tablep = NULL;
+ }
+
++static int min_load_idx = 0;
++static int max_load_idx = CPU_LOAD_IDX_MAX-1;
++
+ static void
+ set_table_entry(struct ctl_table *entry,
+ const char *procname, void *data, int maxlen,
+- umode_t mode, proc_handler *proc_handler)
++ umode_t mode, proc_handler *proc_handler,
++ bool load_idx)
+ {
+ entry->procname = procname;
+ entry->data = data;
+ entry->maxlen = maxlen;
+ entry->mode = mode;
+ entry->proc_handler = proc_handler;
++
++ if (load_idx) {
++ entry->extra1 = &min_load_idx;
++ entry->extra2 = &max_load_idx;
++ }
+ }
+
+ static struct ctl_table *
+@@ -5454,30 +5463,30 @@ sd_alloc_ctl_domain_table(struct sched_domain *sd)
+ return NULL;
+
+ set_table_entry(&table[0], "min_interval", &sd->min_interval,
+- sizeof(long), 0644, proc_doulongvec_minmax);
++ sizeof(long), 0644, proc_doulongvec_minmax, false);
+ set_table_entry(&table[1], "max_interval", &sd->max_interval,
+- sizeof(long), 0644, proc_doulongvec_minmax);
++ sizeof(long), 0644, proc_doulongvec_minmax, false);
+ set_table_entry(&table[2], "busy_idx", &sd->busy_idx,
+- sizeof(int), 0644, proc_dointvec_minmax);
++ sizeof(int), 0644, proc_dointvec_minmax, true);
+ set_table_entry(&table[3], "idle_idx", &sd->idle_idx,
+- sizeof(int), 0644, proc_dointvec_minmax);
++ sizeof(int), 0644, proc_dointvec_minmax, true);
+ set_table_entry(&table[4], "newidle_idx", &sd->newidle_idx,
+- sizeof(int), 0644, proc_dointvec_minmax);
++ sizeof(int), 0644, proc_dointvec_minmax, true);
+ set_table_entry(&table[5], "wake_idx", &sd->wake_idx,
+- sizeof(int), 0644, proc_dointvec_minmax);
++ sizeof(int), 0644, proc_dointvec_minmax, true);
+ set_table_entry(&table[6], "forkexec_idx", &sd->forkexec_idx,
+- sizeof(int), 0644, proc_dointvec_minmax);
++ sizeof(int), 0644, proc_dointvec_minmax, true);
+ set_table_entry(&table[7], "busy_factor", &sd->busy_factor,
+- sizeof(int), 0644, proc_dointvec_minmax);
++ sizeof(int), 0644, proc_dointvec_minmax, false);
+ set_table_entry(&table[8], "imbalance_pct", &sd->imbalance_pct,
+- sizeof(int), 0644, proc_dointvec_minmax);
++ sizeof(int), 0644, proc_dointvec_minmax, false);
+ set_table_entry(&table[9], "cache_nice_tries",
+ &sd->cache_nice_tries,
+- sizeof(int), 0644, proc_dointvec_minmax);
++ sizeof(int), 0644, proc_dointvec_minmax, false);
+ set_table_entry(&table[10], "flags", &sd->flags,
+- sizeof(int), 0644, proc_dointvec_minmax);
++ sizeof(int), 0644, proc_dointvec_minmax, false);
+ set_table_entry(&table[11], "name", sd->name,
+- CORENAME_MAX_SIZE, 0444, proc_dostring);
++ CORENAME_MAX_SIZE, 0444, proc_dostring, false);
+ /* &table[12] is terminator */
+
+ return table;
+diff --git a/kernel/timer.c b/kernel/timer.c
+index 7e0a770be489..87ff7b19d27f 100644
+--- a/kernel/timer.c
++++ b/kernel/timer.c
+@@ -815,7 +815,7 @@ unsigned long apply_slack(struct timer_list *timer, unsigned long expires)
+
+ bit = find_last_bit(&mask, BITS_PER_LONG);
+
+- mask = (1 << bit) - 1;
++ mask = (1UL << bit) - 1;
+
+ expires_limit = expires_limit & ~(mask);
+
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 5efdddf04b15..5b6bd45bc58b 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -2080,12 +2080,57 @@ static cycle_t ftrace_update_time;
+ static unsigned long ftrace_update_cnt;
+ unsigned long ftrace_update_tot_cnt;
+
+-static int ops_traces_mod(struct ftrace_ops *ops)
++static inline int ops_traces_mod(struct ftrace_ops *ops)
+ {
+- struct ftrace_hash *hash;
++ /*
++ * Filter_hash being empty will default to trace module.
++ * But notrace hash requires a test of individual module functions.
++ */
++ return ftrace_hash_empty(ops->filter_hash) &&
++ ftrace_hash_empty(ops->notrace_hash);
++}
+
+- hash = ops->filter_hash;
+- return ftrace_hash_empty(hash);
++/*
++ * Check if the current ops references the record.
++ *
++ * If the ops traces all functions, then it was already accounted for.
++ * If the ops does not trace the current record function, skip it.
++ * If the ops ignores the function via notrace filter, skip it.
++ */
++static inline bool
++ops_references_rec(struct ftrace_ops *ops, struct dyn_ftrace *rec)
++{
++ /* If ops isn't enabled, ignore it */
++ if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
++ return 0;
++
++ /* If ops traces all mods, we already accounted for it */
++ if (ops_traces_mod(ops))
++ return 0;
++
++ /* The function must be in the filter */
++ if (!ftrace_hash_empty(ops->filter_hash) &&
++ !ftrace_lookup_ip(ops->filter_hash, rec->ip))
++ return 0;
++
++ /* If in notrace hash, we ignore it too */
++ if (ftrace_lookup_ip(ops->notrace_hash, rec->ip))
++ return 0;
++
++ return 1;
++}
++
++static int referenced_filters(struct dyn_ftrace *rec)
++{
++ struct ftrace_ops *ops;
++ int cnt = 0;
++
++ for (ops = ftrace_ops_list; ops != &ftrace_list_end; ops = ops->next) {
++ if (ops_references_rec(ops, rec))
++ cnt++;
++ }
++
++ return cnt;
+ }
+
+ static int ftrace_update_code(struct module *mod)
+@@ -2094,6 +2139,7 @@ static int ftrace_update_code(struct module *mod)
+ struct dyn_ftrace *p;
+ cycle_t start, stop;
+ unsigned long ref = 0;
++ bool test = false;
+ int i;
+
+ /*
+@@ -2107,9 +2153,12 @@ static int ftrace_update_code(struct module *mod)
+
+ for (ops = ftrace_ops_list;
+ ops != &ftrace_list_end; ops = ops->next) {
+- if (ops->flags & FTRACE_OPS_FL_ENABLED &&
+- ops_traces_mod(ops))
+- ref++;
++ if (ops->flags & FTRACE_OPS_FL_ENABLED) {
++ if (ops_traces_mod(ops))
++ ref++;
++ else
++ test = true;
++ }
+ }
+ }
+
+@@ -2119,12 +2168,16 @@ static int ftrace_update_code(struct module *mod)
+ for (pg = ftrace_new_pgs; pg; pg = pg->next) {
+
+ for (i = 0; i < pg->index; i++) {
++ int cnt = ref;
++
+ /* If something went wrong, bail without enabling anything */
+ if (unlikely(ftrace_disabled))
+ return -1;
+
+ p = &pg->records[i];
+- p->flags = ref;
++ if (test)
++ cnt += referenced_filters(p);
++ p->flags = cnt;
+
+ /*
+ * Do the initial record conversion from mcount jump
+@@ -2144,7 +2197,7 @@ static int ftrace_update_code(struct module *mod)
+ * conversion puts the module to the correct state, thus
+ * passing the ftrace_make_call check.
+ */
+- if (ftrace_start_up && ref) {
++ if (ftrace_start_up && cnt) {
+ int failed = __ftrace_replace_code(p, 1);
+ if (failed)
+ ftrace_bug(failed, p->ip);
+@@ -3880,16 +3933,11 @@ static void ftrace_init_module(struct module *mod,
+ ftrace_process_locs(mod, start, end);
+ }
+
+-static int ftrace_module_notify_enter(struct notifier_block *self,
+- unsigned long val, void *data)
++void ftrace_module_init(struct module *mod)
+ {
+- struct module *mod = data;
+-
+- if (val == MODULE_STATE_COMING)
+- ftrace_init_module(mod, mod->ftrace_callsites,
+- mod->ftrace_callsites +
+- mod->num_ftrace_callsites);
+- return 0;
++ ftrace_init_module(mod, mod->ftrace_callsites,
++ mod->ftrace_callsites +
++ mod->num_ftrace_callsites);
+ }
+
+ static int ftrace_module_notify_exit(struct notifier_block *self,
+@@ -3903,11 +3951,6 @@ static int ftrace_module_notify_exit(struct notifier_block *self,
+ return 0;
+ }
+ #else
+-static int ftrace_module_notify_enter(struct notifier_block *self,
+- unsigned long val, void *data)
+-{
+- return 0;
+-}
+ static int ftrace_module_notify_exit(struct notifier_block *self,
+ unsigned long val, void *data)
+ {
+@@ -3915,11 +3958,6 @@ static int ftrace_module_notify_exit(struct notifier_block *self,
+ }
+ #endif /* CONFIG_MODULES */
+
+-struct notifier_block ftrace_module_enter_nb = {
+- .notifier_call = ftrace_module_notify_enter,
+- .priority = INT_MAX, /* Run before anything that can use kprobes */
+-};
+-
+ struct notifier_block ftrace_module_exit_nb = {
+ .notifier_call = ftrace_module_notify_exit,
+ .priority = INT_MIN, /* Run after anything that can remove kprobes */
+@@ -3956,10 +3994,6 @@ void __init ftrace_init(void)
+ __start_mcount_loc,
+ __stop_mcount_loc);
+
+- ret = register_module_notifier(&ftrace_module_enter_nb);
+- if (ret)
+- pr_warning("Failed to register trace ftrace module enter notifier\n");
+-
+ ret = register_module_notifier(&ftrace_module_exit_nb);
+ if (ret)
+ pr_warning("Failed to register trace ftrace module exit notifier\n");
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 022940f097b8..a494ec317e0a 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2782,8 +2782,12 @@ int set_tracer_flag(unsigned int mask, int enabled)
+ if (mask == TRACE_ITER_RECORD_CMD)
+ trace_event_enable_cmd_record(enabled);
+
+- if (mask == TRACE_ITER_OVERWRITE)
++ if (mask == TRACE_ITER_OVERWRITE) {
+ ring_buffer_change_overwrite(global_trace.buffer, enabled);
++#ifdef CONFIG_TRACER_MAX_TRACE
++ ring_buffer_change_overwrite(max_tr.buffer, enabled);
++#endif
++ }
+
+ return 0;
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index c8eed94343e0..d5facc8d78a4 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1082,6 +1082,7 @@ static void return_unused_surplus_pages(struct hstate *h,
+ while (nr_pages--) {
+ if (!free_pool_huge_page(h, &node_states[N_HIGH_MEMORY], 1))
+ break;
++ cond_resched_lock(&hugetlb_lock);
+ }
+ }
+
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 7e95698e4139..455a67971570 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1061,15 +1061,16 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
+ return 0;
+ } else if (PageHuge(hpage)) {
+ /*
+- * Check "just unpoisoned", "filter hit", and
+- * "race with other subpage."
++ * Check "filter hit" and "race with other subpage."
+ */
+ lock_page(hpage);
+- if (!PageHWPoison(hpage)
+- || (hwpoison_filter(p) && TestClearPageHWPoison(p))
+- || (p != hpage && TestSetPageHWPoison(hpage))) {
+- atomic_long_sub(nr_pages, &mce_bad_pages);
+- return 0;
++ if (PageHWPoison(hpage)) {
++ if ((hwpoison_filter(p) && TestClearPageHWPoison(p))
++ || (p != hpage && TestSetPageHWPoison(hpage))) {
++ atomic_long_sub(nr_pages, &mce_bad_pages);
++ unlock_page(hpage);
++ return 0;
++ }
+ }
+ set_page_hwpoison_huge_page(hpage);
+ res = dequeue_hwpoisoned_huge_page(hpage);
+diff --git a/mm/memory.c b/mm/memory.c
+index 17d8661f44fe..ffd74f370e8d 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1872,12 +1872,17 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
+ unsigned long address, unsigned int fault_flags)
+ {
+ struct vm_area_struct *vma;
++ vm_flags_t vm_flags;
+ int ret;
+
+ vma = find_extend_vma(mm, address);
+ if (!vma || address < vma->vm_start)
+ return -EFAULT;
+
++ vm_flags = (fault_flags & FAULT_FLAG_WRITE) ? VM_WRITE : VM_READ;
++ if (!(vm_flags & vma->vm_flags))
++ return -EFAULT;
++
+ ret = handle_mm_fault(mm, vma, address, fault_flags);
+ if (ret & VM_FAULT_ERROR) {
+ if (ret & VM_FAULT_OOM)
+diff --git a/mm/percpu.c b/mm/percpu.c
+index bb4be7435ce3..13b2eefabfdd 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -612,7 +612,7 @@ static struct pcpu_chunk *pcpu_alloc_chunk(void)
+ chunk->map = pcpu_mem_zalloc(PCPU_DFL_MAP_ALLOC *
+ sizeof(chunk->map[0]));
+ if (!chunk->map) {
+- kfree(chunk);
++ pcpu_mem_free(chunk, pcpu_chunk_struct_size);
+ return NULL;
+ }
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index cebdc15ce327..b47375d9d956 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3574,6 +3574,7 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
+ skb->vlan_tci = 0;
+ skb->dev = napi->dev;
+ skb->skb_iif = 0;
++ skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
+
+ napi->skb = skb;
+ }
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 6f755cca4520..3b7398ae270a 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -322,6 +322,8 @@ load_b:
+
+ if (skb_is_nonlinear(skb))
+ return 0;
++ if (skb->len < sizeof(struct nlattr))
++ return 0;
+ if (A > skb->len - sizeof(struct nlattr))
+ return 0;
+
+@@ -338,11 +340,13 @@ load_b:
+
+ if (skb_is_nonlinear(skb))
+ return 0;
++ if (skb->len < sizeof(struct nlattr))
++ return 0;
+ if (A > skb->len - sizeof(struct nlattr))
+ return 0;
+
+ nla = (struct nlattr *)&skb->data[A];
+- if (nla->nla_len > A - skb->len)
++ if (nla->nla_len > skb->len - A)
+ return 0;
+
+ nla = nla_find_nested(nla, X);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index a1334275a7da..3cd37e9d91a6 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -746,7 +746,8 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev,
+ return 0;
+ }
+
+-static size_t rtnl_port_size(const struct net_device *dev)
++static size_t rtnl_port_size(const struct net_device *dev,
++ u32 ext_filter_mask)
+ {
+ size_t port_size = nla_total_size(4) /* PORT_VF */
+ + nla_total_size(PORT_PROFILE_MAX) /* PORT_PROFILE */
+@@ -762,7 +763,8 @@ static size_t rtnl_port_size(const struct net_device *dev)
+ size_t port_self_size = nla_total_size(sizeof(struct nlattr))
+ + port_size;
+
+- if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent)
++ if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent ||
++ !(ext_filter_mask & RTEXT_FILTER_VF))
+ return 0;
+ if (dev_num_vf(dev->dev.parent))
+ return port_self_size + vf_ports_size +
+@@ -793,7 +795,7 @@ static noinline size_t if_nlmsg_size(const struct net_device *dev,
+ + nla_total_size(ext_filter_mask
+ & RTEXT_FILTER_VF ? 4 : 0) /* IFLA_NUM_VF */
+ + rtnl_vfinfo_size(dev, ext_filter_mask) /* IFLA_VFINFO_LIST */
+- + rtnl_port_size(dev) /* IFLA_VF_PORTS + IFLA_PORT_SELF */
++ + rtnl_port_size(dev, ext_filter_mask) /* IFLA_VF_PORTS + IFLA_PORT_SELF */
+ + rtnl_link_get_size(dev) /* IFLA_LINKINFO */
+ + rtnl_link_get_af_size(dev); /* IFLA_AF_SPEC */
+ }
+@@ -853,11 +855,13 @@ static int rtnl_port_self_fill(struct sk_buff *skb, struct net_device *dev)
+ return 0;
+ }
+
+-static int rtnl_port_fill(struct sk_buff *skb, struct net_device *dev)
++static int rtnl_port_fill(struct sk_buff *skb, struct net_device *dev,
++ u32 ext_filter_mask)
+ {
+ int err;
+
+- if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent)
++ if (!dev->netdev_ops->ndo_get_vf_port || !dev->dev.parent ||
++ !(ext_filter_mask & RTEXT_FILTER_VF))
+ return 0;
+
+ err = rtnl_port_self_fill(skb, dev);
+@@ -1004,7 +1008,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev,
+ nla_nest_end(skb, vfinfo);
+ }
+
+- if (rtnl_port_fill(skb, dev))
++ if (rtnl_port_fill(skb, dev, ext_filter_mask))
+ goto nla_put_failure;
+
+ if (dev->rtnl_link_ops) {
+@@ -1059,6 +1063,7 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb)
+ struct hlist_node *node;
+ struct nlattr *tb[IFLA_MAX+1];
+ u32 ext_filter_mask = 0;
++ int err;
+
+ s_h = cb->args[0];
+ s_idx = cb->args[1];
+@@ -1079,11 +1084,17 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb)
+ hlist_for_each_entry_rcu(dev, node, head, index_hlist) {
+ if (idx < s_idx)
+ goto cont;
+- if (rtnl_fill_ifinfo(skb, dev, RTM_NEWLINK,
+- NETLINK_CB(cb->skb).pid,
+- cb->nlh->nlmsg_seq, 0,
+- NLM_F_MULTI,
+- ext_filter_mask) <= 0)
++ err = rtnl_fill_ifinfo(skb, dev, RTM_NEWLINK,
++ NETLINK_CB(cb->skb).pid,
++ cb->nlh->nlmsg_seq, 0,
++ NLM_F_MULTI,
++ ext_filter_mask);
++ /* If we ran out of room on the first message,
++ * we're in trouble
++ */
++ WARN_ON((err == -EMSGSIZE) && (skb->len == 0));
++
++ if (err <= 0)
+ goto out;
+
+ nl_dump_check_consistent(cb, nlmsg_hdr(skb));
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 7a597d4feaec..fe42834df408 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -821,7 +821,7 @@ static void copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
+ struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask)
+ {
+ int headerlen = skb_headroom(skb);
+- unsigned int size = (skb_end_pointer(skb) - skb->head) + skb->data_len;
++ unsigned int size = skb_end_offset(skb) + skb->data_len;
+ struct sk_buff *n = alloc_skb(size, gfp_mask);
+
+ if (!n)
+@@ -922,7 +922,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
+ {
+ int i;
+ u8 *data;
+- int size = nhead + (skb_end_pointer(skb) - skb->head) + ntail;
++ int size = nhead + skb_end_offset(skb) + ntail;
+ long off;
+ bool fastpath;
+
+@@ -2721,14 +2721,13 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features)
+ if (unlikely(!nskb))
+ goto err;
+
+- hsize = skb_end_pointer(nskb) - nskb->head;
++ hsize = skb_end_offset(nskb);
+ if (skb_cow_head(nskb, doffset + headroom)) {
+ kfree_skb(nskb);
+ goto err;
+ }
+
+- nskb->truesize += skb_end_pointer(nskb) - nskb->head -
+- hsize;
++ nskb->truesize += skb_end_offset(nskb) - hsize;
+ skb_release_head_state(nskb);
+ __skb_push(nskb, doffset);
+ } else {
+@@ -3297,12 +3296,14 @@ EXPORT_SYMBOL(__skb_warn_lro_forwarding);
+ unsigned int skb_gso_transport_seglen(const struct sk_buff *skb)
+ {
+ const struct skb_shared_info *shinfo = skb_shinfo(skb);
+- unsigned int hdr_len;
+
+ if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)))
+- hdr_len = tcp_hdrlen(skb);
+- else
+- hdr_len = sizeof(struct udphdr);
+- return hdr_len + shinfo->gso_size;
++ return tcp_hdrlen(skb) + shinfo->gso_size;
++
++ /* UFO sets gso_size to the size of the fragmentation
++ * payload, i.e. the size of the L4 (UDP) header is already
++ * accounted for.
++ */
++ return shinfo->gso_size;
+ }
+ EXPORT_SYMBOL_GPL(skb_gso_transport_seglen);
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 8861f91a07cf..8d244eaf6b0d 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -751,13 +751,13 @@ struct fib_info *fib_create_info(struct fib_config *cfg)
+ fi = kzalloc(sizeof(*fi)+nhs*sizeof(struct fib_nh), GFP_KERNEL);
+ if (fi == NULL)
+ goto failure;
++ fib_info_cnt++;
+ if (cfg->fc_mx) {
+ fi->fib_metrics = kzalloc(sizeof(u32) * RTAX_MAX, GFP_KERNEL);
+ if (!fi->fib_metrics)
+ goto failure;
+ } else
+ fi->fib_metrics = (u32 *) dst_default_metrics;
+- fib_info_cnt++;
+
+ fi->fib_net = hold_net(net);
+ fi->fib_protocol = cfg->fc_protocol;
+diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
+index e0d9f02fec11..7593f3a46035 100644
+--- a/net/ipv4/ip_forward.c
++++ b/net/ipv4/ip_forward.c
+@@ -42,12 +42,12 @@
+ static bool ip_may_fragment(const struct sk_buff *skb)
+ {
+ return unlikely((ip_hdr(skb)->frag_off & htons(IP_DF)) == 0) ||
+- !skb->local_df;
++ skb->local_df;
+ }
+
+ static bool ip_exceeds_mtu(const struct sk_buff *skb, unsigned int mtu)
+ {
+- if (skb->len <= mtu || skb->local_df)
++ if (skb->len <= mtu)
+ return false;
+
+ if (skb_is_gso(skb) && skb_gso_network_seglen(skb) <= mtu)
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index e80db1e6b0b2..cb9085272dd7 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -203,26 +203,33 @@ static int ping_init_sock(struct sock *sk)
+ struct net *net = sock_net(sk);
+ gid_t group = current_egid();
+ gid_t range[2];
+- struct group_info *group_info = get_current_groups();
+- int i, j, count = group_info->ngroups;
++ struct group_info *group_info;
++ int i, j, count;
++ int ret = 0;
+
+ inet_get_ping_group_range_net(net, range, range+1);
+ if (range[0] <= group && group <= range[1])
+ return 0;
+
++ group_info = get_current_groups();
++ count = group_info->ngroups;
+ for (i = 0; i < group_info->nblocks; i++) {
+ int cp_count = min_t(int, NGROUPS_PER_BLOCK, count);
+
+ for (j = 0; j < cp_count; j++) {
+ group = group_info->blocks[i][j];
+ if (range[0] <= group && group <= range[1])
+- return 0;
++ goto out_release_group;
+ }
+
+ count -= cp_count;
+ }
+
+- return -EACCES;
++ ret = -EACCES;
++
++out_release_group:
++ put_group_info(group_info);
++ return ret;
+ }
+
+ static void ping_close(struct sock *sk, long timeout)
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 108c73d760df..90bc88bbd2a4 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2129,7 +2129,7 @@ static int __mkroute_input(struct sk_buff *skb,
+ struct in_device *out_dev;
+ unsigned int flags = 0;
+ __be32 spec_dst;
+- u32 itag;
++ u32 itag = 0;
+
+ /* get a working reference to the output device */
+ out_dev = __in_dev_get_rcu(FIB_RES_DEV(*res));
+diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
+index b6ae92a51f58..894b7cea5d7b 100644
+--- a/net/ipv4/tcp_cubic.c
++++ b/net/ipv4/tcp_cubic.c
+@@ -408,7 +408,7 @@ static void bictcp_acked(struct sock *sk, u32 cnt, s32 rtt_us)
+ ratio -= ca->delayed_ack >> ACK_RATIO_SHIFT;
+ ratio += cnt;
+
+- ca->delayed_ack = min(ratio, ACK_RATIO_LIMIT);
++ ca->delayed_ack = clamp(ratio, 1U, ACK_RATIO_LIMIT);
+ }
+
+ /* Some calls are for duplicates without timetamps */
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index b685f99ffb18..c8643a3d2658 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1092,7 +1092,7 @@ static unsigned int ip6_mtu(const struct dst_entry *dst)
+ unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);
+
+ if (mtu)
+- return mtu;
++ goto out;
+
+ mtu = IPV6_MIN_MTU;
+
+@@ -1102,7 +1102,8 @@ static unsigned int ip6_mtu(const struct dst_entry *dst)
+ mtu = idev->cnf.mtu6;
+ rcu_read_unlock();
+
+- return mtu;
++out:
++ return min_t(unsigned int, mtu, IP6_MAX_MTU);
+ }
+
+ static struct dst_entry *icmp6_dst_gc_list;
+diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
+index 22112754ba06..4e38a81e48ee 100644
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -772,9 +772,9 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
+ session->deref = pppol2tp_session_sock_put;
+
+ /* If PMTU discovery was enabled, use the MTU that was discovered */
+- dst = sk_dst_get(sk);
++ dst = sk_dst_get(tunnel->sock);
+ if (dst != NULL) {
+- u32 pmtu = dst_mtu(__sk_dst_get(sk));
++ u32 pmtu = dst_mtu(__sk_dst_get(tunnel->sock));
+ if (pmtu != 0)
+ session->mtu = session->mru = pmtu -
+ PPPOL2TP_HEADER_OVERHEAD;
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 6937a84bef3a..f5ed86388555 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2828,6 +2828,9 @@ static int prepare_for_handlers(struct ieee80211_rx_data *rx,
+ case NL80211_IFTYPE_ADHOC:
+ if (!bssid)
+ return 0;
++ if (compare_ether_addr(sdata->vif.addr, hdr->addr2) == 0 ||
++ compare_ether_addr(sdata->u.ibss.bssid, hdr->addr2) == 0)
++ return 0;
+ if (ieee80211_is_beacon(hdr->frame_control)) {
+ return 1;
+ }
+diff --git a/net/mac80211/status.c b/net/mac80211/status.c
+index b992a49fbe08..9e888970a7e4 100644
+--- a/net/mac80211/status.c
++++ b/net/mac80211/status.c
+@@ -432,7 +432,11 @@ void ieee80211_tx_status(struct ieee80211_hw *hw, struct sk_buff *skb)
+ IEEE80211_BAR_CTRL_TID_INFO_MASK) >>
+ IEEE80211_BAR_CTRL_TID_INFO_SHIFT;
+
+- ieee80211_set_bar_pending(sta, tid, ssn);
++ if (local->hw.flags &
++ IEEE80211_HW_TEARDOWN_AGGR_ON_BAR_FAIL)
++ ieee80211_stop_tx_ba_session(&sta->sta, tid);
++ else
++ ieee80211_set_bar_pending(sta, tid, ssn);
+ }
+ }
+
+diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
+index e051398fdf6b..d067ed16bab1 100644
+--- a/net/sched/act_mirred.c
++++ b/net/sched/act_mirred.c
+@@ -201,13 +201,12 @@ static int tcf_mirred(struct sk_buff *skb, const struct tc_action *a,
+ out:
+ if (err) {
+ m->tcf_qstats.overlimits++;
+- /* should we be asking for packet to be dropped?
+- * may make sense for redirect case only
+- */
+- retval = TC_ACT_SHOT;
+- } else {
++ if (m->tcfm_eaction != TCA_EGRESS_MIRROR)
++ retval = TC_ACT_SHOT;
++ else
++ retval = m->tcf_action;
++ } else
+ retval = m->tcf_action;
+- }
+ spin_unlock(&m->tcf_lock);
+
+ return retval;
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 0ed156b537d2..0c0bd2fe9aca 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -6369,6 +6369,46 @@ static void __sctp_write_space(struct sctp_association *asoc)
+ }
+ }
+
++static void sctp_wake_up_waiters(struct sock *sk,
++ struct sctp_association *asoc)
++{
++ struct sctp_association *tmp = asoc;
++
++ /* We do accounting for the sndbuf space per association,
++ * so we only need to wake our own association.
++ */
++ if (asoc->ep->sndbuf_policy)
++ return __sctp_write_space(asoc);
++
++ /* If association goes down and is just flushing its
++ * outq, then just normally notify others.
++ */
++ if (asoc->base.dead)
++ return sctp_write_space(sk);
++
++ /* Accounting for the sndbuf space is per socket, so we
++ * need to wake up others, try to be fair and in case of
++ * other associations, let them have a go first instead
++ * of just doing a sctp_write_space() call.
++ *
++ * Note that we reach sctp_wake_up_waiters() only when
++ * associations free up queued chunks, thus we are under
++ * lock and the list of associations on a socket is
++ * guaranteed not to change.
++ */
++ for (tmp = list_next_entry(tmp, asocs); 1;
++ tmp = list_next_entry(tmp, asocs)) {
++ /* Manually skip the head element. */
++ if (&tmp->asocs == &((sctp_sk(sk))->ep->asocs))
++ continue;
++ /* Wake up association. */
++ __sctp_write_space(tmp);
++ /* We've reached the end. */
++ if (tmp == asoc)
++ break;
++ }
++}
++
+ /* Do accounting for the sndbuf space.
+ * Decrement the used sndbuf space of the corresponding association by the
+ * data size which was just transmitted(freed).
+@@ -6396,7 +6436,7 @@ static void sctp_wfree(struct sk_buff *skb)
+ sk_mem_uncharge(sk, skb->truesize);
+
+ sock_wfree(skb);
+- __sctp_write_space(asoc);
++ sctp_wake_up_waiters(sk, asoc);
+
+ sctp_association_put(asoc);
+ }
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index f7e937ff8978..3fd50a11c150 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -222,6 +222,9 @@ void cfg80211_conn_work(struct work_struct *work)
+ mutex_lock(&rdev->devlist_mtx);
+
+ list_for_each_entry(wdev, &rdev->netdev_list, list) {
++ if (!wdev->netdev)
++ continue;
++
+ wdev_lock(wdev);
+ if (!netif_running(wdev->netdev)) {
+ wdev_unlock(wdev);
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 17d80b2694a2..20cfc5b44710 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -4430,7 +4430,9 @@ static void apply_fixup(struct hda_codec *codec,
+ struct conexant_spec *spec = codec->spec;
+
+ quirk = snd_pci_quirk_lookup(codec->bus->pci, quirk);
+- if (quirk && table[quirk->value]) {
++ if (!quirk)
++ return;
++ if (table[quirk->value]) {
+ snd_printdd(KERN_INFO "hda_codec: applying pincfg for %s\n",
+ quirk->name);
+ apply_pincfg(codec, table[quirk->value]);
+@@ -4471,12 +4473,15 @@ static const struct snd_pci_quirk cxt5051_fixups[] = {
+ };
+
+ static const struct snd_pci_quirk cxt5066_fixups[] = {
++ SND_PCI_QUIRK(0x1025, 0x0543, "Acer Aspire One 522", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+ SND_PCI_QUIRK(0x17aa, 0x215e, "Lenovo T410", CXT_PINCFG_LENOVO_TP410),
+ SND_PCI_QUIRK(0x17aa, 0x215f, "Lenovo T510", CXT_PINCFG_LENOVO_TP410),
+ SND_PCI_QUIRK(0x17aa, 0x21ce, "Lenovo T420", CXT_PINCFG_LENOVO_TP410),
+ SND_PCI_QUIRK(0x17aa, 0x21cf, "Lenovo T520", CXT_PINCFG_LENOVO_TP410),
+ SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
++ SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
++ SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC),
+ {}
+ };
+
+@@ -4567,10 +4572,6 @@ static int patch_conexant_auto(struct hda_codec *codec)
+ */
+
+ static const struct hda_codec_preset snd_hda_preset_conexant[] = {
+- { .id = 0x14f11510, .name = "CX20751/2",
+- .patch = patch_conexant_auto },
+- { .id = 0x14f11511, .name = "CX20753/4",
+- .patch = patch_conexant_auto },
+ { .id = 0x14f15045, .name = "CX20549 (Venice)",
+ .patch = patch_cxt5045 },
+ { .id = 0x14f15047, .name = "CX20551 (Waikiki)",
+@@ -4605,11 +4606,23 @@ static const struct hda_codec_preset snd_hda_preset_conexant[] = {
+ .patch = patch_conexant_auto },
+ { .id = 0x14f150b9, .name = "CX20665",
+ .patch = patch_conexant_auto },
++ { .id = 0x14f1510f, .name = "CX20751/2",
++ .patch = patch_conexant_auto },
++ { .id = 0x14f15110, .name = "CX20751/2",
++ .patch = patch_conexant_auto },
++ { .id = 0x14f15111, .name = "CX20753/4",
++ .patch = patch_conexant_auto },
++ { .id = 0x14f15113, .name = "CX20755",
++ .patch = patch_conexant_auto },
++ { .id = 0x14f15114, .name = "CX20756",
++ .patch = patch_conexant_auto },
++ { .id = 0x14f15115, .name = "CX20757",
++ .patch = patch_conexant_auto },
++ { .id = 0x14f151d7, .name = "CX20952",
++ .patch = patch_conexant_auto },
+ {} /* terminator */
+ };
+
+-MODULE_ALIAS("snd-hda-codec-id:14f11510");
+-MODULE_ALIAS("snd-hda-codec-id:14f11511");
+ MODULE_ALIAS("snd-hda-codec-id:14f15045");
+ MODULE_ALIAS("snd-hda-codec-id:14f15047");
+ MODULE_ALIAS("snd-hda-codec-id:14f15051");
+@@ -4627,6 +4640,13 @@ MODULE_ALIAS("snd-hda-codec-id:14f150ab");
+ MODULE_ALIAS("snd-hda-codec-id:14f150ac");
+ MODULE_ALIAS("snd-hda-codec-id:14f150b8");
+ MODULE_ALIAS("snd-hda-codec-id:14f150b9");
++MODULE_ALIAS("snd-hda-codec-id:14f1510f");
++MODULE_ALIAS("snd-hda-codec-id:14f15110");
++MODULE_ALIAS("snd-hda-codec-id:14f15111");
++MODULE_ALIAS("snd-hda-codec-id:14f15113");
++MODULE_ALIAS("snd-hda-codec-id:14f15114");
++MODULE_ALIAS("snd-hda-codec-id:14f15115");
++MODULE_ALIAS("snd-hda-codec-id:14f151d7");
+
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Conexant HD-audio codec");
+diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c
+index 0b5132f79750..451ec4800f4e 100644
+--- a/sound/soc/codecs/wm8962.c
++++ b/sound/soc/codecs/wm8962.c
+@@ -153,6 +153,7 @@ static struct reg_default wm8962_reg[] = {
+ { 40, 0x0000 }, /* R40 - SPKOUTL volume */
+ { 41, 0x0000 }, /* R41 - SPKOUTR volume */
+
++ { 49, 0x0010 }, /* R49 - Class D Control 1 */
+ { 51, 0x0003 }, /* R51 - Class D Control 2 */
+
+ { 56, 0x0506 }, /* R56 - Clocking 4 */
+@@ -794,7 +795,6 @@ static bool wm8962_volatile_register(struct device *dev, unsigned int reg)
+ case WM8962_ALC2:
+ case WM8962_THERMAL_SHUTDOWN_STATUS:
+ case WM8962_ADDITIONAL_CONTROL_4:
+- case WM8962_CLASS_D_CONTROL_1:
+ case WM8962_DC_SERVO_6:
+ case WM8962_INTERRUPT_STATUS_1:
+ case WM8962_INTERRUPT_STATUS_2:
+@@ -2888,13 +2888,22 @@ static int wm8962_set_fll(struct snd_soc_codec *codec, int fll_id, int source,
+ static int wm8962_mute(struct snd_soc_dai *dai, int mute)
+ {
+ struct snd_soc_codec *codec = dai->codec;
+- int val;
++ int val, ret;
+
+ if (mute)
+- val = WM8962_DAC_MUTE;
++ val = WM8962_DAC_MUTE | WM8962_DAC_MUTE_ALT;
+ else
+ val = 0;
+
++ /**
++ * The DAC mute bit is mirrored in two registers, update both to keep
++ * the register cache consistent.
++ */
++ ret = snd_soc_update_bits(codec, WM8962_CLASS_D_CONTROL_1,
++ WM8962_DAC_MUTE_ALT, val);
++ if (ret < 0)
++ return ret;
++
+ return snd_soc_update_bits(codec, WM8962_ADC_DAC_CONTROL_1,
+ WM8962_DAC_MUTE, val);
+ }
+diff --git a/sound/soc/codecs/wm8962.h b/sound/soc/codecs/wm8962.h
+index a1a5d5294c19..910aafd09d21 100644
+--- a/sound/soc/codecs/wm8962.h
++++ b/sound/soc/codecs/wm8962.h
+@@ -1954,6 +1954,10 @@
+ #define WM8962_SPKOUTL_ENA_MASK 0x0040 /* SPKOUTL_ENA */
+ #define WM8962_SPKOUTL_ENA_SHIFT 6 /* SPKOUTL_ENA */
+ #define WM8962_SPKOUTL_ENA_WIDTH 1 /* SPKOUTL_ENA */
++#define WM8962_DAC_MUTE_ALT 0x0010 /* DAC_MUTE */
++#define WM8962_DAC_MUTE_ALT_MASK 0x0010 /* DAC_MUTE */
++#define WM8962_DAC_MUTE_ALT_SHIFT 4 /* DAC_MUTE */
++#define WM8962_DAC_MUTE_ALT_WIDTH 1 /* DAC_MUTE */
+ #define WM8962_SPKOUTL_PGA_MUTE 0x0002 /* SPKOUTL_PGA_MUTE */
+ #define WM8962_SPKOUTL_PGA_MUTE_MASK 0x0002 /* SPKOUTL_PGA_MUTE */
+ #define WM8962_SPKOUTL_PGA_MUTE_SHIFT 1 /* SPKOUTL_PGA_MUTE */

Deleted: genpatches-2.6/trunk/3.4/1501-futex-add-another-early-deadlock-detection-check.patch
===================================================================
--- genpatches-2.6/trunk/3.4/1501-futex-add-another-early-deadlock-detection-check.patch 2014-06-09 12:35:00 UTC (rev 2820)
+++ genpatches-2.6/trunk/3.4/1501-futex-add-another-early-deadlock-detection-check.patch 2014-06-09 17:53:50 UTC (rev 2821)
@@ -1,160 +0,0 @@
-From: Thomas Gleixner <tglx@××××××××××.de>
-Date: Mon, 12 May 2014 20:45:34 +0000
-Subject: futex: Add another early deadlock detection check
-Git-commit: 866293ee54227584ffcb4a42f69c1f365974ba7f
-
-Dave Jones trinity syscall fuzzer exposed an issue in the deadlock
-detection code of rtmutex:
- http://lkml.kernel.org/r/20140429151655.GA14277@××××××.com
-
-That underlying issue has been fixed with a patch to the rtmutex code,
-but the futex code must not call into rtmutex in that case because
- - it can detect that issue early
- - it avoids a different and more complex fixup for backing out
-
-If the user space variable got manipulated to 0x80000000 which means
-no lock holder, but the waiters bit set and an active pi_state in the
-kernel is found we can figure out the recursive locking issue by
-looking at the pi_state owner. If that is the current task, then we
-can safely return -EDEADLK.
-
-The check should have been added in commit 59fa62451 (futex: Handle
-futex_pi OWNER_DIED take over correctly) already, but I did not see
-the above issue caused by user space manipulation back then.
-
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de>
-Cc: Dave Jones <davej@××××××.com>
-Cc: Linus Torvalds <torvalds@××××××××××××××××.org>
-Cc: Peter Zijlstra <peterz@×××××××××.org>
-Cc: Darren Hart <darren@××××××.com>
-Cc: Davidlohr Bueso <davidlohr@××.com>
-Cc: Steven Rostedt <rostedt@×××××××.org>
-Cc: Clark Williams <williams@××××××.com>
-Cc: Paul McKenney <paulmck@××××××××××××××.com>
-Cc: Lai Jiangshan <laijs@××××××××××.com>
-Cc: Roland McGrath <roland@×××××××××.com>
-Cc: Carlos ODonell <carlos@××××××.com>
-Cc: Jakub Jelinek <jakub@××××××.com>
-Cc: Michael Kerrisk <mtk.manpages@×××××.com>
-Cc: Sebastian Andrzej Siewior <bigeasy@××××××××××.de>
-Link: http://lkml.kernel.org/r/20140512201701.097349971@××××××××××.de
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de>
-Cc: stable@×××××××××××.org
---
- kernel/futex.c | 47 ++++++++++++++++++++++++++++++++++-------------
- 1 file changed, 34 insertions(+), 13 deletions(-)
-
-Index: linux-3.4/kernel/futex.c
-===================================================================
---- linux-3.4.orig/kernel/futex.c
-+++ linux-3.4/kernel/futex.c
-@@ -590,7 +590,8 @@ void exit_pi_state_list(struct task_stru
-
- static int
- lookup_pi_state(u32 uval, struct futex_hash_bucket *hb,
-- union futex_key *key, struct futex_pi_state **ps)
-+ union futex_key *key, struct futex_pi_state **ps,
-+ struct task_struct *task)
- {
- struct futex_pi_state *pi_state = NULL;
- struct futex_q *this, *next;
-@@ -634,6 +635,16 @@ lookup_pi_state(u32 uval, struct futex_h
- return -EINVAL;
- }
-
-+ /*
-+ * Protect against a corrupted uval. If uval
-+ * is 0x80000000 then pid is 0 and the waiter
-+ * bit is set. So the deadlock check in the
-+ * calling code has failed and we did not fall
-+ * into the check above due to !pid.
-+ */
-+ if (task && pi_state->owner == task)
-+ return -EDEADLK;
-+
- atomic_inc(&pi_state->refcount);
- *ps = pi_state;
-
-@@ -783,7 +794,7 @@ retry:
- * We dont have the lock. Look up the PI state (or create it if
- * we are the first waiter):
- */
-- ret = lookup_pi_state(uval, hb, key, ps);
-+ ret = lookup_pi_state(uval, hb, key, ps, task);
-
- if (unlikely(ret)) {
- switch (ret) {
-@@ -1193,7 +1204,7 @@ void requeue_pi_wake_futex(struct futex_
- *
- * Returns:
- * 0 - failed to acquire the lock atomicly
-- * 1 - acquired the lock
-+ * >0 - acquired the lock, return value is vpid of the top_waiter
- * <0 - error
- */
- static int futex_proxy_trylock_atomic(u32 __user *pifutex,
-@@ -1204,7 +1215,7 @@ static int futex_proxy_trylock_atomic(u3
- {
- struct futex_q *top_waiter = NULL;
- u32 curval;
-- int ret;
-+ int ret, vpid;
-
- if (get_futex_value_locked(&curval, pifutex))
- return -EFAULT;
-@@ -1232,11 +1243,13 @@ static int futex_proxy_trylock_atomic(u3
- * the contended case or if set_waiters is 1. The pi_state is returned
- * in ps in contended cases.
- */
-+ vpid = task_pid_vnr(top_waiter->task);
- ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task,
- set_waiters);
-- if (ret == 1)
-+ if (ret == 1) {
- requeue_pi_wake_futex(top_waiter, key2, hb2);
--
-+ return vpid;
-+ }
- return ret;
- }
-
-@@ -1268,7 +1281,6 @@ static int futex_requeue(u32 __user *uad
- struct futex_hash_bucket *hb1, *hb2;
- struct plist_head *head1;
- struct futex_q *this, *next;
-- u32 curval2;
-
- if (requeue_pi) {
- /*
-@@ -1354,16 +1366,25 @@ retry_private:
- * At this point the top_waiter has either taken uaddr2 or is
- * waiting on it. If the former, then the pi_state will not
- * exist yet, look it up one more time to ensure we have a
-- * reference to it.
-+ * reference to it. If the lock was taken, ret contains the
-+ * vpid of the top waiter task.
- */
-- if (ret == 1) {
-+ if (ret > 0) {
- WARN_ON(pi_state);
- drop_count++;
- task_count++;
-- ret = get_futex_value_locked(&curval2, uaddr2);
-- if (!ret)
-- ret = lookup_pi_state(curval2, hb2, &key2,
-- &pi_state);
-+ /*
-+ * If we acquired the lock, then the user
-+ * space value of uaddr2 should be vpid. It
-+ * cannot be changed by the top waiter as it
-+ * is blocked on hb2 lock if it tries to do
-+ * so. If something fiddled with it behind our
-+ * back the pi state lookup might unearth
-+ * it. So we rather use the known value than
-+ * rereading and handing potential crap to
-+ * lookup_pi_state.
-+ */
-+ ret = lookup_pi_state(ret, hb2, &key2, &pi_state, NULL);
- }
-
- switch (ret) {

Deleted: genpatches-2.6/trunk/3.4/1502-futex-prevent-attaching-to-kernel-threads.patch
===================================================================
--- genpatches-2.6/trunk/3.4/1502-futex-prevent-attaching-to-kernel-threads.patch 2014-06-09 12:35:00 UTC (rev 2820)
+++ genpatches-2.6/trunk/3.4/1502-futex-prevent-attaching-to-kernel-threads.patch 2014-06-09 17:53:50 UTC (rev 2821)
@@ -1,52 +0,0 @@
-From: Thomas Gleixner <tglx@××××××××××.de>
-Date: Mon, 12 May 2014 20:45:35 +0000
-Subject: futex: Prevent attaching to kernel threads
-Git-commit: f0d71b3dcb8332f7971b5f2363632573e6d9486a
-
-We happily allow userspace to declare a random kernel thread to be the
-owner of a user space PI futex.
-
-Found while analysing the fallout of Dave Jones syscall fuzzer.
-
-We also should validate the thread group for private futexes and find
-some fast way to validate whether the "alleged" owner has RW access on
-the file which backs the SHM, but that's a separate issue.
-
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de>
-Cc: Dave Jones <davej@××××××.com>
-Cc: Linus Torvalds <torvalds@××××××××××××××××.org>
-Cc: Peter Zijlstra <peterz@×××××××××.org>
-Cc: Darren Hart <darren@××××××.com>
-Cc: Davidlohr Bueso <davidlohr@××.com>
-Cc: Steven Rostedt <rostedt@×××××××.org>
-Cc: Clark Williams <williams@××××××.com>
-Cc: Paul McKenney <paulmck@××××××××××××××.com>
-Cc: Lai Jiangshan <laijs@××××××××××.com>
-Cc: Roland McGrath <roland@×××××××××.com>
-Cc: Carlos ODonell <carlos@××××××.com>
-Cc: Jakub Jelinek <jakub@××××××.com>
-Cc: Michael Kerrisk <mtk.manpages@×××××.com>
-Cc: Sebastian Andrzej Siewior <bigeasy@××××××××××.de>
-Link: http://lkml.kernel.org/r/20140512201701.194824402@××××××××××.de
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de>
-Cc: stable@×××××××××××.org
---
- kernel/futex.c | 5 +++++
- 1 file changed, 5 insertions(+)
-
-Index: linux-3.4/kernel/futex.c
-===================================================================
---- linux-3.4.orig/kernel/futex.c
-+++ linux-3.4/kernel/futex.c
-@@ -662,6 +662,11 @@ lookup_pi_state(u32 uval, struct futex_h
- if (!p)
- return -ESRCH;
-
-+ if (!p->mm) {
-+ put_task_struct(p);
-+ return -EPERM;
-+ }
-+
- /*
- * We need to look at the task state flags to figure out,
- * whether the task is exiting. To protect against the do_exit

Deleted: genpatches-2.6/trunk/3.4/1503-futex-prevent-requeue-pi-on-same-futex-patch-futex-forbid-uaddr-uaddr2-in-futex_requeue-requeue_pi-1.patch
===================================================================
--- genpatches-2.6/trunk/3.4/1503-futex-prevent-requeue-pi-on-same-futex-patch-futex-forbid-uaddr-uaddr2-in-futex_requeue-requeue_pi-1.patch 2014-06-09 12:35:00 UTC (rev 2820)
+++ genpatches-2.6/trunk/3.4/1503-futex-prevent-requeue-pi-on-same-futex-patch-futex-forbid-uaddr-uaddr2-in-futex_requeue-requeue_pi-1.patch 2014-06-09 17:53:50 UTC (rev 2821)
@@ -1,81 +0,0 @@
-From: Thomas Gleixner <tglx@××××××××××.de>
-Date: Tue, 3 Jun 2014 12:27:06 +0000
-Subject: futex-prevent-requeue-pi-on-same-futex.patch futex: Forbid uaddr ==
- uaddr2 in futex_requeue(..., requeue_pi=1)
-Git-commit: e9c243a5a6de0be8e584c604d353412584b592f8
-
-If uaddr == uaddr2, then we have broken the rule of only requeueing from
-a non-pi futex to a pi futex with this call. If we attempt this, then
-dangling pointers may be left for rt_waiter resulting in an exploitable
-condition.
-
-This change brings futex_requeue() in line with futex_wait_requeue_pi()
-which performs the same check as per commit 6f7b0a2a5c0f ("futex: Forbid
-uaddr == uaddr2 in futex_wait_requeue_pi()")
-
-[ tglx: Compare the resulting keys as well, as uaddrs might be
- different depending on the mapping ]
-
-Fixes CVE-2014-3153.
-
-Reported-by: Pinkie Pie
-Signed-off-by: Will Drewry <wad@××××××××.org>
-Signed-off-by: Kees Cook <keescook@××××××××.org>
-Cc: stable@×××××××××××.org
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de>
-Reviewed-by: Darren Hart <dvhart@×××××××××××.com>
-Signed-off-by: Linus Torvalds <torvalds@××××××××××××××××.org>
---
- kernel/futex.c | 25 +++++++++++++++++++++++++
- 1 file changed, 25 insertions(+)
-
-Index: linux-3.4/kernel/futex.c
-===================================================================
---- linux-3.4.orig/kernel/futex.c
-+++ linux-3.4/kernel/futex.c
-@@ -1289,6 +1289,13 @@ static int futex_requeue(u32 __user *uad
-
- if (requeue_pi) {
- /*
-+ * Requeue PI only works on two distinct uaddrs. This
-+ * check is only valid for private futexes. See below.
-+ */
-+ if (uaddr1 == uaddr2)
-+ return -EINVAL;
-+
-+ /*
- * requeue_pi requires a pi_state, try to allocate it now
- * without any locks in case it fails.
- */
-@@ -1326,6 +1333,15 @@ retry:
- if (unlikely(ret != 0))
- goto out_put_key1;
-
-+ /*
-+ * The check above which compares uaddrs is not sufficient for
-+ * shared futexes. We need to compare the keys:
-+ */
-+ if (requeue_pi && match_futex(&key1, &key2)) {
-+ ret = -EINVAL;
-+ goto out_put_keys;
-+ }
-+
- hb1 = hash_futex(&key1);
- hb2 = hash_futex(&key2);
-
-@@ -2357,6 +2373,15 @@ static int futex_wait_requeue_pi(u32 __u
- if (ret)
- goto out_key2;
-
-+ /*
-+ * The check above which compares uaddrs is not sufficient for
-+ * shared futexes. We need to compare the keys:
-+ */
-+ if (match_futex(&q.key, &key2)) {
-+ ret = -EINVAL;
-+ goto out_put_keys;
-+ }
-+
- /* Queue the futex_q, drop the hb lock, wait for wakeup. */
- futex_wait_queue_me(hb, &q, to);
-

Deleted: genpatches-2.6/trunk/3.4/1504-futex-validate-atomic-acquisition-in-futex_lock_pi_atomic.patch |
8136 |
=================================================================== |
8137 |
--- genpatches-2.6/trunk/3.4/1504-futex-validate-atomic-acquisition-in-futex_lock_pi_atomic.patch 2014-06-09 12:35:00 UTC (rev 2820) |
8138 |
+++ genpatches-2.6/trunk/3.4/1504-futex-validate-atomic-acquisition-in-futex_lock_pi_atomic.patch 2014-06-09 17:53:50 UTC (rev 2821) |
8139 |
@@ -1,53 +0,0 @@ |
8140 |
-From: Thomas Gleixner <tglx@××××××××××.de> |
8141 |
-Date: Tue, 3 Jun 2014 12:27:06 +0000 |
8142 |
-Subject: futex: Validate atomic acquisition in futex_lock_pi_atomic() |
8143 |
-Git-commit: b3eaa9fc5cd0a4d74b18f6b8dc617aeaf1873270 |
8144 |
- |
8145 |
-We need to protect the atomic acquisition in the kernel against rogue |
8146 |
-user space which sets the user space futex to 0, so the kernel side |
8147 |
-acquisition succeeds while there is existing state in the kernel |
8148 |
-associated to the real owner. |
8149 |
- |
8150 |
-Verify whether the futex has waiters associated with kernel state. If |
8151 |
-it has, return -EINVAL. The state is corrupted already, so no point in |
8152 |
-cleaning it up. Subsequent calls will fail as well. Not our problem. |
8153 |
- |
8154 |
-[ tglx: Use futex_top_waiter() and explain why we do not need to try |
8155 |
- restoring the already corrupted user space state. ] |
8156 |
- |
8157 |
-Signed-off-by: Darren Hart <dvhart@×××××××××××.com> |
8158 |
-Cc: Kees Cook <keescook@××××××××.org> |
8159 |
-Cc: Will Drewry <wad@××××××××.org> |
8160 |
-Cc: stable@×××××××××××.org |
8161 |
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de> |
8162 |
-Signed-off-by: Linus Torvalds <torvalds@××××××××××××××××.org> |
8163 |
---- |
8164 |
- kernel/futex.c | 14 +++++++++++--- |
8165 |
- 1 file changed, 11 insertions(+), 3 deletions(-) |
8166 |
- |
8167 |
-Index: linux-3.4/kernel/futex.c |
8168 |
-=================================================================== |
8169 |
---- linux-3.4.orig/kernel/futex.c |
8170 |
-+++ linux-3.4/kernel/futex.c |
8171 |
-@@ -758,10 +758,18 @@ retry: |
8172 |
- return -EDEADLK; |
8173 |
- |
8174 |
- /* |
8175 |
-- * Surprise - we got the lock. Just return to userspace: |
8176 |
-+ * Surprise - we got the lock, but we do not trust user space at all. |
8177 |
- */ |
8178 |
-- if (unlikely(!curval)) |
8179 |
-- return 1; |
8180 |
-+ if (unlikely(!curval)) { |
8181 |
-+ /* |
8182 |
-+ * We verify whether there is kernel state for this |
8183 |
-+ * futex. If not, we can safely assume, that the 0 -> |
8184 |
-+ * TID transition is correct. If state exists, we do |
8185 |
-+ * not bother to fixup the user space state as it was |
8186 |
-+ * corrupted already. |
8187 |
-+ */ |
8188 |
-+ return futex_top_waiter(hb, key) ? -EINVAL : 1; |
8189 |
-+ } |
8190 |
- |
8191 |
- uval = curval; |
8192 |
- |
8193 |
|
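The validation this removed patch describes — trusting a "surprise" 0 -> TID acquisition only when no kernel-side waiter state exists — can be sketched with C11 atomics. `try_lock_pi_word()`, its parameters, and the hard-coded error value are illustrative simplifications, not the kernel's `futex_lock_pi_atomic()`.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the 0 -> TID transition plus the added sanity check: if we
 * win the lock word (it read 0) but kernel-side waiter state already
 * exists, the state is corrupt and the acquisition must fail. The
 * have_kernel_waiters flag stands in for futex_top_waiter(hb, key). */
static int try_lock_pi_word(_Atomic uint32_t *uaddr, uint32_t tid,
			    bool have_kernel_waiters)
{
	uint32_t expected = 0;

	if (atomic_compare_exchange_strong(uaddr, &expected, tid))
		/* Surprise acquisition: valid only if no kernel state
		 * contradicts it. No cleanup on corruption, matching
		 * the patch's "not our problem" stance. */
		return have_kernel_waiters ? -22 /* -EINVAL */ : 1;

	return 0; /* contended: the slow path would take over here */
}
```
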
Deleted: genpatches-2.6/trunk/3.4/1505-futex-always-cleanup-owner-tid-in-unlock_pi.patch
===================================================================
--- genpatches-2.6/trunk/3.4/1505-futex-always-cleanup-owner-tid-in-unlock_pi.patch	2014-06-09 12:35:00 UTC (rev 2820)
+++ genpatches-2.6/trunk/3.4/1505-futex-always-cleanup-owner-tid-in-unlock_pi.patch	2014-06-09 17:53:50 UTC (rev 2821)
@@ -1,99 +0,0 @@
-From: Thomas Gleixner <tglx@××××××××××.de>
-Date: Tue, 3 Jun 2014 12:27:07 +0000
-Subject: futex: Always cleanup owner tid in unlock_pi
-Git-commit: 13fbca4c6ecd96ec1a1cfa2e4f2ce191fe928a5e
-
-If the owner died bit is set at futex_unlock_pi, we currently do not
-cleanup the user space futex. So the owner TID of the current owner
-(the unlocker) persists. That's observable inconsistant state,
-especially when the ownership of the pi state got transferred.
-
-Clean it up unconditionally.
-
-Signed-off-by: Thomas Gleixner <tglx@××××××××××.de>
-Cc: Kees Cook <keescook@××××××××.org>
-Cc: Will Drewry <wad@××××××××.org>
-Cc: Darren Hart <dvhart@×××××××××××.com>
-Cc: stable@×××××××××××.org
-Signed-off-by: Linus Torvalds <torvalds@××××××××××××××××.org>
----
- kernel/futex.c | 40 ++++++++++++++++++----------------------
- 1 file changed, 18 insertions(+), 22 deletions(-)
-
-Index: linux-3.4/kernel/futex.c
-===================================================================
---- linux-3.4.orig/kernel/futex.c
-+++ linux-3.4/kernel/futex.c
-@@ -899,6 +899,7 @@ static int wake_futex_pi(u32 __user *uad
- 	struct task_struct *new_owner;
- 	struct futex_pi_state *pi_state = this->pi_state;
- 	u32 uninitialized_var(curval), newval;
-+	int ret = 0;
- 
- 	if (!pi_state)
- 		return -EINVAL;
-@@ -922,23 +923,19 @@ static int wake_futex_pi(u32 __user *uad
- 	new_owner = this->task;
- 
- 	/*
--	 * We pass it to the next owner. (The WAITERS bit is always
--	 * kept enabled while there is PI state around. We must also
--	 * preserve the owner died bit.)
--	 */
--	if (!(uval & FUTEX_OWNER_DIED)) {
--		int ret = 0;
--
--		newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
--
--		if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
--			ret = -EFAULT;
--		else if (curval != uval)
--			ret = -EINVAL;
--		if (ret) {
--			raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
--			return ret;
--		}
-+	 * We pass it to the next owner. The WAITERS bit is always
-+	 * kept enabled while there is PI state around. We cleanup the
-+	 * owner died bit, because we are the owner.
-+	 */
-+	newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
-+
-+	if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
-+		ret = -EFAULT;
-+	else if (curval != uval)
-+		ret = -EINVAL;
-+	if (ret) {
-+		raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
-+		return ret;
- 	}
- 
- 	raw_spin_lock_irq(&pi_state->owner->pi_lock);
-@@ -2183,9 +2180,10 @@ retry:
- 	/*
- 	 * To avoid races, try to do the TID -> 0 atomic transition
- 	 * again. If it succeeds then we can return without waking
--	 * anyone else up:
-+	 * anyone else up. We only try this if neither the waiters nor
-+	 * the owner died bit are set.
- 	 */
--	if (!(uval & FUTEX_OWNER_DIED) &&
-+	if (!(uval & ~FUTEX_TID_MASK) &&
- 	    cmpxchg_futex_value_locked(&uval, uaddr, vpid, 0))
- 		goto pi_faulted;
- 	/*
-@@ -2217,11 +2215,9 @@ retry:
- 	/*
- 	 * No waiters - kernel unlocks the futex:
- 	 */
--	if (!(uval & FUTEX_OWNER_DIED)) {
--		ret = unlock_futex_pi(uaddr, uval);
--		if (ret == -EFAULT)
--			goto pi_faulted;
--	}
-+	ret = unlock_futex_pi(uaddr, uval);
-+	if (ret == -EFAULT)
-+		goto pi_faulted;
- 
- out_unlock:
- 	spin_unlock(&hb->lock);

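The unconditional cleanup in the hunk above — always rewriting the futex word to `FUTEX_WAITERS | new-owner-TID`, so a stale `FUTEX_OWNER_DIED` bit and the old owner's TID never persist after a handover — can be sketched with a user-space compare-exchange. `hand_over_pi_futex()` is a hypothetical stand-in for `wake_futex_pi()` and collapses its separate -EFAULT/-EINVAL outcomes into one error code.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Bit layout of a PI futex word, matching the kernel uapi values. */
#define FUTEX_WAITERS    0x80000000u
#define FUTEX_OWNER_DIED 0x40000000u
#define FUTEX_TID_MASK   0x3fffffffu

/* Hand the futex to the next owner: the new word keeps WAITERS set,
 * carries the new owner's TID, and drops OWNER_DIED unconditionally.
 * uval is the value the caller last read; a mismatch means user space
 * raced with us and the handover must fail. */
static int hand_over_pi_futex(_Atomic uint32_t *uaddr, uint32_t uval,
			      uint32_t new_owner_tid)
{
	uint32_t newval = FUTEX_WAITERS | new_owner_tid;
	uint32_t expected = uval;

	if (!atomic_compare_exchange_strong(uaddr, &expected, newval))
		return -22; /* simplified: the kernel distinguishes
			     * -EFAULT and -EINVAL here */
	return 0;
}
```

Before the fix, this rewrite was skipped whenever `OWNER_DIED` was set, which is exactly the inconsistency the commit message describes: the unlocker's TID stayed visible in the word even after the PI state moved to a new owner.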
Deleted: genpatches-2.6/trunk/3.4/1506-futex-make-lookup_pi_state-more-robust.patch
===================================================================
(Binary files differ)

Deleted: genpatches-2.6/trunk/3.4/2700_thinkpad-acpi_fix-issuing-duplicated-keyevents-for-brightness.patch
===================================================================
--- genpatches-2.6/trunk/3.4/2700_thinkpad-acpi_fix-issuing-duplicated-keyevents-for-brightness.patch	2014-06-09 12:35:00 UTC (rev 2820)
+++ genpatches-2.6/trunk/3.4/2700_thinkpad-acpi_fix-issuing-duplicated-keyevents-for-brightness.patch	2014-06-09 17:53:50 UTC (rev 2821)
@@ -1,26 +0,0 @@
-The tp_features.bright_acpimode will not be set correctly for brightness
-control because ACPI_VIDEO_HID will not be located in ACPI. As a result,
-a duplicated key event will always be sent. acpi_video_backlight_support()
-is sufficient to detect standard ACPI brightness control.
-
-Signed-off-by: Alex Hung <alex.hung@xxxxxxxxxxxxx>
----
- drivers/platform/x86/thinkpad_acpi.c | 2 +-
- 1 files changed, 1 insertions(+), 1 deletions(-)
-
-diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
-index 7b82868..7d032d5 100644
---- a/drivers/platform/x86/thinkpad_acpi.c
-+++ b/drivers/platform/x86/thinkpad_acpi.c
-@@ -3405,7 +3405,7 @@ static int __init hotkey_init(struct ibm_init_struct *iibm)
- 	/* Do not issue duplicate brightness change events to
- 	 * userspace. tpacpi_detect_brightness_capabilities() must have
- 	 * been called before this point */
--	if (tp_features.bright_acpimode && acpi_video_backlight_support()) {
-+	if (acpi_video_backlight_support()) {
- 		pr_info("This ThinkPad has standard ACPI backlight "
- 			"brightness control, supported by the ACPI "
- 			"video driver\n");
---
-1.7.0.4
-