Mark Knecht <markknecht <at> gmail.com> writes:

> 1) I don't think the GPU latencies are much different than CPU
> latencies. A lot of it can be done with DMA so that the CPU is hardly
> involved once the pointers are set up. Of course it depends on the
> system but the GPU is pretty close to the action so it should be quite
> fast getting started.

Privately, multi-core and GPUs are a license for some folks to build
out massive efforts on the latest FPGAs for some large clusters.
These clusters are very private, and the latency issue alluded to
is gone; in fact it becomes a very positive attribute, if you have
large sums of cash....

> 2) The big deal with GPUs is that they really pay off when you need to
> do a lot of the same calculations on different data in parallel. A
> book I read + some online stuff suggested they didn't pay off speed
> wise until you were doing at least 100 operations in parallel.

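That break-even point can be sketched with a toy cost model. All the constants below are illustrative assumptions (a fixed launch/DMA setup cost, a GPU that is ~100x faster per operation once running), not measurements of any real hardware:

```python
# Toy cost model for when a GPU "pays off" versus a CPU.
# All constants are illustrative assumptions, not measurements.

CPU_TIME_PER_OP = 1.0       # arbitrary time units per operation on the CPU
GPU_TIME_PER_OP = 0.01      # GPU does each op ~100x faster once running
GPU_FIXED_OVERHEAD = 100.0  # kernel launch + DMA setup cost per batch

def cpu_time(n_ops):
    """Total CPU time for a batch of n_ops operations."""
    return n_ops * CPU_TIME_PER_OP

def gpu_time(n_ops):
    """Total GPU time: fixed setup cost plus the (cheap) per-op work."""
    return GPU_FIXED_OVERHEAD + n_ops * GPU_TIME_PER_OP

def break_even_ops():
    """Smallest batch size at which the GPU beats the CPU."""
    n = 1
    while gpu_time(n) >= cpu_time(n):
        n += 1
    return n

if __name__ == "__main__":
    n = break_even_ops()
    print(f"GPU wins once a batch has >= {n} parallel operations")
```

With these assumed numbers the crossover lands at roughly 100 parallel operations, which matches the rule of thumb Mark quotes; below that, the fixed setup cost dominates and the CPU wins.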
I always knew you were very sharp (Mark); here are a few websites to
further establish what you are saying.

[1] http://www.prnewswire.com/news-releases/passware-kit-101-cracks-rar-and-truecrypt-encryption-in-record-time-99539629.html

[2] http://www.tomshardware.com/news/nvidia-gpu-wifi-hack,6483.html

These are just the tip of the iceberg....

> 3) You do have to get the data into the GPU so for things that used
> fixed data blocks, like shading graphical elements, that data can be
> loaded once and reused over and over. That can be very fast. In my
> case it's financial data getting evaluated 1000 ways so that's
> effective. For data like a packet I don't know how many ways there are
> to evaluate that so I cannot suggest what the value would be.

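The reuse point can be made concrete with the same kind of toy model (again, the constants are assumptions): paying the host-to-GPU transfer once and then evaluating the resident data many ways amortizes the transfer cost.

```python
# Toy model: amortizing a one-time host-to-GPU transfer over many
# evaluations of the same resident data. Constants are illustrative.

TRANSFER_COST = 50.0   # one-time cost to DMA the data block into the GPU
EVAL_COST = 1.0        # cost of one evaluation pass over resident data

def cost_per_eval(n_evals):
    """Average cost per evaluation when the data is uploaded once."""
    return (TRANSFER_COST + n_evals * EVAL_COST) / n_evals

if __name__ == "__main__":
    for n in (1, 10, 1000):
        print(f"{n:5d} evaluations -> {cost_per_eval(n):7.2f} per eval")
```

Under this model, financial data evaluated 1000 ways sits at the cheap end (the transfer cost nearly vanishes per evaluation), while a packet evaluated only once or twice pays most of the transfer cost every time, which is exactly the uncertainty Mark raises.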
When you license core technologies, put them on FPGAs or ASICs, and have
lots of money, you can build special-purpose buses that move data around
very fast and in massive volume on that custom hardware. Consumers and
businesses don't want to pay for that sort of thing, but others are far
ahead of the hacker crowds that use massive numbers of workstations
around the net. Those massive hacker efforts use the Internet like a
bus. Others build custom buses that are faster and have more bandwidth
than what vendors put under a 10G Ethernet interface. A lot of very
smart folks are studying the hacker communities with advanced hardware
for analysis, like you cannot believe.

What the original poster has proposed has been around for a long time.
Ever wonder why not much progress is being made by the related open-source
projects (compared to what's going on behind deep_pockets)?

The best hope is for AMD stock to fall to a point where the owners
are truly desperate. Then AMD may be motivated to offer something
that every Linux user (worldwide) wants to go out and purchase...

just my opinion...

hth,
James