I just joined gentoo-server in the past few days and missed the
original post, but I'll post some of our experiences so far with Linux
iSCSI on the client side.

Server side is Cisco iSCSI 5400 with RAID Inc backend. All
volume management is in the Cisco gear and is roughly 8TB usable. I've got
a single client machine, Redhat 9.0, using the RPMs from the sourceforge
project. All interfaces are GigE.
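For anyone setting up the same stack, the sourceforge initiator is
driven by /etc/iscsi.conf. A minimal sketch of what our setup amounts to
(the portal address below is made up, not our real SAN, and I'm leaving
out any CHAP auth lines):

```shell
# Point the initiator at the storage router's discovery portal
# (192.168.10.1 is an example address, not our real one)
cat >> /etc/iscsi.conf <<'EOF'
DiscoveryAddress=192.168.10.1
EOF

# Start the initiator; discovered LUNs show up as ordinary SCSI disks
service iscsi start
cat /proc/scsi/scsi
```

After that the LUN is just another /dev/sd* device as far as the client
is concerned.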

The client is running Helix Streaming server 9.02 and Apache
2.0.49. Total outbound traffic is 70Mb/s at peak with 45Mb/s being
average. All content is served from the iSCSI array on a single LUN at
this time. By this time next week I should have the other 3 servers
converted over and be serving 250Mb/s off the iSCSI SAN. All servers are
dual PIII 1GHz or better, have a dedicated interface for the iSCSI lan,
and 4GB of RAM.

Tested with jumbo frames and without on the iSCSI lan. Jumbo
frames increased performance 5-10% with regards to throughput and
decreased CPU overhead by at least 60%. CPU on the Cisco side did drop
slightly, but was generally unaffected. I'm assuming they do most of the
TCP stuff in hardware for performance. I also used ethtool to set "autoneg
on" which enabled flow control. Cisco recommended that as a way to
decrease segment retransmits, shown by netstat -s | grep seg. This did
almost eliminate bad segments and dropped segment retrans by a factor of
10. Service has been stable other than Real Server problems with the Helix
build on Redhat 9.
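The tuning above boils down to a few commands on the client. Sketch
follows; eth1 as the dedicated iSCSI interface is an assumption, and the
switch and the Cisco box both need matching MTU settings for jumbo
frames to do anything:

```shell
# Enable jumbo frames on the dedicated iSCSI interface
# (switch ports and the storage router must also be set for 9000 MTU)
ifconfig eth1 mtu 9000

# Re-run autonegotiation, which also negotiates 802.3x flow control
ethtool -s eth1 autoneg on

# Confirm pause (flow control) settings took effect
ethtool -a eth1

# Compare retransmitted/bad segment counts before and after
netstat -s | grep seg
```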

Some musings about our system and how it may not resemble yours:

Our system is 100% reads, long ones as well, off the SAN. I have
noticed that when using the current client machine to rsync new data to
the SAN, iSCSI CPU usage increases dramatically. Doing lots of
reads with significant writes also increases CPU usage more than the data
transfer would suggest.
I currently have one client going to one LUN out of four. The plan
is to spread content evenly on the SAN. I'm not sure how this will affect
the client and server sides, especially once I add the other 3 client
machines.
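Once all four LUNs are live, each just appears as another SCSI disk on
the client side. A sketch of what the fstab entries might look like
(device names and mount points are assumptions; "noauto" because the
iSCSI session has to be up before these can mount, so the init scripts
mount them after login rather than at the normal boot-time fsck):

```shell
# Example /etc/fstab entries for four iSCSI LUNs (names assumed)
/dev/sda1   /content/lun0   ext3   noauto,defaults   0 0
/dev/sdb1   /content/lun1   ext3   noauto,defaults   0 0
/dev/sdc1   /content/lun2   ext3   noauto,defaults   0 0
/dev/sdd1   /content/lun3   ext3   noauto,defaults   0 0
```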
None of the files we're serving are smaller than 300MB, so caching
content in RAM never happens. I don't know if webserving off a SAN with
smaller files would see less SAN traffic when Linux/Apache caches content.
We're doing almost no volume management. I need to talk to the
admin to set it up, so I have no idea if it's easier under Cisco or how
the client reacts to changes.

This was probably a bit outside the data you were looking for, but
if you want to hear more, shoot me an email. I'll probably have another
update early next week.

kashani

On Tue, 20 Apr 2004, Bryn Hughes wrote:

> Doesn't look like I've had any bites out there so I'll share my
> findings at least for any future googlers...
>
> The good news is iSCSI on Linux works. I have been able to export
> block devices from one Linux machine and mount them on another.
> Apparently the Windows drivers are well tested too but I don't have any
> Windows machines to test with. The initiator sees the exported iSCSI
> LUN as if it were a locally attached SCSI disk. If you export an LVM
> volume it appears to the remote machine as a local SCSI disk and you
> can do all the usual stuff to it - i.e. you need to either create a
> partition table or use LVM on the remote machine to portion out your
> virtual device.
>
> The bad news is the iSCSI target driver is lacking a few key functions.
> It doesn't seem to handle resizing volumes very well - I tried
> resizing an exported LVM volume and all hell broke loose. The biggest
> problem however is that it doesn't appear to let you make configuration
> changes without restarting the target daemon. This is a major problem
> as it means you have to unmount _ALL_ iSCSI volumes before you can add
> a new one on the target server. This wouldn't make for much of a SAN
> if every time you wanted to make a configuration change you had to down
> all your servers!
>
> So here's hoping that more developers will join in on the iSCSI target
> project listed below at ardistech.com. There's definitely something
> brewing here which will make SANs a whole lot cheaper and easier to
> implement for the average environment. Give it some more development
> time and it will actually be viable!
>
> Bryn
>
>
> On Apr 15, 2004, at 6:48 PM, Bryn Hughes wrote:
>
> > I'm looking for experiences implementing iSCSI on Linux - not just as
> > an initiator but as a target as well. I've found these two projects:
> >
> > This is the iSCSI initiator project started by Cisco
> > http://sourceforge.net/projects/linux-iscsi
> >
> > This is the ardistech iSCSI target project
> > http://www.ardistech.com/iscsi/
> >
> > In theory it is possible to create iSCSI volumes using EVMS or similar
> > and then mount them as block devices over ethernet to another host
> > (could be Windows, Novell, Linux, OS X, whatever). Basically you can
> > build yourself a SAN using free software and normal ethernet
> > (obviously gigabit preferred).
> --
> ~
> ~
> :wq
>
>