Excellent... this is the kind of info I'm looking for. Your setup
isn't actually that different from what I'm looking at. I have about
200 gigs of video that need to be stored along with another 300 gigs
of various other multimedia - the video is all training video to be
streamed off our web server and the other stuff is our marketing and
media department's work files - basically their job archive. I'm not
looking at running a mail server or a database off of this right now;
I figured I'd start with something simpler and less I/O-intensive.
This is our 'proof of concept' SAN for all intents and purposes.

With regard to volume management, I found that I could use all the
normal Linux tricks with the exported LUNs - the client machine really
doesn't know it isn't a direct-attached SCSI disk. I'm considering
FalconStor IPStor for our SAN controller as it is Linux based (Red
Hat, unfortunately) and will handle everything from replication to
snapshots and backups. It basically has the same function as the
Cisco storage router, except it's more open and will work with pretty
much everything under the sun.
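To make the "normal Linux tricks" concrete, here's a sketch of what
carving up an exported LUN looks like on the client side. The device
name /dev/sdb and the volume group / logical volume names are
assumptions for illustration, not anyone's actual setup:

```shell
# The exported LUN appears as an ordinary SCSI disk on the initiator
# (assumed here to be /dev/sdb). The usual LVM commands work on it
# exactly as they would on local storage:
pvcreate /dev/sdb                    # label the LUN as an LVM physical volume
vgcreate san_vg /dev/sdb             # build a volume group on top of it
lvcreate -L 200G -n video san_vg     # logical volume for the training video
lvcreate -L 300G -n archive san_vg   # logical volume for the job archive
mkfs.ext3 /dev/san_vg/video          # filesystems go on as normal
mkfs.ext3 /dev/san_vg/archive
```

Growing a volume later is the usual lvextend-plus-filesystem-resize
dance on the client, though as the earlier messages below note, the
target side doesn't always handle resizing gracefully.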

Bryn


On Apr 20, 2004, at 3:14 PM, Kashani wrote:

> I just joined gentoo-server in the past few days and missed the
> original post, but I'll post some of our experiences so far with
> Linux iSCSI on the client side.
>
> Server side is a Cisco iSCSI 5400 with a RAID Inc backend. All
> volume management is in the Cisco gear and is roughly 8TB usable.
> I've got a single client machine, Redhat 9.0, using the RPMs from
> the SourceForge project. All interfaces are GigE.
>
> The client is running Helix Streaming Server 9.02 and Apache
> 2.0.49. Total outbound traffic is 70Mb/s at peak, with 45Mb/s being
> average. All content is served from the iSCSI array on a single LUN
> at this time. By this time next week I should have the other 3
> servers converted over and be serving 250Mb/s off the iSCSI SAN.
> All servers are dual PIII 1GHz or better, have a dedicated interface
> for the iSCSI LAN, and 4GB of RAM.
>
> Tested with jumbo frames and without on the iSCSI LAN. Jumbo
> frames increased throughput 5-10% and decreased CPU overhead by at
> least 60%. CPU on the Cisco side dropped only slightly, so it was
> generally unaffected; I'm assuming they do most of the TCP stuff in
> hardware for performance. I also used ethtool to set "autoneg on",
> which enabled flow control. Cisco recommended that as a way to
> decrease segment retransmits, shown by netstat -s | grep seg. This
> almost eliminated bad segments and dropped segment retransmits by a
> factor of 10. Service has been stable other than Real Server
> problems with the Helix build on Redhat 9.
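The tuning described above boils down to a handful of commands on the
initiator. A sketch, assuming the dedicated iSCSI interface is eth1
(substitute your own), and keeping in mind that jumbo frames have to
be supported end to end - NICs, switches, and the target:

```shell
# Enable jumbo frames on the dedicated iSCSI interface
# (every hop on the iSCSI LAN must handle the larger MTU):
ifconfig eth1 mtu 9000

# Re-enable autonegotiation so flow control gets negotiated,
# per Cisco's recommendation:
ethtool -s eth1 autoneg on

# Check the pause (flow control) parameters that were negotiated:
ethtool -a eth1

# Watch segment retransmits before and after the change:
netstat -s | grep seg
```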
>
> Some musings about our system and how it may not resemble yours:
>
> Our system is 100% reads off the SAN, and long ones at that. I
> have noticed that when I use the current client machine to rsync new
> data to the SAN, iSCSI CPU usage increases dramatically. Doing lots
> of reads alongside significant writes also increases CPU usage more
> than the data transfer alone would suggest.
> I currently have one client going to one LUN out of four. The
> plan is to spread content evenly across the SAN. I'm not sure how
> this will affect the client and server sides, especially once I add
> the other 3 client machines.
> None of the files we're serving are smaller than 300MB, so
> caching content in RAM never happens. I don't know if webserving off
> a SAN with smaller files would see less SAN traffic once
> Linux/Apache caches content.
> We're doing almost no volume management. I need to talk to the
> admin to set it up, so I have no idea if it's easier under Cisco or
> how the client reacts to changes.
>
> This was probably a bit outside the data you were looking for,
> but if you want to hear more, shoot me an email. I'll probably have
> another update early next week.
>
> kashani
>
> On Tue, 20 Apr 2004, Bryn Hughes wrote:
>
>> Doesn't look like I've had any bites out there so I'll share my
>> findings at least for any future googlers...
>>
>> The good news is iSCSI on Linux works. I have been able to export
>> block devices from one Linux machine and mount them on another.
>> Apparently the Windows drivers are well tested too, but I don't
>> have any Windows machines to test with. The initiator sees the
>> exported iSCSI LUN as if it were a locally attached SCSI disk. If
>> you export an LVM volume it appears to the remote machine as a
>> local SCSI disk and you can do all the usual stuff to it - i.e. you
>> need to either create a partition table or use LVM on the remote
>> machine to portion out your virtual device.
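For the partition-table route mentioned above, a minimal sketch on
the initiator side - again assuming the new LUN shows up as /dev/sdb;
check dmesg for the actual device name on your machine:

```shell
# Confirm the kernel sees the new iSCSI disk:
dmesg | grep -i scsi
cat /proc/scsi/scsi

# Then partition, format, and mount it like any local disk:
fdisk /dev/sdb              # interactively create e.g. /dev/sdb1
mkfs.ext3 /dev/sdb1
mkdir -p /mnt/iscsi
mount /dev/sdb1 /mnt/iscsi
```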
>>
>> The bad news is the iSCSI target driver is lacking a few key
>> functions. It doesn't seem to handle resizing volumes very well - I
>> tried resizing an exported LVM volume and all hell broke loose. The
>> biggest problem, however, is that it doesn't appear to let you make
>> configuration changes without restarting the target daemon. This is
>> a major problem, as it means you have to unmount _ALL_ iSCSI
>> volumes before you can add a new one on the target server. This
>> wouldn't make for much of a SAN if every time you wanted to make a
>> configuration change you had to down all your servers!
>>
>> So here's hoping that more developers will join in on the iSCSI
>> target project listed below at ardistech.com. There's definitely
>> something brewing here which will make SANs a whole lot cheaper and
>> easier to implement for the average environment. Give it some more
>> development time and it will actually be viable!
>>
>> Bryn
>>
>>
>> On Apr 15, 2004, at 6:48 PM, Bryn Hughes wrote:
>>
>>> I'm looking for experiences implementing iSCSI on Linux - not
>>> just as an initiator but as a target as well. I've found these
>>> two projects:
>>>
>>> This is the iSCSI initiator project started by Cisco:
>>> http://sourceforge.net/projects/linux-iscsi
>>>
>>> This is the ardistech iSCSI target project:
>>> http://www.ardistech.com/iscsi/
>>>
>>> In theory it is possible to create iSCSI volumes using EVMS or
>>> similar and then mount them as block devices over ethernet on
>>> another host (could be Windows, Novell, Linux, OS X, whatever).
>>> Basically you can build yourself a SAN using free software and
>>> normal ethernet (obviously gigabit preferred).
>> --
>> ~
>> ~
>> :wq
>>
--
~
~
:wq