On Sun, 14 May 2017 02:59:41 +0100, lee <lee@××××××××.de> wrote:

> Kai Krakow <hurikhan77@×××××.com> writes:
>
> > On Sat, 29 Apr 2017 20:38:24 +0100, lee <lee@××××××××.de> wrote:
> >
> >> Kai Krakow <hurikhan77@×××××.com> writes:
> >>
> [...]
> >>
> >> Yes, I'm using it mostly for backups/copies.
> >>
> >> The problem is that ftp is ideal for the purpose, yet users find it
> >> too difficult to use, and nobody uses it. So there must be
> >> something else as good or better which is easier to use and which
> >> ppl do use.
> >
> > Well, I don't see how FTP is declining, except that it is
> > unencrypted. You can still use FTP with TLS handshaking, most sites
> > should support it these days but almost none forces correct
> > certificates because it is usually implemented wrong on the server
> > side (by giving you ftp.yourdomain.tld as the hostname instead of
> > ftp.hostingprovider.tld which the TLS cert has been issued for).
> > That makes it rather pointless to use. In Linux, lftp is one of the
> > few FTP clients supporting TLS out-of-the-box by default, plus it
> > forces correct certificates.
>
> These certificates are a very stupid thing. They are utterly
> complicated, you have to self-sign them which produces warnings, and
> they require to have the host name within them as if the host wasn't
> known by several different names.

Use Let's Encrypt then; as far as I know, you can add any number of
host names to one certificate. But you need a temporary web server to
prove ownership of the server/hostname and get the certificate signed.
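For example, with the certbot client and its built-in standalone web
server for the ownership proof (a rough sketch; the host names are
made-up placeholders, and this needs port 80 reachable and root
privileges):

```shell
# Request one certificate covering several host names at once.
# certbot spins up its own temporary web server for the ACME challenge.
certbot certonly --standalone \
  -d ftp.example.org -d www.example.org -d mail.example.org
# The signed certificate and key land under
# /etc/letsencrypt/live/ftp.example.org/
```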

> > But I found FTP being extra slow on small files, that's why I
> > suggested to use rsync instead. That means, where you could use
> > sftp (ssh+ftp), you can usually also use ssh+rsync which is
> > faster.
>
> That requires shell access.
>
> What do you consider "small files"? I haven't observed a slowdown
> like that, but I haven't been looking for it, either.

Transfer 10000 smallish files (like web assets or PHP files) to a
server with FTP, then try rsync. You should see a very big difference
in the time needed. That's because FTP negotiates a new data connection
for every single file, while rsync streams the whole tree over one
connection.
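A minimal local sketch of the rsync side (local directories stand in
for the remote server, so there is no network latency here; the point
is just that one rsync invocation moves the whole tree in one go):

```shell
# Create 1000 small files, then copy the whole tree with a single
# rsync run. Plain FTP would open a fresh data connection per file;
# rsync batches everything over one connection/process.
mkdir -p src dst
for i in $(seq 1 1000); do
    echo "asset $i" > "src/file$i.txt"
done
rsync -a src/ dst/      # one invocation for the whole tree
ls dst | wc -l          # → 1000
```

Against a real server you would use `rsync -az -e ssh src/ user@host:dst/`,
which is where the per-file connection overhead of FTP really shows.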

> > There's also the mirror command in lftp, which can be pretty fast,
> > too, on incremental updates, but still much slower than rsync.
> >
> >> I don't see how they would transfer files without ftp when ftp is
> >> the ideal solution.
> >
> > You simply don't. FTP is still there and used. If you see something
> > like "sftp" (ssh+ftp, not ftp+ssl which I would refer to as ftps),
> > this is usually only ftp wrapped into ssh for security reasons. It's
> > just using ftp through a tunnel, but at the core it's the ftp
> > protocol. In the end, it's not much different from scp, as ftp is
> > really just a special shell with some special commands to set up a
> > file transfer channel that's not prone to interacting with terminal
> > escape sequences in whatever way those may be implemented, something
> > that e.g. rzsz needs to work around.
> >
> > In the early BBS days, where you couldn't establish a second
> > transfer channel like FTP does it using TCP, you had to send
> > special escape sequences to put the terminal into file transfer
> > mode, and then send the file. By that time, you used rzsz from the
> > remote shell to initiate a file transfer. This is more the idea of
> > how scp implements a file transfer behind the scenes.
>
> IIRC, I used xmodem or something like that back then, and rzsz never
> worked.

Yes, or xmodem... ;-)

> > FTP also added some nice features like site-to-site transfers where
> > the data endpoints both are on remote sites, and your local site
> > only is the control channel. This directly transfers data from one
> > remote site to another without going through your local connection
> > (which may be slow due to the dial-up nature of most customer
> > internet connections).
>
> Interesting, I didn't know that. How do you do that?

You need a client that supports this; it's usually called FXP. I
remember LeechFTP for Windows supported it back then. The client logs
in to both FTP servers and then shuffles the right PASV/PORT commands
between them, so that the data connection is established directly
between the two servers.

That feature is also the reason why FTP looks so overly complicated
and plays so badly with firewalls. When FTP was designed, there was a
real need to transfer files directly between servers, as your own
connection was usually a slow modem link below 2400 baud, or some
other slow connection. Or even one that wouldn't transfer binary data
at all...
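The control-channel choreography behind such a transfer looks roughly
like this (a sketch with made-up addresses and a made-up file name;
real clients also handle login, TYPE, and reply parsing):

```
# Client holds two control connections, to server A and to server B.
Client -> A : PASV
A -> Client : 227 Entering Passive Mode (192,0,2,10,78,52)
Client -> B : PORT 192,0,2,10,78,52     # relay A's address/port to B
B -> Client : 200 PORT command successful
Client -> A : STOR file.bin             # A waits on its passive port
Client -> B : RETR file.bin             # B connects to A and sends
# Data flows B -> A directly; the client only sees control replies.
```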

> > Also, FTP is able to stream multiple files in a single connection
> > for transferring many small files, by using tar as the transport
> > format, thus reducing the overhead of establishing a new connection
> > per file. However, I know only a few clients that support that, and
> > even fewer servers that would work with it.
> >
> > FTP can be pretty powerful, as you see. It's just a victim of its
> > poor implementation in most FTP clients, which makes you feel it
> > has mostly declined. If wrapped into a more secure tunnel (TLS,
> > ssh), FTP is still a very good choice for transferring files,
> > though not the most efficient one. Depending on your use case, you
> > may be much better off using a more efficient protocol like rsync.
>
> So there isn't a better solution than ftp. That's good to know
> because I can say there isn't a better solution, and if ppl don't
> want to use it, they can send emails or DVDs.

It depends... It's a simple, well-supported protocol, easy to
implement on both the server and client sides. It's probably not the
most efficient one, but it works. And that's what counts.

Other, more modern protocols may work much better, have a richer
feature set, and are easy to use on the client side, too. But due to
the richer feature set, bigger attack surface, etc., they are usually
much more complicated to implement correctly on the server side. Look
at HTTPS with HTTP/1.1: it supports all sorts of things... encryption,
range transfers, resume, uploads, downloads, authentication (many
different implementations), you could even transfer checksums to see
if the files match... You need to implement all of this in the server
to be compliant, even if the client doesn't care. And it needs to be
patched for security updates because it is a big piece of software. A
simple FTP server, by contrast, has usually been hardened by its sheer
age; there are few security holes left to fix.

So your assumption "there isn't a better solution than ftp" doesn't
hold on its own. FTP may be the simplest solution for your use case,
but it's definitely not the best solution for transferring files if
you look at security, safety, or efficiency.


--
Regards,
Kai

Replies to list-only preferred.