Brian Micek wrote:

> I don't think you understand what I'm proposing. I am currently
> cat(1)ing /dev/urandom on TCP port 22 in hopes to discourage hackers who
> attempt to break into my system. Its beyond me how this is treading on
> dangerous ground, what systems I'll endanger or what is morally wrong
> with doing this.

If it's beyond you, then perhaps you need to do further research into how
things work before deploying your solution. Your stated goal was to cause
a core dump through spewage. To restate that, you want *to crash software
on a remote system.* Which is to say, you want to *cause damage to a
remote system.*

So, to explain this in a more tangible way, assume three hosts: A, B, and
C. A is you. B is an attacker. C is an innocent bystander.

It's possible, using several features of IP, for B to connect the output
of ports on A to ports on C. That is, B can create a connection from A to
C using legitimate TCP behaviors that neither C nor A would otherwise
have initiated.

Your "solution" of cat-ing /dev/urandom is, in effect, creating a binary
character generator *which never stops generating characters* (though it
will periodically delay in doing so, and it exhausts the true entropy on
your system, which is harmful if you have any real need for randomness:
cryptography, password generation, complex simulations, game-theory
decision models, etc.). For us old-timers... those of us who've been
around the block a few (hundred thousand) times... we remember the
earliest DoS attacks, which created connections from the chargen port to
the echo or discard ports on various machines, simply to consume
bandwidth and processor. It sounds like a great avenue of attack against
your "solution."
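A concrete way to see the "never stops generating characters" property (a
minimal sketch; the 1 MiB read size here is an arbitrary illustration of
mine, not anything from the setup under discussion):

```shell
# /dev/urandom never returns EOF: a read of N bytes always yields N bytes,
# no matter how many have already been consumed. Here we pull 1 MiB and
# count it; the count matches the request exactly.
head -c 1048576 /dev/urandom | wc -c
```

A chargen-style reflection attack exploits exactly this: every connection
the attacker forges toward such a port gets an unbounded stream back.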
Think a little broader. The reason I can level this criticism at all is
because you're looking only at a tiny subset of the consequences of your
technology. When one looks at a much broader range of possible outcomes
and possible MIS-uses of the technology, when one looks at the boundaries
of a problem statement and looks for how things will cross those
boundaries, that's how you create actual security and assurance against
adverse events.

There's a reason why pretty much every major security organization comes
down against "active response" (aka "strikeback" or "offensive response"
or "retribution" or, my personal favorite, "vengeance") strategies and
approaches. These strategies almost invariably lead to unintended
consequences that damage uninvolved third parties, consequences that are
predictable, preventable, and undesirable. That's what makes these
strategies a generally bad idea, and why security professionals argue
against them.

The line you don't want to cross has to do with sending responses to
someone else. If you want to stop them from talking to you, fine. If you
want to blacklist them from talking to your networks, fine. But when you
reach your hand back toward them, you cross the line and become part of
the problem, rather than part of the solution.

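The "blacklist them" alternative can be sketched as a firewall rule (an
illustrative fragment only; iptables is one common way to do it, and
203.0.113.5 is a placeholder address from the documentation range, not a
real host):

```shell
# Drop all inbound traffic from the offending address (requires root).
iptables -A INPUT -s 203.0.113.5 -j DROP
```

Note that DROP, unlike REJECT, discards packets silently: nothing at all
is sent back toward the attacker, which keeps you on the right side of
the line described above.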
-Bill
--
William Yang
wyang@××××.net
--
gentoo-security@g.o mailing list