Date: Tue, 12 Oct 2004 11:29:30 -0400 (EDT)
From: Victor Danilchenko <danilche@cs.umass.edu>
To: secureshell@securityfocus.com
Subject: Re: OpenSSH -- a way to block recurrent login failures?
Further update, in case anyone cares:
I have implemented the client/server functionality via
server-push. It won't scale well for large installations, but for medium
or small ones, server-push works much better than client-pull.
Basically, the clients try to contact the server each time they blacklist
a new host, and the server maintains an aggregated blacklist. Each time
the aggregated blacklist is updated (when a blacklisting request is made
by three individual clients), the updated blacklist is pushed out to all
the clients -- the server splits the list of clients into a number of
queues, and forks a child to handle the distribution to each queue. The
list of clients is constructed by the expedient of simply registering
the IP of every host that attempts a connection to the server. It's
rather simplistic, but it's been working fine on my network.
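The aggregation scheme described above (blacklist an IP once three
distinct clients have reported it, and build the client list from
whoever contacts the server) might be sketched roughly as follows. This
is an illustrative Python sketch under those assumptions, not the actual
sshd_sentry code; all names are hypothetical:

```python
# Toy sketch of the server-side aggregation described above. An IP
# enters the shared blacklist once three distinct clients report it;
# the client list is simply every host that has contacted the server.
REPORT_THRESHOLD = 3

class BlacklistAggregator:
    def __init__(self):
        self.reports = {}       # bad_ip -> set of reporting client IPs
        self.blacklist = set()  # aggregated blacklist pushed to clients
        self.clients = set()    # every host that has contacted the server

    def handle_report(self, client_ip, bad_ip):
        """Register a client's blacklist report; return True when the
        aggregated blacklist changed and should be pushed out."""
        self.clients.add(client_ip)  # client list built from callers
        reporters = self.reports.setdefault(bad_ip, set())
        reporters.add(client_ip)
        if len(reporters) >= REPORT_THRESHOLD and bad_ip not in self.blacklist:
            self.blacklist.add(bad_ip)
            return True
        return False

def split_into_queues(clients, n_queues):
    """Split the client list into n roughly equal queues, one per
    forked distribution child (as the message describes)."""
    ordered = sorted(clients)
    return [ordered[i::n_queues] for i in range(n_queues)]
```

The threshold of three matches the message; a real server would also
need the push transport and the forking logic, which are omitted here.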
Note that this is an alpha-grade release, and the server will
dump a good deal of info (I run it in a terminal in the foreground). I
haven't even gotten around to adding an explicit verbosity flag.
The code is at http://phobos.cs.umass.edu/~danilche/sshd_sentry
-- there's the server code, the client code, and also the SRPM
containing the client and the startup script. Note that my SRPM symlinks
the client into /etc/cron.hourly -- this is for our specific
installation; feel free to remove that line from the spec file before
building your own, should you wish to use the RPM.
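The per-client mechanism the server aggregates (described in the quoted
message below: watch the sshd logs and add offending hosts to
/etc/hosts.deny once they cross a failure threshold) might be sketched
like this. The function names, the log regex, and the five-failure
threshold are illustrative assumptions, not the attached Perl script:

```python
import re

FAIL_THRESHOLD = 5  # arbitrary; the real threshold is configurable
# Matches a typical OpenSSH failure line such as:
#   "Failed password for root from 192.0.2.7 port 4242 ssh2"
FAILED_LOGIN = re.compile(r"Failed password for .+ from (\S+)")

def scan_log_lines(lines, counts=None):
    """Count failed logins per host across log lines; return the set
    of hosts that crossed the threshold and should be denied."""
    counts = counts if counts is not None else {}
    to_deny = set()
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            ip = m.group(1)
            counts[ip] = counts.get(ip, 0) + 1
            if counts[ip] >= FAIL_THRESHOLD:
                to_deny.add(ip)
    return to_deny

def deny_entry(ip):
    """The line to append to /etc/hosts.deny for a TCP-wrapped sshd."""
    return f"sshd: {ip}"
```

A real watcher would also tail the log continuously and expire old
entries; this only shows the counting and the hosts.deny line format.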
On Mon, 27 Sep 2004, Victor Danilchenko wrote:
>On Tue, 21 Sep 2004, Victor Danilchenko wrote:
>
>> Hi,
>>
>> We are looking for a way to temporarily block hosts from which
>>we receive a given number of sequential failed login attempts, not
>>necessarily within the same SSH session (so MaxAuthTries is not
>>enough).
>>The best solution I could come up with so far would be to run OpenSSH
>>through TCPWrappers, and set up a log watcher daemon which would edit
>>/etc/hosts.deny on the fly based on the tracked number of failed
>>logins
>>for each logged host.
>>
>> Is there a better solution known for the sort of problems we
>>have been plagued with lately -- repeated brute-force crack attempts
>>from remote hosts? I looked on FreshMeat and I searched the mailing
>>lists, only to come up empty-handed.
>>
>> Thanks in advance,
>
> Thanks to all who replied with the suggestions. Alas, none of
>them were quite suitable.
>
> The IPTables manipulation is a fine idea, but we need a solution
>that runs in a very heterogeneous environment. At the very least, we
>are looking at protecting Red Hat Linux, OS X, and Solaris systems.
>
> Portsentry is IMO a little too complicated to deploy easily across
>a wide range of systems -- we need a fire-and-forget solution (ideally
>a simple modification to the sshd_config file, but that obviously is
>not in the cards).
>
> In the end, I wrote a Perl script that solved the problem
>the brute way -- tail the SSHD logs, and modify /etc/hosts.deny on the
>fly. The script is attached, should anyone here find it useful. The
>next logical step would be to turn this into a distributed solution
>where blacklists could be shared among individual nodes. It would be
>nice to have a DNS-based blacklisting solution eventually, similar to
>how spamming can be handled by MTAs...
>
> Note that the attached script has been stripped of our
>information before being posted, so a typo or two may have crept in
>somewhere during the cleanup editing. It currently works on OS X and
>Linux; I haven't yet added the code to make it work on Solaris.
>
>