Some recent security stuff
At my Day Job I've been doing a lot of work lately improving the security of
our networks. We recently started deploying snort sensors in a variety of
places, which started off just as something done on my own time because I
wanted better visibility into traffic inside our firewalls, not just edge
traffic. Within a few
weeks, however, it detected a blackhat "pivoting" - using one system to
compromise and move to another one. :-( Bad juju.
However, it underscored the need (to management) for improved security. So
we embarked on a campaign and started testing and evaluating a whole variety
of things (commercial and open-source). I'd already been doing DNS filtering
(see here for a separate write-up on it). However,
we also took OpenDNS for a test-drive. I
was skeptical at first, but I gotta say, it's been treating us well. They've
got a lot more DNS query data to look at than I do and are doing what I'd
do with RPZ if I had their data and their resources. :-) It's been quite
effective.
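(For the curious: RPZ - a "response policy zone" - is basically a zone file
full of known-bad names you feed your resolver so it lies about them,
typically rewriting the answer to NXDOMAIN. Here's a minimal sketch of the
kind of thing I mean, turning a plain-text blocklist into an RPZ zone for
BIND. The blocklist file name and zone name are made up for illustration.)

    #!/usr/bin/env python3
    # Minimal sketch: build a BIND RPZ zone from a plain-text blocklist.
    # "bad-domains.txt" and the zone name "rpz.local" are hypothetical.
    import time

    ZONE = "rpz.local"
    SERIAL = int(time.time())  # epoch seconds make a simple increasing serial

    HEADER = (
        "$TTL 300\n"
        f"@ IN SOA localhost. admin.{ZONE}. {SERIAL} 3600 600 86400 300\n"
        "@ IN NS localhost.\n"
    )

    with open("bad-domains.txt") as src, open(f"db.{ZONE}", "w") as zone:
        zone.write(HEADER)
        for line in src:
            domain = line.strip()
            if not domain or domain.startswith("#"):
                continue
            # "CNAME ." is the RPZ action for "rewrite this answer to
            # NXDOMAIN"; the wildcard entry catches subdomains too.
            zone.write(f"{domain} CNAME .\n")
            zone.write(f"*.{domain} CNAME .\n")

Point BIND at the resulting zone with a response-policy statement and you've
got a poor man's OpenDNS.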
In addition to blocking bad things, I'm also using OpenDNS to better detect
systems infected with bots (making DNS queries for known-bad things) or
infected with adware/spyware.
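A trivial version of that detection is just sifting your resolver's query
logs against the same blocklist. A minimal sketch - the file names are
hypothetical and the regex assumes a BIND-style query log, so adjust to
taste:

    #!/usr/bin/env python3
    # Minimal sketch: flag internal hosts querying known-bad domains.
    # "queries.log" and "bad-domains.txt" are hypothetical names, and the
    # regex assumes a BIND-style query log line such as:
    #   ... client 10.1.2.3#53124 ... query: evil.example IN A ...
    import re

    with open("bad-domains.txt") as f:
        bad = {line.strip().lower().rstrip(".") for line in f if line.strip()}

    pattern = re.compile(r"client ([\d.]+)#\d+.*query: (\S+) IN")

    with open("queries.log") as log:
        for line in log:
            m = pattern.search(line)
            if not m:
                continue
            client, qname = m.group(1), m.group(2).rstrip(".").lower()
            # flag exact matches and subdomains of blocklisted names
            if qname in bad or any(qname.endswith("." + d) for d in bad):
                print(f"possible infection: {client} looked up {qname}")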
We also started using Bluecoat web
filtering. It has also done quite well at blocking hostile websites. It
won't catch everything (such as some zero-day phish URLs, though it does
a fair job even catching those), but they respond quickly to updates - I
can submit a URL to them as "phishing" or "malnet" or whatever and they
start blocking it shortly thereafter. I'm also working on detecting phish
more quickly and automatically submitting those URLs to Bluecoat.
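The automation I have in mind is simple. A rough sketch below, with the big
caveat that the submission endpoint, API key, and feed file are all
hypothetical (Bluecoat's actual review/submission interface may well differ);
it's just the shape of the thing:

    #!/usr/bin/env python3
    # Rough sketch: push newly detected phish URLs to a URL-filtering
    # vendor. SUBMIT_URL, API_KEY, and "phish-urls.txt" are all
    # hypothetical; swap in whatever interface your vendor provides.
    import requests  # third-party: pip install requests

    SUBMIT_URL = "https://sitereview.example.com/api/submit"  # hypothetical
    API_KEY = "REPLACE_ME"

    with open("phish-urls.txt") as feed:
        for url in feed:
            url = url.strip()
            if not url:
                continue
            resp = requests.post(
                SUBMIT_URL,
                data={"url": url, "category": "phishing"},
                headers={"Authorization": f"Bearer {API_KEY}"},
                timeout=10,
            )
            resp.raise_for_status()
            print(f"submitted {url}")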
Additionally, I did a "bake-off" of the intrusion detection systems of two
different providers - Lastline and
Cyphort. Both have pros 'n cons and
I don't want to come right out 'n say one is better than the other. The
Lastline solution is clearly more mature and polished. But on the other hand,
the Cyphort solution is already detecting some Mac OS X malware as well.
In my testing, they both performed admirably. One thing that stood out was
that no one solution detects everything. We also have Palo Alto Firewalls
and a FireEye, and there's some overlap in which problems each solution
detects. For instance, most of the commercial solutions still seem best
at detecting malware being downloaded or emailed, but snort is still better
at detecting bad behavior of malware already installed or systems already
infected. The pivot we detected last year? I re-tested using that same
mechanism and nobody picked it up as suspicious activity except for snort
with the emerging-threats ruleset. Doh.
This isn't to say the commercial solutions are inferior - just different.
They're far superior at detecting hostile content being downloaded by the
users (or getting past the spam/phish filters). Some will even interface
with Bluecoat and/or OpenDNS so that when they detect (for instance) a phish
with a hostile attachment or a URL proven to point at hostile content, they'll
let OpenDNS and/or Bluecoat know and start blocking that immediately.
They'll also let you upload arbitrary binaries to their analysis engines,
which execute them in a virtual, instrumented environment (or even simulate
the execution so the malware is less able to detect that it's being watched)
and then analyze and document any suspect behavior.
What this means is I can see what DNS queries a piece of
malware will make or what IPs it tries to talk to and then go sift DNS query
logs or firewall logs to look for that activity anywhere else in my network.
Or I can use a log-watching tool like "oak" to flag that
activity as it happens. Nice!
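The real oak is driven by its own config file rather than code, but here's a
rough sketch of the idea - tail a log and holler whenever a sandbox-derived
IOC shows up. The IOC feed and log path are made up:

    #!/usr/bin/env python3
    # Rough sketch of an oak-style watcher: follow a log file and alert
    # whenever a line contains one of the IOCs (IPs/domains) pulled from
    # a sandbox report. "iocs.txt" and the log path are hypothetical.
    import time

    LOG = "/var/log/firewall.log"  # hypothetical path

    with open("iocs.txt") as f:
        iocs = [line.strip() for line in f if line.strip()]

    with open(LOG) as log:
        log.seek(0, 2)  # start at end of file, a la tail -f
        while True:
            line = log.readline()
            if not line:
                time.sleep(1)
                continue
            for ioc in iocs:
                if ioc in line:
                    print(f"IOC {ioc} seen: {line.strip()}")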
Some want to be set up inline. The Palo Alto guys, for instance, really
would prefer you use their appliance as a real firewall. Then they can
proactively block anything they deem hostile. However, we've seen enough
false positives with the PAFW's anti-virus checks that there's no way I'm
prepared to do that. Not to mention it would mean using it to replace
other existing firewalls AND routers if you wanted it to see all of the
traffic you really should be monitoring.
In an ideal world, one would monitor all traffic everywhere. However,
this just isn't practical - especially in a company that makes
supercomputers that move huge amounts of data. :-) But what I'm telling
anyone who will listen is the days of monitoring just your edge traffic
are gone. Done with. Over. Insufficient. We really need to monitor (at least)
all WAN traffic, as well as any traffic that crosses any security boundary
(i.e., all traffic to/from other offices, to/from DMZ segments, between
different DMZ segments, to/from VPN clients, to/from the internet, etc).
And that brings up a separate issue. Once upon a time, it was sufficient
to put a firewall between the outside world and your DMZs, between your DMZs
and your inside networks, and that was that. Not anymore. We really need
to start doing what some college campuses do. IT at a college usually
understands that their users are their biggest threat. So in addition
to firewalling at the edge, they also often do firewalling between
end-user segments and server segments. In today's world, businesses should
be firewalling at the edge, at security boundaries, and also between users
and servers, between different departments, etc. For instance, in a rational
IT world there's no reason a user in Sales should be able to access
Engineering servers. If there's some info on that Engineering server that Sales
needs access to, then the data is in the wrong place and it should go on some
shared server in a network segment separate from the Engineering nets.
Why? See, today's bad actors are not attacking the services you expose
to the internet so much anymore. Oh, they still try, sure, but most
businesses do at least a fair job of defending those systems/services.
Instead, they usually get one of your users to
hand over the keys to their laptop/desktop with a phishing email or a
drive-by or watering-hole attack. Once they've got control of some user's
system they can use it to proxy traffic, launch other attacks, scan your
interior nets, whatever they want. So frankly, your users ARE the
threat in today's world. The business organization may trust them, but the
IT infrastructure must NOT. And by segmenting your interior networks and
firewalling between them, and exposing only what you must, you A) limit the
damage a compromised user system can do and B) increase the chance you
detect that the system is compromised before it can do too much
damage (because you'll be putting intrusion detection sensors on
each of those firewall ports, right, monitoring all traffic that crosses
any security boundary?).
Next on my hitlist will be even better logging. We already do quite a bit
of logging but searching those logs can be tedious and time-consuming, and
there's very little event correlation going on. I use a tool called oak to
watch for "interesting" things in various logs, but it's a stop-gap effort
at best. There are commercial solutions both for logging and for watching
those logs and correlating events to an attack or single actor. They're also
dreadfully expensive. And at SGI everything MUST be built to scale. So
in my case I gotta be able to keep the log data distributed (so I don't have
to try to shove it all over the WAN to a single point), but it needs to be
indexed and centrally searchable, and ideally there should be some system
watching those logs against a battery of rules/fingerprints/whatever.
Allegedly, logstash+kibana+elasticsearch will solve the logging/searching
problem for me. I just haven't had time to try to set it up. And I've heard
good things about a tool called "prelude" for the analysis/correlation side
of things, but haven't had a chance to spin that up and try it out...
yet...
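(For a taste of the "centrally searchable" part: once logstash is shipping
logs into elasticsearch, hunting for an IOC across every site should be one
query. A minimal sketch - the host, the example IOC, and the "message" field
are assumptions on my part; "logstash-*" is just logstash's default index
pattern:)

    #!/usr/bin/env python3
    # Minimal sketch: hunt for an IOC across centrally indexed logs,
    # assuming logstash has shipped them into elasticsearch. The host,
    # the IOC, and the "message" field are assumptions; "logstash-*" is
    # the default logstash index pattern.
    from elasticsearch import Elasticsearch  # pip install elasticsearch

    es = Elasticsearch("http://localhost:9200")  # hypothetical central node

    resp = es.search(
        index="logstash-*",
        query={"match": {"message": "evil.example"}},  # the IOC to hunt for
        size=50,
    )
    for hit in resp["hits"]["hits"]:
        print(hit["_source"].get("message", ""))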