Monday, December 14, 2009

Securing Web Content

Former ICSI visitor Joakim Koskela recently presented a joint paper on securing web content at the Re-Architecting the Internet workshop held at CoNext 2009. The paper is available as:
  • Joakim Koskela, Nicholas Weaver, Andrei Gurtov, Mark Allman. Securing Web Content. ACM CoNext Workshop on ReArchitecting the Internet (ReArch), December 2009.

Friday, November 13, 2009

CCS'09 paper on automatic protocol reverse-engineering

At this week's CCS conference we presented a technique for automating protocol reverse-engineering from executable programs and its application to botnet C&C protocols. This is joint work with the BitBlaze team at UC Berkeley. MIT Technology Review has published an article on our work.

Thursday, November 12, 2009

IMC '09 Paper on Characterizing Residential Broadband Traffic

Last week at IMC we presented initial work on characterizing residential broadband traffic. The paper is:

IMC '09 Paper on Calibrating Enterprise Packet Trace Measurements

Last week we presented a paper at IMC on calibrating a set of packet traces taken by simultaneously tapping multiple switch ports within a large enterprise. We present a set of calibration techniques, discuss the pitfalls of not calibrating such packet traces, and give an initial traffic breakdown for LBNL's enterprise network. The paper is:

Thursday, September 17, 2009

Bro Tutorial at ACSAC

A heads-up for folks interested in learning more about using Bro effectively: In addition to the Bro workshop next month, we will also be giving a one-day Bro tutorial at this year's ACSAC conference in Honolulu, Hawaii.

Friday, August 21, 2009

Postdoctoral Fellowship Opening

The International Computer Science Institute (ICSI) invites applications for a postdoctoral Fellow position in the area of high-performance network security monitoring. The Fellow will be working with ICSI's networking group on designing, implementing, and evaluating novel approaches to highly concurrent network traffic analyses in large-scale network environments. The work will focus on exploiting the concurrency potential of both commodity and special-purpose hardware platforms, as well as on building novel programming & execution environments tailored to the target domain.

See the full job description for information on how to apply.

Thursday, August 13, 2009

Bro Workshop Registration Open

Registration for the next Bro Workshop is now open. See the previous blog posting for more information.

Monday, July 27, 2009

Bro Workshop 2009, the 2nd.

Update: See the workshop's web page for more information.

The Bro team and the Lawrence Berkeley National Lab are pleased to announce a further "Bro Workshop", a 2.5-day Bro training event that will take place in Berkeley, CA, on October 13-15, 2009.

The workshop is primarily targeted at site security personnel wishing to learn more about how Bro works, how to use its scripting language and how to generally customize the system based on a site's local policy.

Similar to previous workshops, the agenda will be an informal mix of tutorial-style presentations and hands-on lab sessions. No prior knowledge of Bro is assumed, though attendees should be familiar with Unix shell usage as well as with typical networking tools such as tcpdump and Wireshark.

All participants are expected to bring a Unix-based (Linux, Mac OS X, FreeBSD) laptop with a working Bro configuration. We will provide sample trace files to work with.

This workshop will again be hosted by the Lawrence Berkeley National Lab, and it will be located at the Hotel Durant in Berkeley. We will soon provide a web site with more detailed registration and location information. To facilitate a productive lab environment, the number of attendees will be limited to 30 people. A registration fee of $125 will be charged.

Monday, June 8, 2009

Introducing the ICSI Netalyzr

Today we're very happy to announce the public availability of the ICSI Netalyzr. Our goal was to build a service that shows you in detail what's up with your network connection, whatever network you might find yourself in, whenever something's not working, or when you're simply curious. Netalyzr's numerous tests include HTTP proxy discovery, HTTP caching behavior, NAT detection, TCP & UDP port filtering, DNS resolver behavior, IPv6 connectivity, and measurements of connection latency, bandwidth, and buffer properties, among others.

All you need is a Java-enabled browser and a visit to http://netalyzr.icsi.berkeley.edu.

We hope you'll find the site as useful as we do. We're very keen to hear your feedback, whether it's interesting results, suggestions for improvements, or any issues you've encountered.

Go forth and netalyze!

Thursday, April 23, 2009

LEET'09 paper on orchestration of spamming campaigns

At yesterday's LEET'09 workshop we presented an inside look at how spammers orchestrate their campaigns, based on a 10-month infiltration of the Storm botnet. This is joint work with UCSD as part of our CCIED effort.

Monday, April 13, 2009

User-Oriented Networking Talk at FIND PI Meeting

Slides from a talk at the NSF FIND PI meeting last week:

Wednesday, April 1, 2009

New Paper on Efficient Application Placement in Large WWW Apps

The following paper is about techniques for aiding systems that swap large applications in and out of use (e.g., generic platforms for web applications). It will be presented at WWW this month:

New Paper on Ephemeral Port Selection

The following paper on the efficacy of various ways to generate obscure ephemeral ports appears this month:

Thursday, February 19, 2009

Summer Internship Applications Now Being Accepted

The Networking Group is now accepting applications for Summer 2009 internships. Applicants should be Ph.D. students with a solid background in networking and/or security. To apply, send a resume to summer@icir.org, and arrange for a letter of reference to be sent to that address too. The deadline is Monday, March 2nd, 2009.

Friday, January 9, 2009

How to Report a Bro Problem

Generally, when you see Bro doing something you believe it shouldn't, the best thing to do is to open a ticket in the Bro tracker, including information on how to reproduce the issue. In particular, your ticket should come with the following:

  • The Bro version you're using (if you're working directly from the Subversion repository, the branch and revision number).

  • A small trace in libpcap format demonstrating the effect (assuming the problem doesn't already occur right at startup).

  • The command-line you're using to run Bro with the trace; see the example sketch after this list. (Please run the Bro binary directly rather than using the bro.rc wrapper from the BroLite environment.)

  • Any non-standard scripts you're using (but please only those necessary; ideally just a small code snippet).

  • The output you're seeing, along with a description of what you'd expect Bro to do instead.

  • If you encounter a crash, information from the core dump, such as a stack backtrace, can be very helpful. See below for more on this.
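
For example, a minimal offline reproduction might look like the following sketch; the trace and script names are placeholders for whatever triggers the problem on your end:

    # Run the Bro binary directly on the recorded trace, loading only the
    # scripts needed to trigger the problem.
    bro -r small.trace <your-scripts>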

It is crucial for us to have a way of reliably reproducing the effect you're seeing. Unfortunately, reproducing problems with Bro can be rather tricky because, more often than not, they occur only in very rare situations or only after Bro has been running for some time. In particular, getting a small trace showing a particular effect can be a real problem. In the following, I'll summarize some strategies to this end.

How Do I Get a Trace File?

Since Bro is usually running live, coming up with a small trace file can turn out to be a challenge. Often it works best to start with a large trace that triggers the problem and then successively thin it out as much as possible.

To get to the initial, large trace, here are a few things you can try:

  • Capture a trace with tcpdump, either on the same interface Bro is running on or on another host where you can generate traffic of the kind likely to trigger the problem (e.g., if you're seeing problems with the HTTP analyzer, record some of your Web browsing on your desktop). When using tcpdump, don't forget to record complete packets (tcpdump -s 0 ...).

    You can reduce the amount of traffic captured by using the same BPF filter as Bro. If you add print-filter to Bro's command-line, it will print its BPF filter to stdout, which you can copy over to tcpdump; see the sketch after this list.

  • Bro's command-line option -w <trace> records all packets processed by Bro to the given trace file. You can then later run Bro offline on this trace, and it will process the packets in the same way as it did live. This is particularly helpful with problems that only occur after Bro has been running for some time. For example, sometimes crashes are triggered by a particular kind of traffic that occurs only rarely. Running Bro live with -w and then, after the crash, offline on the recorded trace might, with a little bit of luck, reproduce the problem reliably.

    However, be careful with -w: it can result in huge trace files, quickly filling up your disk. (One way to mitigate the space issues is to periodically delete the trace file by configuring rotate-logs.bro accordingly.)

  • Finally, you can try running Bro on some publicly available trace files, such as anonymized FTP traffic, headers-only enterprise traffic, or Defcon traffic. Some of these particularly stress certain components of Bro (e.g., the Defcon traces contain tons of scans).
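
To make the first two suggestions concrete, a capture session could look like the sketch below; the interface, script, and file names are placeholders, and the filter is simply whatever Bro prints:

    # Print the BPF filter Bro would use (add your usual scripts as well).
    bro print-filter <your-scripts>

    # Capture complete packets on Bro's interface using that same filter.
    tcpdump -i <interface> -s 0 -w large.trace '<filter printed above>'

    # Alternatively, let Bro itself record every packet it processes.
    bro -i <interface> -w recorded.trace <your-scripts>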

Once you have a trace which demonstrates the effect, you will often notice that it's pretty big, in particular if recorded from the link you're monitoring. Therefore, the next step is to shrink its size as much as possible. Here are a few things you can try to this end:

  • Very often, a single connection is able to demonstrate the problem. If you can identify which one it is (e.g., from one of Bro's *.log files), you can extract the connection's packets from the trace with tcpdump by filtering for its 4-tuple of addresses and ports:

    tcpdump -r large.trace -w small.trace \
       host <ip1> and port <port1> \
       and host <ip2> and port <port2>
    

  • If you can't reduce the problem to a connection, try to identify either a host pair or a single host triggering it, and filter down the trace accordingly.

  • You can try to extract a smaller time slice from the trace using the tcpslice utility. For example, to extract the first 100 seconds from the trace:

    tcpslice +100 <in >out
    

    Alternatively, tcpdump extracts the first n packets with its option -c <n>.
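
    For example, to keep just the first 1000 packets of a large trace:

    tcpdump -r large.trace -w small.trace -c 1000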

Getting More Information After a Crash

If Bro crashes, a core dump can be very helpful to nail down the problem. Examining a core is not for the faint of heart but can reveal extremely useful information ...

First, you should configure Bro with the option --enable-debug and recompile; this will disable all compiler optimizations and thus make the core dump more useful. (Don't expect great performance from this version, though; compiling Bro without optimization has a noticeable impact on its CPU usage.) Then enable core dumps if you haven't already (e.g., ulimit -c unlimited if you're using bash).
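
As a rough sketch, assuming a standard source build and a bash shell, the sequence could look like this:

    # Rebuild Bro without compiler optimizations so core dumps are more useful.
    ./configure --enable-debug
    make

    # Allow core files to be written in the current shell (bash syntax).
    ulimit -c unlimited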

Once Bro has crashed, start gdb with the Bro binary and the file containing the dump. (Alternatively, you can also run Bro directly inside gdb instead of working from a core file.) The first helpful information to include with your tracker ticket is a stack backtrace, which you get with gdb's bt command:

gdb bro core
[...]
> bt
....

If the crash occurs inside Bro's script interpreter, the next thing to do is to identify the line of script code processed just before the abnormal termination. Look for methods in the stack backtrace that belong to any of the script interpreter's classes; roughly speaking, these are all classes with names ending in Expr, Stmt, or Val. Then climb up the stack with gdb's up command until you reach the first of these methods. The object this points to will have a Location object, which in turn contains the file name and line number of the corresponding piece of script code. Continuing the example from above, here's how to get that information:

>up
>...
>up
>print this->location->filename
>print this->location->first_line

If the crash occurs while processing input packets but you cannot directly tell which connection is responsible (and thus cannot extract its packets from the trace as suggested above), try getting the 4-tuple of the connection currently being processed from the core dump. To this end, again examine the stack backtrace, this time looking for methods belonging to the Connection class. The Connection class has members orig_addr/resp_addr and orig_port/resp_port, storing (pointers to) the IP addresses and ports, respectively:

>up
>...
>up
>printf "%08x:%04x %08x:%04x\n", \
    *this->orig_addr, this->orig_port, \
    *this->resp_addr, this->resp_port

Note that these values are stored in network byte order, so you will need to flip the bytes around if you are on a little-endian machine (which is why the above example prints them in hex). For example, if an IP address prints as 0100007f, that's 127.0.0.1.