Without the pornography industry the Internet wouldn’t be half as big and popular as it currently is. Okay, maybe “half as big” is a stretch. After all, if it weren’t for e-commerce, I’d actually have to get dressed, leave the house, and go to three different stores to fulfill my deodorant, wine, and apparel needs (THANK YOU Amazon Prime). That’s pretty cool. However, that doesn’t change the fact that there’s an awful lot of network traffic dedicated to surfing for things on the Internet that are best left unmentioned in a family-friendly blog post such as this one. As network and systems administrators it behooves us to get a rough idea of what’s consuming our bandwidth and where our traffic is traveling to and from.
Lucky for us there’s already a protocol out there for exactly this purpose. Cisco has “NetFlow”, other vendors support “sFlow”, and at this point a standard called IPFIX has emerged. If we harken back to our high school reading of Shakespeare, to quote Juliet, “A [network analysis protocol] by any other name would smell just as sweet”. In other words, it doesn’t matter what you call it; they all do roughly the same thing. For our purposes here we’ll call it “Flow”.
Flow provides the ability to collect basic information about IP network traffic as it enters or exits an interface. By analyzing the data provided, we can determine things like source, destination, class of service, and causes of congestion.
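To make the “basic information” concrete, here is a minimal sketch (in Python, with hypothetical field values) of the kind of record a Flow exporter emits for each conversation it observes. The field names and the `FlowRecord` class are illustrative, not any vendor’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    # Core fields most Flow variants report per observed flow
    src_ip: str      # source address
    dst_ip: str      # destination address
    src_port: int
    dst_port: int
    protocol: int    # IP protocol number, e.g. 6 = TCP, 17 = UDP
    tos: int         # type-of-service / DSCP byte (class of service)
    byte_count: int  # total bytes seen for this flow
    packet_count: int

# A hypothetical HTTPS flow from a workstation to a web server
rec = FlowRecord("10.1.1.25", "192.0.2.80", 51514, 443, 6, 0, 120_000, 95)
print(f"{rec.src_ip} -> {rec.dst_ip}: {rec.byte_count} bytes")
```

A collector receives thousands of these records and rolls them up to answer questions about who is talking to whom, over what service, and how much.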
Statistics are generated based on the interfaces configured to export traffic information to a listening/collector system. On its face it may seem like the best way for this collector system to slice, dice, and present the data it receives is via a per-interface view. However, there are a number of reasons why looking at this kind of data from a site/subnet perspective is superior.
First, it’s important to ask what problem Flow is trying to solve for you. Are you more interested in seeing what devices your traffic traverses or simply that file server data transfers between your New York and San Francisco offices are abnormally large compared to other remote sites? The latter is considerably easier to represent and explain to your user community than the former.
Second, you want to consider how much horsepower and bandwidth you have dedicated to tracking traffic statistics. If you’re aggregating traffic on a site/subnet basis your collector system and network are consuming much less of both. No such luxury exists if your system has to track and process that information on every interface from every router on your network.
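To illustrate why site/subnet aggregation is cheaper, here is a small Python sketch that collapses per-flow records into per-site byte totals. The site names and subnets are made-up examples; the point is that thousands of individual flows reduce to a handful of site-pair counters:

```python
import ipaddress
from collections import defaultdict

# Hypothetical site subnets (illustrative addressing plan)
SITES = {
    "New York": ipaddress.ip_network("10.1.0.0/16"),
    "San Francisco": ipaddress.ip_network("10.2.0.0/16"),
}

def site_of(ip: str) -> str:
    """Map an IP address to the site whose subnet contains it."""
    addr = ipaddress.ip_address(ip)
    for name, net in SITES.items():
        if addr in net:
            return name
    return "other"

def aggregate(flows):
    """Collapse (src_ip, dst_ip, bytes) flow tuples into site-pair totals."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        totals[(site_of(src), site_of(dst))] += nbytes
    return dict(totals)

flows = [
    ("10.1.4.7", "10.2.9.1", 500_000),   # NY -> SF
    ("10.1.8.2", "10.2.3.3", 250_000),   # NY -> SF
    ("10.2.1.1", "10.1.5.5", 100_000),   # SF -> NY
]
print(aggregate(flows))
# {('New York', 'San Francisco'): 750000, ('San Francisco', 'New York'): 100000}
```

Three flows become two counters here; in production, millions of flows become a few hundred site pairs, which is far less for your collector and your network to carry.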
The final reason is configuration burden. Consider a scenario where your WAN architecture is a traditional “hub ‘n spoke” layout. You have five remote sites and one core that most traffic travels to/from. Using the best practice rule where you configure the least number of Flow exporters for a maximum view of your traffic means you’d only have to configure Flow on your core router interfaces. You’re still getting a fairly complete picture of your bandwidth quantity/application mix without the hassle and potential for errors that arise when you have to configure your core router plus all your remote interfaces. Yes, this is a small five-site example. What if your WAN is 30 remote sites? Or 300? Or 5000?
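To give a feel for how little configuration the hub-only approach requires, here is a sketch of what enabling export on a single core router interface might look like using Cisco Flexible NetFlow. The collector address, port, and interface name are hypothetical placeholders, and your platform’s exact syntax may differ:

```
! Define what to match and collect per flow
flow record SITE-RECORD
 match ipv4 source address
 match ipv4 destination address
 collect counter bytes

! Point exports at the (hypothetical) collector
flow exporter SITE-EXPORTER
 destination 10.0.0.50
 transport udp 2055

! Tie record and exporter together
flow monitor SITE-MONITOR
 record SITE-RECORD
 exporter SITE-EXPORTER

! Apply only on the core's WAN-facing interface
interface GigabitEthernet0/1
 ip flow monitor SITE-MONITOR input
```

In the hub-and-spoke case, this one block on the core sees nearly all inter-site traffic; the per-interface alternative means repeating it on every spoke router, multiplied across 30, 300, or 5000 sites.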
It’s our job as network engineers and administrators to make sure information is flowing quickly and efficiently across our networks. Flow is a fantastic tool that we can deploy to help us meet that objective. And, it’s got the added benefit of telling us which users are doing the most shopping, streaming the most music, and, of course, surfing for the most … unmentionable stuff out in cyberspace.
See how Netreo delivers value and automates network traffic analysis