Save Your Network – Protecting Manufacturing Data from Deadly Breaches

Today we're going to take a look at threats specific to manufacturing that are spilling over into biotech and food production as well. Real quick on the agenda, we're going to describe the problems specific to manufacturing. I'll hit on some network behavioral anomaly detection definitions so that we understand the context of what I'm talking about. We'll look at how to protect those things that are being targeted for theft inside of manufacturing. We'll look at the risk of insider threat and how to address that, and also the importance of audit trails for doing incident response on the tail end of an event. Time permitting, we'll look at StealthWatch as a solution in these areas. Real quick, on the problem statement, there are two major areas of risk, or targets, that affect manufacturing that don't really apply to a lot of other industries. Intellectual property, design documents, those types of things, blueprints are big money. That's been true for as long as there has been manufacturing.

Not just in the things being delivered, but in the ways that different manufacturers are able to create efficiencies around manufacturing. How can you reduce the amount of time, the amount of cost? Of course, that has an impact on the competitiveness of the products being delivered. On the other side, one thing that has become increasingly important is protecting M&A data. Manufacturers by nature have to acquire smaller organizations, either to get the efficiencies that that small organization has, acquire their blueprints, or deal with the changes in the market. Acquisitions, more so than mergers, are a routine part of life inside of manufacturing. You can imagine that for a smaller organization that's being acquired, the acquisition is the end game for them. It's the biggest thing that's going to happen in their lives. They can't be acquired a second time. As a result, we're seeing acquisition targets increasingly attacking, not necessarily through hacking mechanisms, but through social engineering, through bribery, to encourage someone inside of the acquiring organization to provide competitive intelligence.

It can be used to make the acquisition more favorable for the organization. We've seen that several times in the last couple of years with data being stolen, so that when you go to the negotiating table, the acquisition target has more useful information, or has a better negotiating position than the acquiring organization. Of course, that has a big impact on the bottom line of that acquisition. Who is attacking? Who are the aggressors in this scenario? The first, and probably the least important, are the activists, the Anonymous of the world, and those that would want to hurt manufacturing. We've seen this a couple of times in the last few years, where an executive said something that they shouldn't have said, or manufacturing processes are aggravating the activists.

They feel they need to have a conversation about it, and [inaudible 03:41] the conversations are stealing information and publishing it to shame the manufacturer, to shame the organization, or denial of service to hurt their bottom line. That is a thing that happens. You see that, but it's a lot less scary than the rest of these. Competitors and acquisition targets are an issue inside of manufacturing. Those that want the blueprints want the information to allow and enable them to be more competitive. Maybe they're not even stealing the blueprint, but knowing what the blueprint is would allow them to create something to leapfrog over the targeted organization. We have seen nation state attacks against manufacturers. The PLA attribution from Mandiant a couple of years ago saw China stealing intellectual property from manufacturers to make Chinese-based manufacturers more competitive globally. That's a known problem that we have in different geopolitical situations.

Then you have the insider threat. That's someone on the inside who is working for the organization. They have a number of reasons to hurt their employer or the organization. It could be money. It could be ideology. It could be disgruntlement; they're upset. It could be that they're being bribed, or that they have a geopolitical leaning towards one of these others. When we look at those aggressors, and then when you start looking at it from the vulnerabilities that manufacturing has, one of them is the porous access to data. With things like PCI scopes or even HIPAA data, you can more granularly control who has access to the data. With intellectual property, whether it's blueprints, formulas, or M&A data, people have to interact with that data to collect it, to catalog it, to improve it, to create next year's design. Because of that access, it's difficult to control and monitor how that information is being used.

You also have partners, distributors, and contractors that are accessing the data in some manner. In global manufacturing, you have geo-distribution of those teams all over the world accessing that same set of data. That becomes more of an issue or [inaudible 06:28] an issue when you look at how diverse manufacturing networks are. You have all the problems that all the other enterprises have with BYOD, laptops, and mobile devices, diverse types of endpoints. You also have manufacturing devices that may be running proprietary code, may be running code that's out of support, or, in recent months, may be running on Windows XP, which has reached end of life and end of support from Microsoft. How do you deal with all of those different types of devices? Those devices are connected and largely able to communicate with each other, with the typical Internet of Things types of problems.

You have many types of users, more so than we've seen in many other enterprise organizations: the engineers, the folks on the floor. There's contractors and reps. There's a lot of issues there. Patching and supporting these small batch or limited use applications is much harder than the more broadly used applications. There's not a lot of guidance on how to harden these devices. Because of that, you have a lot of different ways to get into the network. You can break in through something that's doing welding. A welding machine might be a way to get deep into the network, and then controlling that access is the challenge. In addressing the detection piece, I'm going to go over network behavioral anomaly detection as a school of thought. NBAD is a form of IDS, intrusion detection.

Dr. Dorothy Denning, when she coined the term IDS in 1986, broke up the schools of detection into signature, anomaly, and behavioral based, and we'll look at how that applies in a second. Network behavioral anomaly detection is not so concerned with what is in a packet, but how many packets of that type there are. You're counting a lot of things. You start counting, for instance, how many times an engineer normally accesses the blueprint data in a monitored period of time. You calculate that across all the engineers and create a mathematical baseline for what's normal, and then you start counting each user's accesses and compare them to that baseline. That's, in a nutshell, how NBAD works. Because we're counting packets, NetFlow, with the counters in the NetFlow export, became a really great source of this data, because we can essentially convert the entire network into a surveillance probe for doing this type of monitoring. It allows us to see the east-west conversations, not just the things going across chokepoints.

However, sometimes NBAD, or even NBA, which is similar to NBAD or a subset of NBAD, gets lumped in with NetFlow security tools. Just because something doesn't have flow doesn't mean it's not operating at an NBAD level. If you're not familiar with NetFlow, it's metadata of communications going across an interface. NetFlow proper was invented by Cisco. NetFlow version 9 has an informational RFC, 3954. IPFIX, which is sometimes called NetFlow version 10, really is the open standard for this kind of metadata logging. NetFlow isn't sFlow; sFlow samples one out of every N packets. NetFlow is a log entry about a communication crossing an interface on a router or a switch or something along those lines. Other vendors aside from Cisco have their own brands of flow export that are, at a practical level, very much the same as NetFlow.

[inaudible 10:32] they're listed there. Here on the lower section, you can see it's just the time it started and the interface it's on, this IP to this IP, this port to this port. There's a lot of fields that you can put in a NetFlow record. Just think of it in general terms as a metalog of communications. I mentioned a minute ago that Dr. Denning came up with three ways of detecting any type of breach, whether it's on the network or on the endpoint. We tend to think about detection as signature-based detection. Signature-based detection is when we look at the object before it enters the subject, before it enters the target. For instance, a signature-based IDS is looking for a string of patterns. If you see those patterns, you know that attack is associated with a SQL injection attack against CVE XYZ, whatever it is.
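Going back to the flow fields for a moment, here's a rough feel for what such a metalog entry carries, as a plain data structure. The field names are mine for illustration, not an actual NetFlow v9 or IPFIX template:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """A minimal sketch of the kind of fields a flow record carries.
    Real NetFlow/IPFIX exports define many more fields than this."""
    start_time: str
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    bytes_sent: int
    packets: int

# One illustrative conversation: a workstation talking to a server over HTTPS
rec = FlowRecord("2015-06-01T09:14:03", "10.1.4.22", "10.9.0.5",
                 51344, 443, "TCP", 18204, 37)
print(f"{rec.src_ip}:{rec.src_port} -> {rec.dst_ip}:{rec.dst_port} "
      f"({rec.protocol}, {rec.bytes_sent} bytes)")
```

Note there's no payload here at all; it's who spoke to whom, when, and how much.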

In the physical security realm, this would be akin to running something through an X-ray machine and looking for something that looks like a bomb. It has certain patterns to it. In even older times, if you're feeding someone poison, and you can smell the poison or see the poison, you're inspecting the object. Behavioral based detection is once the object has been consumed by the target, you watch the behaviors of the target, looking for known bad behaviors. In that oldest scenario, you'd give the wine to the cupbearer for the king. He would drink the wine, and then you would watch him to see if he did something that's known bad, like dying, passing out, choking, those types of things. From that, you're able to determine that the object must have been bad, because the behavior that occurred in the target subsequently was also bad. In a network perspective, sandboxing does this.

You put a file inside of the sandbox, and then you watch what happens when the file is executed. From a network perspective, what you're doing is looking for known bad behaviors like scanning the network, trying to break through a firewall, or using ports or protocols that shouldn't be used. Those types of things that are known bad. Malware sandboxing does this. NBAD does this. Host intrusion protection systems do some level of this. Security event logging solutions like SIEMs are designed to do things like this as well, looking for things like machines on the network that are brute forcing passwords, or doing some other activities that are known bad behaviors. Anomaly based detection is similar to behavioral based detection in that you're watching the behavior of the victim, but you're watching the behavior of the victim against known good behavior.

In the physical security realm, if someone always shows up at work at 7 o'clock in the morning, but then all of a sudden they don't, they show up at 8 o'clock or 9 o'clock or the next day, that's anomalous. It's deviating from what is known to be normal. On the network side, if desktops communicate with email servers over POP3 all day long, and then all of a sudden one of them starts talking over SSH or doing something that's strange, the construction of the packets is different, it's collecting too much data. Anything that's mathematically abnormal is really where an anomaly fits in. When we look at how these things fit together, the way we write new signatures is through behavioral based detection. When a 0-day piece of malware makes it into the network, we might not have known what the malware was, but the actions that it took were known bad.

We had analyzed that file and created a new signature. In the same way, anomaly detection creates behavioral checks. If there's a new bad behavior that we begin to see in the network or in the host, what happens with anomaly detection is it's something we don't recognize. We analyze it. We figure out that that behavior was attached to data hoarding or data saving or whatever it may be, and we create a behavioral signature. That's the lifecycle of threat detection. As I have here, [inaudible 15:02] it boils down to this: signature based detection is the best, most reliable way of catching known exploits. 0-days are best caught with behavioral based detection, either sandboxing or network behavioral detection. Then there's credential abuse. What I mean by this is where there's nothing to detect. A user is abusing their privilege to do something, like Edward Snowden.

He was hired by Booz Allen to work on behalf of the NSA to administer the data. He had access to these machines. There is a normal amount of data that he would collect, and there's an abnormal amount. If you look at all the Booz Allen contractors at the NSA in that similar role, and you look at how much data they move, it might be 10 megs or 20 megs of data exchanged in a day. After he had collected several gigabytes, or even close to a terabyte, of data, that's anomalous. It's mathematically wrong. There's no other way to detect those types of attacks, because you can't throw a flag that he connected to the machine. He was supposed to. There was no malware used. There was no malware to detect, either behaviorally or otherwise. There was no known bad behavior. He was supposed to upload and download data. It's a matter of mathematical deviation here. When I talk about credential abuse, malware-less attacks, there's no malware.

It's authorized users, or a type of malware that's otherwise theoretically undetectable. A challenge you run into with behavioral and anomaly detection that you don't have with signature based detection is that there are many shades of abnormal. I gave the example of the guy that always shows up at work at 7. Then he shows up at 7:05. That is anomalous, but you certainly don't call the police the minute he's late. When do you do that? At what point is it so mathematically strange? Or maybe there's multiple events that indicate that something is wrong, that would lead you to take an action like calling the police or his family or something along those lines. Signature detection can be Boolean; if the pattern is matched, you can run some additional checks on it, but the output of signature-based Boolean detection is always a green light or a red light. It's a logical high or a logical low, 1 or 0. When you're dealing with things that are different shades, infinite shades of gray if you will, inside of anomaly detection, you have to have a way to reliably deal with those different shades.

When do you alarm people? The way that I advocate is using algorithmic detection through indexes, or indices. You assign an index, and in the case of StealthWatch, you use something called a concern index; it's assigned to every entity, every host on the network. You start out in the monitoring period with a score of zero. If a host does something that's mathematically abnormal, but only slightly abnormal, it might accumulate a few points or increment the index a few points. If it does something that's radically far from normal, it'll accumulate more points quickly. If it's doing things that are known bad, like scanning the network, as I have up here, or pivoting activity, those points might accumulate very fast. The cumulative odd behavior gets to a point that, based upon the risk of the host, requires an alarm. Some hosts we'll be more concerned about. Hosts that have access to intellectual property or acquisition information will have a much lower threshold, so maybe at 10,000 points, an alarm is triggered.

An incident response is begun, whereas someone on a quarantined or isolated guest network might have a million points, or two million points, as the threshold before we start caring. That's how you do this effectively in practice: algorithmic detection. Algorithmic meaning, how do we increment the counters? Those counters become indexes that are compared against the risk of those hosts. There's a handful of things here; this is not an exhaustive list. We've talked about some of these things today. Manufacturers also manufacture for governments. I've spent time in the Department of Defense, and we trusted several different manufacturers to produce hardware for us that would be useful. If that fell into the hands of other nation states, it could be used against us in a combat situation. We had state secrets in this. We had competitive information with M&A. We have research and intellectual property. This data exists somewhere inside of the network. These are the crown jewels. When I say crown jewels, I mean things we don't want stolen, that we can't have stolen.
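Going back to the concern index mechanics for a moment, a toy sketch of that index-based scoring might look like this. The point values and thresholds are invented for illustration; they are not StealthWatch's actual numbers:

```python
# Each host accumulates points for abnormal or known-bad behavior; an alarm
# fires when the cumulative score crosses a per-host, risk-based threshold.

EVENT_POINTS = {
    "slightly_abnormal": 50,
    "radically_abnormal": 1000,
    "network_scan": 5000,  # known-bad behavior accumulates points fast
}

THRESHOLDS = {
    "crown_jewel_access": 10_000,  # low tolerance for high-risk hosts
    "guest_network": 1_000_000,    # quarantined guests can score far higher
}

def update_index(scores, host, event):
    """Increment a host's concern-index score for an observed event."""
    scores[host] = scores.get(host, 0) + EVENT_POINTS[event]
    return scores[host]

def should_alarm(scores, host, risk_class):
    """Alarm when the cumulative score crosses the host's risk threshold."""
    return scores.get(host, 0) >= THRESHOLDS[risk_class]

scores = {}
update_index(scores, "eng-ws-12", "slightly_abnormal")
update_index(scores, "eng-ws-12", "network_scan")
update_index(scores, "eng-ws-12", "network_scan")
print(should_alarm(scores, "eng-ws-12", "crown_jewel_access"))  # True: 10,050 points
print(should_alarm(scores, "eng-ws-12", "guest_network"))       # False: well under a million
```

The design point is that the same odd behavior triggers an alarm on a crown-jewel host long before it would on a guest device.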

There's cost. We sort of already hit on this. Some of these aren't applicable to manufacturing. Criminals want to make money, activists want to shame you, and state-sponsored attackers want to improve their geopolitical stance, whether it's against your nation or it's improving the economic competitiveness of their own manufacturers. Inside of monitoring for crown jewels, we don't really care about the perimeter as much. We do care; it's not irrelevant. What's really important is monitoring the east-west traffic. How are the workstations communicating with the servers? How are the servers communicating with the servers? How is data being accessed? The first thing we have to do is log and monitor every communication inside the network. Of course, as brought up, NetFlow is a great way of doing that. We were working with a global manufacturer earlier this month. We were able to turn on North American monitoring in less than 30 minutes to see virtually every conversation happening in North America, whether it's inside the factories or inside of office locations or the rest.

We were collecting and analyzing all of that data very quickly just by enabling that on different parts of the network, the routers and switches, the firewalls. We flipped on that data, and we were able to start analyzing those communications. Once you have the data coming in, to do network behavioral anomaly detection, you have to do something called enclaving. Inside of StealthWatch we call that host groups. You create something called crown jewels. You wouldn't call it crown jewels; you'd probably call it design documents or acquisition data, and we would put into these buckets, these enclaves, the host groups, wherever the data is. Which machine houses the data? Then we would map out how that information is being accessed. Inside of StealthWatch specifically, we can count every byte that a host is collecting. [inaudible 22:39] baseline, we compare that to normal. Suspect data hoarding is when a host collects an amount of data that is mathematically anomalous compared to its baseline and its risk.

If we only expected an engineer to touch 50 megs of data in a day, and he just downloaded 50 gigabytes of data in a day, that would trigger data hoarding. It might not be from one source. It might be that he's collecting it from many different sources around the network. The corollary to that is target data hoarding, which is where we're monitoring how much data is being taken off of the server. It's looking at how much information each server is giving to multiple destinations. This is important when an attacker tries to circumvent monitoring by taking control of many endpoints and using those endpoints as internal bots to go and collect the data. We also look at total data. How much total data is being exchanged between hosts? Then data loss is counting how many bytes every host sends out of the network.
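The two hoarding checks can be sketched roughly like this. The multiplier, baselines, and host addresses are illustrative assumptions, not the actual StealthWatch math:

```python
def suspect_data_hoarding(bytes_pulled, baseline_bytes, multiplier=100):
    """A host pulled far more data (possibly from many sources) than its
    peer-group baseline allows."""
    return bytes_pulled > baseline_bytes * multiplier

def target_data_hoarding(served, baseline_per_dest, max_destinations):
    """A server handed out data to unusually many destinations: the pattern
    of many internal bots each collecting a small slice."""
    total = sum(served.values())
    return (len(served) > max_destinations
            or total > baseline_per_dest * max_destinations)

# Engineer baselined at 50 MB/day just pulled 50 GB
print(suspect_data_hoarding(50_000_000_000, 50_000_000))  # True

# Blueprint server normally serves ~10 clients a day; today 40 hosts each took a slice
served = {f"10.2.0.{i}": 900_000_000 for i in range(1, 41)}
print(target_data_hoarding(served, 1_000_000_000, 10))  # True
```

Either check fires on the same theft whether it is one greedy workstation or many quiet ones.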

How much information is being exfiltrated over a monitored period of time? The other way of protecting the data is validating segmentation. Once you know that this is where the data is, these are the folks that should access the data, and these are the machines that are providing support, whether it's pushing patches or doing DNS or something along those lines to these in-scope machines, the only place we should see data is on these triangles. Here, actually on this line. They might download things from the internet. You have these hosts talking, these hosts talking, and these hosts talking. We baseline what is normal, but before we even baseline what is normal, we would say there should never be traffic here. There should never be any traffic here. These are logical mappings. We don't really care how the data leaves. I was working with a customer who had data in a quarantined segment like this, and the attackers were able to steal credentials from the administrator and create a VPN tunnel, an L2TP tunnel, to bypass the segmentation using those stolen credentials.

You don't want to watch for this type of exfiltration at the physical level, at the network level. You want to look at it at a logical level. I don't care how the data goes from here to here. It should never do that. We're going to monitor on that, and if it ever happens, if segmentation is ever broken, if policies were violated, an alarm would come out of that, using what we call a host lock violation or segmentation violation. Same thing here. These have a threshold of zero. These do have thresholds, but we're going to baseline what is normal. What's the normal amount of data that goes between these in a day? If that becomes excessive, we're going to alert on that. We're not just validating that nothing should happen, but when things should happen, making sure that it stays within the realm of normal, that the privileges aren't being abused. What that allows us to do is calculate events, like a segmentation violation from crown jewels authorized to an invalid server.

This host, this user, spoke to someone he should not have. We were able to trigger those events very quickly. That's from the crown jewel perspective. When we look at another piece of this, it's the insider. When we're talking about the insider, from a computer science perspective it falls into three different buckets. You have the actual human person, and by the way, thanks to my boss, Tom Kennedy, for letting me use his face. He just looks like the perfect insider. You have the human being that can be compromised. You have the credentials, meaning how they actually communicate with these endpoints: the password, the certificate, however this computer knows that this person is connected to the machine. Then you have the actual machine. Each one of these can be targeted for exploit by the adversaries. When you look at the person, everything that's wrong in the human condition can be exploited as a vulnerability to convert this person.
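Before moving on to the insider, here's a bare-bones sketch of that logical, policy-plus-baseline segmentation check. The group names and byte thresholds are invented for illustration:

```python
# Pairs with a zero threshold alarm on any traffic at all; pairs with a
# nonzero threshold alarm only when traffic exceeds the baselined normal.

POLICY = {
    ("guest_network", "crown_jewels"): 0,               # never any traffic
    ("internet", "crown_jewels"): 0,                    # never any traffic
    ("engineering_ws", "crown_jewels"): 5_000_000_000,  # baselined: ~5 GB/day is normal
}

def check_segmentation(src_group, dst_group, bytes_seen):
    """Return a violation label, 'ok', or None if no policy covers the pair."""
    threshold = POLICY.get((src_group, dst_group))
    if threshold is None:
        return None
    if bytes_seen > threshold:
        return "host lock / segmentation violation"
    return "ok"

print(check_segmentation("guest_network", "crown_jewels", 1_200))
print(check_segmentation("engineering_ws", "crown_jewels", 2_000_000_000))
```

Note the check never asks *how* the bytes got there, tunnel or not; any byte crossing a forbidden pair is a violation.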

I mentioned M&A. Just a few months ago, I was working with a manufacturer who had this issue where someone internally was being bribed to provide information. That falls under the vulnerability of financial hardship and greed, and then you just bribe. Give them money. Why go through all the trouble of trying to craft a sophisticated cyberattack when $1,000 or $10,000 worth of bribes will do? Everything else here: fear, loneliness. You can kidnap people. You can take pictures, extortion. This is not an exhaustive list, but really any way that you can con or manipulate a human being is a fault or vulnerability for those inside the network. On the credentials threat, those are normally passwords. Passwords are a horrible way to do authentication, but they're also the most common way.

You can brute force it. We've gone through the scenario. For weak passwords, with the computation we have today, a counterpart and I built a machine for $380 that can go through 4 million hashes a second, trying to crack a password. That, a few years ago, would have been very expensive. Now pretty much anyone can crack a password, or rather crack hashes, and brute force a password. It's becoming increasingly easy to do that. Another thing is we share a lot of information. Human beings share information on Facebook and other social media that helps you figure out what their personal dictionary might be. If you know the person's wife is Sherry, you add Sherry to the dictionary. If you know they like fishing, you add fishing terms into the dictionary. Doing a hybrid of social media harvesting and dictionary building allows you to crack a password in a fraction of the time that brute force alone would take. Then there's multi-domain usage of passwords.
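To show why that hybrid approach is so much cheaper than brute force, here's a toy sketch of targeted dictionary building. The names and suffix list are illustrative assumptions:

```python
from itertools import product

def hybrid_dictionary(personal_terms, suffixes=("", "1", "123", "!", "2015")):
    """Build a targeted guess list from terms harvested off social media
    (spouse's name, hobbies) plus common suffixes; the result is a handful
    of guesses instead of a brute-force keyspace of trillions."""
    candidates = set()
    for term, suffix in product(personal_terms, suffixes):
        candidates.add(term + suffix)
        candidates.add(term.capitalize() + suffix)
    return candidates

words = hybrid_dictionary(["sherry", "fishing", "bassboat"])
print(len(words))              # 30 guesses total
print("Sherry123" in words)    # True
```

Thirty guesses against 4 million hashes a second is over before it starts, which is the speaker's point about why reuse of personal terms is so dangerous.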

If someone's using the same password at work that they use on a website on the internet, you'd attack the internet website. You'd do a SQL injection or some other type of attack, and you have their password, or their hashes, which give you their password. Then you retry those passwords against their work credentials. We see that increasingly occurring. Then there are keyloggers; analog-to-digital conversion means someone is typing it in. Either you just watch them type it in, someone looking over the cubicle wall, or you can put a camera up to record these things. GoPros are very cheap, so if you want to steal someone's credentials in your office, you can just get an HD or a 4K camera, point it at someone's keyboard, and record it very simply that way. It's a very cheap exploit to steal credentials. Then the credentials are in transit, in transmission. I type them into the computer.

They're going to go to a server. If you have some compromised point in the middle that can log those credentials in transit, you can extract those. Now, instead of having the user do the bad things, you can pose as the user. By the way, I'm going in order from the worst possible thing to the least bad. Compromising a user gives you a long list of possibilities. There's a shorter list of possibilities when you just have their credentials. Then we have the endpoint, with yet a shorter list. When the endpoint is compromised, the human being downloads something they shouldn't, they click on something, they install something they shouldn't. Machines communicate over ports; you can use those open ports to infect a machine with things like malware worms. Supply chain is something that's come up in the news. The machine is being shipped to you.

It may be compromised, or the installation media may be compromised, to put controlware on that machine that beacons out to the attackers. Then there's an insider who wants to implicate someone else's endpoint: if the guy leaves the machine unlocked, you've got a compromised insider who uses his colleague's machine. It looks like the colleague is participating in the attack. In these talks, I'm always supposed to deliver statistics because it validates the point, but some of these statistics are pretty much impossible right now. When you deal with insiders, there's no malware to track. If it's actually the real insider, especially a technical insider, he is in a very good position to cover his own tracks and destroy evidence. An insider has a better understanding of what the response would look like, because he's inside and has access to information that others wouldn't.

He knows people and the rest. There's an increasing availability of tools and information that enables an insider to get away with it, to get away with the theft. Finally, from a law enforcement perspective, it becomes difficult, unless you're doing some very smart logging, to actually attribute the attack to a human being. How do you know it just wasn't the endpoint that was compromised, or that his credentials weren't stolen? How can you, at a legal level, prove that it was the human being? That said, some of [inaudible 33:26] are hard. The only thing that I could find that would be useful in this conversation is something from Carnegie Mellon CERT. They looked at the breakdown of the insider threats that were actually detected. These were the detected insider threats. IT sabotage is the angry guy who blows things up when he leaves. We'll leave that one off the table for right now. For someone that's looking for financial gain, they're being bribed.

They can sell the information, that type of thing. Seventy-five percent of those detected hacks were done with credentials. Eighty-five percent of those 75% were done with their own credentials. Almost none of them compromised an account. They weren't using someone else's account. If they did, it was one of those situations where someone showed up and used the keyboard of someone else. This was pretty surprising: almost all, or 84%, of those attackers were non-technical. They were just the guy that would take the bribe and follow instructions. They happened during normal working hours. Same thing on business advantage. If you have an engineer or a sales guy that is leaving an organization and going to another organization, they can take competitive information, whether it's Rolodex information or blueprint information. It's almost always authorized access, almost always their own credentials, and they almost never do anything sophisticated to compromise the account.

It's trusted folks taking stuff and leaving with it. I see this all the time. I saw this in downtown Chicago just a few weeks ago, with an executive taking a lot of data off the network. He triggered suspect data hoarding. We came to find out in the investigation that he was on his way out. It's very common. As a matter of fact, I joke that when we see a suspect data hoarding event and tie it to the user, you'll almost every time find their LinkedIn profile recently updated. They're getting ready to move on. A few things. How do you reduce insider vulnerability? How do you reduce the possibility of exploits? The human being part is the same things we've been doing in the US military and US government forever, because you have to give people access to data to be operational. Things like background checks, ongoing checks, as well as psychological checks, and of course, that opens up a whole can of worms on civil liberties and the rest, but those are the concrete things that could be done. Dealing with credentials, you need better authentication.

Two-factor authentication, using expiring tokens or certificates, or biometrics, or even just changing passwords and making them more complex. Then, dealing with endpoints, things like malware sandboxing, improved policy, and reduction of entry points can all lead to reduced vulnerability from the insider threat perspective. A couple of things we do inside of StealthWatch: using identity information from solutions like Cisco's Identity Services Engine, or Lancope's identity appliance, we can map the user identities and the MAC addresses to the communications coming off of NetFlow. There's also some NetFlow devices that can provide user information in the NetFlow export. From that, in this example, we can see that user Ethel was in Atlanta and Boston at basically the same time.
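A minimal impossible-travel check over a pair of logins like that might look like this. The coordinates and the airliner-speed threshold are illustrative assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    r = 6371  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(loc_a, loc_b, hours_between, max_speed_kmh=900):
    """Flag two logins whose implied travel speed exceeds a plausible airliner."""
    dist = haversine_km(*loc_a, *loc_b)
    if hours_between == 0:
        return dist > 0
    return dist / hours_between > max_speed_kmh

atlanta, boston = (33.75, -84.39), (42.36, -71.06)
print(impossible_travel(atlanta, boston, 0.25))  # True: ~1,500 km in 15 minutes
print(impossible_travel(atlanta, boston, 3.0))   # False: a normal flight covers that
```

The same test works on any pair of login locations once you can tie identities to flows.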

As scary as this is, it's a lot scarier if they're in Detroit and Beijing at the same time. That's monitoring for geographic user anomaly. This is suspect data hoarding: we mathematically calculate that a host should consume no more than 100 megs of data from the network in a day based upon its peers, and we observe that it collected 11 gigabytes of data in a day. That's going to create your suspect data hoarding anomaly. Suspect data loss is very similar. The threshold is calculated as 500 megs of data leaving the network for this type of host, but we saw 8.3 gigs of data leave the network going somewhere. Reason for investigation. Second to last here, I'm going to talk about using audit trails to deal with response and reduce risk. For one, if insiders are aware that every communication is being logged, analyzed, and kept, it will reduce the number of incidents of crime happening, because they don't like evidence.

Criminals don't like evidence. They don't like consequences. If they believe there is no monitoring going on, the occurrence will be higher. Collecting audit logs, and we'll talk about the different types, means you have concrete evidence, and that greatly reduces the risk. The note here is that as you do the collection, remember that evidence inside of computer systems decays almost instantly. In the physical world, a fingerprint can stick around for a long time. In computing, we're talking microseconds, milliseconds, that kind of time frame. You have to collect the data and have a plan for collecting all the evidence long before there is actually an event.

These are the things we're talking about. Logging: system logs from firewalls and servers, collected at the log management level. Signature-based events should also be collected, so you have log entries you can go back and look at. The challenge with log entries is that they're highly processed. They're not really data anymore; they're some level of information. They can be scrutinized a lot more easily than lower-level data. Full packet capture is exactly what it sounds like: collecting every packet, from the header to the payload, and storing it. Then there's flow data, the metadata going across the interfaces that we discussed earlier. So you really have syslogs, you have NetFlow, and you have PCAP to collect. NetFlow is going to give you the breadth of everything happening across the global organization. PCAP is going to sit at chokepoints. The benefits of NetFlow are breadth and data retention. You can keep it a lot longer, and when you need it, it's easier to query because the semantics of the NetFlow fields are known.

You can run queries quicker. Then, as needed, you would pivot down to PCAP. These are the evidence sources you want to keep as audit trails. We've already hit on this, but when you look at NetFlow compared to PCAP, you might put a PCAP collector somewhere around here. You wouldn't see this guy talking to this guy, or this guy talking to this guy, and this is about the simplest network diagram that could exist. You can imagine all the different places you would have to put PCAP probes, and the cost and the rest, whereas with NetFlow you can instantly have visibility, at least at the metadata level, for prosecution and investigations. Going further down that line, talking about user attribution, you can now monitor at the user level. Forgive the resolution of this.
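Because flow records have a small, fixed set of known fields, the query the speaker describes, "who did this host talk to, and how much, in this window," is a cheap filter-and-sum rather than packet parsing. A rough sketch, with flow records as plain dicts (an assumed shape, not a real NetFlow parser):

```python
# Hypothetical sketch: summarizing a host's conversations from flow metadata.
from collections import defaultdict

def conversations(flows, host_ip, start, end):
    """Total bytes exchanged per peer for one host inside a time window."""
    totals = defaultdict(int)
    for f in flows:
        if start <= f["ts"] <= end and host_ip in (f["src"], f["dst"]):
            peer = f["dst"] if f["src"] == host_ip else f["src"]
            totals[peer] += f["bytes"]
    return dict(totals)
```

The same question against PCAP would require reassembling and decoding every capture at every probe location; against flow metadata it's one pass over compact records.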

I don't know why it's so blurry. We can map all the conversations to actual user names. Now, instead of monitoring computers, we're monitoring computers with their users, and we can run user-level reports. Inside of StealthWatch, you can run a user snapshot to get a summary of everything a specific user has been doing across all of the machines they've been logged into. We're coming close to time, but I just want to go over a handful of points about Lancope and StealthWatch for those of you who aren't familiar. The company was founded in 2000 to do network behavioral anomaly detection. At the time, I was a network security officer at the Naval Postgraduate School, and we were doing research in this area. Lancope was founded out of research at Georgia Tech. For 15 years, we've been really the only prime-time show for network behavioral anomaly detection: catching these insider threats, catching sophisticated attacks, 0-day threats, and malware breakouts, and providing forensic logging.

The company has grown radically as the market has realized the importance of these threats. Again, we provide advanced, deep detection: the east-west analysis, policy violations, and insider threats. That is the company overview. We've already talked about this in different ways, but StealthWatch converts the network into the sensor. It's not about deploying more probes. There will not be a security tool you deploy this year that requires less effort and gives a quicker return than StealthWatch. It's monitoring every device and every user, what they're doing, who they're communicating with, and providing detailed forensic records when needed. This is the "know every host" point. As a host communicates on the network, StealthWatch creates a host profile, and as we see a user, we create a user profile.

We discovered the host because it was talking. We heard the communication, and we created that host profile. We record every conversation to be used for investigative purposes when there is an incident, and we know there will be; there is never a shortage of incidents. We baseline what is normal. What does normal manufacturing traffic look like? How do the plants communicate with headquarters? What is normal, so that we can alert on the change when something outside of that occurs? Then retention is massive. Many of our customers have six months or a year's worth of contextual information about their network and its conversations that can be used to investigate. When we're doing investigations, and I was at a conference yesterday working through this, an investigation needs to determine a couple of things, and one of them is impact. Maybe a specific tool, a malware sandbox like Cisco's [Fireant 45:16] technology or an IPS solution, or maybe a third party, makes you aware of an incident; then you need to determine the impact of that incident.

Machine A is infected. Okay, if machine A was infected [at 45:31] Monday, what did it do after the infection? What was the impact? Who did that host communicate with during the infection? Did it download data? Did it exfiltrate data? That's why that breadth of lateral conversation is so important. The other thing, aside from determining impact in an investigation, is retroanalysis. Is this the first time this has happened in the network, or is it a systemic, ongoing problem? Is this the only machine impacted, or do we have many across the globe? Having instant access to all that data, we can go back and resolve those questions. Now the breakdown of the tools and deployment. There's one SMC that handles all global reporting. There's a FlowCollector, normally deployed per continent in global instances, and all the NetFlow is sent to the FlowCollector. We have FlowSensors to generate NetFlow where there are limited or poor NetFlow features on the route-switch infrastructure. The FlowReplicator, or UDP Director, can help distribute the NetFlow to the StealthWatch FlowCollector and other tools.
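The retroanalysis question, "has any other host ever talked to this indicator, and when?", is again a simple scan over retained flow records. A minimal sketch under the same assumed flow-record shape; this is illustrative, not the product's query engine:

```python
# Hypothetical sketch: scoping an incident retrospectively by finding every
# host that ever communicated with an indicator IP, with first/last seen times.
def retro_scope(flows, indicator_ip):
    """Map each host that touched the indicator to (first_seen, last_seen)."""
    seen = {}
    for f in flows:
        if f["dst"] == indicator_ip:
            peer = f["src"]
        elif f["src"] == indicator_ip:
            peer = f["dst"]
        else:
            continue
        first, last = seen.get(peer, (f["ts"], f["ts"]))
        seen[peer] = (min(first, f["ts"]), max(last, f["ts"]))
    return seen
```

With six months or a year of flow retention, the same scan answers both "is machine A the first?" and "how long has this been going on?"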

Then we also have integration with Cisco's [inaudible 46:52] engine and the StealthWatch identity box.