When monitoring and optimizing complex technology systems, both direct observation of packets and self-reporting logs can be employed, each with its own strengths and weaknesses in terms of reliability, validity, cost, and complexity.
All complex systems operate within an environment of uncertainty. No matter the system under consideration, the benefits of digitization over traditional analog or even anecdotal evidence are staggering. Reliability, reproducibility, and interoperability are the main reasons many organizations are building a culture of “Data-Driven Decisions”. The oldest adage in computer science is “Garbage In = Garbage Out”; conversely, if you improve your data source, you improve your outcome.
While the primary objective is to get the right information to the right decision maker at the right time, understanding and prioritizing the strengths, weaknesses, and tradeoffs of the various approaches is critical to selecting the right solution. A core tenet for many organizations is building confidence in their sources of data. Once that confidence exists, automation becomes increasingly important: higher quality source data improves automation, resulting in faster response times and more effective security operations. Attackers, however, are increasingly leveraging the same combination of AI and automation.
The Importance of the Network
As technology systems have evolved from the single-core microprocessor to vastly distributed services running across multiple hybrid platforms, the network has become increasingly important, establishing itself as the backbone of technology. Network performance continues to improve dramatically in both bandwidth and latency. In addition, networking options have shown strong flexibility in keeping pace with the features and functionality required by the new era of applications, architectures, and disparate platforms.
The increased demand for system utilization and network reliability has made the network the most critical element of the technology stack. Every cyber incident involves networking elements, and organizations that leverage network intelligence present a more formidable target and can respond more effectively. In modern technology stacks, the network is the fundamental component that enables everything else and is therefore the most critical element of any cybersecurity solution. Connecting users to resources, connecting subsystems to each other, and enabling proper scalability all require robust, performant networks.
Self-Reporting via Log Collection & Analysis
The most basic form of information gathering is simply asking the subject a series of questions, also known as self-reporting. In our complex technology stacks, this is represented primarily as telemetry or logs. Each individual element provides a reasonable view of its own perceived functional status and debugging details. In many cases this is perfectly viable when considering a single component or system from a single vendor. However, it becomes much more complex as the technology stack diversifies into service-based architectures and then further into multi-geography, multi-vendor microservices.
The introduction of distributed cloud and hybrid resources with sophisticated access and billing techniques compounds the situation even more. The complex interactions between elements increase the probability of “finger-pointing” and often produce misleading telemetry. This is even more critical when it comes to cybersecurity, where detection and response are the priority. A security organization’s ability to consume, interpret, and react to a variety of logs relies on experts, and those experts require reliable data. Normalization and correlation of disparate telemetry and logs have become commonplace and end up consuming a nontrivial effort for each organization.
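To make the normalization burden concrete, the following is a minimal sketch of mapping two hypothetical vendor log formats (a syslog-style text line and a JSON event) into one common schema. The field names and layouts are illustrative assumptions, not tied to any specific product.

```python
# A minimal sketch of log normalization across two hypothetical vendor formats.
# Field names ("epoch", "device", "verdict", etc.) are assumptions for illustration.
import json
import re
from datetime import datetime, timezone

def normalize_syslog(line: str) -> dict:
    """Parse a line like '2024-05-01T12:00:00Z fw01 DENY src=10.0.0.5 dst=10.0.0.9'."""
    m = re.match(r"(\S+)\s+(\S+)\s+(\S+)\s+src=(\S+)\s+dst=(\S+)", line)
    if not m:
        raise ValueError(f"unrecognized syslog line: {line}")
    ts, host, action, src, dst = m.groups()
    return {
        "timestamp": datetime.fromisoformat(ts.replace("Z", "+00:00")),
        "source": host,
        "action": action.lower(),
        "src_ip": src,
        "dst_ip": dst,
    }

def normalize_json(event: str) -> dict:
    """Map vendor-specific JSON keys into the same common schema."""
    raw = json.loads(event)
    return {
        "timestamp": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc),
        "source": raw["device"],
        "action": raw["verdict"].lower(),
        "src_ip": raw["client"],
        "dst_ip": raw["server"],
    }

if __name__ == "__main__":
    events = [
        normalize_syslog("2024-05-01T12:00:00Z fw01 DENY src=10.0.0.5 dst=10.0.0.9"),
        normalize_json('{"epoch": 1714564801, "device": "proxy02", '
                       '"verdict": "Allow", "client": "10.0.0.5", "server": "10.0.0.9"}'),
    ]
    # Once normalized, events from different vendors can be sorted and correlated together.
    for e in sorted(events, key=lambda e: e["timestamp"]):
        print(e)
```

Even this toy example shows why the effort is nontrivial: every new vendor format requires its own parser, and every schema change breaks downstream correlation.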
Direct Observation via Passive Packet Analysis
Observation is the most fundamental form of analysis and continues to be a key component of any continuous improvement process. Developing a view of the current state of any subject relies on observations. Specialized sensors that help make these observations are highly optimized for the task of providing the most reliable and timely information to decision makers. This is achieved through a method of monitoring and analysis called Passive Packet Analysis, which allows packets to be examined as they traverse the network without interfering with the active flow of data.
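The sketch below illustrates the passive principle, assuming the scapy library and a capture-capable interface (for example, fed by a SPAN port or TAP); any capture library would serve equally well. Traffic is only observed and summarized; nothing is injected onto the network.

```python
# A minimal sketch of passive packet analysis using scapy (an assumption;
# requires scapy and capture privileges). Packets are summarized, never sent.
from scapy.all import sniff, IP, TCP, UDP

def summarize(pkt) -> None:
    """Print a one-line summary of each observed IP packet."""
    if IP not in pkt:
        return
    proto = "TCP" if TCP in pkt else "UDP" if UDP in pkt else str(pkt[IP].proto)
    sport = pkt.sport if (TCP in pkt or UDP in pkt) else "-"
    dport = pkt.dport if (TCP in pkt or UDP in pkt) else "-"
    print(f"{pkt[IP].src}:{sport} -> {pkt[IP].dst}:{dport} [{proto}] {len(pkt)} bytes")

if __name__ == "__main__":
    # store=False keeps memory flat; count limits this demo to 100 packets.
    sniff(prn=summarize, store=False, count=100)
```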
One example of this is asset discovery. As many organizations grow, their reliance on digital assets and automation grows, and so does the number of network entities. Managing the asset database is critical for compliance, contingency planning, prioritization, finance, and, most importantly, security. However, new devices are often introduced into the network either by accident (failure of a process) or on purpose (shadow IT). Periodic active scans consume network resources and can appear as a possible security threat. A better solution is passively monitoring network traffic through direct observation, which provides a zero-touch, always-on detection method for any device or application introduced into the technology stack.
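A minimal sketch of this zero-touch approach, again assuming scapy is available: new MAC/IP pairings are learned solely from observed ARP traffic, so no probes are sent and the method is invisible to the monitored network.

```python
# A minimal sketch of passive asset discovery from ARP traffic (scapy assumed;
# requires capture privileges). No active scans are performed.
from scapy.all import sniff, ARP

inventory: dict[str, str] = {}  # MAC address -> last observed IP

def learn_asset(pkt) -> None:
    """Record any MAC/IP pairing seen in ARP traffic."""
    if ARP not in pkt:
        return
    mac, ip = pkt[ARP].hwsrc, pkt[ARP].psrc
    if inventory.get(mac) != ip:
        inventory[mac] = ip
        print(f"asset observed: {mac} at {ip}")

if __name__ == "__main__":
    sniff(filter="arp", prn=learn_asset, store=False)
```

In practice the learned inventory would be reconciled against the authoritative asset database so that unknown devices surface as alerts rather than console output.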
Observation Strategy
In practice, a combination of direct observation through passive packet analysis and self-reporting through log collection, alerting, and detecting may be employed to leverage the strengths of both approaches and mitigate their respective weaknesses. Passive Packet Analysis provides reliable and objective data, while log analysis can offer insights into the system’s internal state and operations.
The choice between passive packet analysis and log collection, or the optimal combination of the two, depends on the specific requirements, constraints, and characteristics of the complex technology system being monitored, optimized, and secured. Factors such as organization maturity, system complexity, criticality, available resources, automation targets, security requirements, and optimization goals should be considered when selecting the appropriate observation strategy.
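As a simple illustration of combining the two approaches, the sketch below correlates self-reported log events with directly observed packet records on a shared source IP within a small time window. The record layouts are illustrative assumptions only.

```python
# A minimal sketch of correlating log events (self-reported) with packet
# observations (direct) on source IP within a time window. Data is illustrative.
from datetime import datetime, timedelta

log_events = [
    {"time": datetime(2024, 5, 1, 12, 0, 3), "src_ip": "10.0.0.5", "msg": "login failed"},
]
packet_observations = [
    {"time": datetime(2024, 5, 1, 12, 0, 1), "src_ip": "10.0.0.5", "dst_port": 22},
    {"time": datetime(2024, 5, 1, 12, 0, 2), "src_ip": "10.0.0.7", "dst_port": 443},
]

def correlate(logs, packets, window=timedelta(seconds=5)):
    """Yield (log, packet) pairs that share a source IP within the window."""
    for log in logs:
        for pkt in packets:
            if log["src_ip"] == pkt["src_ip"] and abs(log["time"] - pkt["time"]) <= window:
                yield log, pkt

for log, pkt in correlate(log_events, packet_observations):
    print(f"{log['msg']} from {log['src_ip']} corroborated by traffic to port {pkt['dst_port']}")
```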
| | Log Collection & Analysis | Passive Packet Analysis |
|---|---|---|
| Reliability | Most systems provide some form of telemetry, often related to debugging. Using this for monitoring is viable but consumes resources from the system itself. If a system is optimized for volatile loads, the telemetry can be given lower priority than the primary function. | Using a dedicated system for observation across many observation points ensures deterministic priority. In volatile or compromised systems, when load causes abnormal behavior, a reliable monitoring solution is critical. |
| Validity | When systems present debugging information, the primary consumers are the support or development teams. When the information is presented as telemetry to third parties, the definitions of qualitative measures are not always revealed. Health status is usually green, and the user has little control to adjust the threshold. | Using an independent third party to work on raw network observations provides the most accurate representation. Even during initial testing of a solution, the raw measurements can be used to validate self-reported values and align expectations. Thresholds can be configured independently across different factors based on the context. |
| Cost | All monitoring solutions require collection, correlation, and decision-support elements. Self-reporting is the entry level of monitoring, because some form of it is usually available, such as web scraping or basic APIs. | Adding direct observation sensors to a monitoring solution is an incremental cost that is often offset by other factors. Fractional monitoring, where direct observation is dynamic and flexible with the ability to monitor different paths or elements based on current conditions, enables a crawl-walk-run approach to security and operational monitoring. |
| Complexity | While self-reporting may appear to be the simplest, the variability between vendors and the reverse engineering required add up. Normalization becomes nontrivial when contextual correlation is required to achieve the insight needed to support critical decisions. Some organizations attempt to “roll their own” because the measurements appear accessible and relatable, but they often run into challenges with normalization and correlation. | Directly observing a measurement or network provides the simplest form of acquisition. However, developing insight from the observation requires expertise to build appropriate facts, insight, or context to support critical decisions. Most organizations rely on dedicated monitoring solutions to ensure the expertise is aligned. |
As the comparison table indicates, there are significant differences between passive packet analysis and log collection and analysis. Using both where appropriate provides the most flexibility across all dimensions.
The WireX Systems Ne2ition Platform brings the value of passive packet analysis to any organization, allowing security teams to achieve faster incident response and threat hunting. The Ne2ition Platform is built on state-of-the-art proprietary technology called WireX Systems Contextual Capture™. Contextual Capture™ continuously translates packet data into comprehensive, human-readable intelligence that can be immediately understood by anyone on the security team, including entry-level agents. Contextual Capture™ uses direct observation of network traffic to produce detailed, actionable insights, a broad array of secondary use cases, and unmatched visibility across the organization as a whole. The WireX Systems Ne2ition Platform with Contextual Capture™ allows security teams to focus on clear and relevant data during day-to-day investigations without wasting precious time on tedious, manual examination of individual network sessions, ultimately empowering organizations to process more investigations in less time with greater accuracy.