Complete visibility of network activity is a critical aspect of situational awareness: it means ensuring that the right information is available to the right analyst at the right time. As with most data, a longer history is always preferable, and in many forensic situations it is essential to building a complete understanding of the relevant timeline. Having months of detailed network transactions and payload analytics, as opposed to only hours or days of packets or months of flow metadata alone, significantly enhances the effectiveness, depth, and reliability of cyber investigations. Every aspect of network evaluation and incident response, including investigation times, baselining and trending, detecting low data rate exfiltration, and responding to zero-day events, is shaped by the granularity and temporal scope of the data. While flow data provides some context, a complete historical record is far more valuable for detecting patterns, determining the scope of the damage (especially what was accessed), and preventing future threats.
Investigation Timeframe
Gathering facts is the first step of any investigation, and a partial view of the facts can lead to false assumptions. A complete set of historical information can therefore be a game changer for a thorough investigation. In this section we discuss the limitations of short-term packet recording and the value of significant history with payload detail.
Days or Hours of Packets
Investigations based on only a few days of data may help identify or respond to the “obvious” threats but often miss the bigger picture. The investigation might conclude quickly, potentially overlooking slower, more sophisticated attacks that unfold over weeks or months. Working with packets requires significant network, protocol, and application expertise. Reconstructing sessions and building situational awareness by “following the packets” can lead to tunnel vision. Taking alerts from one system and then building packet queries for reconstruction introduces manual processes that bog down experienced team members, who spend precious time “gluing” bits and bytes back together from packets in order to understand what happened. With a limited time window of a few days, it is close to impossible to find patient zero.
Think of it as trying to watch a full-length movie by looking at the millions of frames it is made of.
While the data is all there, the skill and effort required to “make sense” of it are extremely high.
Now extend the analogy: imagine a limited amount of storage containing thousands of different films, with all the frames mixed together, and you are trying to reconstruct the story of one specific movie from its individual frames. That is how your network works!
This means that in the best-case scenario, where the data is still available (before it is recycled), it will be a long, tedious job for an experienced analyst; in the more common case, the data will simply be long gone.
Months of Detail
A more extended dataset allows for comprehensive analysis over time, enabling investigators to identify patterns, trends, and anomalies that only emerge over longer periods. This depth of data can be overwhelming if it requires the same level of expertise to reconstruct scenarios. Investigators who can quickly and easily search back in time for the same or similar scenarios improve the overall effectiveness of the investigation. Once an investigator has identified a new scenario, they can quickly build a new signature to detect it moving forward.
Baselining and Trending
While baselining and trending are primarily for optimization and planning, having a relevant historical view of observed metrics is important for security investigators to understand the status of the protected resources. Using the same source data for multiple purposes ensures that this data is prioritized and validated.
Baselining
Establishing a baseline—understanding what normal network behavior looks like—is crucial in detecting anomalies. With only days of data, baselines might not account for weekly or monthly variations in network traffic, leading to a higher rate of false positives or negatives. Even experienced analysts need time to build intuition with a local environment. Large organizations are constantly evolving and the expertise on the team will change as well, so it takes time for new staff members to familiarize themselves with the local environment. Experts with access to detailed historical records are better equipped to adapt to change and prioritize anomalies appropriately.
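The idea of a seasonality-aware baseline can be sketched in a few lines. This is purely illustrative, not WireX code: the hour-of-week bucketing, the sample format, and the z-score threshold are all assumptions chosen to show why months of history reduce false positives compared with a few days.

```python
# Illustrative sketch: a baseline that models weekly cycles, so a Monday-morning
# traffic spike is compared to other Monday mornings, not to Sunday night.
from statistics import mean, stdev

def build_baseline(samples):
    """samples: list of (hour_of_week, bytes) tuples covering several weeks.
    Returns per-hour-of-week (mean, stdev); with only days of data, most
    buckets would be empty or have a single sample."""
    buckets = {}
    for hour_of_week, volume in samples:
        buckets.setdefault(hour_of_week, []).append(volume)
    return {h: (mean(v), stdev(v) if len(v) > 1 else 0.0)
            for h, v in buckets.items()}

def is_anomalous(baseline, hour_of_week, volume, z_threshold=3.0):
    """Flag a sample deviating more than z_threshold sigma from the
    baseline for the same hour of the week."""
    mu, sigma = baseline.get(hour_of_week, (0.0, 0.0))
    if sigma == 0.0:
        return volume != mu
    return abs(volume - mu) / sigma > z_threshold
```

With weeks or months of samples per bucket, the standard deviation stabilizes and routine weekly variation stops triggering alerts.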
Trending
Months of data provide a broader perspective, allowing analysts to observe trends and seasonal variations in network traffic, which is essential to accurately detect anomalies. This long-term view helps differentiate between genuine threats and periodic increases in traffic due to business operations. In addition, a broad historical view enables projections that account for seasonal or situational planning for specific aspects of the technology stack. While basic trending is straightforward, many scenarios have nuances that are important to track. Identifying an insecure application, protocol, or behavior observed in the system, and then working with the stakeholders to transition away from that behavior, is critical to ensuring a more secure environment. This involves tracking existing “known” anomalies that may trend up or down over time, as well as detecting “new” instances of the same vulnerabilities. Using this trending process, the organization is empowered to be much more proactive.
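Tracking a known insecure behavior over time can be as simple as the sketch below. The record format and the `ftp` label are hypothetical; the point is that a monthly count series, which only months of history can supply, confirms whether a transition away from the behavior is actually happening.

```python
# Illustrative sketch (assumed record format): monthly counts of a tracked
# insecure behavior, e.g. cleartext FTP sessions, and the overall direction.
from collections import Counter

def monthly_trend(records, predicate):
    """records: iterable of (month, protocol) tuples; predicate selects the
    tracked behavior. Returns (counts_by_month, direction)."""
    counts = Counter(month for month, proto in records if predicate(proto))
    months = sorted(counts)
    if len(months) < 2:
        return counts, "insufficient data"
    direction = ("down" if counts[months[-1]] < counts[months[0]]
                 else "up" if counts[months[-1]] > counts[months[0]]
                 else "flat")
    return counts, direction
```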
Low Data Rate Data Exfiltration
Detection with Days of Data
Short-term data can make it challenging to detect low data rate exfiltration techniques, where attackers slowly siphon off data to avoid detection. These tactics require time to identify, as the malicious traffic may blend in with normal activity over short periods. Intermittent activity is normal within complex environments; however, attackers leverage their knowledge of monitoring systems and relatively short capture histories to patiently extract valuable information.
Advantages of Months of Detailed Data
With an extensive dataset, it is easier to spot subtle, prolonged attempts at data exfiltration. Analysts can observe traffic over time to identify consistent, unusual outflows of data, even at low data rates, which might indicate a sophisticated, stealthy attack. Correlating intermittent behavior over a longer history reveals a pattern of intent and therefore raises its visibility for further investigation. If a single intermittent low-volume session raises suspicion, it can quickly be correlated to historical patterns during the investigation and flagged appropriately.
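The cumulative logic behind spotting low-and-slow transfers can be sketched as follows. The record tuples and thresholds are illustrative assumptions, not a real product API: each daily transfer stays under a per-day radar threshold, and only aggregation across many days reveals the total volume moved.

```python
# Illustrative sketch: flag (source, destination) pairs that keep each day's
# outbound volume small but accumulate a large total over many distinct days.
from collections import defaultdict

def find_slow_exfiltration(records, min_days=30, max_daily_bytes=1_000_000,
                           min_total_bytes=10_000_000):
    """records: iterable of (day, src, dst, bytes_out) tuples."""
    daily = defaultdict(lambda: defaultdict(int))  # (src, dst) -> day -> bytes
    for day, src, dst, nbytes in records:
        daily[(src, dst)][day] += nbytes
    flagged = []
    for pair, per_day in daily.items():
        total = sum(per_day.values())
        if (len(per_day) >= min_days                      # persistent
                and max(per_day.values()) <= max_daily_bytes  # under the radar
                and total >= min_total_bytes):            # large in aggregate
            flagged.append((pair, total))
    return flagged
```

With only days of retention, `len(per_day)` can never reach a meaningful persistence threshold, which is exactly the gap a patient attacker exploits.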
Ability to Respond to Zero-Day Notifications
Immediate Response
Days of data are almost never sufficient to identify zero-day attacks.
The primary questions are whether the systems are actively under attack and whether they remain vulnerable. As with most sophisticated infiltrations, the initial entry used to establish a foothold is quickly turned into lateral movement or backdoor installation. In these cases, if the zero-day notification refers to the initial attack, it may not be observed in the short window of packet recording. It is also important to note that with historical data, organizations can see if they were compromised prior to the “zero-day” event. Just because a zero-day notification was received does not mean that the vulnerability did not exist for days, months, and sometimes even years prior.
Comprehensive Response with Months of Data
Having a complete historical dataset enables organizations to look back through months of transactions to determine if the zero-day vulnerability was exploited before its discovery. This retrospective analysis can uncover breaches that occurred before the vulnerability was known, allowing for a more informed and effective response. Additionally, if the zero-day signature is involved in lateral movement, the historical contextual records enable the investigator to map the scope of the movement and prioritize quarantine and deep forensic analysis.
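A retrospective sweep of stored records against newly published indicators might look like the sketch below. The record fields and regex-based matching are illustrative assumptions; the key property is that sorting hits by time approximates when exploitation first appeared, which may predate disclosure.

```python
# Illustrative sketch: sweep months of stored transaction records against
# indicators published with a zero-day advisory (a C2 domain, a URI pattern).
import re

def retrospective_sweep(records, ioc_patterns):
    """records: iterable of dicts with 'timestamp', 'host', 'uri' keys
    (hypothetical fields). ioc_patterns: regexes from the advisory.
    Returns matching records in time order; the earliest hit bounds
    when exploitation began."""
    compiled = [re.compile(p) for p in ioc_patterns]
    hits = [r for r in records
            if any(c.search(r.get("uri", "")) or c.search(r.get("host", ""))
                   for c in compiled)]
    return sorted(hits, key=lambda r: r["timestamp"])
```

The sweep is only as useful as the retention window: if records are recycled after days, pre-disclosure exploitation is unrecoverable.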
Conclusion
The temporal depth of network transaction data plays a pivotal role in the effectiveness of cybersecurity investigations. While short-term data can offer quick insights for immediate threats, the nuanced understanding required for comprehensive security posture assessment, trend analysis, sophisticated attack detection, and informed responses to emerging threats necessitates months of data or more. Long-term data not only improves the quality of investigations, but also enhances the overall security resilience of organizations by enabling more accurate threat detection, in-depth analysis, and effective mitigation strategies.
Does your network analysis solution provide these capabilities?
Complete History
What is the effective historical range of data available for exploration and investigation?
The WireX Ne2ition platform optimizes storage for payload data reconstructed as Contextual Records. These records are indexed and compressed for typically 9 to 12 months of availability, offering up to 20 times longer data retention than standard NDR tools.
Complete Depth
Does the solution make transaction detail accessible to junior operators, or does the workflow involve packet query and reconstruction by expert users?
WireX Systems Ne2ition produces Contextual Records in real-time which provide human readable details that do NOT require application or protocol experts to develop a situational understanding. With hundreds of protocols (and growing) and thousands of multi-dimensional information elements (and growing), Contextual Records provide deep insight to any operator. This helps organizations grow less experienced staff to become valuable as top-level analysts and helps top-level analysts do their job in a fraction of the time, building a more effective overall security operation.
Complete Breadth
Does the solution require complex deployment or specialized hardware? What are the limits of deployment in hybrid models?
WireX has years of network monitoring experience, which has culminated in highly efficient software: the WireX Ne2ition platform. The platform is built to observe, correlate, construct, index, and store Contextual Records. No specialized hardware is required, and the software is deployable in many form factors based on environmental needs. Deployments are rapid and integrate with various orchestration systems to ensure the ability to monitor anywhere (including an upcoming fractional network sensor for endpoints). WireX integrates seamlessly with leading cybersecurity solutions in both on-premise and cloud environments.
Complete Visibility
Does your solution record everything or only when detections occur?
WireX Ne2ition is designed to capture, correlate, create, index and store Contextual Records for everything, all the time. Of course, when detection occurs the information is immediately available, but historical data records are also available to provide complete activity capture.
It is important to understand the complete picture of capabilities when selecting network instrumentation. WireX Ne2ition with Contextual Capture™ provides complete, accessible insights from direct network observation, giving organizations unmatched visibility.