Implementing an integrated security program requires diligence and foresight. You have to balance meeting current needs with anticipating future security scenarios.

We’ve put together this list of recommendations to make your security planning process a little easier. Keep the following tenets in mind as you develop your security program, and you’ll be well positioned to adapt to known and unknown situations.

Tenet 1: Create a robust foundation of telemetry and visibility

Telemetry is the data you collect from your endpoints—servers, production workstations, cloud workloads, and containers. Visibility is what that telemetry data gives you; it’s the view into what’s happening and where it’s happening.

While each organization has its own unique mix of blind spots, some gaps are common to most environments, and they grow worse without the visibility that robust telemetry data provides. These gaps include:

  • Processes running on hosts
  • Inbound and outbound network connections
  • New user and/or group creation
  • Permission changes on files, directories, and user accounts
  • Changes to local firewalls
  • Opening of ports
  • New scheduled tasks

You can address these issues and help future-proof your organization by efficiently collecting as much telemetry data as possible. You can’t know everything that’s going to happen, nor can you predict where and how new threats will emerge, but the more data you have, the better your response to those challenges will be.
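
As a concrete example, here’s a minimal sketch of collecting two of the telemetry categories above with osquery’s interactive shell, osqueryi, driven from Python. It assumes osqueryi is installed and on the PATH; the run_osquery helper is our own illustration, not part of osquery itself.

```python
import json
import subprocess

def run_osquery(sql: str) -> list[dict]:
    """Run a one-off query through osqueryi and parse its JSON output."""
    result = subprocess.run(
        ["osqueryi", "--json", sql],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

# Two of the telemetry categories listed above: processes running on
# hosts, and open (listening) ports.
processes = run_osquery("SELECT pid, name, path FROM processes;")
listening = run_osquery("SELECT pid, port, protocol FROM listening_ports;")
```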

Tenet 2: Limit the number of methods for collecting data

Every data source carries its own unique qualities and quirks. That means the data from each collection point requires its own workflow and normalization process. In isolation this isn’t much of an issue, but overall complexity grows as more data sources (and more points of failure) are introduced into a security program.
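
To make that concrete, here’s a minimal sketch of what per-source normalization can look like: each source gets its own small adapter that maps raw records into one shared event shape. The field names on both sides are illustrative assumptions, not a real schema.

```python
from datetime import datetime, timezone

def normalize_osquery(row: dict) -> dict:
    """Adapter for one hypothetical source: osquery result rows."""
    return {
        "timestamp": datetime.fromtimestamp(int(row["unixTime"]), tz=timezone.utc),
        "host": row["hostIdentifier"],
        "event": row["name"],
        "source": "osquery",
    }

def normalize_cloud_audit(record: dict) -> dict:
    """Adapter for a second hypothetical source: a cloud audit log."""
    return {
        "timestamp": datetime.fromisoformat(record["eventTime"].replace("Z", "+00:00")),
        "host": record.get("sourceHost", "unknown"),
        "event": record["eventName"],
        "source": "cloud_audit",
    }

# Every new source means another adapter like these to write and maintain,
# which is why limiting the number of collection methods pays off.
```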

This complexity extends to data management as well. If you’re working with commercial products, multiple data sources will require multiple vendor relationships. If you’re going the DIY route, you’ll confront a variety of maintenance and customization needs for each data source.

Finding a single definitive data source isn’t the end goal. There are too many different needs in an organization for one source to serve everyone. Instead, look to apply an 80/20 approach. Seek out a robust data collection source, like osquery, that can address 80% of your coverage needs, then supplement the remaining 20% of use cases with a handful of other sources. This 80/20 approach reduces complexity and creates efficiencies in the normalization and application of your data.

Tenet 3: Know the teams and use cases you need to support

Building an integrated security program requires knowing who you’re building that program for, and security programs shouldn’t be limited to security teams. IT, DevOps, asset management teams, insider threat and fraud groups, and compliance teams will likely intersect with your program at some point. Talk to these groups, and keep them in mind as you map out current and future use cases. These use cases will shape your decisions around telemetry, data collection, tools, and workflows.

See how Comcast used diverse perspectives to spot security use cases and spur adoption of a single agent.

Tenet 4: Embrace automation and orchestration

Automation and orchestration work best when they’re combined with human insight. It’s not an either-or proposition: provide an analyst with automation and orchestration tools and you enhance their capabilities; you don’t replace them with a machine-driven alternative.

Well-constructed automation and orchestration workflows reduce employee toil by minimizing the mundane and rote parts of analysis. Case in point: playbooks can be expressed and executed as code rather than as static documentation. This reduces error, keeps people sharp, and lets analysts focus on tasks that require human problem solving.
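
As a sketch of what a playbook-as-code can look like, the following encodes the rote first steps of triaging a new-scheduled-task alert. Every name here (the alert fields, the allowlist, the escalation helper) is hypothetical:

```python
KNOWN_GOOD_HASHES: set[str] = set()  # hypothetical allowlist, loaded elsewhere

def escalate_to_analyst(alert: dict, context: dict) -> None:
    """Stub: in a real program this would open a case or page an analyst."""
    print(f"Escalating {alert['task_name']} with context {context}")

def triage_scheduled_task(alert: dict) -> str:
    """The rote first steps of triage, encoded so they run the same way every time."""
    # Step 1: close out alerts for binaries we already trust.
    if alert["binary_hash"] in KNOWN_GOOD_HASHES:
        return "closed: known-good binary"
    # Step 2: gather the context an analyst would otherwise collect by hand.
    context = {
        "host": alert["host"],
        "creator": alert["created_by"],
    }
    # Step 3: hand off to a human with the context already assembled.
    escalate_to_analyst(alert, context)
    return "escalated"
```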

The automation-orchestration combination improves overall security as well. An automated system can be configured to kill a known bad process identified by your tooling, eliminating a threat before it can cause problems. For other issues, an automated alert can be fed through an orchestration system to ping an analyst through Slack or a similar communication channel. The analyst evaluates the threat and initiates a response that’s sent back through the orchestrator and acted upon by your systems. This kind of human-in-the-loop flow between automation, orchestration, and analysts shortens time to detection and response.
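
Here’s a minimal sketch of that flow, assuming a Slack incoming webhook for the analyst ping. The webhook URL, the blocklist, and the kill_process dispatch are all placeholders you’d wire up to your own tooling:

```python
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"  # placeholder
KNOWN_BAD_HASHES: set[str] = set()  # hypothetical blocklist fed by your tooling

def kill_process(host: str, pid: int) -> None:
    """Stub: dispatch a kill through your orchestrator or EDR agent."""
    ...

def notify_analyst(text: str) -> None:
    """Ping an analyst over a Slack incoming webhook."""
    body = json.dumps({"text": text}).encode()
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL, data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

def handle_process_event(event: dict) -> None:
    if event["sha256"] in KNOWN_BAD_HASHES:
        # Automation: a known bad process is killed without waiting on a human.
        kill_process(event["host"], event["pid"])
    else:
        # Orchestration: anything ambiguous goes to a human for a decision.
        notify_analyst(f"Review process {event['name']} on {event['host']}")
```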

The takeaways here are to look for opportunities to automate repetitive tasks, use orchestration to refine your workflows, and apply your most important asset—your people—in ways that best harness their intelligence and creativity.