

Security Analytics and UEBA Buyer’s Guide

This document is designed to guide security teams in building requests for information (RFI) and requests for proposals (RFP) to compare and ultimately purchase user and entity behavioral analytics (UEBA) geared for cybersecurity use cases.

Buyer’s Guide

www.microfocus.com


Table of Contents

Increase Risk Visibility by Augmenting Existing Security Tools

Data Sources and Risk Coverage

Understanding Data Sources

Buyer’s Checklist: Data Sources

The Math: Machine Learning and Behavioral Analysis

Buyer’s Checklist: Advanced Analytics

Incident Response and Investigations

Buyer’s Checklist: Incident Response and Investigations

Big Data Architecture and Deployment

Buyer’s Checklist: Big Data Deployment and Ongoing Operations

Appendix: Consolidated Buyer’s Checklist


Increase Risk Visibility by Augmenting Existing Security Tools

Security analytics increases risk visibility and improves the ROI of existing security tools through the following capabilities:

1. Greater risk visibility than existing, fragmented tools.

2. Prioritized threat leads, which reduce the noise and false positives currently overwhelming security teams.

3. Accelerated incident investigation and root-cause analysis through access to context and raw events via an intuitive, click-through interface. This reduces the time of investigation, the number of FTE investigators per incident, and the costs associated with hiring outside consultants.

4. Increased return on existing security investments—security information and event management (SIEM) systems, malware threat-detection tools, endpoint detection and response (EDR) and endpoint protection platform (EPP) tools, and network data loss prevention (DLP) technology—through automatic prioritization of threats and risks.

5. Reduced operational costs through unsupervised machine learning that automates statistical measurement of normal and anomalous behaviors at scale, removing the need to manage complex threshold-, rule-, or policy-based environments.

This document is designed to guide security teams in building useful requests for information (RFI) and requests for proposals (RFP) to compare and ultimately purchase a user and entity behavioral analytics (UEBA) product geared for cybersecurity use cases. The document is divided into five sections, beginning with the business justification for this technology and going on to define the critical capabilities of a complete security analytics product.

The appendix contains a checklist and comparison chart to assist with selecting products.

Data Sources and Risk Coverage

There is an adage: “If you cannot see it, you cannot stop it.” This is especially true of security analytics products: risk visibility is based on the data sources consumed. Let’s say an attack occurs across an endpoint, a compromised account, an application, and a repository storing data. A thief then exfiltrates the data after staging it on a server. Some type of log data from each of the assets involved must be examined to capture a complete picture of the indicators of compromise (IOC) of this particular attack. By examining the logs of only one or two assets, the attack might be detected, but not recognized and stopped. We know this is a common problem by looking back at past attacks. If a security analytics product is capturing endpoint activities, Active Directory (AD) logs, IP repository logs, and server logs, the analytics engine will match, correlate, surface, and alert you to the attack, showing the entire lifecycle.

When evaluating tools, the first thing to consider is the breadth of the data sources they can consume. Most security analytics products can consume the metadata and log data collected by SIEM products, or identity and access data from AD and other Lightweight Directory Access Protocol (LDAP) directory stores. These data sources are a good start but incomplete for the attack described above. One clear point of differentiation among these products is the number and types of data stores and sources a security analytics product can consume. Look for a broad data-collection set that matches your internal infrastructure, so that you can optimize risk visibility over time.

Another important consideration is whether and how the analytics engine can automatically correlate and enrich multiple data sources so that they can be utilized by several analytic models to provide greater context. Joining datasets allows multiple lines of evidence to be combined, which increases the accuracy, timeliness, and context of detection. For example, a compromised account may have an unusual process running (endpoint data), issue suspicious DNS queries (network data), and exhibit anomalous access to a network share (server access data). To mathematically stitch together an accurate picture, the data feeds must be properly correlated and the discovered attack steps consolidated into a combined risk score, with related activities surfacing all of the entities involved (account, machine, file, and application).
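This joining of evidence can be sketched in a few lines of Python. The example below is an illustration only — the feed shapes, field names, and simple additive scoring are assumptions, not any vendor’s actual engine. It merges anomaly records from endpoint, network, and server-access feeds keyed by account, so corroborating evidence raises one combined score while every involved entity is retained:

```python
# Illustrative sketch only: combining anomaly evidence from several data
# feeds into one per-account risk picture. Feed shapes and the additive
# scoring are assumptions for the example, not a real product's engine.
from collections import defaultdict

def combine_feeds(*feeds):
    """Merge per-feed anomaly records into one risk picture per account."""
    combined = defaultdict(lambda: {"score": 0.0, "entities": set()})
    for feed in feeds:
        for rec in feed:
            acct = combined[rec["account"]]
            acct["score"] += rec["anomaly"]           # corroboration raises risk
            acct["entities"].update(rec["entities"])  # keep every machine/file/app seen
    return dict(combined)

endpoint_feed = [{"account": "jdoe", "anomaly": 0.4,
                  "entities": {"laptop-17", "unusual.exe"}}]
network_feed = [{"account": "jdoe", "anomaly": 0.3,
                 "entities": {"dns:badhost.example"}}]
server_feed = [{"account": "jdoe", "anomaly": 0.25,
                "entities": {"share:finance"}}]

picture = combine_feeds(endpoint_feed, network_feed, server_feed)
```

Three weak signals (0.4, 0.3, 0.25) that would be ignorable in isolation combine into a 0.95 score for one account, with the machine, executable, DNS destination, and file share all surfaced together.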

Understanding Data Sources

SIEM Data

A SIEM is not an original data source, but is instead a collection point for a variety of security data from user directories, server logs, and security tools. One great advantage of SIEM data, compared with other data sources, is that it is immediately available to load into a security analytics product. Analyzing this data in an analytics engine can surface insider attacks through user access and activity events. It can likewise detect malware and account-takeover attacks by analyzing a mix of perimeter defense and NetFlow data. However, most SIEM deployments collect a limited number of system logs due to the extreme cost of these systems, and very few deployments collect data from servers or critical enterprise applications or repositories. SIEM data is a great place to start. Still, in most cases, the logs collected in your SIEM do not offer enough visibility to detect advanced attacks.


Directory Data

AD and LDAP directory information is the most common data source for security analytics products. It offers a system the ability to understand role, organization, authentication, and access rights—all of which can be baselined. Directory data that includes authentication events is effective at detecting account compromises. When server access data is included, it is somewhat effective at detecting insider attacks, although false positives are much more common when only a single data source is used.

VPN, Proxy, NetFlow

Security analytics products that collect network communications benefit from increased visibility into advanced attacks penetrating your network. This includes information such as data transfer volumes, data transfer locations, unusual connections between machines, and communications to unusual internal and external sources. Network information can detect some aspects of an insider attack, but it does so with very limited context and is blind to machines that are off the network. NetFlow data is successful at seeing outside attacks, where anomalies involving communications with outside websites, command-and-control calls, and large, unusual data movements can all be easily detected.

Endpoint

Just a few vendors in the security analytics space collect endpoint data. Others consume data via an existing endpoint tool (DLP or EDR, for example), either directly or through a SIEM. Endpoint data is a rich source for user activity, both online and offline, as well as for application, network, and cloud activity. It offers context for malware-based attacks and insider attacks. The information includes file activity (print, copy, paste, post, download), device activity (USB or Bluetooth activity, copy, download), application activity, and a variety of network communications.

Data/IP Repository Logs

Only a few vendors offer data consumption from data/IP repositories, yet this can be extremely important in giving context to advanced targeted attacks and insider attacks. Security analytics products that consume real-time data feeds or log data (from source code management, product lifecycle management, electronic content management, and broad repositories like SharePoint) offer significant visibility into threats within the enterprise. Critical threat context can be collected even with a small set of columns in the source log data (such as Timestamp, User, IP Address, Action, and Resource Path). This includes detecting departing employees taking sensitive data out the door and attackers attempting to stage data for exfiltration. Although rich in event information, this data can be difficult to collect, because logging systems are often turned off by IT to reduce hardware and network overhead. Companies that are serious about protecting IP should have these systems feeding into their security analytics product.
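As a rough illustration of how much signal those five columns carry, the sketch below tallies per-user download actions from a repository access log. The CSV layout, field names, and sample rows are assumed purely for the example:

```python
# Sketch: counting repository downloads per user from a five-column log.
# The log format and sample records are invented for illustration.
import csv
import io

log = io.StringIO(
    "timestamp,user,ip_address,action,resource_path\n"
    "2019-08-01T23:14:02,jdoe,10.1.4.22,download,/scm/project-x/core\n"
    "2019-08-01T23:15:40,jdoe,10.1.4.22,download,/scm/project-x/crypto\n"
    "2019-08-02T09:02:11,asmith,10.1.7.3,read,/ecm/handbook.pdf\n"
)

downloads = {}
for row in csv.DictReader(log):
    if row["action"] == "download":
        downloads[row["user"]] = downloads.get(row["user"], 0) + 1

# A burst of after-hours downloads by one user is classic staging behavior
# that an analytics engine could weigh against that user's baseline.
```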

ERP/Enterprise Applications and Databases

Two additional data sources are important for security analytics. The first is log data and access-control data from major enterprise applications, such as SAP and Oracle operational systems. Collecting this data will offer additional context and coverage of attacks against business processes, financial data, and HR records. The second data source is structured data (SQL databases), which provides context around sensitive data (in the case of databases, PII and PHI data) by capturing its identity at the source. Log data from these systems will offer visibility into attacks by insiders (especially privileged users) and targeted-attack account takeovers.

Data Enrichment

Organizations generate or acquire additional data that can be valuable to a security analytics tool. When correlated with analytics-defined incident data, other data (such as watch lists, threat-intelligence feeds, and alerts from security, governance, and control tools) can create incident context and increase detection speed. Enrichment data can be used to weigh analytic scores higher or lower, call out and alert on specific events defined by the enrichment tool, or add incident context for a more complete picture of an unfolding attack.
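A minimal sketch of enrichment-weighted scoring follows; the weight values, the 0–100 scale, and the enrichment sources are assumptions for illustration, not a prescribed formula:

```python
# Hedged sketch of enrichment-weighted scoring. The weight values, the
# 0-100 scale, and the enrichment sources are assumptions for illustration.
def enriched_score(base_score, user, dest_ip, watchlist, intel_feed):
    """Adjust an analytics-derived score using enrichment data."""
    score = base_score
    if user in watchlist:        # watch-listed users are weighed higher
        score *= 1.5
    if dest_ip in intel_feed:    # a known-bad destination adds fixed risk
        score += 20
    return min(score, 100.0)     # keep the result on a 0-100 scale

hot = enriched_score(50, "jdoe", "203.0.113.9",
                     watchlist={"jdoe"}, intel_feed={"203.0.113.9"})
cold = enriched_score(50, "asmith", "198.51.100.7",
                      watchlist={"jdoe"}, intel_feed={"203.0.113.9"})
```

The same mid-level analytic score (50) becomes a high-priority 95 when both enrichment sources corroborate it, and stays at 50 when neither does.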

The greater the number of different data sets collected and corroborated, the more quickly and accurately threats can be surfaced. For threat detection, the number of different data sources is more important than the volume within a single data source.

Buyer’s Checklist: Data Sources

❑ Data-feed correlation (correlating multiple feeds and understanding relative risk across all entities)

❑ Data from SIEMs

❑ Data from AD and other LDAP directory stores

❑ Authentication and access attempts

❑ AD event logs stored in systems such as HP ArcSight, McAfee ESM, IBM QRadar

❑ Windows security events

❑ Data from intellectual property or data repositories, such as file shares, databases, source code systems (e.g., Perforce and GitHub Enterprise), and other repository logs

❑ Data from network feeds or taps such as NetFlow


❑ Data from endpoints

❑ Data directly from machines and systems such as operating systems, servers, enterprise applications, access control lists (ACLs), expenses, email, fileshares, web proxies, and printer logs

❑ Data enrichment sources

❑ Security tools alerts (DLP, NAC, GRC)

❑ Threat-intelligence feeds

❑ Watchlists (users, executables, etc.)

The Math: Machine Learning and Behavioral Analysis

The goal of a security analytics solution is to detect threats across your organization—especially threats that often go undetected by existing tools—quickly and accurately. Behavioral analytics measures a unique baseline for each entity within an organization. As baselines evolve, anomalous events are recognized. These events should be tied to the entities taking part in an event. Entities include users/accounts, machines, applications, files, and other digital assets. As entities become involved in anomalous activities, probabilistic methods can quantify just how anomalous an event is, computing an appropriate risk score. This risk score is then used to create a prioritized view of which entities need to be investigated first.

Machine Learning

Traditionally, security managers were required to specify a hard-coded threshold—such as “How many MB of attachments should be allowed?”—to define risk. This forced a single number to be globally applied across a large population of individuals and business processes, making threat definition brittle and ineffective, as well as expensive to deploy and maintain. Machine learning methods compute dynamic baselines from data, separately for each entity. For example, the HR director who constantly sends resume attachments should be treated differently from the contractor who almost never sends out attachments. This approach results in a set of very precise and powerful mathematical fingerprints that define normal baselines of activities across an organization.
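Per-entity baselining can be sketched in a few lines; the sample histories and the three-sigma cutoff below are invented for illustration and stand in for the richer models a real product would fit:

```python
# Minimal sketch of per-entity baselining: each user's own mean and
# standard deviation replace one global threshold. Sample histories and
# the three-sigma cutoff are invented for illustration.
import statistics

history = {
    "hr_director": [50, 55, 48, 60, 52],  # MB of attachments/day: high is normal
    "contractor": [0, 1, 0, 0, 2],        # almost never sends attachments
}

baselines = {user: (statistics.mean(v), statistics.pstdev(v))
             for user, v in history.items()}

def is_anomalous(user, value, k=3.0):
    """Flag values more than k standard deviations above the user's own norm."""
    mean, std = baselines[user]
    return value > mean + k * max(std, 1.0)  # floor std to damp zero-variance noise
```

A 60MB day is unremarkable for the HR director but wildly anomalous for the contractor, even though a single global threshold would treat both identically.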

Probabilistic Math Models

Traditional, non-probabilistic security systems rely on rules and thresholds that generate alerts and activate mitigating controls, where those rules are Boolean (they either fire or do not fire). For example, if a Boolean rule is set to alert on any data movement over 350GB, the rule will kick in at 351GB, but not at 349GB. A Boolean-based approach cannot measure how anomalous the activity that triggered a rule actually was. If a user moves 380GB on average—a number required by his or her job—the rule will nonetheless activate, creating a false positive.

In contrast, probabilistic math measures events using continuous numbers from, say, 0% to 100%. It covers the spectrum from the perfectly normal to the extremely risky. This allows us to view events relative to other similar events that are meaningful, and it can measure how different or how anomalous they are. For example, a user usually moves an average of 15GB of data from his or her computer to the cloud each day, but one day, the same user moves 400GB in an hour. We can describe that action as extremely different from that user’s normal behavior and quantify that difference. Likewise, a probabilistic model would understand that a user who just sent out 400GB but averages 380GB is not as threatening as a user who just sent out 400GB but normally sends out only 15GB. This eliminates reliance on rules and thresholds.
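Using the figures from this example, a normal-tail probability turns the same 400GB event into very different anomaly scores for the two users. The baselines and standard deviations below are assumed for illustration, and a real product would use richer models than a single Gaussian:

```python
# Hedged sketch: a continuous anomaly score (0..1) from a normal tail
# probability, in place of a Boolean threshold. Baselines and standard
# deviations are illustrative assumptions.
import math

def anomaly_score(value, mean, std):
    """How far into the right tail of the entity's baseline this value falls."""
    z = (value - mean) / std
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(X >= value) under the baseline
    return 1.0 - tail                         # 0 = perfectly normal, ~1 = extreme

heavy_user = anomaly_score(400, mean=380, std=40)  # 400GB vs a 380GB norm
light_user = anomaly_score(400, mean=15, std=10)   # 400GB vs a 15GB norm
```

The 15GB-baseline user scores essentially 1.0 while the 380GB-baseline user scores only around 0.69, so just the former is surfaced as high risk; a 350GB Boolean rule would have flagged both.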

Buyer’s Checklist: Advanced Analytics

❑ Data-feed correlation (correlating multiple feeds and understanding relative risk across all entities)

❑ Per-entity coverage: user/account, machine/device, file/asset

❑ Unsupervised machine learning

❑ Probabilistic math-based rules for risk scoring

❑ Probabilistic math models included out-of-the-box:

❑ Time

❑ Volume

❑ Source

❑ Origin/destination

❑ Users

❑ Statistical peer groups

❑ Entity-to-event-to-entity-based correlation for risk scoring

❑ Multiple, out-of-the-box probabilistic math models for threat coverage (e.g., Bayesian models, clustering, neural networks, logistic regression) to detect:

❑ Account misuse

❑ Compromised account

❑ Command and Control (C2)

❑ Data staging

❑ Data exfiltration

❑ Insider threat

❑ Internal recon

❑ Infected host

❑ Lateral movement


Incident Response and Investigations

The goal of security analytics, when applied to incident response, is to recognize and surface a threat clearly and accurately, and to deliver actionable data about the threat to a security team so that it can be stopped before any data compromise. The following are requirements for a successful security analytics product.

Ease of Use

Security tools have been plagued by complex user interfaces that fail to show where the greatest risk lies and what the context of that risk is. Instead, dashboards show numbers of events or amounts of change over time, or focus on a single event, such as a risky IP address or possible malware signature. These tools also take days or weeks of training to master. Advanced analytics should easily show you the “who” involved, in terms of the entities: the user/account, the machine(s), the applications, and the files or digital assets. The results should also be prioritized so that a security manager can quickly recognize which incident carries the greatest risk, which poses the second-most risk, and so on.

Incident Context

Once the risk is prioritized, a simple click should bring the investigator to a screen showing the incident’s context. The context screen should make it immediately clear what entities are involved (user/account, machine, file(s), and even applications). It should also indicate what makes the incident so anomalous (sensitivity of data moved, volume of data moved, location of data accessed, type of executables involved, time of activity, and so on). By communicating this information in plain language and through easy-to-understand graphics, no special training should be required. As a result, a level 1 SOC analyst can effectively utilize the tool, recognize and validate the threat, and react properly to it.

Actionable Incident Forensics and Export

With detection moving from post-incident to mid-incident, the speed of forensic investigation and mitigating actions becomes critical. Security analytics products must provide event-level evidence that defines an attack, so the incident can be quickly validated and moved into an incident response process. The entire attack lifecycle should be simple to explore and export—from details of the actions on an end-user machine (applications, files, file activity, cut, copy, paste, sync, cloud) to how the attack is moving across the network to which IP or other sensitive data is under threat. Evidence to prove incidents—like collusion between insiders, multiple account compromises, infected machines, and improperly accessed repositories—must be presented at a useful level of detail for investigative teams.

For security teams with SIEM products, the value comes in linking event evidence with involved entities, as this is usually not provided in the SIEM. Incident data automatically sent from your security analytics tool to your SIEM will greatly transform incident response. For security teams that do not utilize a SIEM product, this would need to be provided in the security analytics tool as part of the incident response process. This gives the investigator the ability to move easily from incident alert and risk, to incident context, to detailed event forensics, to even deeper free-form investigation. Once evidence has been collected, the security analytics product should have the capability to export the evidence beyond the SIEM to case-management or forensic-evidence products.

Incident Response Process Integration

These capabilities are different from incident forensics for initial investigation and validation. Here, they are process-related, and their importance is still driven by the fact that attacks are surfaced while in progress, so timely response is required. Security teams at all companies should have an incident response playbook that defines the proper process for different types of incidents. That includes alerting the response team, alerting groups outside of security (legal, IT, HR, PR, etc.), and activating any manual response processes or automated downstream IT controls in DLP, access control, or other security tools. It is important that, as the initial source of the incident, the security analytics product includes integrated workflow and a REST API (Representational State Transfer Application Programming Interface), so that surfaced threat and incident data can integrate with incident response processes and tools.

Workflow systems should include automated alerts through text or email, scaling to dozens of users in enterprise incident response teams. Workflow systems should also be flexible enough to activate at a variety of stages in the incident process and be based on different variables of an incident. For example, a system should notify security teams or SOC investigators when a risk score exceeds 60, then alert the full response team when the risk reaches 80 and the user matches a name on a watchlist. Notifications should be refined based on activity connected to geography, a group of users, or type of file.
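That tiered policy can be sketched directly; the team names, the watchlist condition, and the inclusive thresholds are assumptions for illustration, and a real deployment would dispatch text or email through its own workflow engine:

```python
# Sketch of a tiered notification policy: SOC investigators at risk 60,
# full response team at 80 when the user is watch-listed. Team names and
# inclusive (>=) thresholds are assumptions for illustration.
def recipients(risk_score, user, watchlist):
    """Return which teams to notify for a given incident."""
    notify = []
    if risk_score >= 60:
        notify.append("soc-investigators")
    if risk_score >= 80 and user in watchlist:
        notify.append("full-response-team")
    return notify
```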

API integration is critical for exporting incident-event evidence into forensic investigation tools, as well as activating downstream automated IT controls. In some cases, security teams will want to automatically push SIEM data into the security analytics product, analyze it, and push it back out to the SIEM tool. In other cases, native outbound integrations (e.g., Phantom, OpenDXL, REST calls into JIRA and ServiceNow) enable automated responses wherever possible, because a security team has a matter of hours to react to an unfolding attack. Workflow should play a part in these processes, as forensic data collection and control activation should occur based on certain incidents being surfaced.
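A REST push of a surfaced incident can be as simple as the sketch below. The endpoint URL and JSON shape are hypothetical; real targets such as ServiceNow, JIRA, or Phantom each define their own APIs:

```python
# Hedged sketch of exporting a surfaced incident over REST. The endpoint
# URL and payload fields are hypothetical, not any product's actual API.
import json
import urllib.request

incident = {
    "entity": "jdoe",
    "risk_score": 85,
    "threat_type": "data-exfiltration",
    "evidence": ["endpoint:unusual.exe", "net:203.0.113.9"],
}

req = urllib.request.Request(
    "https://siem.example.internal/api/incidents",  # hypothetical endpoint
    data=json.dumps(incident).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would submit it; omitted here so the sketch
# stays runnable without a live endpoint.
```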

Buyer’s Checklist: Incident Response and Investigations

❑ Incident dashboards

❑ Incident prioritization by risk

❑ Incident context view

❑ Event-level incident forensics and free-form investigation

❑ Workflow incident communications

❑ Workflow with script/API calls

❑ SIEM integration

❑ REST API

Big Data Architecture and Deployment

When looking at the successful use of analytics in the business intelligence (BI) and retail web markets, it is important to understand that value comes from the large amounts of transaction and process data being analyzed by these systems. This is true big data. The same applies when talking about analytics in security. User and application events collected from a single IP repository can easily add up to billions of events each month. When you add multiple sources, those billions can accumulate over days or even hours. The architecture of security analytics products must support big data environments. Familiar names in the big data, data warehouse, and data management space are Cloudera, Hortonworks, and MapR. These companies provide predefined big data infrastructures to the BI teams at larger companies or to cloud and managed service providers. Security analytics products are supporting these same frameworks.

Larger companies are very likely to have big data or data lake infrastructures deployed in shared, private cloud architectures. Security managers at these organizations should explore utilizing these environments, if they exist, to leverage existing infrastructure and streamline the deployment process. There is a need in the market for big data strategies to show more value; perceptive security managers can leverage this need to gain resources and support for big data security analytics projects. Look for security analytics products that completely support big data standards. Beware of products that claim support but require additional traditional database/server hardware and software to operate.

Companies that do not have big data strategies should utilize hardware optimized for big data, such as equipment available from HP, Supermicro, and others, or cloud services offered by managed services firms or by the vendor providing the product. Appliance form factors and cloud-based deployments such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure offer advantages, including faster time to market, elimination of the sometimes long and painful hardware procurement process, and reduction in hardware management and overhead.

Deployment and Operational Flexibility

Another critical area of evaluation is deployment and operations of the security analytics product. As mentioned in the ease-of-use section, a major failing of many security tools has been an inability to manage the product in an operational environment. The result is a product that looks great in a demo but fails in the field. Unfortunately, this includes many GRC, DLP, and SIEM deployments. These failings are often due to the volume and complexity of the rules, policies, and thresholds required to meet a company’s operational needs.

Security analytics products should be evaluated in terms of deployment and operational flexibility so that security teams minimize the “surprises” found after product deployment. When considering deployment (after the big data architecture decision is made), consider factors such as rollout speed and complexity, data source integration, machine learning time, analytics and analytic adjustments, the rules required in a deployment, the ability to add new analytics to cover new threats, and the ability to start small and easily expand.

Speed of Rollout and Data Source Integration

You should validate that the connectors to SIEM or other data sources are relatively easy to deploy and can collect historical log data, so that machine learning can be accomplished in a shorter time frame. If system or application log data is among the initial data sources, ensure that logging is turned on. If the product includes endpoint agents, find out whether the agents must be deployed with the initial deployment or can be added later (to accommodate gold-image and resource requirements and schedules).


Rolling Out Machine Learning, Analytics, and Rules

Machine learning capabilities should not require customization of schemas or the setting of rules or thresholds. At most, some tuning may be required to adjust the weighting of normal baselines to account for sensitive data, privileged users, or risky processes. Good machine learning will automatically define these things over time. If the system does not have machine learning, then it is likely that structures and schemas will need to be defined through a consulting engagement every few years to establish normal baselines.

Like machine learning, probabilistic math-based analytics should not need customization or rule definitions. A good security analytics product will self-learn from the data. If the product requires rules or thresholds to set alerts or risk levels, the security team will need to clearly understand what limitations these rules create. Can the system detect previously unknown or undefined anomalies? How easy is it to manage, change, or update the rules or thresholds? Can new rules be added, and how easy is it to manage rules when there are hundreds of them? Finally, truly mature security analytics products may even allow in-house data science teams to configure and tune the actual mathematical models used by the solution, or to develop and deploy their own custom models.

Continued Operations

Once your security analytics product is running, it is critical to expand the system to cover more data sources, endpoints, and analytics for wider or emerging threat detection. Determine whether the product has the flexibility to expand across these areas, and what, if any, consulting is required to accomplish this. Finally, if custom schemas were required or rules were written to define anomalies, you should understand what the long-term limitations and management requirements are going to be, and weigh this against products that offer greater automation with machine learning and probabilistic math.

Performance and Scalability

It goes without saying that a security analytics solution must be designed to scale. For many companies, security analytics will be run on the company’s entire population across multiple data sources. This amounts to billions of transactions consuming terabytes of storage per month. That is no small task, and the solution you choose must be architected with this in mind, from the ground up. Further, solutions need to be able to quickly scale upward and process data in real time. This reduces the time between the moment a threat commences and its detection.

Buyer’s Checklist: Big Data Deployment and Ongoing Operations

❑ Big data, Hadoop-based architecture

❑ Adheres to big data open-source standards

❑ Scales to meet data source collection and event requirements

❑ Offers both onsite and third-party cloud deployment

❑ Offers vendor cloud deployment

❑ Easily integrates data sources (SIEM, directory, repository, etc .)

❑ Easily adds new data sources after deployment

❑ Machine learning is fully automated instead of being built at deployment and based on rules

❑ Anomaly detection analytics are fully automated and not built at deployment and based on rules

❑ Updates to purchased machine learning and analytic models are included in the license

❑ Existing analytic models can be easily configured or tuned, if desired

❑ New analytic models can be easily added over time to cover new threats

❑ Optional endpoint agent

❑ Endpoint agent deploys through standard management tools

❑ Endpoint agents can be deployed at any time

❑ Endpoint agent does not overload machine or network


Appendix: Consolidated Buyer’s Checklist

Score each item below for Vendor A, Vendor B, and Vendor C.

Data Sources

Data-feed correlation (correlating multiple feeds and understanding relative risk across all entities)

Data from SIEM products (Core)

Data from AD and other LDAP directory stores (Core)

Structure and role information

Authentication and access attempts

Data from network feeds or taps

Data from endpoint

Vendor-provided endpoint sensor

Vendor integrates with existing endpoint agents

Data from IP/data repositories (e.g. PLM, SCM, CMS, ECM, network shares)

Data directly from servers, enterprise applications, and access control lists (ACLs)

Data from structured data sources (e.g., SQL)

Data enrichment sources

Security tools alerts (DLP, NAC, GRC)

Threat-intelligence feeds

Watchlists (users, executables, etc.)

Advanced Analytics

Entity coverage: user/account, machine/device, file/asset

Semi-supervised and unsupervised machine learning

Probabilistic math-based rules for risk-scoring

Probabilistic math models included out-of-the-box

Time

Volume

Source

Origin/destination

Geography

Entity to event to entity-based correlation for risk-scoring

Multiple probabilistic math model usage for threat coverage (e.g., Bayesian models, clustering, neural networks, logistic regression, etc.) included out-of-the-box

Compromised account

Insider threat

Command and control (C2)

Lateral movement

Data staging

Data exfiltration

Endpoint-specific models

EDR

Insider threat

Continued on next page


Incident Response and Investigations

Incident dashboards

Incident prioritization by risk

Incident context view

Event-level incident forensics and free-form investigation

Workflow incident communications

Workflow with script/API calls

SIEM integration—direct two-way connector

REST API

Big Data Deployment and Ongoing Operations

Big data, Hadoop-based architecture

Adheres to big data open-source standards

Scales to meet data source collection and event requirements

Offers both onsite and third-party cloud deployment

Offers appliance form factor

Easily integrates data sources (SIEM, directory, repository, etc.)

Endpoint agent deploys through standard management tools

Endpoint agents can be deployed at any time

Endpoint agent does not overload machine or network

Easily adds new data sources after deployment

Machine learning is fully automated, instead of being built at deployment and based on rules

Updates to purchased machine learning and analytic models are included in the license

Existing analytic models can be easily configured or tuned, if desired

New analytic models can be easily added to cover new threats


164-000030-002 | I | 08/19 | © 2019 Micro Focus or one of its affiliates. Micro Focus and the Micro Focus logo, among others, are trademarks or registered trademarks of Micro Focus or its subsidiaries or affiliated companies in the United Kingdom, United States and other countries. All other marks are the property of their respective owners.

Contact us at: www.microfocus.com
