First coined by Forrester in 2010, the term ‘zero trust’ refers to a new approach to security that relies on continuously verifying the trustworthiness of every device, user and application in an enterprise.
Prior to this notion of zero trust, most security teams relied on a “trust but verify” approach that emphasized a strong defensive perimeter. This model assumes anything within the network perimeter (including an organization’s users, resources and applications) is trustworthy, so security teams grant access and privileges to those users and resources by default. In contrast, anything outside the perimeter had to be cleared before gaining access.
Where traditional security says, “trust but verify,” zero trust says, “never trust, always verify.” Zero trust security never really ‘clears’ anything. Instead, zero trust considers all resources to be external to an organization’s network, continuously verifying users, resources, devices and applications before granting only the minimum level of access required. Establishing a zero trust security program involves coordination between several IT components and requires a comprehensive approach.
How has the concept of zero trust changed over time?
Zero trust implementations have changed over time. Despite the catchy name, organizations don’t need to be zero trust absolutists – always verifying everything would be impractical, if not impossible.
Instead, zero trust evolved from a binary concept where nothing is inherently safe and everything needs to be verified to something much more nuanced and dynamic. Today, zero trust incorporates broader data sets, risk principles and dynamic risk-based policies to provide a firm foundation for making access decisions and performing continuous monitoring. Zero trust defense draws from a variety of sources including threat intelligence, network logs, endpoint data and other information to assess access requests and user behavior. In addition to Forrester, Gartner and NIST have recently published documents advocating zero trust and expanding on this broader, more dynamic approach.
Recently, interest in zero trust has spiked, driven by market trends that accelerated as a result of the global pandemic, including:
- Accelerated digital transformation (the adoption of new and emerging technology and solutions to modernize and accelerate business interactions with customers, employees and partners)
- Migration to cloud / SaaS
- Remote work
- Evaporation of VPN-protected trust zones (network perimeter) and the realization that firewalls are less useful for detecting and blocking attacks from the inside and cannot protect subjects outside of the enterprise perimeter
How is zero trust different from previous approaches to IT security?
Previously, in most corporate IT environments, trust was established mostly as a function of location. Users accessed corporate resources from a corporate-owned computer, from within a corporate campus. Being physically present on a corporate campus implied that a user had met the vetting and credentialing requirements to gain access to corporate IT resources, which typically resided in a local data center. The “Trusted Zone” was protected by perimeter technologies such as firewalls, intrusion detection/prevention systems and other defenses.
Over time, the campus IT perimeters were expanded to include remote and satellite offices, effectively expanding the Trusted Zone bubble through secure, private connections between locations. In the early 2000s, as new access methods such as VPN and WiFi began appearing, new technologies added authentication and access credentialing to preserve the relative integrity of the perimeter. Among these were two-factor authentication (2FA) tokens and the IEEE 802.1x standard for port-based Network Access Control (NAC).
The subsequent evolutions of cloud computing, bring-your-own-device and hypermobility changed everything. Organizations now depend on IT resources well beyond the bounds of a single Trusted Zone. Moreover, employees, partners and customers now require access to systems from any location, at any time, on any device. The resulting vulnerabilities and cracks in security ushered in a new era of hacking, when security breaches became commonplace. The perimeter of old is obsolete.
The erosion of perimeter security paved the way for zero trust. However, it’s notable that the concept wasn’t entirely new, even in 2010. While the name “zero trust” was novel and drew attention, the task of how to establish trustworthiness in the inherently untrustworthy world of the internet has been the topic of academic research for more than four decades. In fact, the founding of RSA, nearly four decades ago, was rooted in academic work performed in the late 1970s that established secure communications and transactions in untrusted space.
As the years turned into decades, and digital transformation took hold of business and society, approaches to trust continued to evolve.
Why do security teams need to consider zero trust now?
Zero trust has steadily grown more popular in recent years. However, the disruptions resulting from the COVID-19 pandemic have accelerated interest in how organizations can build resiliency after a major disruption.
Like in most other years, security and risk leaders entered the new decade with rather sophisticated plans for maturing their digital risk management practices. The initial outbreak of COVID-19, however, shifted the focus of security teams to more tactical needs, such as enabling remote workers, securing changes in operations to sustain business functions or to take advantage of new opportunities, re-assessing third-party and supply chain risks, accelerating onboarding and more. Budgets were slashed or frozen, and long lists of pending projects were first whittled down, then rapidly accelerated. Teams are now faced with securing new digital initiatives that don’t necessarily fit neatly into complex, incumbent security and risk regimes.
Zero trust offers an expedient, vetted approach for organizations struggling to keep up with the pace of digital transformation.
What technologies and infrastructure should organizations have in place to support zero trust?
In August 2020, NIST published NIST Special Publication 800-207: Zero Trust Architecture, which includes logical components of a zero trust architecture, possible design scenarios and threats. It also presents a general roadmap for organizations wishing to pursue zero trust principles.
The following describes each element of the architecture (as defined in NIST SP 800-207), with references to products and functionality in the RSA portfolio that align with zero trust architecture.
Policy Engine: This component is responsible for the ultimate decision to grant access to a resource for a given subject. The policy engine uses enterprise policy, along with input from external sources (e.g., CDM systems and the threat intelligence services described below), to feed a trust algorithm that grants, denies or revokes access to the resource. The policy engine is paired with the policy administrator component: the policy engine makes and logs the decision, and the policy administrator executes it.
RSA SecurID Access’s role- and attribute-based access, conditional access, and risk-based analytics are all fundamental components to the establishment of both a policy decision point and policy engine.
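The trust algorithm at the heart of the policy engine can be pictured as a function that folds enterprise policy and external signals into a single grant/deny decision. The sketch below is a minimal illustration, not RSA's or NIST's actual algorithm; the signal names, threshold and weighting are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    GRANT = "grant"
    DENY = "deny"

@dataclass
class AccessRequest:
    subject: str
    resource: str
    device_compliant: bool   # hypothetical signal, e.g., from a CDM system
    threat_flagged: bool     # hypothetical signal, e.g., from threat intelligence
    risk_score: float        # 0.0 (low risk) .. 1.0 (high risk), from analytics

def trust_algorithm(req: AccessRequest, risk_threshold: float = 0.7) -> Decision:
    """Combine enterprise policy with external signals into one decision."""
    # Hard policy: non-compliant devices and known-bad indicators are denied.
    if req.threat_flagged or not req.device_compliant:
        return Decision.DENY
    # Dynamic policy: deny (or step up authentication) above a risk threshold.
    if req.risk_score >= risk_threshold:
        return Decision.DENY
    return Decision.GRANT

# A compliant device with a low risk score is granted access.
decision = trust_algorithm(AccessRequest("alice", "payroll-app", True, False, 0.2))
```

Because the decision is recomputed per request rather than granted once at login, revoking access is simply a matter of the next evaluation returning a deny.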
Policy Administrator: This component is responsible for establishing and/or shutting down the communication path between a subject and a resource. It would generate any session-specific authentication token or credential used by a client to access an enterprise resource. It is closely tied to the policy engine and relies on its decision to ultimately allow or deny a session. Some implementations may treat the policy engine and policy administrator as a single service. The policy administrator communicates with the policy enforcement point when creating the communication path. This communication is done via the control plane.
RSA SecurID Access offers a range of authentication methods and user experiences (e.g., authentication choice, BYOA) to administer authentication and determine access when requested by the policy enforcement point.
Policy Enforcement Point:
This system is responsible for enabling, monitoring, and eventually terminating connections between a subject and an enterprise resource.
This is a single logical component in zero trust architecture but may be broken into two different components: the client (e.g., agent on user’s laptop) and resource side (e.g., gateway component in front of resource that controls access) or a single portal component that acts as a gatekeeper for communication paths. Beyond the policy enforcement point is the implicit trust zone hosting the enterprise resource.
RSA Products can both determine policy decisions enforced by partner policy enforcement points (VPNs, websites, applications, etc.) and directly enforce policy at endpoint devices.
RSA SecurID Access, acting in a policy decision capacity, works with a myriad of partner devices (desktops, servers, virtual machines, web servers, portals, network devices, applications etc.) to authenticate users and determine access privileges.
RSA NetWitness Endpoint can isolate and quarantine endpoints. It can block specific processes on endpoint devices, either manually or through risk- or rule-based enforcement. As new threats are uncovered, RSA NetWitness Endpoint agents can blacklist files to ensure rapid defense from attacks such as ransomware. Importantly, it extends fully to devices both on corporate networks and ‘roaming’ or off-network devices.
Data Access Policies:
These are the attributes, rules, and policies about access to enterprise resources. This set of rules could be encoded in or dynamically generated by the policy engine. These policies are the starting point for authorizing access to a resource as they provide the basic access privileges for accounts and applications in the enterprise. These policies should be based on the defined mission roles and needs of the organization.
RSA Identity Governance and Lifecycle is an ideal starting point for authorizing access to a resource with a clear focus on governance, visibility across structured and unstructured data, and analytics and intelligence to ensure principles of least privilege can be applied.
ID Management System:
This system is responsible for creating, storing, and managing enterprise user accounts and identity records (e.g., lightweight directory access protocol (LDAP) server). This system contains the necessary user information (e.g., name, email address, certificates) and other enterprise characteristics such as role, access attributes, and assigned assets. This system often utilizes other systems (such as a PKI) for artifacts associated with user accounts. This system may be part of a larger federated community and may include non-enterprise employees or links to non-enterprise assets for collaboration.
RSA SecurID Suite integrates with all prominent identity management systems (e.g., Microsoft AD, Azure AD, AWS AD) to connect identities with the policies, administration, and methods necessary for a zero trust architecture to function.
Security Information and Event Management (SIEM) System:
This collects security-centric information for later analysis. This data is then used to refine policies and warn of possible attacks against enterprise assets.
RSA NetWitness Platform is an evolved SIEM encompassing capabilities beyond that of a traditional log-based SIEM. It retains both log and network information in the form of native logs, full packets or as metadata. Pre-populated compliance reports help to determine alignment with specific security frameworks.
Extensive data analytics engines (including UEBA) provide sophisticated capabilities to improve security operations beyond those of traditional SIEM workflows. Data visualization enables security analysts to quickly identify risks, take action to remediate issues, and collaborate across the security organization.
Threat Intelligence Feeds:
These provide information from internal or external sources that help the policy engine make access decisions. They could be multiple services that take data from internal and/or multiple external sources and provide information about newly discovered attacks or vulnerabilities. This also includes blacklists, newly identified malware, and reported attacks against other assets, which the policy engine will want to deny access to from enterprise assets.
RSA NetWitness Platform incorporates threat intelligence through RSA Live. RSA NetWitness Orchestrator incorporates subscription-based threat feeds, open source threat feeds and crowdsourced threat intelligence, and maintains a library of historical information derived from investigating previous risks. Open source, external and crowdsourced threat intelligence give organizations visibility into new and emerging threats. Threat intelligence can trigger automated actions, which can include access decisions.
RSA SecurID Access leverages signals—internal and external—to increase assurance (positive signals) and identify threats (negative signals). For example, internal signals like user history, behavioral analytics, IP address, network, and location can be factors to determine risk-based authentication and access decisions. And external signals like threat analytics from RSA NetWitness Platform and other extended detection and response (XDR) and enterprise mobility management (EMM) systems can also be incorporated.
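The interplay of positive and negative signals described above can be sketched as a simple scoring function feeding a step-up decision. The signal names, weights and thresholds here are illustrative assumptions, not RSA's actual risk model.

```python
def risk_score(signals: dict) -> float:
    """Blend positive and negative signals into a 0..1 risk score.

    Positive signals (familiar context) lower risk; negative signals
    (threat indicators, anomalies) raise it. Weights are hypothetical.
    """
    score = 0.5  # neutral baseline for an unknown context
    if signals.get("known_device"):
        score -= 0.2  # positive signal: device seen before
    if signals.get("usual_location"):
        score -= 0.2  # positive signal: consistent with user history
    if signals.get("threat_intel_match"):
        score += 0.4  # negative signal: IP/indicator matched a threat feed
    if signals.get("impossible_travel"):
        score += 0.3  # negative signal: behavioral-analytics anomaly
    return min(max(score, 0.0), 1.0)

def required_assurance(score: float) -> str:
    """Map the risk score to an access outcome with proportionate friction."""
    if score < 0.3:
        return "password"       # low risk: minimal friction
    if score < 0.7:
        return "push-approval"  # medium risk: step-up authentication
    return "deny"               # high risk: block outright

# A known device in a usual location scores low, so friction stays minimal.
outcome = required_assurance(risk_score({"known_device": True,
                                         "usual_location": True}))
```

The point of the mapping is proportionate friction: low-risk requests proceed smoothly, medium-risk requests trigger stronger authentication, and high-risk requests are simply denied.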
Network and System Activity Logs:
This enterprise system aggregates asset logs, network traffic, resource access actions, and other events that provide real-time (or near-real-time) feedback on the security posture of enterprise information systems.
RSA NetWitness Platform is designed and developed to leverage the intelligence found in logs, NetFlow, network packets and endpoint data to provide information to security organizations to identify, investigate, and resolve risky conditions and threats. It uses patented technology to collect data and process it as metadata faster than other SIEMs on the market, while also retaining raw data as needed in parallel. This process makes information readily available when investigating incidents instead of waiting for the solution to process data from a large data store.