In this series, Rick Whittington will explore the benefits and potential risks of the cloud for organizations. Rick will draw on knowledge he has gained as a reformed Network Engineer with experience across multiple disciplines: network, security, global networks, data center, campus networks, and cloud networks. He will also draw on his years of experience improving enterprise infrastructure, processes, and teams. Rick has brought his cloud expertise to organizations such as Capital One and Charles Schwab, and he currently serves as a Sr. Security Engineer for a large data analytics company.
The Many Definitions of Cloud Workloads
When asking the question “What is a cloud workload?”, the answer may differ depending on whether you ask a systems administrator or a developer. Cloud workloads encompass basic applications, virtual machines (VMs), databases, containers, serverless infrastructure, and complex applications running within a cloud environment. Each of these has its own potential security risks and remediations; however, this series will focus on quick tips to secure cloud VM workloads.
Cloud VMs are similar to any physical server or virtual machine you would traditionally find in your current data center, so why does running a server in the cloud strike a particular note of fear in many organizations? In short, the answer most often echoed is a lack of control or a lack of understanding, usually followed by the potential regulatory restrictions organizations face. Yet searching for the phrase “how to secure AWS EC2s” will often yield results similar to “how to secure a server”. While the underlying infrastructure is a multi-tenant environment, the virtual server environment is often secured much as you would protect any traditional server environment.
How Are Cloud VMs Different?
Cloud VMs are made up of the same primary attributes you would find in any server in a data center, such as:
- Operating Systems
- CPU and Memory
- Storage
- Network Interfaces
Both AWS and Azure provide a minimal level of customization compared to VM hypervisors, where every aspect of a VM can be customized. The main factors that differentiate cloud VMs are:
- Identity and Access Management Role(s)
- Resource Limitations
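To illustrate the first differentiator, a cloud VM’s identity is typically granted through a role the instance assumes rather than credentials stored on disk. The sketch below shows the standard trust policy that allows the AWS EC2 service to assume a role on an instance’s behalf; the surrounding Python is purely illustrative.

```python
import json

# Trust policy allowing the EC2 service to assume a role on behalf of a VM,
# so long-lived credentials never need to live on the instance's disk.
ec2_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(ec2_trust_policy, indent=2))
```

Attaching a role with this trust policy to an instance is what makes cloud-native IAM possible for VMs, with no on-disk secrets to rotate or leak.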
While automation may seem out of place here, since it is used in many traditional data centers, the difference is the level of integration with additional services, such as auto-scaling and load balancing, and the ease with which cloud provider automation can be consumed. Automation in traditional data centers rarely comes close to the scale of automation seen in cloud providers.
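As a minimal sketch of the kind of scaling decision cloud providers automate, the hypothetical function below adjusts a pool’s instance count based on average CPU. Real auto-scaling policies are configured within the provider itself; the thresholds and bounds here are assumptions for illustration.

```python
def desired_capacity(current: int, avg_cpu: float,
                     scale_out_at: float = 70.0, scale_in_at: float = 30.0,
                     minimum: int = 2, maximum: int = 10) -> int:
    """Return a new instance count for a pool based on average CPU.

    Mimics a simple step-scaling policy: add an instance under load,
    remove one when idle, and always stay within the pool's bounds.
    """
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)
    return current
```

For example, `desired_capacity(4, 85.0)` scales the pool out to 5 instances, while `desired_capacity(2, 10.0)` holds at the minimum of 2.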
So How Can I Secure Cloud VM Workloads?
Without diving into application specifics, regulatory requirements, or in-depth discussions of operating system security, the following subjects are great starting points on the path to improving your cloud VM security posture.
Regardless of your organization’s industry, identifying a framework and maintaining maturity in the cloud is fundamental. Frameworks such as NIST, CISA, CSA, MITRE ATT&CK, or ISO will help identify gaps while driving requirements by providing a checklist of security items to accomplish within an architecture. Due to the interconnectivity of, and reliance on, multiple services within a cloud provider, an Identity and Access Management role overly provisioned for a server can grant elevated administrative permissions to all cloud administrative APIs. Using appropriate frameworks can help you catch misconfigurations early when reviewing architectures or assist in identifying overly broad permissions with detection automation. These frameworks should already be in use within most organizations’ technology departments when discussing traditional data center designs and architecture, and there is no reason not to continue using them within cloud environments. It would even be advantageous to leverage multiple frameworks as they evolve to include cloud-related services. Frameworks may call out some of the following capabilities:
- Administrative Access
- Encryption Standards
- Agent Requirements
- Upgrade Requirements
- OS Hardening
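As a minimal sketch of the detection automation mentioned above, the hypothetical checker below flags IAM policy statements that grant wildcard actions or resources. Real tooling (for example, AWS IAM Access Analyzer) is far more thorough; this only shows how approachable such checks are.

```python
def overly_permissive(policy: dict) -> list[str]:
    """Return findings for Allow statements with wildcard actions or resources."""
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        # "*" grants every API; "service:*" grants every API in a service.
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(f"statement {i}: wildcard action")
        if "*" in resources:
            findings.append(f"statement {i}: wildcard resource")
    return findings


risky = {"Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}
scoped = {"Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                         "Resource": "arn:aws:s3:::my-bucket/*"}]}
```

Running the checker against `risky` produces two findings, while the tightly scoped policy produces none, which is the property a framework-driven review is trying to enforce.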
Much like in traditional data centers, cloud VMs rely heavily on network services to communicate with clients, servers, or other services. Securing the network is slightly different in that common segmentation mechanisms such as VLANs are handled differently in cloud environments. In addition, workloads within a data center are typically reached over private management and communication channels, limiting external exposure of services. So, are these traditionally solved problems still solved within a cloud environment?
Segmentation can be accomplished in several ways within a cloud environment. Using a multi-account strategy ensures maximum segmentation from all services and requires IAM permissions to allow cross-account access. There are effectively no limits on account creation, and a multi-account strategy can be per environment or even per application. However, note that multiple accounts increase the overall operational burden, as each account has to be maintained. Alternatively, segmentation within accounts can still be achieved using VPCs; however, the granularity of access controls to services such as S3 will rely heavily on IAM permissions and bucket policies. Communication between VMs in different VPCs will only occur if the VPCs have been peered, routing is configured, and firewall rules permit the traffic. Finally, if segmentation by VPC or a multi-account strategy is not possible, VMs can still be segmented using firewall rules at a minimum. To reduce the burden of firewall rule management, both Azure and AWS now support centralized management of firewall rules.
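To illustrate firewall-rule hygiene, the sketch below flags ingress rules that expose common management ports (SSH and RDP) to the entire internet. The rule format is a simplified assumption for illustration, not a provider API.

```python
import ipaddress

MANAGEMENT_PORTS = {22, 3389}  # SSH and RDP

def exposes_management(rule: dict) -> bool:
    """True if an ingress rule opens a management port to 0.0.0.0/0."""
    wide_open = ipaddress.ip_network(rule["cidr"]).prefixlen == 0
    hits_mgmt = any(rule["from_port"] <= p <= rule["to_port"]
                    for p in MANAGEMENT_PORTS)
    return wide_open and hits_mgmt
```

A rule like `{"cidr": "0.0.0.0/0", "from_port": 22, "to_port": 22}` would be flagged, while the same port restricted to `10.0.0.0/8` would pass, which is exactly the distinction between exposed and segmented management access.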
Server infrastructure in a traditional data center is generally not exposed to external parties. However, a common cloud architecture assigns public IP addresses to all instances for generic management access. What architectures can be used to limit public access within cloud environments?
If the private connectivity options described below are not viable, relying on firewall rules to limit inbound communication to specific ports may be the only option. For example, management access should be restricted to specific network ranges. If both management and client traffic are served on the same port, a web application firewall may be needed to block unauthorized access. With this option, management traffic still traverses the public internet, and all VMs require public addressing from the cloud provider. Alternative architectures to limit external exposure include bastion solutions or VPN gateway VMs from traditional firewall vendors. By using one of these solutions, it is possible to restrict access to the overall VM infrastructure.
Both Azure and AWS allow VPN configuration to enable private access between a traditional data center and the cloud environment. A VPN ensures that all communication between on-premises operators and clients is encrypted and facilitates an architecture in which public access to the cloud VM infrastructure cannot accidentally occur. This does require a minimal level of network configuration to allow appropriate routing between on-premises infrastructure and the cloud environment, and there is often a cost increase associated with data transfer over the VPN. However, the benefit of separating management traffic from externally serviced traffic can outweigh the cost and may be a requirement for compliance.
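One routing pitfall worth checking before connecting a VPN: overlapping address ranges between on-premises networks and cloud VPCs make routing ambiguous. A minimal sketch using Python’s standard `ipaddress` module, with hypothetical ranges:

```python
import ipaddress

def routing_conflicts(on_prem_cidrs: list[str],
                      vpc_cidrs: list[str]) -> list[tuple[str, str]]:
    """Return (on-prem, VPC) CIDR pairs that overlap.

    Overlapping ranges would make VPN routing ambiguous, so any result
    here should block the VPN rollout until addressing is fixed.
    """
    conflicts = []
    for a in on_prem_cidrs:
        for b in vpc_cidrs:
            if ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b)):
                conflicts.append((a, b))
    return conflicts
```

For example, an on-premises `10.0.0.0/16` conflicts with a VPC using `10.0.128.0/17`, while `192.168.0.0/24` and `10.10.0.0/16` coexist cleanly.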
Both Azure and AWS also provide multiple network-level peering capabilities. While this architecture will cost more than a VPN, the overall deployment will feel similar to joining a new data center into the network.
Network monitoring is often overlooked in cloud environments until a security event has occurred. However, both AWS and Azure provide many network-level monitoring capabilities. For those familiar with NetFlow, flow logs provide the same level of visibility between VMs and help in unmasking dependencies. One capability long leveraged in traditional data centers is packet capture for analysis or intrusion detection; over the past several years, cloud providers have introduced this capability as well, further expanding the similarities and options available to network security teams. In most cases, cloud environments can now provide equal or greater visibility than traditional data centers at a lower cost.
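To show how approachable flow logs are, the sketch below parses a single VPC Flow Log record in the default version-2 field order. The sample record, account ID, and interface ID are fabricated for illustration.

```python
# Default field order for a version-2 VPC Flow Log record.
FLOW_LOG_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(record: str) -> dict:
    """Parse one space-separated flow log record into a dict."""
    parsed = dict(zip(FLOW_LOG_FIELDS, record.split()))
    for key in ("srcport", "dstport", "protocol", "packets", "bytes",
                "start", "end"):
        parsed[key] = int(parsed[key])
    return parsed


sample = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 "
          "49152 22 6 10 840 1620000000 1620000060 ACCEPT OK")
```

Parsing `sample` shows an accepted TCP connection to port 22 between two private addresses, the kind of dependency (or anomaly) flow logs make visible without any packet capture.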
Recalling the CIA (Confidentiality, Integrity, and Availability) Triad, availability is often treated as synonymous with business continuity. However, availability should also be considered in terms of service uptime; after all, if a service is down, end-users and clients are impacted. Cloud providers offer many automation capabilities to minimize the impact on uptime.
However, to fully maximize automation within cloud environments, new architectures for applications and server infrastructure should be leveraged. In traditional data centers, servers are often treated as family members that need care and feeding; this should not be the norm in cloud environments. When using solutions like cloud load balancers, servers can be added to or removed from a pool based on many factors, including a VM returning errors to clients. In a traditional data center, engineers would remove the server from load balancing and troubleshoot the issue; in a cloud environment, automation would instead delete the server and provision a new one in its place. Automation must cover not only the provisioning of the server but its configuration as well. This places a burden on teams to ensure that the application can be automated and is documented appropriately to accomplish that automation.
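The replace-rather-than-repair model above can be sketched as a simple reconciliation step: terminate unhealthy instances and launch replacements to reach the target size. All names and the health-check format here are illustrative.

```python
def reconcile_pool(health: dict[str, bool],
                   target_size: int) -> tuple[list[str], int]:
    """Return (instance IDs to terminate, number of replacements to launch).

    Unhealthy instances are not repaired in place; they are terminated
    and replaced, so the pool converges back to its target size.
    """
    to_terminate = [iid for iid, healthy in health.items() if not healthy]
    healthy_count = len(health) - len(to_terminate)
    to_launch = max(target_size - healthy_count, 0)
    return to_terminate, to_launch
```

With a pool of three instances where `i-bbb` is failing health checks and a target size of three, the reconciliation terminates `i-bbb` and launches one replacement; no engineer logs in to troubleshoot the failing VM.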
While this is not an all-encompassing guide to securing workloads in the cloud, the hope is that it provides some stepping stones toward seeing that server infrastructure in the cloud can be secured, along with methods to accomplish it.