Let’s continue with our blog series on NSX multitenancy. This is Part 2 where we will cover the creation and configuration of projects including RBAC policies, applying quotas, creating networking and security objects, sharing resources to projects, and applying route filtering for projects.
If you missed Part 1, where we discussed the multitenancy models, please check it out below:
Part 1 : https://vxplanet.com/2023/10/24/nsx-multitenancy-part-1-introduction-multitenancy-models/
Let’s get started and configure our first NSX project (tenant).
Background and current environment
We have a team called “Application Security” (App_Sec) who require a dedicated NSX tenant for their workloads with RBAC policies implemented for tenant administration and operations. As such, we have defined the necessary groups in LDAP (Active Directory) which will be used in NSX project for RBAC. The group and role details are as below. The groups for VPC are for Part 3.
We have already integrated NSX manager with Active Directory and assigned the Enterprise Admin role for the AD Group “NSX_Enterprise_Admins”.
We have two compute clusters “VxDC01-C01” and “VxDC01-C02” prepared for NSX. Both clusters are on separate VDSes and managed with separate transport node profiles in NSX. VxDC01-C01 is the primary compute cluster for the App_Sec team.
We have a single Provider edge cluster with two edge nodes. This edge cluster will be shared by the provider T0 Gateway as well as projects. Optionally, we could dedicate separate edge clusters for projects. We will discuss more about this in Part 5.
NSX Projects currently support only edge clusters configured on the “default” overlay transport zone. Custom non-default overlay transport zones are not supported. As such our compute transport nodes and edge nodes are configured with the transport zone named “nsx-overlay-transportzone” (default).
And finally, we have our T0 Provider Gateway configured on the edge cluster with the necessary BGP peering and route redistribution enabled.
Creating NSX Projects
Projects are created from the default space by the Enterprise Administrator. Let’s log in as hari@vxplanet.int, who is a member of “NSX_Enterprise_Admins”, and navigate to the Manage Projects menu.
We will add the Provider T0 Gateway and the Provider edge cluster we configured previously. We could add more than one T0 Gateway and edge cluster based on tenant requirements.
As stated previously, this edge cluster is used to support the tenant’s stateful services; it can be either the Provider edge cluster or a dedicated edge cluster for the project.
The External IPv4 Block is used for public subnets when VPCs are configured within the project. We will discuss this in Part 3.
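For readers who prefer automation over the UI, the same project can be created against the NSX Policy API. The sketch below builds the request body for a `PATCH /policy/api/v1/orgs/default/projects/Project_AppSec`; the T0, edge cluster and IP block paths are placeholders from our lab, and field names should be verified against the API guide for your NSX release.

```python
# Illustrative sketch: request body for creating the project via the NSX
# Policy API. All object paths below are lab-specific placeholders.
# PATCH /policy/api/v1/orgs/default/projects/Project_AppSec
project_body = {
    "display_name": "Project_AppSec",
    # Provider T0 gateway(s) assigned to the project
    "tier_0s": ["/infra/tier-0s/T0-Provider"],
    # Edge cluster(s) used for the project's stateful services
    "site_infos": [{
        "edge_cluster_paths": [
            "/infra/sites/default/enforcement-points/default/edge-clusters/EC-Provider"
        ],
    }],
    # External IPv4 block for public VPC subnets (covered in Part 3)
    "external_ipv4_blocks": ["/infra/ip-blocks/AppSec-External-Block"],
}
```

More than one entry can be appended to `tier_0s` or `edge_cluster_paths` when the tenant needs additional gateways or clusters.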
Assigning RBAC policies to projects
Next, we will assign the AD Group “AppSec_Project_Admins” with the Project Admin role in the tenant. This is done by the Enterprise Administrator.
We will now log in as alec@vxplanet.int, who is a member of “AppSec_Project_Admins”. Note that he won’t have access to the default view and can manage networking and security objects within Project_AppSec only.
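The same role assignment can be expressed via the role-bindings API (`POST /policy/api/v1/aaa/role-bindings`). The `roles_for_paths` scoping shown here, which restricts the role to the project path, is our reading of the schema; confirm the exact field names and role identifier against your NSX release’s API guide.

```python
# Illustrative sketch: bind the AD group "AppSec_Project_Admins" to the
# Project Admin role, scoped to Project_AppSec only (not the default space).
# POST /policy/api/v1/aaa/role-bindings
role_binding = {
    "name": "AppSec_Project_Admins",
    "type": "remote_group",            # an LDAP/AD group, not a local user
    "identity_source_type": "LDAP",
    "roles_for_paths": [{
        # Scope the role to the project path so the group cannot
        # manage objects in the default space
        "path": "/orgs/default/projects/Project_AppSec",
        "roles": [{"role": "project_admin"}],
    }],
}
```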
Assigning Quotas to projects
Quotas are assigned to projects from the default view by the Enterprise Administrator. Let’s log back in as hari@vxplanet.int and apply the below limits to Project_AppSec.
Maximum T1 Gateways – 10
Maximum segments – 30
Maximum VPCs – 5
Maximum VPC Subnets – 30
Enterprise Admins and Project Admins can monitor the quota status and its usage.
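Quotas are modeled as object-count constraints in the Policy API. The sketch below shows what one such constraint body might look like for the T1 limit above; the endpoint and the exact constraint-expression schema are assumptions on our part, so check the Constraint object documented for your NSX release before using this.

```python
# Illustrative sketch (schema assumed -- verify against your NSX API guide):
# an object-count quota limiting Project_AppSec to 10 T1 gateways.
# PUT /policy/api/v1/orgs/default/projects/Project_AppSec/infra/constraints/quota-t1
quota_t1 = {
    "display_name": "quota-t1",
    "message": "Maximum T1 gateways reached for Project_AppSec",
    "constraint_expression": {
        "resource_type": "EntityInstanceCountConstraintExpression",
        "operator": "<=",
        "count": 10,                                  # Maximum T1 Gateways
        "target": {"target_resource_type": "Tier1"},  # applies to T1 objects
    },
}
```

Similar constraints would be defined for the segment, VPC and VPC-subnet limits.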
Creating Project T1 Gateways
Once logged into the project as project admin or project user, we see a subset of the networking components from the default space. Project admins and users create T1 gateways and segments. T0 Gateways and edge clusters are shared to the project from the default space once they are assigned to the project by the Enterprise Administrator.
The following options are supported for the Project T1 gateways:
- DR-only T1 gateway : This doesn’t utilize the edge cluster assigned to the project
- Active – Standby T1 Gateway : This leverages the edge cluster assigned to the Project
- Active – Active T1 Gateway : This leverages the edge cluster of the A/A Provider T0 Gateway which is assigned to the project
T1 gateways cannot be shared across projects. Also, T1 gateways configured in the default space cannot be allocated to projects.
Note : If the edge cluster assigned to the project has failure domains configured, the project T1 gateway honors this and places the T1 SR component according to the configured failure domains. More on this in Part 5.
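An Active-Standby project T1 like the one above lives under the project’s own policy tree. The sketch below is a minimal request body; the gateway and T0 names are from our lab, and edge cluster placement is configured on the T1’s locale-services child object (not shown). For a DR-only T1, the edge cluster is simply never assigned.

```python
# Illustrative sketch: an Active-Standby project T1 gateway, created with
# PATCH /policy/api/v1/orgs/default/projects/Project_AppSec/infra/tier-1s/LR-T1-InsiteApp01
t1_body = {
    "display_name": "LR-T1-InsiteApp01",
    "ha_mode": "ACTIVE_STANDBY",
    # Link the project T1 to the Provider T0 assigned to the project
    "tier0_path": "/infra/tier-0s/T0-Provider",
    # Advertise connected segments upstream so the T0 can reach them
    "route_advertisement_types": ["TIER1_CONNECTED"],
}
```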
Creating Project Segments
Currently, only overlay segments are supported in projects. VLAN backed segments are not supported.
As stated earlier, only the default overlay transport zone is supported in a project and as such, the segments will be configured automatically on the system default overlay transport zone. In our case, this will be “nsx-overlay-transportzone”.
Note that this project segment is realized on all the compute clusters configured on the overlay transport zone, viz “VxDC01-C01” and “VxDC01-C02”.
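A project segment body is correspondingly simple. Note that the transport zone is deliberately omitted below, since project segments are realized on the system default overlay transport zone automatically; the segment name and gateway address are from our lab.

```python
# Illustrative sketch: a project overlay segment attached to the project T1.
# PATCH /policy/api/v1/orgs/default/projects/Project_AppSec/infra/segments/LS-InsiteApp01
segment_body = {
    "display_name": "LS-InsiteApp01",
    # Attach to the project T1 gateway (project-relative policy path)
    "connectivity_path": "/orgs/default/projects/Project_AppSec/infra/tier-1s/LR-T1-InsiteApp01",
    # Gateway IP and prefix for the workload subnet
    "subnets": [{"gateway_address": "192.168.20.1/24"}],
    # No transport_zone_path: project segments land on the default
    # overlay transport zone ("nsx-overlay-transportzone") automatically
}
```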
Resource sharing to Projects
The Enterprise Admin can share resources from the default space to selected projects, where they can be consumed by project users. Sharing resources from one project to another is not supported. Some examples of resources that can be shared from the default space are:
- Overlay segments (those created on the default overlay transport zone)
- Security groups
- Services and Context profiles
- Profiles (DHCP, IDS, DAD etc)
We can also share resources from a project to the VPCs within the same project, but we will discuss this in Part 3.
Let’s log in as hari@vxplanet.int (Enterprise Admin) and navigate to Default Space -> Inventory -> Resource Sharing.
We see two system-generated shares (make sure to check “Default Shares of Projects” in the bottom pane):
- Default Share : This has resources from the default space that are shared to all the projects and VPCs. The shared resources include services, context profiles, App IDs, segment profiles, IDS Signatures etc that can be used in read-only mode within the projects.
- Default-Project_AppSec (or Default-<Project-Name> in general) : This has resources that are shared explicitly to specific projects like the T0 Gateways, Edge clusters, IP Address Blocks etc.
The shared resources will appear in the project’s view with a label indicating the resource’s owner.
Now let’s create a custom resource share. We will share a DHCP relay profile and a segment from the default space to Project_AppSec.
- DHCP Relay profile will enable workload VMs in Project_AppSec to get IP address information from the corporate DHCP server outside of NSX.
- The shared segment will provide a transit path for the Project_AppSec VMs to access the databases hosted in a security zone in the default space.
We do this from the “Resource Sharing” tab under “Inventory” in the default space.
Right now, we will share only with the project and not with the VPCs within it. We will cover resource sharing to VPCs in Part 3.
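In Policy API terms, a custom share is a Share object naming the consumers, with child SharedResource objects listing the shared paths. The sketch below mirrors what we configured in the UI; the share ID and the DHCP relay profile ID are placeholders, so confirm the schema against your NSX release’s API guide.

```python
# Illustrative sketch: a custom resource share from the default space.
# PATCH /policy/api/v1/infra/shares/AppSec-Share
share_body = {
    "display_name": "AppSec-Share",
    # Share with the project only; VPC consumers are covered in Part 3
    "shared_with": ["/orgs/default/projects/Project_AppSec"],
}

# Child object listing what the share contains.
# PATCH /policy/api/v1/infra/shares/AppSec-Share/resources/AppSec-Share-Resources
shared_resources = {
    "resource_objects": [
        # Transit segment to the security zone
        {"resource_path": "/infra/segments/LS-Security_transit"},
        # Corporate DHCP relay profile (placeholder ID)
        {"resource_path": "/infra/dhcp-relay-configs/DHCP-Relay-Corp"},
    ],
}
```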
Let’s log back in as alec@vxplanet.int (Project Admin) and confirm that the shared resources are available.
Onboarding workload VMs to Projects
Now that we created the App_Sec project T1 gateway and the workload segment, we can onboard workload VMs and leverage the shared DHCP profile for dynamic IP assignment and the shared segment “LS-Security_transit” to securely connect to the databases hosted in the security zone.
Onboarding VMs to a project is as simple as attaching the VM to a segment that is owned by the project (e.g. LS-InsiteApp01). Note that attaching a VM to the shared segment “LS-Security_transit” will not make it a member of Project_AppSec, as that segment is not owned by Project_AppSec.
Let’s connect the two workload VMs – App01 and App02 to the segment “LS-InsiteApp01” and attach the shared DHCP relay profile to the segment.
Both VMs have successfully received IP configuration through the DHCP relay profile. They are also added as members of the project Project_AppSec.
Now let’s leverage the shared segment “LS-Security_transit” to provide a transit path for the VMs to access the databases hosted in the security zone (100.98.11.2) in the default space. We have a few options to do this:
- Attach the VMs to the shared segment “LS-Security_transit” -> this will not make the VMs members of Project_AppSec, as the shared segment is not owned by the project. Hence we will not consider this.
- Add a second NIC to the VMs and attach it to “LS-Security_transit” -> this adds complexity on the workload side, hence we will not consider this either.
- Attach “LS-Security_transit” as a service interface on the project T1 Gateway -> a simple approach with no changes to the workload VMs, and the one we will use.
Let’s create a new service interface on the project T1 gateway “LR-T1-InsiteApp01” and attach the shared segment “LS-Security_transit”.
Next we will add a static route on the Project T1 gateway to the secure zone 100.98.11.0/24 to use this service interface as the next hop.
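The two steps above map to a service interface on the T1’s locale-services and a static route object. The interface IP 192.168.50.2/24 is an assumption from our lab addressing; the next hop 192.168.50.1 is the default-space side of the transit segment.

```python
# Illustrative sketch: service interface on the project T1 attached to the
# shared transit segment.
# PATCH /policy/api/v1/orgs/default/projects/Project_AppSec/infra/tier-1s/
#       LR-T1-InsiteApp01/locale-services/default/interfaces/svc-transit
service_interface = {
    "display_name": "svc-transit",
    "segment_path": "/infra/segments/LS-Security_transit",
    # Interface IP on the transit segment (assumed lab addressing)
    "subnets": [{"ip_addresses": ["192.168.50.2"], "prefix_len": 24}],
}

# Static route to the secure zone via the service interface's next hop.
# PATCH /policy/api/v1/orgs/default/projects/Project_AppSec/infra/tier-1s/
#       LR-T1-InsiteApp01/static-routes/to-secure-zone
static_route = {
    "display_name": "to-secure-zone",
    "network": "100.98.11.0/24",
    "next_hops": [{"ip_address": "192.168.50.1", "admin_distance": 1}],
}
```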
Now let’s login to one of the project VMs and confirm that we have the correct route to the secure zone.
Success!!! Notice that, on the Project T1 gateway, traffic to the secure zone has been routed to the next hop 192.168.50.1, which is in the default space (as expected 😊 )
Distributed Firewall in Projects
Whenever an NSX project is created, the system creates a default security group for the project. The group has a naming format of “ORG-default-PROJECT-<Project_name>-default”. This group represents the entire project and has all the segments and VMs of the project. This default group is available both within the project itself as well as on the default space.
The system also auto-creates the below DFW policies and rules within each project. With the default policies, each workload VM can only communicate with other workload VMs in the same project. Inbound access to and outbound access from the projects are not allowed, but we could write additional policies based on the project requirements.
Let’s add a DFW rule to allow project workloads to access external networks.
Note that the default Drop – any – any rule at the bottom of the project DFW category takes effect only within the project, not across projects or the default space.
As the default project security group is available in the default space, the Enterprise Administrator can write policies there to control project traffic. Anything written in the default space overrides the project-level rules for the respective DFW category. An Enterprise Admin in the default space can also lock down an entire project if an emergency arises.
For example, let’s write a DFW policy in the default space to block outbound access from Project_AppSec. This will override the rule we created earlier within the project.
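Such a default-space override policy could be sketched as below, using the default project group following the “ORG-default-PROJECT-&lt;Project_name&gt;-default” naming described above. The policy ID and group path are placeholders from our lab.

```python
# Illustrative sketch: default-space DFW policy blocking outbound traffic
# from all Project_AppSec workloads, via the project's default group.
# PATCH /policy/api/v1/infra/domains/default/security-policies/Block-AppSec-Outbound
policy_body = {
    "display_name": "Block-AppSec-Outbound",
    "category": "Application",
    "rules": [{
        "display_name": "deny-appsec-outbound",
        # Default project group representing all Project_AppSec segments/VMs
        "source_groups": [
            "/infra/domains/default/groups/ORG-default-PROJECT-Project_AppSec-default"
        ],
        "destination_groups": ["ANY"],
        "services": ["ANY"],
        "scope": ["ANY"],
        "action": "DROP",
        "direction": "OUT",   # outbound from the project workloads
    }],
}
```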
Let’s review the rules applied to the App01 VM. We see that rule 1044 (applied in the default space) overrides rule 1043 (applied in the project context).
Project Route Advertisement Control
The Enterprise Administrator in the default space can implement route filtering to prevent projects from advertising unapproved subnets to the Provider T0 gateway. This is currently done via API.
We currently have a single subnet 192.168.20.0/24 in Project_AppSec. Let’s implement route advertisement control so that Project_AppSec cannot advertise other subnets to the Provider T0 gateway. They can still use those unapproved subnets within the project as private subnets.
The first step is to create a prefix list that has the whitelisted subnets for Project_AppSec.
We then associate this prefix-list to a project route filter that is mapped to the project Project_AppSec.
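Since this workflow is API-only, the two steps could be sketched as follows. The prefix list is a standard T0 PrefixList object; the object that binds it to a project varies by NSX release, so the second body below is explicitly an assumption, and you should consult the NSX API guide for the exact project route-filter resource in your version.

```python
# Step 1 -- illustrative prefix list on the Provider T0 with the
# whitelisted Project_AppSec subnet.
# PATCH /policy/api/v1/infra/tier-0s/T0-Provider/prefix-lists/AppSec-Allowed
prefix_list = {
    "display_name": "AppSec-Allowed",
    "prefixes": [
        {"network": "192.168.20.0/24", "action": "PERMIT"},  # whitelisted subnet
        {"network": "ANY", "action": "DENY"},                # reject everything else
    ],
}

# Step 2 -- assumed binding object (schema hypothetical): map the prefix
# list to Project_AppSec so its T1 advertisements toward T0-Provider
# are filtered.
project_route_filter = {
    "prefix_list_path": "/infra/tier-0s/T0-Provider/prefix-lists/AppSec-Allowed",
    "project_path": "/orgs/default/projects/Project_AppSec",
}
```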
Now let’s create a test segment called “LS-Private” and check the status of route advertisement from the project T1 gateway.
We see that only the whitelisted subnet 192.168.20.0/24 is advertised upstream from the project T1 gateway and the new network 192.168.65.0/24 is rejected.
Now let’s wrap up!!! This has been a lengthy article, and I hope you are still there 😊
We will meet again in Part 3 to discuss NSX Virtual Private Clouds (VPCs). See you shortly!
I hope this article was informative. Thanks for reading.
Continue reading? Here are the other parts of this series:
Part 1 – Introduction & Multitenancy Models:
https://vxplanet.com/2023/10/24/nsx-multitenancy-part-1-introduction-multitenancy-models/
Part 3 – Virtual Private Clouds (VPCs):
https://vxplanet.com/2023/11/05/nsx-multitenancy-part-3-virtual-private-clouds-vpcs/
Part 4 – Stateful Active-Active Gateways in Projects:
https://vxplanet.com/2023/11/07/nsx-multitenancy-part-4-stateful-active-active-gateways-in-projects/
Part 5 – Edge Cluster Considerations and Failure Domains:
https://vxplanet.com/2024/01/23/nsx-multitenancy-part-5-edge-cluster-considerations-and-failure-domains/
Part 6 – Integration with NSX Advanced Load Balancer:
https://vxplanet.com/2024/01/29/nsx-multitenancy-part-6-integration-with-nsx-advanced-load-balancer/