Released in HCX 4.8, Multi-Mesh is a feature that allows multiple IX service meshes to be deployed into a source cluster, enabling a level of migration concurrency that was previously unavailable.
Traditionally, only a single IX service mesh could be created per source cluster <> target cluster pair as shown below:
Single Mesh

A single IX service mesh supports up to around 2 Gbps (0.25 GB/s, 900 GB/h, or 21,600 GB/day).
When migrating petabytes of data, this throughput ceiling can become a real bottleneck for customers.
Multi-Mesh allows for more than one IX mesh per source cluster <> target cluster pair as shown below:
Multi-Mesh

By scaling out the IX service meshes, you also scale out your throughput. Let's look at a quick calculation:

3 service meshes x 900 GB/h per mesh = 2,700 GB/h

So, we can now reach nearly 3 TB per hour of throughput with a simple three-service-mesh deployment, provided there are no other network constraints to consider.
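As a rough planner, here is a minimal Python sketch of that arithmetic. The 900 GB/h per-mesh figure is the best-case ceiling quoted above; real-world throughput depends on WAN bandwidth, latency, and storage performance, so treat the result as a floor on migration time.

```python
PER_MESH_GB_PER_HOUR = 900  # ~2 Gbps per IX service mesh (best case)

def estimated_hours(total_gb: float, mesh_count: int) -> float:
    """Best-case migration time with data spread evenly across meshes."""
    return total_gb / (PER_MESH_GB_PER_HOUR * mesh_count)

print(estimated_hours(100_000, 3))  # 100 TB over 3 meshes ~= 37 hours
```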
You cannot use multiple meshes to migrate the same VM (i.e. different disks on different meshes). Instead, you need to distribute your VMs across the available service meshes, and this is done VM by VM.
If using Mobility Groups, you can set a different service mesh per mobility group.
Sample Use Cases:
- With Multi-Mesh, you can plan your migrations accordingly. A good use case is to split the larger VMs onto one mesh and the smaller VMs onto one or more other meshes, so the smaller VMs are not queuing behind the larger ones. You may be able to migrate hundreds of smaller VMs whilst migrating ten or so larger VMs (see the sketch after this list).
- Another similar use case is to split your larger VMs across meshes to reduce the overall migration time; worst case, the VMs can be reverse migrated more quickly on separate meshes.
- Another use case could be different switchover schedules for batches. You could have one or more batches, each batch being a mobility group. The VMs could sync in parallel or staggered; whilst one or more batches are syncing, another mesh could be switching over. This can be done with a single service mesh, but not with the same throughput.
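To illustrate the first use case, here is a hypothetical planning sketch that sends VMs over a size threshold to one mesh and round-robins the rest across the remaining meshes. The VM names, sizes, and mesh names are made up; in practice you would pull provisioned disk sizes from vCenter and apply the assignments per VM or per mobility group.

```python
from itertools import cycle

# Illustrative inventory: VM name -> provisioned size in GB (placeholder data)
vms = {"db-01": 4000, "db-02": 6000, "web-01": 80, "web-02": 120, "app-01": 200}

LARGE_GB = 1000                               # size threshold for "large" VMs
meshes = {"mesh-1": [], "mesh-2": [], "mesh-3": []}
small_meshes = cycle(["mesh-2", "mesh-3"])    # round-robin the smaller VMs

for name, size_gb in vms.items():
    target = "mesh-1" if size_gb >= LARGE_GB else next(small_meshes)
    meshes[target].append(name)

print(meshes)  # {'mesh-1': ['db-01', 'db-02'], 'mesh-2': [...], ...}
```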
Note – You cannot have more than one IX appliance on a single ESXi host, as you will hit Mobility Agent issues. If using Multi-Mesh you will have more than one IX appliance in a cluster, so you should create an anti-affinity DRS rule for the IX appliances.
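As a starting point, here is a minimal pyVmomi sketch that creates such an anti-affinity rule. The vCenter hostname, credentials, cluster name, and IX appliance VM names are all placeholders; adjust them for your environment.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.com",           # placeholder vCenter
                  user="administrator@vsphere.local",   # placeholder user
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the inventory for the first managed object matching name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "Cluster-01")  # placeholder
ix_vms = [find_by_name(vim.VirtualMachine, n)
          for n in ("IX-I1", "IX-I2", "IX-I3")]  # hypothetical IX VM names

# Keep the IX appliances on separate hosts with a mandatory anti-affinity rule
rule = vim.cluster.AntiAffinityRuleSpec(
    name="HCX-IX-Anti-Affinity", enabled=True, mandatory=True, vm=ix_vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```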
Consider placing the IX appliances on specific ESXi hosts; I like to manually place them on the first nodes in a cluster.
Consider overriding the DRS settings for the IX appliances and setting them to Manual so they do not move between hosts except in an HA event; DRS should move other VMs off a busy host instead.
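Continuing in the same vein, this pyVmomi sketch overrides the DRS automation level to Manual for a list of VMs; the cluster and VM objects could be located with the find_by_name helper from the previous sketch.

```python
from pyVmomi import vim

def set_drs_manual(cluster: vim.ClusterComputeResource,
                   vms: list) -> vim.Task:
    """Set a per-VM DRS automation override of Manual for the given VMs
    (e.g. the IX appliances), leaving the cluster-wide DRS setting alone."""
    overrides = [
        vim.cluster.DrsVmConfigSpec(
            operation="add",  # use "edit" if an override already exists
            info=vim.cluster.DrsVmConfigInfo(
                key=vm,
                enabled=True,
                behavior=vim.cluster.DrsConfigInfo.DrsBehavior.manual))
        for vm in vms]
    spec = vim.cluster.ConfigSpecEx(drsVmConfigSpec=overrides)
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```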