Building a single enterprise datacenter is easy. Building multiple datacenters, and trying to share workloads between them, is an animal of a totally different colour.
The goal is to separate services into availability zones (a term which itself has multiple definitions).
From a very high-level perspective, I looked at VMware NSX, Microsoft SDN, OpenStack, and ProxMox to see how they function.
VMware NSX
First up is VMware NSX. NSX comes in two flavours: a) NSX-V, better known as NSX for vSphere, which uses VxLAN as the encapsulation layer, and b) NSX-T, better known as NSX Transformers, which uses Geneve as the encapsulation layer. VMware appears to be no longer adding features to NSX-V, and is only performing bug fixes; the development effort is instead going into NSX-T, which ties into a broader range of platforms such as KVM, as well as ESXi/vSphere. Since NSX-V is, for all intents and purposes, being deprecated, I will focus on NSX-T. The key technology document supplied by VMware appears to be DOC-39405 - NSX-T 2.4 Multisite - which goes into great detail on how to build a multi-site NSX solution.
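For orientation, both encapsulations are just UDP tunnels carrying a 24-bit virtual network identifier; Geneve adds an extensible option header and uses UDP port 6081 instead of VxLAN's 4789. Here is a minimal sketch of what each looks like at the plain-Linux level using iproute2 (driven from Python, since that is where I live) - nothing NSX-specific here, and the interface names, VNI, peer address, and underlay NIC are all placeholders:

    import subprocess

    # VxLAN tunnel endpoint (the NSX-V style of encapsulation): VNI 5001 over UDP 4789.
    # 192.0.2.20 is a placeholder peer VTEP; eth0 is a placeholder underlay NIC.
    subprocess.run(["ip", "link", "add", "vxlan5001", "type", "vxlan",
                    "id", "5001", "remote", "192.0.2.20",
                    "dstport", "4789", "dev", "eth0"], check=True)

    # Geneve tunnel endpoint (the NSX-T style of encapsulation): same VNI, UDP 6081 by default.
    subprocess.run(["ip", "link", "add", "gnv5001", "type", "geneve",
                    "id", "5001", "remote", "192.0.2.20"], check=True)

Both commands need root, and in a real fabric this endpoint provisioning is what the NSX controller is doing for you behind the scenes.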
Based upon that document, it is difficult to design a solution with two semi-autonomous datacenters. Basically, the lack of 'local egress' on stretched VLANs, short of a massive, all-or-nothing scripted gateway migration, reduces the perceived flexibility of the solution. On top of that, if you want a controller instance at each site, licensing costs roughly double. And if you instead split the controller across sites, there are limits on WAN round-trip time, which gets into split-brain and resiliency problems. As another observation, VMware seems to be moving more toward MS PowerShell for automation (and leaving Python in the dust?).
One more item: the Brief History of VMware NSX article says "Active-active multi-site deployment is a joke and works almost as well as stretched data center fabric control plane - when you lose the inter-site link in an active-active setup, one of the sites shuts down".
With VMware no longer allowing kernel-level insertions, there are very few effective avenues for providing third-party routing/encapsulation engines. Cisco's ACI is said to have a plug-in for vSphere, and for insertion into each virtualization guest (which is a bit grotesque, I think). [2019/11/16 - Update - Cisco now programs directly against the VMware API for the things they need to do]
Microsoft Hyper-V
So... moving on to Microsoft Hyper-V with the Software Defined Networking (SDN) add-on. Most of the docs, as of this writing, appear to reference Windows Server 2016, even though Windows Server 2019 is 'out there'. And in those docs, the underlay/overlay does not appear to handle IPv6! In a similar manner to VMware, if two separate datacenters are required, a lot of SCCM and controller licensing is probably going to have to be doubled up. And I'm not really sure how two sites would connect or interoperate.
Another interesting fact is that the Microsoft SDN solution appears to use the Open vSwitch engine under the hood. But it appears as though they have disabled or hidden all of the interesting command-line and troubleshooting tools for Open vSwitch. More of the embrace, extend, and extinguish philosophy?
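For contrast, this is the sort of Open vSwitch visibility I expect to have on any Linux-based deployment (and which seems to be missing or hidden in the Microsoft packaging). A rough sketch only - it assumes a host with a standard OVS install and a bridge named br-int, which is just an example name:

    import subprocess

    # Standard Open vSwitch troubleshooting commands; "br-int" is an example bridge name.
    for cmd in (
        ["ovs-vsctl", "show"],                  # bridges, ports, and controllers
        ["ovs-ofctl", "dump-flows", "br-int"],  # the OpenFlow flow table
        ["ovs-appctl", "fdb/show", "br-int"],   # the MAC learning table
        ["ovs-dpctl", "dump-flows"],            # datapath (kernel) flows actually in use
    ):
        print("==>", " ".join(cmd))
        subprocess.run(cmd, check=False)

If those tools are walled off, troubleshooting the overlay becomes a support-ticket exercise rather than an engineering one.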
The documentation didn't provide any indication of how the Microsoft VxLAN encapsulation would interact with, say, Cisco Nexus switch VxLAN VTEP provisioning, which would allow third-party devices access to the various VxLAN domains.
And as a final observation, Microsoft SDN does use VxLAN encapsulation. Cisco is said to have an ACI shim that fits in there to help with the network-side provisioning aspects.
OpenStack
One open-source possibility, and the one I have been examining in more depth, is OpenStack. No licensing fees. Automated via Python. Runs VxLAN as the encapsulation layer. Knows how to work with Open vSwitch, and keeps the related engine diagnostic commands visible. It can also integrate with Free Range Routing for the BGP-EVPN side of things. Cisco has a tool called Cisco Nexus Fabric OpenStack Enabler for integrating OpenStack into a Cisco Nexus VxLAN-enabled solution. In a standard multi-site deployment, OpenStack works with a single instance of Horizon and Keystone. I will need to experiment with that to understand the ramifications for availability zones.
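As a small taste of the 'automated via Python' point, the openstacksdk library can drive most of this. A sketch under assumptions: a cloud entry named 'mycloud' exists in clouds.yaml, the network name and VNI are made up, and setting provider attributes normally requires admin credentials:

    import openstack

    # Connect using the "mycloud" entry from clouds.yaml (the name is an example).
    conn = openstack.connect(cloud="mycloud")

    # List compute availability zones - the construct I still need to experiment with.
    for az in conn.compute.availability_zones():
        print(az.name, az.state)

    # Create a VxLAN-backed network (provider attributes normally need admin rights).
    net = conn.network.create_network(
        name="demo-vxlan-net",
        provider_network_type="vxlan",
        provider_segmentation_id=5001,
    )
    print(net.id, net.provider_segmentation_id)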
ProxMox
Last on my list of candidates is ProxMox, a complete open-source platform for enterprise virtualization. It is designed mostly as a single-site solution. But with access to the underlying networking, and a simple-to-use API, engineering the appropriate integration with the network and with other sites is straightforward. There is easy access to Free Range Routing, Open vSwitch, LXC, KVM, iproute2, ifupdown2, and related toolsets.
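As an illustration of that simple API, here is a sketch using proxmoxer, a third-party Python client for the ProxMox REST API - the hostname, node name, and credentials are placeholders, and an API token can be used instead of a password:

    from proxmoxer import ProxmoxAPI  # third-party client for the ProxMox REST API

    # Hostname, user, and password are placeholders.
    pve = ProxmoxAPI("pve1.example.com", user="root@pam",
                     password="secret", verify_ssl=False)

    # Cluster-wide view: every node, guest, and storage pool in one call.
    for item in pve.cluster.resources.get():
        print(item["type"], item.get("name") or item.get("node"))

    # Per-node network config - the same bridges, bonds, and VLANs that ifupdown2 manages.
    for iface in pve.nodes("pve1").network.get():
        print(iface["iface"], iface.get("type"))

That same API surface is what makes wiring ProxMox into an external fabric, or into another site, look tractable.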