Building an open source plugin for Cisco APIC will enable the joint management of compute and networking through standards-based Redfish APIs. No such capability exists in the industry today, but it will be critical to supporting lights-out operations of data centers built on Cisco ACI infrastructure.

There is a synergistic mapping between the DMTF Redfish Fabric schemas and the Cisco ACI APIs: both schema sets support multi-tenant communications across a data center fabric. The following UML-style diagram illustrates a suggested mapping between Redfish schemas and Cisco ACI APIs. Because of the versatility of the ACI API set, there are several ways the ACI APIs could be mapped to the Redfish Fabric schema; the suggested mapping below is just one of them and will of course require verification by the design team that implements the plugin.



Starting from the top, Cisco's 'Policy Universe' can be mapped to a Redfish 'Fabric'. The Redfish Fabric describes a per-vendor Ethernet fabric and its physical and logical makeup; in this instance, the Redfish Fabric's unique UUID can be applied on a per-ACI-instance basis. There is no Redfish schema equivalent to the ACI 'Tenant'; instead, a single ACI Tenant can be defined that describes an 'APIC Infrastructure Tenant', and all ACI Application Profiles, Bridge Domains, and VRFs then belong to that Tenant. Redfish-based address pools that apply to an entire Fabric, used to set up each fabric's control plane overlay and underlay, can be applied at this level.

ACI 'Application Profiles' can be mapped to Redfish Zones with ZoneType='ZoneOfZones'; each 'Application Profile' then consists of one or more 'Endpoint Groups', which maps exactly to the relationship between Redfish Zones with ZoneType='ZoneOfZones' and the one or more Zones with ZoneType='ZoneOfEndpoints' they contain. Cisco defines Endpoint Groups in a number of ways, but for the purposes of this mapping exercise an ACI Endpoint Group can be thought of as a traditional VLAN and subnet, with each Endpoint being a host address on that subnet.
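
To make the suggested mapping concrete, the sketch below expresses it as a simple lookup from Redfish resource types to the ACI managed-object classes they would drive. The class names (polUni, fvAp, fvAEPg, fvBD, fvnsVlanInstP, fvSubnet, fvCEp) are standard ACI object-model classes, but the pairing shown is only illustrative and would need to be confirmed by the plugin's design team.

```python
# Illustrative sketch only: one possible Redfish -> ACI object-model mapping.
# ACI class names come from the APIC management information model; the
# pairing with Redfish resources follows the suggested mapping above.
REDFISH_TO_ACI = {
    "Fabric":                          "polUni",   # Policy Universe, one per ACI instance
    "Zone (ZoneType=ZoneOfZones)":     "fvAp",     # Application Profile (plus an fvCtx VRF)
    "Zone (ZoneType=ZoneOfEndpoints)": "fvAEPg",   # Endpoint Group, backed by an fvBD Bridge Domain
    "AddressPool":                     "fvnsVlanInstP / fvSubnet",  # VLAN ranges and host subnets
    "Endpoint":                        "fvCEp",    # host endpoint on the EPG's subnet
}
```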

The following steps can be used to set up host-to-host communications across a Fabric:

  1. Set up a VLAN domain and assign a range of VLANs to it. The VLAN range will come from the AddressPool linked to the Zone with ZoneType='Default' and applies to all switch ports facing end hosts across the Fabric.
  2. Specify the VLAN domain port members, i.e. the leaf switches and interfaces that the VLAN domain applies to. The switch and port list will come from the Port 'ConnectedPorts' link(s) assigned by a northbound client as part of setting up the Fabric (steps 1 and 2 are sketched in the first example after this list).
  3. Create a Tenant.
  4. Create a Bridge Domain per Zone with ZoneType='ZoneOfEndpoints'.
  5. Allocate an IP subnet to the Bridge Domain. The host subnet can be obtained from the AddressPool linked to the Zone with ZoneType='ZoneOfEndpoints'.
  6. Create a VRF and an Application Profile per Zone with ZoneType='ZoneOfZones'.
  7. Create an Endpoint Group for every Zone with ZoneType='ZoneOfEndpoints', with links to the ContainedByZone of the ZoneOfZones above. The EPG VLAN tag will come from the AddressPool linked to the Zone with ZoneType='ZoneOfEndpoints' and must be within the VLAN domain the EPG is linked to.
  8. Associate the EPG with the set of ports listed in the Zone with ZoneType='ZoneOfEndpoints': for each linked Endpoint, look at the 'ConnectedPorts' property for the switch port to associate with the EPG (steps 3 through 8 are sketched in the second example).
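
As a minimal sketch of steps 1 and 2, the snippet below drives the APIC REST API with the Python requests library to create a VLAN pool and a physical domain that consumes it. The APIC address, credentials, pool and domain names, and VLAN range are all placeholder assumptions, and the interface-profile wiring for step 2 is only outlined, not implemented.

```python
# Minimal sketch of steps 1-2 against the APIC REST API; illustrative only.
# The APIC address, credentials, pool name, domain name and VLAN range are
# placeholder assumptions, not values defined by the mapping above.
import requests

APIC = "https://apic.example.com"   # assumed APIC address
session = requests.Session()
session.verify = False              # lab-only assumption; verify certificates in production

# Authenticate; the session keeps the returned APIC token as a cookie.
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}
})

# Step 1: create a VLAN pool under uni/infra. The range would be taken from
# the AddressPool linked to the Zone with ZoneType='Default'.
session.post(f"{APIC}/api/mo/uni/infra.json", json={
    "fvnsVlanInstP": {
        "attributes": {"name": "redfish-pool", "allocMode": "static"},
        "children": [{"fvnsEncapBlk": {
            "attributes": {"from": "vlan-100", "to": "vlan-199"}}}],
    }
})

# Create a physical domain that consumes the pool; EPGs attach to it later.
session.post(f"{APIC}/api/mo/uni.json", json={
    "physDomP": {
        "attributes": {"name": "redfish-dom"},
        "children": [{"infraRsVlanNs": {
            "attributes": {"tDn": "uni/infra/vlanns-[redfish-pool]-static"}}}],
    }
})

# Step 2: port membership. A full implementation would derive the leaf and
# interface profiles (AAEP, interface/switch selectors) from the Port
# 'ConnectedPorts' links; that infra wiring is omitted here for brevity.
```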
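
A matching sketch for steps 3 through 8 follows, creating the Tenant, VRF, Bridge Domain with subnet, Application Profile, and EPG, then statically binding the EPG to a leaf port. Again, every name, subnet, VLAN, and switch path is an assumed placeholder, and `session` is the authenticated session from the previous sketch.

```python
# Minimal sketch of steps 3-8, reusing `session` and APIC from the previous
# example. All names, subnets, VLANs and paths below are placeholders.
TENANT = "apic-infra-tenant"   # the single 'APIC Infrastructure Tenant' assumed above

# Step 3 (Tenant) plus step 6's VRF, one per Zone with ZoneType='ZoneOfZones'.
session.post(f"{APIC}/api/mo/uni.json", json={
    "fvTenant": {
        "attributes": {"name": TENANT},
        "children": [{"fvCtx": {"attributes": {"name": "zone-of-zones-vrf"}}}],
    }
})

# Steps 4-5: Bridge Domain per Zone with ZoneType='ZoneOfEndpoints', with its
# host subnet taken from that Zone's AddressPool (placeholder gateway here).
session.post(f"{APIC}/api/mo/uni/tn-{TENANT}.json", json={
    "fvBD": {
        "attributes": {"name": "zone-of-endpoints-bd"},
        "children": [
            {"fvRsCtx": {"attributes": {"tnFvCtxName": "zone-of-zones-vrf"}}},
            {"fvSubnet": {"attributes": {"ip": "10.0.1.1/24"}}},
        ],
    }
})

# Step 6 (Application Profile), step 7 (EPG linked to BD and VLAN domain) and
# step 8 (a static path binding per Endpoint 'ConnectedPorts' entry).
session.post(f"{APIC}/api/mo/uni/tn-{TENANT}.json", json={
    "fvAp": {
        "attributes": {"name": "zone-of-zones-ap"},
        "children": [{"fvAEPg": {
            "attributes": {"name": "zone-of-endpoints-epg"},
            "children": [
                {"fvRsBd": {"attributes": {"tnFvBDName": "zone-of-endpoints-bd"}}},
                {"fvRsDomAtt": {"attributes": {"tDn": "uni/phys-redfish-dom"}}},
                {"fvRsPathAtt": {"attributes": {
                    # Placeholder leaf/port, derived from 'ConnectedPorts' in practice.
                    "tDn": "topology/pod-1/paths-101/pathep-[eth1/10]",
                    "encap": "vlan-110"}}},
            ],
        }}],
    }
})
```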

Note that the inter-EPG communication policy dictating QoS and access control will be any-to-any in the first instance. This is due to a lack of support in Redfish for ACI 'Contracts'. Redfish does have a 'Connections' mechanism that could be used for this purpose, but it will require a new 'ConnectionsPolicy' schema to operate effectively; this will be worked on as part of an upcoming DMTF release.
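
One way the plugin could realise this initial any-to-any behaviour (an assumed implementation choice, not something mandated by the mapping) is to mark the VRF as policy-unenforced, which tells ACI to permit traffic between EPGs in that VRF without Contracts:

```python
# Sketch: achieve any-to-any inter-EPG traffic by disabling policy
# enforcement on the VRF, so no ACI Contracts are needed. Names reuse
# the placeholders from the earlier examples.
session.post(f"{APIC}/api/mo/uni/tn-{TENANT}.json", json={
    "fvCtx": {"attributes": {"name": "zone-of-zones-vrf",
                             "pcEnfPref": "unenforced"}}
})
```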