Version: DPX vPlus 7.3

Supported platforms requirements

Nutanix AHV

Disk attachment

Connection URL: https://PRISM_HOST:9440/api/nutanix/v3 (Prism Central or Prism Elements)

Note: when connecting via Prism Central, the same credentials will be used to access all Prism Elements.

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Prism Elements (and optionally Prism Central, if used) | 9440/tcp | API access to the Nutanix manager |
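
If you want to confirm the Node can reach the Prism API before registering the hypervisor manager, a quick probe of the connection URL is usually enough. A minimal sketch, assuming the default port 9440; PRISM_HOST is a placeholder, and -k skips certificate verification, so drop it once the certificate chain is trusted:

```bash
# Probe the Prism v3 API from the Node; an HTTP status code (401 without
# credentials) still confirms that the port and the API are reachable.
curl -k -s -o /dev/null -w '%{http_code}\n' \
  -X POST https://PRISM_HOST:9440/api/nutanix/v3/clusters/list \
  -H 'Content-Type: application/json' -d '{"kind":"cluster"}'
```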

OpenStack

Disk attachment

Connection URL: https://KEYSTONE_HOST:5000/v3

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Keystone, Nova, Glance, Cinder | ports defined in the endpoints for the OpenStack services | API access to the OpenStack management services, using the endpoint type specified in the hypervisor manager details |
| Node | Ceph monitors | 3300/tcp, 6789/tcp | if Ceph RBD is used as the backend storage; used to collect changed-block lists from Ceph |
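
To verify these paths from the Node before an inventory sync, simple probes are usually sufficient. A minimal sketch, assuming the default Keystone port from the connection URL and the standard Ceph monitor ports; KEYSTONE_HOST and CEPH_MON_HOST are placeholders:

```bash
# Keystone identity endpoint; should return a JSON version document:
curl -sk https://KEYSTONE_HOST:5000/v3
# Ceph monitor ports (msgr2 and legacy msgr1):
nc -zv CEPH_MON_HOST 3300
nc -zv CEPH_MON_HOST 6789
```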

SSH transfer

Connection URL: https://KEYSTONE_HOST:5000/v3

Note: you must also provide SSH credentials for all hypervisors detected during inventory sync.

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | netcat port range defined in the node configuration, by default 16000-16999/tcp | optional netcat access for data transfer |
| Node | Ceph monitors | 3300/tcp, 6789/tcp, 10809/tcp | if Ceph RBD is used as the backend storage; used for data transfer over NBD |
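
Because the netcat connection is opened from the hypervisor back to the Node, it is worth testing from that side as well. A minimal sketch, assuming the default port range; NODE_HOST is a placeholder and 16000 is just one sample port from the range:

```bash
# Run on the hypervisor: check that the Node accepts connections in the
# configured netcat range.
nc -zv NODE_HOST 16000
```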

Virtuozzo

SSH transfer

Connection URL: https://KEYSTONE_HOST:5000/v3

Note: you must also provide SSH credentials for all hypervisors detected during inventory sync.

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | netcat port range defined in the node configuration, by default 16000-16999/tcp | optional netcat access for data transfer |
| Node | Ceph monitors | 3300/tcp, 6789/tcp, 10809/tcp | if Ceph RBD is used as the backend storage; used for data transfer over NBD |
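
Since every hypervisor detected during inventory sync needs working SSH credentials, verifying them up front avoids failed transfers later. A minimal sketch; HYPERVISOR_HOST and the root account are placeholders for whichever credentials you register:

```bash
# Confirm SSH reachability and login on the default port:
ssh -p 22 root@HYPERVISOR_HOST hostname
```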

OpenNebula

Disk attachment

Connection URL: https://MANAGER_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Manager host | XML-RPC API port, 2633/tcp by default | API access to the OpenNebula management services |
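
A quick way to confirm the XML-RPC endpoint is reachable from the Node, assuming the default port; MANAGER_HOST is a placeholder:

```bash
# Check the OpenNebula XML-RPC API port:
nc -zv MANAGER_HOST 2633
```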

oVirt/RHV/OLVM

Disk attachment

Connection URL: https://MANAGER_HOST/ovirt-engine/api

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |
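
API access can be verified from the Node with a single authenticated request. A minimal sketch, assuming an admin@internal account; -k skips certificate verification and should be dropped once the engine CA is trusted:

```bash
# Expects an XML <api> document describing the engine:
curl -k -u 'admin@internal' https://MANAGER_HOST/ovirt-engine/api
```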

Disk image transfer

Connection URL: https://MANAGER_HOST/ovirt-engine/api

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |
| Node | oVirt/RHV/OLVM hypervisor | 54322/tcp | oVirt/RHV/OLVM ImageIO service, for data transfer (primary source) |
| Node | oVirt/RHV/OLVM manager | 54323/tcp | oVirt/RHV/OLVM ImageIO service, for data transfer (fallback to the ImageIO Proxy) |
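
Both ImageIO paths can be spot-checked from the Node before the first backup; HYPERVISOR_HOST and MANAGER_HOST are placeholders:

```bash
nc -zv HYPERVISOR_HOST 54322   # ImageIO service on the hypervisor (primary)
nc -zv MANAGER_HOST 54323      # ImageIO proxy on the manager (fallback)
```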

SSH transfer

Connection URL: https://MANAGER_HOST/ovirt-engine/api

Note: you must also provide SSH credentials for all hypervisors detected during inventory sync.

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |
| Node | oVirt/RHV/OLVM hypervisor | 22/tcp | SSH access for data transfer |
| oVirt/RHV/OLVM hypervisor | Node | netcat port range defined in the node configuration, by default 16000-16999/tcp | optional netcat access for data transfer |

Changed-block tracking

Connection URL: https://MANAGER_HOST/ovirt-engine/api

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |
| Node | oVirt/RHV/OLVM hypervisor | 54322/tcp | oVirt/RHV/OLVM ImageIO service, for data transfer |
| Node | oVirt/RHV/OLVM manager | 54323/tcp | oVirt/RHV/OLVM ImageIO service, for data transfer |

Citrix XenServer/XCP-ng

Note: all hosts in the pool must be defined.

Single image (XVA-based)

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 443/tcp | API access (the management IP is used for data transfer unless the transfer NIC parameter is configured in the hypervisor details) |

Changed-block tracking

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 443/tcp | API access (the management IP is used for data transfer unless the transfer NIC parameter is configured in the hypervisor details) |
| Node | Hypervisor | 10809/tcp | NBD access (the data-transfer IP is returned by the hypervisor) |
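
Once changed-block tracking is in use, the NBD endpoint can be spot-checked from the Node; HYPERVISOR_HOST is a placeholder, since the actual data-transfer IP is returned by the hypervisor at backup time:

```bash
# Check the standard NBD port on the hypervisor:
nc -zv HYPERVISOR_HOST 10809
```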

Proxmox VE

Export storage repository

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | if the Node is hosting the staging space: 111/tcp, 111/udp, 2049/tcp, 2049/udp, plus the ports specified in /etc/sysconfig/nfs (variables MOUNTD_PORT (TCP and UDP), STATD_PORT (TCP and UDP), LOCKD_TCPPORT (TCP), LOCKD_UDPPORT (UDP)); otherwise check the documentation of your NFS storage provider | NFS access, if the staging space (export storage domain) is hosted on the Node |
| Node and hypervisor | shared NFS storage | check the documentation of your NFS storage provider | NFS access, if the staging space (export storage domain) is hosted on shared storage |
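
Because the NFS helper daemons pick random ports by default, pinning them in /etc/sysconfig/nfs makes the firewall rules above predictable. A minimal sketch; the port numbers are illustrative (any free ports work), and the NFS services must be restarted afterwards:

```bash
# Pin the NFS helper daemons to fixed ports on the Node hosting the staging space:
cat >> /etc/sysconfig/nfs <<'EOF'
MOUNTD_PORT=20048
STATD_PORT=32765
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
EOF
```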

OpenShift

Connection URL: https://API_HOST:6443

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Kubernetes API host | 6443/tcp | API access |
| Node | OpenShift workers | 2049/tcp, 2049/udp | NFS connection |
| OpenShift workers | Node | 2049/tcp, 2049/udp | NFS connection |
| Node | OpenShift workers | 30000-32767/tcp | access to the service exposed by the DPX vPlus plugin |
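
From the Node, the API endpoint and the NodePort range can be spot-checked in the usual way; API_HOST and WORKER_HOST are placeholders, and 30000 is just one sample port from the range:

```bash
nc -zv API_HOST 6443      # Kubernetes API
nc -zv WORKER_HOST 30000  # one port from the DPX vPlus plugin NodePort range
```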

Azure Stack HCI

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | DPX vPlus Agent | 50881/tcp for HTTP connections, 50882/tcp for HTTPS connections | DPX vPlus Agent access and data transfer; firewall rules are added automatically during agent installation |
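
Although the installer opens the firewall automatically, both agent endpoints can still be confirmed from the Node. A minimal sketch; AGENT_HOST is a placeholder, and any HTTP status code in the output confirms reachability:

```bash
curl -s -o /dev/null -w '%{http_code}\n' http://AGENT_HOST:50881/
curl -ks -o /dev/null -w '%{http_code}\n' https://AGENT_HOST:50882/
```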

SC//Platform

Export storage domain

Connection URL: https://MANAGER_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | SC//Platform manager | 443/tcp | API access |
| Node | SC//Platform hosts | 445/tcp | SMB transfer |
| SC//Platform hosts | Node | 445/tcp | SMB transfer |

Disk attachment

Connection URL: https://MANAGER_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | SC//Platform manager | 443/tcp | API access |

Microsoft 365

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Microsoft 365 | 443/tcp | Microsoft 365 API access |

You can find more detailed descriptions of Office 365 URLs and IP address ranges in Microsoft's documentation.

To successfully synchronize an M365 user account, it must meet all of the following requirements:

  • it has an email address,
  • it is not filtered out by location, country, or office location (the user filter in the UI),
  • its user type field is set to Member,
  • it has a license or is a shared mailbox.

Security Requirements

User Permissions

The vprotect user must be a member of the disk group.
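
A minimal sketch of that group assignment, using the standard usermod tool; the change takes effect at the vprotect user's next login:

```bash
# Append the disk group to the vprotect user's supplementary groups:
usermod -aG disk vprotect
```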

Sudo privileges are required for the following commands (a sample sudoers entry follows these lists):

DPX vPlus Node:

  • /usr/bin/targetcli
  • /usr/sbin/exportfs
  • /usr/sbin/kpartx
  • /usr/sbin/dmsetup
  • /usr/bin/qemu-nbd
  • /usr/bin/guestmount
  • /usr/bin/fusermount
  • /bin/mount
  • /bin/umount
  • /usr/sbin/parted
  • /usr/sbin/nbd-client
  • /usr/bin/tee
  • /opt/vprotect/scripts/vs/privileged.sh
  • /usr/bin/yum
  • /usr/sbin/mkfs.xfs
  • /usr/sbin/fstrim
  • /usr/sbin/xfs_growfs
  • /usr/bin/docker
  • /usr/bin/rbd
  • /usr/bin/chown
  • /usr/sbin/nvme
  • /bin/cp
  • /sbin/depmod
  • /usr/sbin/modprobe
  • /bin/bash
  • /usr/local/sbin/nbd-client
  • /bin/make

DPX vPlus server:

  • /opt/vprotect/scripts/application/vp_license.sh
  • /bin/umount
  • /bin/mount
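
A minimal sudoers sketch covering a few of the Node commands above, assuming passwordless sudo for the vprotect user; the drop-in file name is an assumption, and the remaining commands from both lists should be granted the same way:

```bash
# Install with: visudo -f /etc/sudoers.d/vprotect   (file name is an assumption)
vprotect ALL=(ALL) NOPASSWD: /usr/bin/targetcli, /usr/sbin/exportfs, \
    /bin/mount, /bin/umount, /usr/bin/qemu-nbd, /usr/bin/guestmount
```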

SELinux

Set SELinux to PERMISSIVE: in enforcing mode it currently interferes with the mountable backups (file-level restore) mechanism. It can optionally be changed to ENFORCING if file-level restore is not required.
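
A minimal sketch of switching SELinux to permissive both immediately and across reboots; run as root:

```bash
setenforce 0                                                    # immediate, until reboot
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config  # persistent setting
```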