Complete Cloud Solution
The latest version of CloudISP is now available on high-performance servers produced by DATACOM in Brazil. The solution offers cutting-edge technology in software-defined storage and high-performance cloud computing with up to 3 layers of virtualization, on hardware with up to 4 TB of RAM, 256 vCPUs, and any combination of NVMe/SSD/SAS/SATA disks. In conjunction with the supply and warranty provided by DATACOM, INT6 installs, trains, and offers 24x7 support for all Cloud functionalities, drawing on the experience of a team that already operates multiple public clouds for several ISPs and companies in Brazil.
CloudISP uses the same scale-out and resource-control technologies used by cloud computing giants, while the DMServer hardware offers high scalability and an architecture that favors low operating costs.
Total Redundancy
In the event of a hardware failure, virtual machines can be migrated from one Compute Node to another. Likewise, the Storage solution offers full redundancy against failures of Storage Nodes or of the disks that make up the Pools made available to the VMs. The replication factor is configurable, from a minimum of 2 copies to the typical 3 or 4; the higher the number of copies, the greater the level of redundancy.
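As a rule of thumb, keeping full copies trades raw disk space for redundancy. The sketch below (illustrative only; the function name and figures are our own, not part of CloudISP) shows the usable capacity left after replication:

```python
def usable_capacity_tb(raw_tb: float, replicas: int) -> float:
    """Usable space for a pool that keeps `replicas` full copies of each object.

    CloudISP pools replicate from a minimum of 2 up to the typical 3 or 4 copies.
    """
    if replicas < 2:
        raise ValueError("replication factor must be at least 2")
    return raw_tb / replicas

# Example: 300 TB of raw disk with the typical 3 copies leaves 100 TB usable.
print(usable_capacity_tb(300, 3))  # → 100.0
```

Doubling the replication factor halves usable capacity, which is why large SATA tiers are often used for the extra copies.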
Multiple Layers of Virtualization
Through Nested Virtualization technology, the solution can operate with up to 3 levels of virtual machines – L0, L1 and L2. The L0 layer is managed by the Cloud Service Provider, owner of the hardware infrastructure, with N3 support from INT6, allowing the controlled allocation of Storage space, vCPUs and Memory for the various Virtual Server customers – called Tenants. The L1 layer is directly controlled by the Tenants, either to allocate Windows or Linux virtual machines directly, or to deploy Virtualizing Hosts that provide the KVM or Hyper-V hypervisor service and, in turn, run the third level of virtualization, called L2.
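On a Linux L1 host using KVM, nested virtualization must be enabled in the kernel module for the L2 layer to work. A minimal sketch, assuming a Linux guest with the `kvm_intel` or `kvm_amd` module loaded (the helper function is ours, not part of CloudISP):

```python
def nested_virt_enabled(flag_text: str) -> bool:
    """Interpret the `nested` module parameter exposed in sysfs.

    kvm_intel reports "Y"/"N"; kvm_amd historically reports "1"/"0".
    """
    return flag_text.strip() in ("Y", "1")

# On a real L1 host the flag can typically be read like this:
# with open("/sys/module/kvm_intel/parameters/nested") as f:
#     print(nested_virt_enabled(f.read()))
```

If the flag is off, L2 guests will fall back to slow software emulation or fail to start.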
Software-Defined Storage with Virtually Unlimited Capacity
Software-Defined Storage (SDS) technology allows the use of hundreds of Nodes with thousands of disks, mixed in any way among NVMe, SSD, SAS and SATA, and even positioned in different geographic locations. However, following good redundancy and availability practices, from a certain size it is strongly recommended to split resources across different Clusters; in practice, the size limits of the solution are never reached before a new Cluster is deployed. The solution uses Disk Pools classified by technology, allowing, for example, NVMe areas to be made available to certain VMs while others use SSD and SAS, and even large amounts of SATA area to be consumed by instances that handle Backup. The SDS solution natively offers Block Storage, File Storage and Object Storage with a high level of redundancy and performance, through device drivers available directly in the Linux kernel or via iSCSI for Windows servers.
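The pool-per-technology idea can be pictured as a simple lookup from device class to pool. The mapping and pool names below are hypothetical, purely to illustrate how workloads are steered to the right tier:

```python
# Hypothetical mapping of device classes to Disk Pools.
POOLS = {
    "nvme": "pool-nvme",  # latency-sensitive VMs
    "ssd":  "pool-ssd",   # general-purpose VMs
    "sas":  "pool-sas",   # mixed workloads
    "sata": "pool-sata",  # bulk capacity, e.g. Backup instances
}

def pool_for(device_class: str) -> str:
    """Return the Disk Pool serving a given device class."""
    try:
        return POOLS[device_class.lower()]
    except KeyError:
        raise ValueError(f"unknown device class: {device_class}")

print(pool_for("NVMe"))  # → pool-nvme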
High Performance through Selective Hardware Passthrough
For instances that need high network packet-processing power, or high-speed access to NVMe disks, the platform allows selective access to specific parts of the hardware through PCI Passthrough. In this way, virtual instances see these hardware modules directly and can enjoy native processing speeds. All of this remains under full control of the Cloud Provider, which determines exactly which modules may be accessed by each Tenant’s virtual instance.
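The provider-side control can be thought of as an allowlist of PCI devices per tenant. The tenant names and PCI addresses below are made up for illustration; they are not part of the CloudISP product:

```python
# Hypothetical allowlist: PCI devices each Tenant's instances may claim.
PASSTHROUGH_ALLOWLIST = {
    "tenant-a": {"0000:3b:00.0"},                   # e.g. an NVMe controller
    "tenant-b": {"0000:5e:00.0", "0000:5e:00.1"},   # e.g. two NIC functions
}

def may_attach(tenant: str, pci_address: str) -> bool:
    """Check whether a Tenant is allowed to pass through a given PCI device."""
    return pci_address in PASSTHROUGH_ALLOWLIST.get(tenant, set())

print(may_attach("tenant-a", "0000:3b:00.0"))  # → True
```

A device attached this way is invisible to the host and to other tenants while the instance holds it, which is what makes native speeds possible.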
Hardware based on Open Compute Project Architecture
Created by a consortium of companies led by Facebook, the Open Compute Project advocates high processing power in an open, modular architecture, with lower operating costs than the architectures traditionally adopted in IT computing environments. A practical example is the operating temperature of the servers, which are designed to run continuously at 40 °C, saving significant energy – one of the main recurring cost items of a Datacenter.
Graphical Web Interface for Management
Through a responsive Web interface, it is possible to manage the solution both from an administrative point of view and from the point of view of the Cloud customer, who sees only their own resources. The administrator defines and monitors the Computing, Network and Storage resources, as well as the Quotas defined for each client – Tenant. Tenants, in turn, access the interface to start, stop and restart VMs, define firewall rules, define and attach Volumes to VMs, and take Snapshots of Volumes and Instances.
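A per-tenant Quota boils down to an admission check before each request: does current usage plus the new request still fit? A minimal sketch (the `Quota` type and limits are our own illustration, not CloudISP's data model):

```python
from dataclasses import dataclass

@dataclass
class Quota:
    """Resource bundle: also used to express usage and requests."""
    vcpus: int
    ram_gb: int
    storage_gb: int

def within_quota(used: Quota, limit: Quota, request: Quota) -> bool:
    """True if the Tenant's request fits under its administrator-defined quota."""
    return (used.vcpus + request.vcpus <= limit.vcpus
            and used.ram_gb + request.ram_gb <= limit.ram_gb
            and used.storage_gb + request.storage_gb <= limit.storage_gb)

limit = Quota(vcpus=64, ram_gb=256, storage_gb=2000)
used = Quota(vcpus=32, ram_gb=128, storage_gb=500)
print(within_quota(used, limit, Quota(16, 64, 500)))  # → True
```

Requests that exceed any single dimension are rejected, which is how the administrator's per-tenant limits stay enforceable regardless of what the Tenant does in its own panel.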
Reference Architecture
- Redundant Nodes: the reference architecture dedicates part of the nodes as Compute Nodes, with 2 as Storage Nodes. In this way, we have redundancy in case of failure of any node in the solution.
- 10G/40G/100G Ethernet/IP network: It is essential that internal networks use speeds above 10G – and in some cases 100G may be necessary – in the interconnection between Compute Nodes and Storage Nodes, especially when there are several NVMe disks in the Storage Nodes.
- High density and speed of vCPU and RAM: The computational load evolves very quickly, and it is not possible to waste physical space and energy with servers that do not use the latest technology. In this reference architecture, we use servers with up to 256 vCPUs and 4TB of RAM at 3200 MT/s.
- Horizontal Growth: By adding more Nodes it is possible to increase computational or storage capacity, reaching hundreds of nodes in real scenarios.
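Because growth is horizontal, aggregate capacity scales linearly with the node count. A back-of-the-envelope sketch using the per-node figures from this reference architecture (256 vCPUs and 4 TB of RAM):

```python
def cluster_capacity(nodes: int, vcpus_per_node: int = 256, ram_tb_per_node: int = 4):
    """Aggregate compute capacity grows linearly with the number of nodes."""
    return nodes * vcpus_per_node, nodes * ram_tb_per_node

# A 10-node cluster built from the reference servers:
print(cluster_capacity(10))  # → (2560, 40): 2560 vCPUs, 40 TB of RAM
```

This linearity is the practical meaning of "scale-out": capacity is added in node-sized increments rather than by replacing servers with larger ones.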
Use Cases – Bitcom Service Provider
Bitcom is a service provider serving both the residential and business markets. For the implementation of its latest Computing Cloud cluster, it chose to acquire DATACOM DM-SV01 servers, divided between 2 distinct Datacenters redundantly interconnected with high-speed fiber. The following figure illustrates the possibilities of using this new CloudISP Cloud.
At the L0 virtualization level, only Bitcom and INT6 have administrative access and can allocate resources to the different Tenants, with full control of the environment.
At the L1 level, some Tenants have more privileged access to the DATACOM servers’ hardware, enabling the use of instances that can run Virtualization services. This is the case for Tenants A and B.
In the case of Tenant A, a Microsoft Windows Server 2019 Datacenter Edition cluster was implemented, including Shared Storage and high redundancy. Tenant A has its own administrative interface on the Microsoft cluster and controls the L2-level VMs. All of this management by Tenant A is independent of Bitcom's actions, since the hardware resources are already correctly reserved for Tenant A. The case of Tenant B is similar, with the difference that the virtualization is Linux KVM instead of Windows Hyper-V.
For the remaining Tenants – C, D and others – we have a typical Computing Cloud scenario: starting from images standardized by Bitcom for the main operating systems on the market, each client manages its own instances within the agreed limits of computational resources available for use.