IBM Cloud Compute
Second Part of IBM Cloud Series
In this piece we take a closer look at compute resources in IBM Cloud, meaning the resources usually labelled as instances on the platform. The idea is to share our experience from working mainly with the classic infrastructure, as opposed to IBM Cloud's newer VPC (Virtual Private Cloud) Gen2 platform.
IBM Cloud offers a number of services, which can roughly be divided into two groups:
- Private cloud – including bare-metal servers, custom hypervisors such as VMware, etc.
- Public cloud – VPC Gen2 and classic virtual instances
In these articles we are not covering the private cloud solutions, where customers receive control over their environment and manage the hypervisor themselves; instead, we focus on the public cloud and the classic instances.
To build a more robust environment, especially from a security perspective, we will delve a bit deeper into the classic instances, as they differ from VPC in a number of ways:
- Offer dedicated firewall appliances with advanced features such as IPS/IDS, syslog shipping, multi-VLAN management, more advanced site-to-site (S2S) tunnelling, etc.
- Offer end-to-end backup solutions
The classic instances are divided into four main categories – public, dedicated, transient, reserved. Transient and reserved are more of a niche implementation: with transient instances you get resources that are never guaranteed, while with reserved instances you pay for capacity you may not be using at the moment.
Public and dedicated instances are the main topic here, as we are discussing mainly production implementations that need consistent uptime. Dedicated resources sound attractive; however, they may be overkill for a normal workload. Thus, to stay close to the public cloud idea, we would recommend public instances. Please be aware of some of their limitations, such as the maximum network throughput, which may at some point affect your block storage performance, since block storage in IBM Cloud is iSCSI only.
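To illustrate why the network throughput cap matters for iSCSI block storage, here is a minimal back-of-the-envelope sketch; the port speed, block size, and overhead figures are assumptions for illustration, not official IBM Cloud numbers:

```python
# Rough estimate of the IOPS ceiling an instance's network port places on
# iSCSI block storage. The figures below are illustrative assumptions,
# not official IBM Cloud specifications.

def iscsi_iops_ceiling(port_mbps: float, block_size_kb: float,
                       protocol_overhead: float = 0.10) -> float:
    """Theoretical max IOPS given a port speed (Mbit/s), an I/O block size
    (KiB) and a fractional overhead for TCP/iSCSI framing."""
    usable_bits_per_s = port_mbps * 1_000_000 * (1 - protocol_overhead)
    bits_per_io = block_size_kb * 1024 * 8
    return usable_bits_per_s / bits_per_io

# A 1 Gbps uplink doing 4 KiB I/O, minus ~10% protocol overhead:
print(round(iscsi_iops_ceiling(1000, 4)))  # about 27,000 IOPS at best
```

The point of the sketch is simply that storage traffic shares the same uplink as everything else on the instance, so a high-IOPS volume can saturate a small port long before the storage tier itself becomes the bottleneck.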
Data Centres and Pods
Before you even start deploying your instances, we advise taking a good look at the data centres you plan to use. Should you decide to use more than one, make sure to perform a thorough analysis. Similar to what we noted in the previous article, you should always test before starting, as in our experience IBM Cloud data centres can differ significantly from one another. For example, an identical block storage volume could perform noticeably worse in DataCentre01 than in DataCentre02. There are a number of such specifics, and the best way to handle them is to test, or to consult a more experienced implementor (or contact us separately), as online support is limited in this case.
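As a starting point for such testing, here is a minimal write-latency probe; the file path is yours to choose, and a dedicated benchmarking tool such as fio will give far more thorough numbers, so treat this as a quick sanity check only:

```python
import os
import statistics
import time

def probe_write_latency(path: str, block_size: int = 4096,
                        samples: int = 100):
    """Write `samples` blocks with fsync after each one and return
    (median, p95) latency in milliseconds. Point `path` at a file on the
    block-storage volume you want to evaluate."""
    latencies = []
    data = os.urandom(block_size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        for _ in range(samples):
            start = time.perf_counter()
            os.write(fd, data)
            os.fsync(fd)  # force the write through to the device
            latencies.append((time.perf_counter() - start) * 1000)
    finally:
        os.close(fd)
        os.unlink(path)
    latencies.sort()
    return statistics.median(latencies), latencies[int(samples * 0.95) - 1]

# Example: run this against a volume mounted from each data centre and
# compare the results, e.g. probe_write_latency("/mnt/blockvol/probe.bin")
```

Running the same probe against equivalent volumes in two data centres gives you a like-for-like comparison before you commit a workload to either.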
If you have been working with IBM Cloud for a while, you will have come across the concept of pods. Say you are starting to provision virtual instances and need to assess their availability. In no time you will have to start using the so-called placement groups. When you define a spread rule on a placement group, IBM makes sure to place your virtual instances on different physical nodes. That way, when a physical node goes down, you do not lose more than one instance at a time.
Placement groups work only within the same pod, so think of pods as physically separate infrastructures. Across different pods you can have:
- Virtual instances in different VLANs by default
- Virtual instances on different physical nodes
So, if you need to securely separate virtual instances from each other, using different pods is the first thing to do.
Please bear in mind that not all data centres support more than one pod, so this should also be taken into consideration. Placement groups, on the other hand, can hold up to 5 instances each. Experience is needed here to balance the environment, or you could feel like you are endlessly spinning plates.
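The spread behaviour can be sketched with a toy model; the host names, the pod representation, and the assignment logic below are illustrative assumptions, not IBM Cloud's actual scheduler:

```python
class SpreadPlacementGroup:
    """Toy model of a spread placement group: every instance must land on
    a different physical host within one pod, up to 5 instances per group."""
    MAX_INSTANCES = 5

    def __init__(self, pod_hosts):
        self.free_hosts = list(pod_hosts)  # hosts in this pod not yet used
        self.placements = {}               # instance name -> host

    def place(self, instance: str) -> str:
        if len(self.placements) >= self.MAX_INSTANCES:
            raise RuntimeError("spread groups hold at most 5 instances")
        if not self.free_hosts:
            raise RuntimeError("no free physical host left in this pod")
        host = self.free_hosts.pop(0)
        self.placements[instance] = host
        return host

group = SpreadPlacementGroup(["host-a", "host-b", "host-c"])
for name in ("web-01", "web-02", "web-03"):
    print(name, "->", group.place(name))
# Every instance lands on a distinct host, so losing one physical node
# takes down at most one instance from the group.
```

The model also shows why the limits bite in practice: a group can never outgrow the number of hosts in its pod, and never exceeds 5 instances regardless.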
We will focus more on storage and security in another article, but if we have to point out one of the obvious weaknesses of the classic virtual instances, it is their inability to provide standard encryption on their local drives. Every instance comes with a SAN-attached local drive in a selection of sizes; however, it is not encrypted, and full end-to-end encryption requires a custom implementation. Thus, custom techniques have to be used to encrypt template images on IBM Cloud and then deploy instances from them.
Our experience shows that this creates some issues later on, but if you have a requirement for end-to-end encryption, you will have to implement it yourself.
Hypervisor and Availability
By default, IBM Cloud uses Citrix XenServer to virtualize the instances it provides to customers. One of the things that is unusual about this is the lack of a high-availability concept. In many cases we had issues with our virtual instances, and they had to be manually relocated to different physical nodes, which slowed the process down.
Another thing to take into consideration is managing the Citrix drivers, especially when working with Windows instances. By default, the Citrix drivers are included in the Windows update itself; however, they can cause issues by resetting the network addresses of the instance. It is therefore very important to exclude them from the regular updates, while still making sure to update them manually from time to time, as outdated drivers lead to compatibility challenges.
In most cases, the best practice is to plan these updates for a longer maintenance window, because when you lose access to the virtual instance, you lose access to the console as well. Once again, support will have to be involved.
The End 🙂
In this part, we tried to focus on some of the main challenges you can face when working with compute resources on IBM Cloud. Even though there are platform specifics, IBM Cloud offers plenty of tools to work on them or around them, so you can build the best possible environment for your needs.
If you would like more assistance, take a look at our public cloud services section and contact us if we can help further.