How to actually migrate your AIX data into the Cloud
So you have decided to migrate your AIX and Unix workloads to the cloud, whether to one of the major public clouds such as Skytap on Azure or IBM PowerVS, or to a dedicated private cloud environment such as L3C Cloud. Here we share our experience of migrating AIX data into the major public cloud providers. Migrating AIX and legacy Unix to the cloud makes a lot of sense, but how do you actually get everything there while still running critical applications that usually cannot afford much downtime?
As our main specialisation is moving Unix-based environments, we often deal with critical workloads including databases, clusterware and legacy applications. Here are the main considerations.
Connectivity is key. Before planning data migration, one should have a clear understanding of how to set up connectivity and what capabilities it offers in the particular provider.
For smaller environments, our clients often utilise one of the site-to-site (S2S) VPN implementations. This helps avoid the additional costs associated with dedicated physical connectivity.
The main advantage here is that most of the requirements are usually already in place: an on-site VPN concentrator (very often a firewall appliance), the in-house skills to configure it, high-speed Internet connectivity and an S2S service within the public cloud provider. However, when using S2S VPN connectivity to move more significant amounts of data, one should always consider the capabilities of the local VPN concentrator or firewall appliance. Even branded, reliable devices have throughput limits when encrypting traffic across a VPN tunnel. That said, S2S VPN is often the starting point for a migration.
Beyond this, most public cloud providers offer dedicated physical connectivity to their services, for example:
- Azure ExpressRoute
- AWS Direct Connect
- IBM Cloud Direct Link
They vary in their characteristics; however, the common point is that for a given price one gets direct access to the network one will use at the public cloud provider, thus avoiding having to deal with traffic encryption, tunnel maintenance and so on.
Most suppliers offer 1 Gbps or 10 Gbps connections (so one can do the maths on data transfer times). Bear in mind that a 10 Gbps port requires on-premises switching equipment able to support it.
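To illustrate the maths, here is a small sketch that estimates transfer time from data volume and link speed. The data volume and the 70% line-rate efficiency figure are assumptions for illustration only; substitute your own numbers.

```shell
# Rough transfer-time estimate. DATA_TB and EFFICIENCY are assumptions:
# real-world throughput over a WAN link is rarely the full nominal rate.
DATA_TB=10            # data to move, in terabytes (example value)
LINK_GBPS=1           # nominal link speed in Gbit/s
EFFICIENCY=0.7        # achievable fraction of line rate (assumption)

awk -v tb="$DATA_TB" -v gbps="$LINK_GBPS" -v eff="$EFFICIENCY" 'BEGIN {
    bits = tb * 8 * 1000^4              # terabytes -> bits (decimal units)
    secs = bits / (gbps * 1e9 * eff)    # bits / effective bits-per-second
    printf "~%.1f hours to move %s TB over a %s Gbps link\n", secs/3600, tb, gbps
}'
```

With these example figures, 10 TB over a 1 Gbps link works out at a little over 30 hours, which is why link sizing deserves attention before the migration plan is fixed.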
A final and often overlooked consideration is measuring the actual latency to the target region at the cloud provider. Not all public cloud providers offer AIX in every region, so it is important to check. The provider should be able to set expectations on point-to-point latency for its various services; if not, a simple ping to the cloud region or even a short PoC will do.
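A minimal latency check can look like the following. The endpoint hostname is a placeholder: substitute a host the provider confirms is in the target region. The `awk` expression simply extracts the average round-trip time from the ping summary line (the summary format differs slightly between AIX and Linux, but both contain a `min/avg/max` section after an `=`).

```shell
# Placeholder endpoint - replace with a real host in the target region.
REGION_ENDPOINT="example-region.cloudprovider.com"

# 10 ICMP echoes, then pull the average round-trip time out of the
# summary line, e.g. "round-trip min/avg/max = 4/5/7 ms" on AIX or
# "rtt min/avg/max/mdev = 10.1/12.3/15.2/1.1 ms" on Linux:
ping -c 10 "$REGION_ENDPOINT" | awk '/avg/ {
    split($0, a, "= ")      # a[2] holds the numeric section
    split(a[2], b, "/")     # b[2] is the average RTT
    printf "average RTT: %s ms\n", b[2]
}'
```

If ICMP is blocked along the path, a short PoC instance in the region gives the same answer more reliably.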
Two Streams of AIX data into Cloud
When evaluating approaches to data migration, one should always keep the two streams in mind: we divide data into OS-related and application-specific. We find this really important, as it determines the migration approach for each.
Generally, the approach used to migrate OS data, configurations, registries and so on is not the same as the approach used to migrate database-specific data.
For instance, if you use mksysb to back up and restore the AIX operating system, you should avoid including the Oracle DB files, as there is no guarantee of consistency for database data captured this way. Applications and databases have their own specific requirements for how data is stored, migrated, backed up and restored. Usually one approach is chosen for the application/DB and another for the underlying operating system.
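As a sketch of keeping the database out of the OS image: mksysb only captures rootvg, so Oracle data in its own volume group is already excluded, but anything database-related that does live in rootvg can be skipped via `/etc/exclude.rootvg`. The `/u01` path below is an assumption; adjust it to your actual Oracle layout.

```shell
# Sketch: exclude Oracle files from the mksysb image, assuming the
# database software/data under rootvg lives in /u01 (hypothetical path).
# Patterns in /etc/exclude.rootvg are matched against paths that start
# with "./", so anchor accordingly:
cat >> /etc/exclude.rootvg <<'EOF'
^./u01/
EOF

# -e honours /etc/exclude.rootvg; -i regenerates image.data first.
mksysb -e -i /backup/host_rootvg.mksysb
```

The database itself then travels by its own mechanism (backup/restore, Data Guard, etc.) rather than inside the OS image.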
Live / Non-Live
Of course, neither term is exactly correct when it comes to migration; however, the idea is to divide the main approaches to data migration based upon:
- Downtime window
- Required/available tools and software licensing
- Data transfer period
- Cutover period
- Change freezes
- Migration costs
Non-live migrations aim to optimise migration costs and make the most of existing tools and licences. This approach is mainly used for non-critical applications and services, where one can endure reasonable downtime and a long change freeze without affecting business needs. From an OS perspective it may employ simple copying tools like rsync, archiving tools like tar, and so on. On the DB side, it usually requires stopping the database, dumping (or backing up in some specific way) its data, copying it over and restoring manually. Non-live migrations aim to be cost effective but can add to the length of the project.
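A minimal non-live copy might look like this. The `/app` filesystem and the `clouduser@cloud-host` target are placeholders, assuming the cloud LPAR is reachable over the S2S VPN and the application has been stopped first so the on-disk data is consistent.

```shell
# 1. Stop the application/DB before copying, so the data is consistent.

# 2. Archive, compress and stream straight over ssh in one pass, so no
#    local staging space is needed:
tar -cf - /app | gzip | ssh clouduser@cloud-host 'gzip -d | tar -xf - -C /'

# Alternatively, rsync preserves permissions and ownership and can be
# re-run to pick up anything missed on the first pass:
rsync -aH --numeric-ids /app/ clouduser@cloud-host:/app/
```

The tar pipeline is a one-shot copy; rsync is slightly slower on the first pass but much more forgiving if the transfer is interrupted.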
On the other hand, live migrations are what usually applies to the majority of workloads. They aim to reduce the necessary downtime and change freezes and to optimise the cutover period, but usually come with higher migration costs (services, licences, connectivity). Live migrations usually employ more sophisticated tools to replicate OS data incrementally (although rsync does a pretty good job here as well). On the DB side, they usually employ more complex (and expensive) instruments like Oracle DataGuard or DB2 HADR. Moreover, the skill requirement to plan and implement this is very high, especially without previous experience.
A high level representation of a live migration on AIX with Oracle DB would be:
- Mksysb of local AIX and import into cloud
- Rsync for main copy and incremental afterwards
- Installation of Oracle DB
- Implementation of Oracle DataGuard
- Oracle DB data sync
Of course, a number of additional activities are required to complete this successfully, such as network changes, backup and monitoring. However, the main advantage of live migrations is that business users keep working on the source environment while data is being copied to the cloud. The downtime is usually reduced to the cutover itself, at a mutually agreed time.
What if I have an enormous amount of data but don't have existing licences for DataGuard or HADR?
Live migrations are especially effective for considerable data sizes. However, you may not have existing licences for Oracle DataGuard or DB2 HADR, and the cost for a one-time project may be prohibitive.
If you have a significant amount of data, including backup and archive data, most of the major cloud providers offer a physical migration appliance service. These vary by name, such as Skytap Advanced Import Appliance, AWS Snowball and IBM Cloud Mass Data Migration. Essentially they all perform the same function: providing the tooling and physical infrastructure (i.e. physical appliances with storage, on-site) to move your data to the cloud physically, without the need for expensive connectivity. You request the appliance, copy data onto it and ship it back to the provider's data centres. Of course we have simplified the overall approach, but the concept of the appliance should be clear.
Hopefully we have cleared up some of the mysticism around getting your actual data into the cloud, whichever cloud provider you choose. Although we have focused on AIX/Unix, the overall concepts apply to any environment, and there are certainly many providers that can assist with x86-based migrations. However, if you are looking to migrate AIX or other Unix workloads to a cloud platform, we would be delighted to share our expertise in more detail with you.