This blog walks through the technical details and the sequence of steps needed to automate Google Cloud virtual machine deployment together with SAP HANA DB instance installation using Terraform and Ansible scripts. The intention is not to teach Terraform/Ansible scripting or SAP HANA, but to share my hands-on experience, assuming the reader has basic knowledge of Terraform/Ansible and SAP Basis. You can also read it as an extension of my earlier blog Automation of SAP Applications Deployment in Google Cloud https://blogs.sap.com/2021/09/01/automation-of-sap-applications-deployment-in-google-cloud/
The conventional way of installing an SAP HANA database instance is to deploy the virtual machine based on the worked-out sizing using the Google Console or Terraform/Google Deployment Manager as the IaC tool, with the identified underlying operating system (SLES/RHEL); create the mount points at the OS layer and map the storage needed to the respective HANA DB mount points; download the SAP HANA media, uncompress it and set the ownership & permissions of the installation files; and, at the end, kick off the interactive HANA installation, supplying manual inputs as and when prompted. Using both tools, i.e. Terraform and Ansible, and after a few initial trials, we were able to create a single-click script that accomplishes this whole task in one go. We tested this automation script with both operating systems – SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL) – which are certified operating systems for SAP HANA DB on Google Cloud https://cloud.google.com/solutions/sap/docs/sap-hana-os-support.
We split the task into four major steps, as shown below. In the first two steps, we capture all the mandatory inputs needed for creation of the VM and installation of the SAP HANA DB instance. These input values, passed to the script during execution for the defined variables, are stored in a template accepted by the Terraform/Ansible runtime environment. The remaining steps put the automation logic together and deploy the script.
Step 1 – As the very first step, we decided upon the input parameters for VM creation, such as machine type, hostname, operating system image, disks, network, region & zone and the other mandatory details needed to spin up a VM in GCP, and recorded these parameter values in an input parameter file. A sample is given below:
#-------INPUT VARIABLES FOR TERRAFORM RESOURCE PROVISIONING----------------
#Instance#
instance_name =sap-hana-vm
project =sap-automation-112233
vm_network =default
zone =us-central1-a
boot_disk_size =20
image =rhel-sap-cloud/rhel-8-4-sap-ha
machine_type =n1-highmem-32
device_name =sap-hana-vm-OS
disktype =pd-balanced
#address#
address_name =hana-ip
address_type =EXTERNAL
region =us-central1
network_tier =STANDARD
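To illustrate how such a file can feed the Terraform run, here is a minimal sketch that turns the "key =value" pairs into -var arguments for terraform apply. The file name input.parameters and the sample keys are assumptions for the illustration (the sketch writes a tiny sample file itself so it stays self-contained):

```shell
#!/bin/bash
# Minimal sketch: convert "key =value" pairs from the input parameter
# file into -var arguments for terraform apply. The file name
# "input.parameters" and the sample keys are illustrative assumptions.
cat > input.parameters <<'EOF'
#Instance#
instance_name =sap-hana-vm
zone =us-central1-a
EOF

TF_VARS=()
while IFS='=' read -r key value; do
  key="$(echo "$key" | xargs)"      # trim surrounding whitespace
  value="$(echo "$value" | xargs)"
  [ -z "$key" ] && continue
  TF_VARS+=("-var" "${key}=${value}")
done < <(grep -v '^#' input.parameters)

echo "terraform apply ${TF_VARS[*]}"
```

The real script can of course also use a Terraform .tfvars file directly; the point is only that every value a human would otherwise type is resolved from the one parameter file.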
Step 2 – Values for the input parameters for HANA DB installation, such as DB SID, instance number, system usage (production/test/development), and the SYSTEM user password for SYSTEMDB and the tenant DB, are captured in the same input parameter file in the project source repository. A sample is given below:
#Enter SAP HANA System ID: var6
sap_hana_system_id =HDB
#Enter Instance Number : var7
sap_hana_instance_number =00
#Index | System Usage | Description
#-----------------------------------------------------------------------------
#1     | production   | System is used in a production environment
#2     | test         | System is used for testing, not production
#3     | development  | System is used for development, not production
#4     | custom       | System usage is neither production, test nor development
#Select System Usage / Enter Index : var9
sap_hana_system_usage =3
#Enter System Administrator (sidadm) Password: var17
system_administrator_password =<passwd>
#Confirm System Administrator (sidadm) Password: var18
confirm_system_adm_password =<passwd>
#Enter System Database User (SYSTEM) Password: var19
system_database_user_password =<passwd>
#Confirm System Database User (SYSTEM) Password: var20
confirm_system_db_user_password =<passwd>
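To give an idea of how these values are consumed later, here is a hedged sketch of how they could be mapped onto an unattended hdblcm call. The flag names follow hdblcm's batch mode; the usage-index-to-text mapping is our assumption, and password handling (config file or stdin) is deliberately left out of the illustration:

```shell
#!/bin/bash
# Sketch: map the captured inputs onto an unattended hdblcm call.
# Values below are the samples from the parameter file; the real script
# reads them from the file. Password handling is intentionally omitted.
sap_hana_system_id=HDB
sap_hana_instance_number=00
sap_hana_system_usage=3

# hdblcm expects the system usage as text, so map the index to its value
case "$sap_hana_system_usage" in
  1) usage=production ;;
  2) usage=test ;;
  3) usage=development ;;
  4) usage=custom ;;
esac

HDBLCM_CMD="./hdblcm --batch --sid=${sap_hana_system_id} \
--number=${sap_hana_instance_number} --system_usage=${usage}"
echo "$HDBLCM_CMD"
```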
Apart from capturing the input values in the parameter file, the following tasks are also performed during preparation:
> Calculating the storage needed for each HANA mount point: /hana/data, /hana/log, /hana/shared, /usr/sap
> Downloading the SAP HANA DB server installation media from the SAP portal and keeping it in a Google Cloud Storage bucket
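The storage calculation above can be sketched as follows. The formulas mirror the commonly published sizing guidance (roughly 1.2 x RAM for /hana/data, 0.5 x RAM for /hana/log capped at 512 GB, 1 x RAM for /hana/shared capped at 1 TB, and 32 GB for /usr/sap); treat these factors as an assumption to verify against the current Google Cloud planning guide for SAP HANA:

```shell
#!/bin/bash
# Sketch of the per-mount-point storage calculation. The sizing factors
# are assumptions based on common guidance, not values from this blog:
#   /hana/data   ~ 1.2 x RAM
#   /hana/log    ~ 0.5 x RAM (capped at 512 GB)
#   /hana/shared ~ 1.0 x RAM (capped at 1 TB)
#   /usr/sap     ~ 32 GB
ram_gb=208   # n1-highmem-32 has 208 GB of memory

data_gb=$(( ram_gb * 12 / 10 ))
log_gb=$(( ram_gb / 2 )); [ "$log_gb" -gt 512 ] && log_gb=512
shared_gb=$ram_gb; [ "$shared_gb" -gt 1024 ] && shared_gb=1024
usrsap_gb=32

echo "/hana/data=${data_gb}G /hana/log=${log_gb}G /hana/shared=${shared_gb}G /usr/sap=${usrsap_gb}G"
```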
Step 3 – Scripting
A Terraform script (.tf) is used for deployment of the virtual machine (VM) and an Ansible script (.yaml) is used for the SAP HANA installation. Apart from these scripts, a few supporting files are also created, such as the input parameter file and the service account key file.
All the logic is built in this step:
- VM creation commands are put in the Terraform script (.tf), which is called from main.sh
- HANA DB installation commands are written in a bash script, which in turn is called from the Ansible script (.yaml)
- Weaving the Terraform and Ansible scripts together
- Performing tasks after VM creation and before HANA DB installation, such as creating the mount points and downloading the installation files
We need to run only main.sh; the rest of the tasks are executed from within it. The execution flow of the single-click script is:
main.sh
↓
main.tf
↓
hanadbinstall.yaml
↓
hanadbinstall.sh (bash)
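A condensed sketch of this weaving in main.sh could look like the following. The commands are collected and printed rather than executed so the sketch stays runnable anywhere; the exact flags and the inventory handling are illustrative assumptions, not the production script:

```shell
#!/bin/bash
# Condensed sketch of main.sh's flow: Terraform first, then Ansible.
# Steps are printed instead of executed so this illustration is
# self-contained; flags and inventory handling are assumptions.
STEPS=(
  "terraform init"
  "terraform apply -auto-approve"                      # creates the VM from main.tf
  "ansible-playbook -i inventory hanadbinstall.yaml"   # runs hanadbinstall.sh on the VM
)

for step in "${STEPS[@]}"; do
  echo "+ $step"
done
```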
Step 4 – Deployment
- Open the Cloud Shell from the GCP console, go to the repository path where all the Terraform and Ansible scripts are placed, and kick off main.sh
- It fetches the parameter values from the input properties file and creates the VM with the defined inputs
- After VM creation, it pulls the DB installation input parameters from the input properties file, installs the Ansible executables and calls the .yaml file
- As the next step, the bash script creates the defined mount points on the additional disks and assigns the required storage to each mount point
- The listed commands copy the HANA DB installation files from the defined Google Cloud Storage bucket to the specified directory on the newly created VM and uncompress them so that the installation executables are ready
- It then kicks off the HANA DB installation by calling the hdblcm command, mapping the DB parameter values fetched from the parameter file to the variables
- Once the HANA DB instance installation is completed, the script ends with the echo message placed at the end of the script, e.g. “Installation completed”, and you will have the VM deployed with the SAP HANA DB instance installed on it
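The bash steps in the middle of this flow (mount points, media copy, unpacking) can be sketched as follows. The LVM device names, the bucket path and the SAR archive name are hypothetical placeholders, and the commands are built and printed rather than executed so the sketch stays self-contained:

```shell
#!/bin/bash
# Hedged sketch of the post-VM tasks: build the filesystem/mount and
# media-staging commands. LVM device names, the bucket path and the
# SAR archive name are hypothetical placeholders; commands are printed,
# not executed, so the sketch is safe to run anywhere.
BUCKET="gs://<your-bucket>/hana-media"   # placeholder path
MEDIA_DIR=/hana/shared/install

CMDS=""
for mp in /hana/data /hana/log /hana/shared /usr/sap; do
  lv="/dev/mapper/vg_hana-$(basename "$mp")"   # one logical volume per mount point (assumption)
  CMDS+="mkfs.xfs $lv"$'\n'
  CMDS+="mount $lv $mp"$'\n'
done
CMDS+="gsutil cp ${BUCKET}/* ${MEDIA_DIR}/"$'\n'
CMDS+="cd ${MEDIA_DIR} && ./SAPCAR -xvf IMDB_SERVER*.SAR"$'\n'

printf '%s' "$CMDS"
```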
Connect to the VM using ssh and switch to the sidadm user. Check the status of all HANA services (daemon, nameserver, preprocessor, indexserver, etc.) using the sapcontrol command. Status GREEN denotes that the particular service is up and running.
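For example, with instance number 00 from the sample inputs, the check is `sapcontrol -nr 00 -function GetProcessList`. The output lines below are a trimmed illustration of what GetProcessList returns, used here to sketch a simple scripted health check:

```shell
#!/bin/bash
# Verification sketch: as <sid>adm, "sapcontrol -nr 00 -function
# GetProcessList" lists the HANA services and their status. The sample
# output below is a trimmed illustration; a simple health check can
# count the entries that are not GREEN.
sample_output="hdbdaemon, HDB Daemon, GREEN, Running
hdbnameserver, HDB Nameserver, GREEN, Running
hdbpreprocessor, HDB Preprocessor, GREEN, Running
hdbindexserver, HDB Indexserver, GREEN, Running"

not_green=$(printf '%s\n' "$sample_output" | grep -cv 'GREEN' || true)
echo "services not GREEN: $not_green"
```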
Another way to check the status of the HANA DB instance services is to connect to the HANA database with the DB user SYSTEM using the SAP HANA front-end tool “HANA Studio”, and check the HANA services status (it should be “Active”) in the “Landscape” tab of the Administration perspective.
We executed this script successfully with both operating systems (SLES and RHEL) and HANA DB version 2.0 SPS 06, and it took us less than 25 minutes to have a Linux VM deployed with the SAP HANA DB instance installed, up and running. The whole process took around 40% less time compared to the conventional manual installation.
This automation script is pretty handy for spinning up an SAP HANA DB instance in any vanilla installation or brownfield scenario; it is reusable with minor changes and can serve as a template for a factory model.
I would like to acknowledge the great help provided by my colleague Sudipta Kundu, Cloud Developer, in putting together and testing these automation scripts.
I hope this technical blog helps you understand the various inputs, tasks and their sequence involved in automating SAP HANA DB installation, along with the underlying VM deployment, using Terraform/Ansible scripting in one go on Google Cloud Platform (GCP).
Please feel free to share your valuable feedback.