Abstract:

Application load balancing and auto scaling open new possibilities for handling peaks in website traffic and reducing manual intervention. In this work we automate the phases of the infrastructure lifecycle, i.e., Design, Build, Operate, and Optimize. The main goal of autonomous infrastructure is to improve agility toward business requirements and to reduce manual intervention while managing the overall lifecycle of an application. The website is developed on a WordPress application stack of two servers behind one Elastic Load Balancer (1 LB, 1 Apache-PHP server, 1 MySQL server), with the systems patched to the latest kernel and security updates, and one test blog site is created under a subdomain. We create self-healing automation that monitors the Apache-PHP and MySQL processes; its main purpose is to reduce manual intervention by monitoring CPU utilization and automatically creating or terminating instances based on demand. An Apache-PHP server is added when CPU utilization rises above 70-80%, and servers are created and terminated automatically based on CPU utilization. The website runs successfully during peak hours without manual intervention.

Keywords:

Application load balancing, Auto scaling, Self-Healing.

INTRODUCTION:

Websites offer many features, such as delivering up-to-date information and supporting all platforms. Web application services for Internet Protocol television (IPTV) offer great opportunities for entertainment, e-commerce, and social networking. However, a major problem occurs at peak hours, for example during festival seasons, when many users send the same request to a particular server. Servers respond to users on a FIFO (first-in, first-out) basis.

However, the remaining users are left waiting and receive their data late; traffic is high and execution time increases. Here we reduce manual intervention, control traffic, and reduce the operational workload using the methods below. Several vendors are available in the cloud market; we chose AWS (Amazon Web Services), a public cloud vendor. The workflow explains the process step by step (Figure 1), followed by sample screenshots for each function.

Cloud computing is the delivery of different services over the Internet, including tools and applications such as data storage, servers, databases, networking, and software. Load balancing is the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers, receiving incoming requests and distributing them to any available server capable of fulfilling them.

The workflow is as follows. First, create a VPC (Virtual Private Cloud) and add subnets as required. Then create an application load balancer to spread load from one server to another based on website traffic, and set up its target groups. Next, create a launch configuration, then create an auto scaling group that scales up or down on CPU utilization, with minimum and maximum node counts. Purchase a website domain and host it in AWS Route 53: after creating the hosted zone, Route 53 provides nameservers; copy those nameservers into the registrar where the domain was bought, then create the records. Finally, view the test blog and the website.

Figure 1.1 Entire workflow.

EXISTING SYSTEM:

The existing system was the Classic Load Balancer. The Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and the connection level. It is intended for applications built within the EC2-Classic network. A load balancer distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones. You can add and remove instances from your load balancer as your needs change, without disrupting the overall flow of requests to your application.

DRAWBACKS OF EXISTING SYSTEM:

The Classic Load Balancer routes traffic evenly between the Availability Zones (AZs) that are enabled on the ELB. Due to the way some clients handle DNS, load imbalance can occur with this configuration if there is not an equal number of servers answering requests in each AZ. Classic load balancing was the first load balancer type offered by ELB; had we used it in this project, it would not have supported our design. For example, an improperly configured Classic Load Balancer can cause high memory (RAM) utilization and high CPU utilization on the backend instances, as can improper web server configuration on those instances.

PROPOSED SYSTEM:

The proposed system uses application load balancing. An Application Load Balancer automatically distributes incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets and routes traffic only to the healthy targets. Elastic Load Balancing scales the load balancer as incoming traffic changes over time and can scale to the vast majority of workloads. Auto scaling, also written as autoscaling or automatic scaling, is a cloud computing technique for dynamically allocating computational resources. Auto scaling and load balancing are related because an application typically scales based on the load balancer's serving capacity.

VIRTUAL PRIVATE CLOUD CONFIGURATION:

A virtual private cloud (VPC) is an on-demand configurable pool of shared resources allocated within a public cloud environment, providing a degree of isolation between the different organizations using the resources. Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you have defined. This virtual network closely resembles a traditional network that you would operate in your own data center, with the benefits of the scalable infrastructure of AWS. A sample screenshot of our VPC configuration is shown below.
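As a minimal sketch, the VPC layout described above can be expressed as the parameters one would pass to boto3's `ec2.create_vpc` and `ec2.create_subnet`; the CIDR blocks and Availability Zone names below are illustrative placeholders, not the actual configuration from Figure 1.2.

```python
# Sketch of a VPC plan for the setup described above. CIDR blocks and
# AZ names are hypothetical placeholders, not the paper's real values.
def vpc_plan(cidr="10.0.0.0/16"):
    """Return a VPC CIDR plus two subnets in different Availability Zones;
    an Application Load Balancer requires subnets in at least two AZs."""
    return {
        "Vpc": {"CidrBlock": cidr},
        "Subnets": [
            {"CidrBlock": "10.0.1.0/24", "AvailabilityZone": "us-east-1a"},
            {"CidrBlock": "10.0.2.0/24", "AvailabilityZone": "us-east-1b"},
        ],
    }

plan = vpc_plan()
# With boto3 these parameters would feed:
#   ec2.create_vpc(**plan["Vpc"])
#   ec2.create_subnet(VpcId=vpc_id, **subnet)   for each subnet
print(len(plan["Subnets"]))
```

Spreading the subnets across two Availability Zones is what later allows the load balancer and auto scaling group to survive the loss of a single zone.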

Figure 1.2 VPC Architecture

APPLICATION LOAD BALANCER:

The Application Load Balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones, which increases the availability of your application. You add one or more listeners to the load balancer, connect it to the VPC, and add subnets.
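The load balancer and listener described above can be sketched as the parameter sets for boto3's `elbv2.create_load_balancer` and `elbv2.create_listener`; all names, subnet IDs, and ARNs here are hypothetical placeholders.

```python
# Sketch of the Application Load Balancer and its HTTP listener.
# Names, subnet IDs, and ARNs are hypothetical placeholders.
def alb_params(subnet_ids, security_group_id):
    """Parameters for elbv2.create_load_balancer: subnets in at least
    two Availability Zones, internet-facing, type 'application'."""
    return {
        "Name": "wordpress-alb",
        "Subnets": subnet_ids,
        "SecurityGroups": [security_group_id],
        "Scheme": "internet-facing",
        "Type": "application",
    }

def listener_params(alb_arn, target_group_arn):
    """Parameters for elbv2.create_listener: forward HTTP port 80
    traffic to the registered target group."""
    return {
        "LoadBalancerArn": alb_arn,
        "Protocol": "HTTP",
        "Port": 80,
        "DefaultActions": [
            {"Type": "forward", "TargetGroupArn": target_group_arn}
        ],
    }

alb = alb_params(["subnet-aaa", "subnet-bbb"], "sg-123")
print(alb["Type"])
```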

TARGET GROUPS:

A target group tells a load balancer where to direct traffic, for example to EC2 instances or fixed IP addresses. When creating a load balancer, you create one or more listeners and configure listener rules to direct the traffic to a target group.
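A sketch of such a target group, as the parameters for boto3's `elbv2.create_target_group`, is shown below; the health check is what lets the load balancer route only to healthy Apache-PHP instances. The group name, VPC ID, and health-check thresholds are hypothetical.

```python
# Sketch of the target group the listener forwards to. The name,
# VPC ID, and threshold values are hypothetical placeholders.
def target_group_params(vpc_id):
    """Parameters for elbv2.create_target_group: HTTP targets on port 80
    with a health check against the WordPress front page."""
    return {
        "Name": "wordpress-targets",
        "Protocol": "HTTP",
        "Port": 80,
        "VpcId": vpc_id,
        "TargetType": "instance",
        "HealthCheckProtocol": "HTTP",
        "HealthCheckPath": "/",        # the WordPress front page
        "HealthyThresholdCount": 2,
        "UnhealthyThresholdCount": 2,
    }

tg = target_group_params("vpc-123")
# boto3: elbv2.create_target_group(**tg)
print(tg["HealthCheckPath"])
```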

CREATE LAUNCH CONFIGURATION:

Here we configure our own template. A launch configuration is a template that an EC2 Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping.
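The fields listed above map directly onto boto3's `autoscaling.create_launch_configuration` parameters, sketched below; every concrete value (AMI ID, key name, security group, volume size) is a hypothetical placeholder, and the assumption is that the AMI already has Apache-PHP installed.

```python
# Sketch of the launch configuration described above. All concrete
# values (AMI ID, key name, security group) are hypothetical.
def launch_configuration_params():
    """Parameters for autoscaling.create_launch_configuration:
    AMI, instance type, key pair, security group, and block device."""
    return {
        "LaunchConfigurationName": "apache-php-lc",
        "ImageId": "ami-0123456789abcdef0",   # AMI with Apache-PHP baked in
        "InstanceType": "t2.micro",
        "KeyName": "wordpress-key",
        "SecurityGroups": ["sg-0123456789abcdef0"],
        "BlockDeviceMappings": [
            {"DeviceName": "/dev/xvda",
             "Ebs": {"VolumeSize": 8, "VolumeType": "gp2"}},
        ],
    }

lc = launch_configuration_params()
# boto3: autoscaling.create_launch_configuration(**lc)
print(lc["InstanceType"])
```

Baking Apache-PHP into the AMI (rather than installing it at boot) keeps scale-out fast, since new instances are serving traffic as soon as the health check passes.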

CREATE AUTO SCALING GROUP:

Auto scaling makes sure instances scale up or down depending on the request load. AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it is easy to set up application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. AWS Auto Scaling makes scaling simple, with recommendations that allow you to optimize performance, costs, or a balance between them. If you are already using Amazon EC2 Auto Scaling to dynamically scale your Amazon EC2 instances, you can combine it with AWS Auto Scaling to scale additional resources for other AWS services. With AWS Auto Scaling, your applications always have the right resources at the right time. After the auto scaling group is created successfully, instances are created and terminated automatically based on your requirements (e.g., when CPU utilization exceeds 70%, an instance is created automatically; when it falls below 70%, an instance is terminated).
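A sketch of this auto scaling group and its 70% CPU target follows, using a target-tracking policy via boto3's `autoscaling.create_auto_scaling_group` and `autoscaling.put_scaling_policy`; the group name, size bounds, subnet IDs, and target group ARN are hypothetical placeholders.

```python
# Sketch of the auto scaling group and the 70% CPU target described
# above. Names, size bounds, subnet IDs, and ARNs are hypothetical.
def auto_scaling_group_params(subnet_ids, target_group_arn):
    """Parameters for autoscaling.create_auto_scaling_group, tying the
    launch configuration to the load balancer's target group."""
    return {
        "AutoScalingGroupName": "apache-php-asg",
        "LaunchConfigurationName": "apache-php-lc",
        "MinSize": 1,                       # minimum node
        "MaxSize": 4,                       # maximum node
        "DesiredCapacity": 1,
        "VPCZoneIdentifier": ",".join(subnet_ids),
        "TargetGroupARNs": [target_group_arn],
    }

def cpu_target_policy(target=70.0):
    """Parameters for autoscaling.put_scaling_policy: a target-tracking
    policy that adds/removes instances around the given average CPU."""
    return {
        "AutoScalingGroupName": "apache-php-asg",
        "PolicyName": "cpu-70",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": target,
        },
    }

asg = auto_scaling_group_params(["subnet-aaa", "subnet-bbb"], "arn:tg")
# boto3: autoscaling.create_auto_scaling_group(**asg)
#        autoscaling.put_scaling_policy(**cpu_target_policy())
print(asg["MaxSize"])
```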

Figure 1.3 Auto Scaling Architecture

DOMAIN PURCHASING:

We purchased the domain and hosting from godaddy.com; our domain name is “www.innovativecodesacademy.in”. After successfully purchasing the domain on godaddy.com, we installed the WordPress application through cPanel, and then hosted the domain in a Route 53 hosted zone.

ROUTE 53 HOSTED ZONE:

Route 53 is used to host our domain. After the hosted zone is created, it provides four nameservers; copy those nameservers into the registrar where the domain was bought and add them there, then create the records, and finally view the test blog and website. Our website and test blog load successfully, without manual intervention, based on monitoring CPU utilization.
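Pointing the domain at the load balancer is typically done with an alias A record; the sketch below builds the change batch for boto3's `route53.change_resource_record_sets`. The ALB DNS name and hosted zone ID shown are hypothetical placeholders.

```python
# Sketch of the Route 53 alias record pointing the domain at the
# load balancer. The ALB DNS name and zone ID are hypothetical.
def alias_record_change(domain, alb_dns_name, alb_zone_id):
    """ChangeBatch for route53.change_resource_record_sets: an alias
    A record resolving the domain to the load balancer."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": alb_zone_id,   # the ALB's own zone ID
                    "DNSName": alb_dns_name,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

change = alias_record_change(
    "www.innovativecodesacademy.in",
    "wordpress-alb-123.us-east-1.elb.amazonaws.com",  # placeholder
    "Z0000000000000")                                  # placeholder
# boto3: route53.change_resource_record_sets(
#            HostedZoneId=hosted_zone_id, ChangeBatch=change)
print(change["Changes"][0]["Action"])
```

An alias record is preferred over a CNAME here because the zone apex cannot carry a CNAME, and alias queries to AWS targets are free.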

Figure 1.4 Website

CPU UTILIZATION:

Here we monitor CPU utilization: when CPU utilization exceeds 70%, instances are created automatically; when it falls below 70%, instances are terminated automatically.
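The scaling rule stated above can be sketched as a small decision function: one instance is added when average CPU exceeds 70%, one removed when it falls below, bounded by the group's minimum and maximum sizes (the bounds here are illustrative).

```python
# Minimal sketch of the 70% CPU scaling rule described above.
# min_size/max_size bounds are illustrative placeholders.
def scaling_action(avg_cpu, current, min_size=1, max_size=4, threshold=70.0):
    """Return the new desired instance count for the given average CPU."""
    if avg_cpu > threshold and current < max_size:
        return current + 1          # scale out: add an Apache-PHP server
    if avg_cpu < threshold and current > min_size:
        return current - 1          # scale in: terminate one instance
    return current                  # at a bound, or exactly on threshold

print(scaling_action(85.0, 2))  # scale out: 3
print(scaling_action(40.0, 3))  # scale in: 2
print(scaling_action(40.0, 1))  # already at minimum: 1
```

In practice a cooldown period would also be applied between actions so the group does not oscillate around the threshold.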

Figure 1.5 CPU Utilization

CONCLUSION:

The auto scaling group monitors CPU utilization: when CPU utilization exceeds 70%, instances are created automatically, and when it falls below 70%, instances are terminated. The application load balancer routes requests to the new server based on server latency, and load balancing improves application responsiveness. In the future this will be very useful to users and website owners: results are delivered faster, manual intervention is reduced, and the operational workload is minimized.

