The autoscaling component comprises the Synapse mediators AutoscaleInMediator and AutoscaleOutMediator, and a Synapse task, ServiceRequestsInFlightEC2Autoscaler, which functions as the load analyzer task. A system can scale up based on several factors, and hence autoscaling algorithms can easily be written to suit the nature of the system. For example, Amazon's Auto Scaling API provides options to scale the system on system properties such as Load (the timed average of the system load), CPUUtilization (utilization of the CPU of a given instance), or Latency (the delay in serving service requests).
Autoscaler Components
- AutoscaleIn mediator - Creates a unique token and adds it to a list for each message that is received.
- AutoscaleOut mediator - Removes the corresponding stored token from the list for each response message that is sent.
- Load Analyzer Task - ServiceRequestsInFlightEC2Autoscaler is the load analyzer task used for service-level autoscaling by default. It periodically checks the length of the list of messages, based on the configuration parameters. The messages in flight for each back-end service are tracked by the AutoscaleIn and AutoscaleOut mediators, since the messages-in-flight algorithm is used for autoscaling (see the sketch below).
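A minimal sketch of how the two mediators could track in-flight messages with a shared token list is shown below, assuming a helper class of this shape; the names (InFlightRequestTracker, tokensPerDomain, and so on) are illustrative, not the actual WSO2/Synapse class names.

```java
import java.util.Map;
import java.util.Queue;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative tracker shared by the two mediators: one token per in-flight message,
// kept per back-end service domain so the load analyzer can read the list length.
public class InFlightRequestTracker {

    private final Map<String, Queue<String>> tokensPerDomain = new ConcurrentHashMap<>();

    // Called by the AutoscaleIn mediator for every incoming request.
    public String trackRequest(String serviceDomain) {
        String token = UUID.randomUUID().toString();
        tokensPerDomain
                .computeIfAbsent(serviceDomain, d -> new ConcurrentLinkedQueue<>())
                .add(token);
        return token;  // the token travels with the message, e.g. as a message context property
    }

    // Called by the AutoscaleOut mediator when the corresponding response is sent back.
    public void untrackRequest(String serviceDomain, String token) {
        Queue<String> tokens = tokensPerDomain.get(serviceDomain);
        if (tokens != null) {
            tokens.remove(token);
        }
    }

    // Read by the load analyzer task: current number of messages in flight for a domain.
    public int inFlightCount(String serviceDomain) {
        Queue<String> tokens = tokensPerDomain.get(serviceDomain);
        return tokens == null ? 0 : tokens.size();
    }
}
```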
ServiceRequestsInFlightEC2Autoscaler implements execute() of the Synapse Task interface. There it calls sanityCheck(), which performs the sanity check, and autoscale(), which handles the autoscaling.
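As a rough skeleton, assuming the org.apache.synapse.task.Task interface (a single execute() method), the task body could look like this; the two private methods stand for the logic described in the sections below.

```java
import org.apache.synapse.task.Task;

// Sketch of the load analyzer task: each periodic run first verifies the health
// of the system and then applies the scaling algorithm.
public class ServiceRequestsInFlightEC2Autoscaler implements Task {

    @Override
    public void execute() {
        sanityCheck();   // verify load balancers, minimum instance counts, elastic IPs
        autoscale();     // apply the requests-in-flight scaling algorithm per domain
    }

    private void sanityCheck() {
        // see the "Sanity Check" section
    }

    private void autoscale() {
        // see the "Autoscale" section
    }
}
```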
Sanity Check
sanityCheck() checks the sanity of the load balancers and the load-balanced services: whether the running application nodes and load balancer instances meet the minimum numbers specified in the configuration, and whether the load balancers have been assigned elastic IPs.
nonPrimaryLBSanityCheck() runs once on the primary load balancer and runs from time to time on the secondary/non-primary load balancers, as the task is executed periodically. nonPrimaryLBSanityCheck() assigns the elastic IP to the instance if one is not assigned already. Secondary load balancers periodically check that a primary load balancer is running. This prevents the load balancer from becoming a single point of failure in a load-balanced services architecture.
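The outline below sketches this failover check under stated assumptions: ElasticIpClient and all of its methods are hypothetical stand-ins for the EC2 elastic IP and membership calls the real task makes, not an actual API.

```java
// Hypothetical sketch of the non-primary load balancer sanity check.
public class LoadBalancerSanityCheck {

    /** Minimal abstraction over the EC2 calls used here (assumed, not a real API). */
    public interface ElasticIpClient {
        boolean isElasticIpAssigned(String instanceId);
        void assignElasticIp(String instanceId);
        boolean isPrimaryLoadBalancerRunning();
    }

    private final ElasticIpClient client;
    private final String instanceId;
    private final boolean primary;

    public LoadBalancerSanityCheck(ElasticIpClient client, String instanceId, boolean primary) {
        this.client = client;
        this.instanceId = instanceId;
        this.primary = primary;
    }

    // Invoked from sanityCheck() on every periodic run of the task.
    public void nonPrimaryLBSanityCheck() {
        // Assign the elastic IP to this instance if it does not hold one already
        // (in effect this runs once on the primary load balancer).
        if (!client.isElasticIpAssigned(instanceId)) {
            client.assignElasticIp(instanceId);
        }
        // Secondary load balancers keep checking that a primary is running, so the
        // load balancer layer has no single point of failure.
        if (!primary && !client.isPrimaryLoadBalancerRunning()) {
            client.assignElasticIp(instanceId);  // take over the primary's role
        }
    }
}
```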
computeRunningAndPendingInstances() computes the number of instances that are running and pending. The ServiceRequestsInFlightEC2Autoscaler task computes the running and pending instances for the entire system using a single EC2 API call. This reduces the number of EC2 API calls, since AWS throttles the number of requests you can make in a given time. The method is used to find out whether the running instances meet the minimum numbers specified for the application nodes and the load balancer instances in the configuration, as given in loadbalancer.xml. Instances are launched if the specified minimum number of instances is not found.
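For illustration only, counting running and pending instances for the whole system from a single DescribeInstances call could look roughly like this, using the AWS SDK for Java v1; the actual component may use a different EC2 client library, and the "domain" tag and grouping are assumptions.

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.DescribeInstancesResult;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.Reservation;

import java.util.HashMap;
import java.util.Map;

// Rough sketch: one DescribeInstances call for the whole system, instances grouped by an
// assumed "domain" tag so each cluster's running + pending count can be checked against
// the minimums configured in loadbalancer.xml. Result pagination is omitted for brevity.
public class RunningAndPendingInstanceCounter {

    public Map<String, Integer> computeRunningAndPendingInstances() {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        Map<String, Integer> countsPerDomain = new HashMap<>();
        DescribeInstancesResult result = ec2.describeInstances(new DescribeInstancesRequest());

        for (Reservation reservation : result.getReservations()) {
            for (Instance instance : reservation.getInstances()) {
                String state = instance.getState().getName();
                if (!"running".equals(state) && !"pending".equals(state)) {
                    continue;  // only running and pending instances count towards the minimum
                }
                String domain = instance.getTags().stream()
                        .filter(t -> "domain".equals(t.getKey()))   // assumed tag name
                        .map(t -> t.getValue())
                        .findFirst()
                        .orElse("unknown");
                countsPerDomain.merge(domain, 1, Integer::sum);
            }
        }
        return countsPerDomain;
    }
}
```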
Autoscale
autoscale() handles the autoscaling of the entire system by analyzing the load of each domain. It contains the algorithm: requests-in-flight based autoscaling. If the current average of in-flight requests is higher than what the current nodes can handle, the system scales up. If the current average is less than what (current nodes - 1) can handle, the system scales down.
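A minimal sketch of that decision rule follows; maxRequestsPerNode stands in for the per-service capacity value that would come from loadbalancer.xml, and the class and method names are illustrative.

```java
// Illustrative requests-in-flight scaling decision.
public final class RequestsInFlightScalingDecision {

    public enum Action { SCALE_UP, SCALE_DOWN, NO_CHANGE }

    public static Action decide(double averageRequestsInFlight,
                                int runningNodes,
                                int maxRequestsPerNode) {
        double currentCapacity = (double) runningNodes * maxRequestsPerNode;
        double capacityWithOneNodeFewer = (double) (runningNodes - 1) * maxRequestsPerNode;

        if (averageRequestsInFlight > currentCapacity) {
            return Action.SCALE_UP;       // current nodes cannot handle the observed load
        }
        if (averageRequestsInFlight < capacityWithOneNodeFewer) {
            return Action.SCALE_DOWN;     // one node fewer could still handle the load
        }
        return Action.NO_CHANGE;
    }

    public static void main(String[] args) {
        // e.g. 2 running nodes that can each handle 40 in-flight requests:
        System.out.println(decide(95, 2, 40));   // SCALE_UP
        System.out.println(decide(30, 2, 40));   // SCALE_DOWN (one node's capacity of 40 is enough)
        System.out.println(decide(60, 2, 40));   // NO_CHANGE
    }
}
```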
The autoscaling component spawns new instances, and once the relevant services start running successfully in the spawned instances, they join the respective service cluster. The load balancer starts forwarding service calls, or requests, to the newly spawned instances once they have joined the service clusters. Similarly, when the load goes down, the autoscaling component terminates the under-utilized service instances after they have served the requests already routed to them.