
Thursday, May 6, 2021

[Scam?] Dropshippers on Amazon sourcing from other sites such as Vitacost

A 6-pack of Lakewood organic orange juice was $44.55 (incl. tax) with Subscribe & Save when ordered directly from Lakewood. Without subscribing, it would be $49.50. In any case, anyone can subscribe and then cancel the subscription soon after if they do not need any more deliveries. I ordered via Amazon for $43.00 (incl. tax) since it was cheaper. Apparently, the Amazon seller (East Coast Shippers) is merely a dropshipper. Someone at East Coast Shippers had simply ordered it from Vitacost (which lists the same product for $28.14, incl. tax) and had it shipped directly to me. "Ship To:" on the invoice shows my name and address; "Bill To:" shows the name of the person from East Coast Shippers.
 
Apparently, Vitacost sells this stuff cheap. Is Vitacost a scam that sells fake products? More likely, the dropshippers stock up when the seller runs discounts. I have seen dropshippers buying things from Alibaba and then selling them on Amazon. That works because Alibaba is not widely used in the USA. Although Vitacost is American, not many people are aware of it either (I wasn't until I saw the invoice). So the dropshippers shop these not-so-popular cheap sites for you, pay a fraction of the price you paid via Amazon, and ship the product directly to you.
 
Now, I am tempted to order directly from Vitacost next time. I could have saved the $43 - $28 = $15 that I paid to the middlemen at East Coast Shippers for no reason. But the juice bottles feel sticky, so I am suspicious of Vitacost's quality. Then again, Vitacost cannot be that bad, since it is owned by Kroger. Next time I should be more careful with Amazon dropshippers. They may be doing this all the time, ordering stuff from lower-quality third parties.

Wednesday, January 16, 2019

Fake reviews and Amazon.com

I used to be an active reviewer on TripAdvisor, until one day. That day, as usual, I was going through TripAdvisor, reviewing places I had been to before, when I found a large cluster of fake reviews. I reported them to TripAdvisor with proof. They did nothing. In fact, an average Indian restaurant had become #1 in Lisbon, thanks to fake reviews! TripAdvisor failed to take action even though I backed the report with substantial evidence. Of course, more people eventually visited, and real reviews started to outnumber the fake ones. When I checked now (2 years later), TripAdvisor had re-adjusted the rating to #1,532 of 4,361 restaurants in Lisbon from its previous spot. [Read the full story]

Rating, once fake reviews are removed
This time I fell for fake reviews on Amazon. A product that does not even work properly got all 5-star reviews, thanks to the fakes. As of now, with 27 reviews, it sits at 4.5 stars, including my 1 star. When I bought it, it had nothing but 5-star reviews! Then, today I found an excellent website named fakespot.com that identifies fake reviews. It recognizes and eliminates the fake reviews and gives you back a corrected rating. According to that site, this product deserves just 1.5 stars. Indeed, an accurate rating! There is also another website called reviewmeta.com, which does not seem to work as well as fakespot.com.
WSJ has made an excellent video on these fake reviews. You should watch it.


Another site I like to use often is camelcamelcamel.com, which identifies whether a discount is really a discount by tracking the price history of products sold on Amazon.

Make sure to check the reviews of the other products from the same vendor when purchasing something. Currently, the product in question has already sold out, and I am not even sure whether the seller will list it again. They probably bought a batch of these in a Chinese street market and sold them all off quickly. Therefore, blacklisting a single product, or warning buyers about it, is not going to work. Awareness has to target the shady vendors, not just their dubious products. Next time I need to be a bit more vigilant when shopping on Amazon. The fakes are improving their game.


Update (Jan 19th):
Now, with my 1-star review and someone else's 1-star review (which I think are the only honest reviews) taken into consideration, Fakespot has updated its rating for the product to 0 stars. Indeed, the ideal rating for this product. I wish Amazon let me give 0-star ratings. :)

  • How are reviewers describing this item?
    good, easy, nice, little and better.
  • Our engine has profiled the reviewer patterns and has determined that there is high deception involved.
  • Our engine has analyzed and discovered that 16.1% of the reviews are reliable.
  • This product had a total of 31 reviews on Jan 19 2019.

Update (March 17th):
The seller contacted me over the phone (from China!) and by email, and tried to persuade me to delete the review with escalating offers of $10, $30, and $50, plus a full refund with the product kept for free. I had already returned the product and received a full refund within the return window. I sent their email communications, with all the proof, to Amazon and asked them to take action. Amazon failed to act, and the seller managed to get some negative reviews deleted (at least one 1-star review that I noticed) with similar bribes.



Their email goes like this:


***************************************************

Regarding your Amazon Product Review


Dear Customer,
Thank you for purchasing our Active Stylus Pen
Thank you for your purchase and taking the time to write a product review. We are terribly sorry to hear the product you received is defective and would like to know if we can send you a free replacement or assist you with a refund.

Customer reviews is important to us and we value your response. All responses will be used to further improve the quality of our service and products.

We saw the 1 star review you wrote down on January 16. This has a great impact on us. We are just a small seller. This will cause great harm to us. I want to ask you to help me delete this 1 star review. Can you help me?

We can provide you with a full refund and the product will be given as a gift, and we will pay you an additional $50 as compensation.

Can you accept it?

Sorry for the inconvenience and thank you for giving us the opportunity to rectify the matter.

Looking forward to your reply

Sincerly yours
Milletech Customer Service Team





***************************************************




Of course, my review will remain there. I truly wish Amazon were more proactive in removing or restricting these fake sellers. The product is thriving with more and more fake reviews. Amazon took no action to protect customers from this shady seller. And this is just one sample; I am sure it is neither the first nor the last product to use fake reviews to boost sales. Amazon confirmed that the seller got my phone number and email address from their system. It is really bad that Amazon discloses this information to sellers without a valid reason, as it enables them to bribe or threaten customers into leaving positive reviews.

Finally, I ask everyone to do additional research rather than blindly trusting the reviews they find online. Remember that a review you read online may have been incentivized even if it does not explicitly say so.

Thursday, June 30, 2011

Concerns of the public cloud and how PaaS helps mitigate them..

Using the cloud model to deploy applications has become a major trend in recent years. Developers deploy their applications on top of Infrastructure-as-a-Service providers, considering the advantages they offer.

Information outside

Software as a Service (SaaS) providers often rely on Infrastructure as a Service providers for their hardware requirements. Having a private cloud set up on their own servers is another choice, where they deploy their SaaS solutions in-house. A hybrid cloud setup combines the best of both worlds - private and public clouds.

Cloud outages and Server unavailability

When hosting your applications on a plain Infrastructure as a Service, you should treat the IaaS provider the way you would treat a single local hard drive. Redundancy should be in place to mitigate the availability risk, just as it would be for local hardware. The recent outage of the Amazon infrastructure has prompted many cloud developers to think more about building to withstand failures. What is your backup plan to ensure availability during such outages, and how will you handle the situation when an infrastructure or platform provider leaves the business altogether? You need a proper migration plan and backups for that. Using multiple availability zones and reducing the dependency on any single infrastructure or platform should be considered, as sketched below.
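
As a concrete example, the Elastic Load Balancing tools used elsewhere in this blog can spread a load balancer across more than one availability zone. A minimal sketch, with the hypothetical name myredundantlb, assuming both us-east-1b and us-east-1c are enabled for the account:

elb-create-lb myredundantlb --availability-zones us-east-1b,us-east-1c --listener "lb-port=80,instance-port=9763,protocol=http" -K KEY.pem -C CERT.pem

With instances registered in both zones, the load balancer keeps serving traffic even if one zone becomes unavailable.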

Security Issues

In the cloud, you are largely on your own in securing your applications from attacks, even though the infrastructure vendors have their own measures to protect the data, the platform, and the applications deployed on top of them. Security vulnerabilities may be higher in IaaS than on a local hard drive. When using IaaS in its purest form, the customer is responsible for implementing his own security solutions for his control objectives. Here Platform as a Service comes in handy: it takes care of the security solutions that are common to all software applications. Hence the Software-as-a-Service (SaaS) developer does not need to bother about the recurring security and availability issues for each piece of software deployed on top of the infrastructure. Rather, the platform handles these issues and lets the SaaS developer focus solely on the application itself, instead of mitigating the issues that stem from the inherent concerns of the cloud.

Privacy and legal issues

The data of your business is subject to legal requirements, and care should be taken that deploying it on the cloud does not compromise any of them. In most cases, infrastructure providers have control over your information, and legal and state entities of the host country may gain access to it even without your knowledge, which is far less likely when you keep your data on your own servers, in a private cloud, or on a local network of computers.

Vendor lock-in

On the other hand, coding for a platform reduces vendor lock-in compared to coding for a particular infrastructure. However, we should also note the risk of vendor lock-in by the PaaS provider, where it requires proprietary service interfaces or development languages. Platforms should adhere to standards so that migrating between platforms is not much of a task for SaaS developers, unlike migrating between infrastructures.

Flexibility

Rapidly evolving applications often require more flexibility from their PaaS offerings. A Platform-as-a-Service should provide more flexibility than an Infrastructure-as-a-Service can.

Let's look at an example Platform-as-a-Service and see how it provides the required flexibility. WSO2 Carbon incorporates the accepted standards as a lean middleware platform, and as WSO2 Stratos is the WSO2 Carbon platform as a service, it inherits the advantages of the award-winning WSO2 Carbon. Above all, both WSO2 Carbon and WSO2 Stratos are open source and free to extend, without any hidden licensing fees. This eliminates the fear of being locked in before even trying, while providing the flexibility to suit the needs of sophisticated SaaS developers in the enterprise.

Loss of Control / Freedom

As we move towards the cloud, the cloud providers in many cases decide for themselves which services to offer and which services will be compatible with their offering. Unlike with in-house servers, as you move your data centers and servers to the cloud, you largely cede control over them to the cloud providers, not to mention the privacy and security risks that may follow. This leaves a third party in control of your applications.

The vendor of the IaaS or the platform controls which applications are provided or supported, or even whether external developers are allowed to write new applications for their infrastructure or platform. A platform that facilitates the extensibility of the cloud comes to the rescue here. If the platform supports installing or deploying multiple software applications, and configuring and updating the existing ones, that mitigates the above shortcoming to a considerable extent.

Infrastructure expertise

When software is developed with a specific infrastructure in mind, it often requires expertise for that particular infrastructure provider, be it Amazon EC2 or Rackspace. Migrating to another infrastructure provider will need code-level changes in many cases. While enjoying the fruits of the scalability and the pay-as-you-go model of the cloud, this adds an overhead from the software application developer's point of view. Platform as a Service providers address this by taking on the need to code for the underlying infrastructure themselves, away from the SaaS developers.

More Portability

While some PaaS offerings such as Windows Azure rely completely on their own infrastructure, platforms including WSO2 Stratos are designed to run on any infrastructure. In this case, as the infrastructure-level portability issues are handled by the PaaS layer, the SaaS developers are freed from the burden of coding for multiple infrastructures. This is analogous to the well-known programming challenge of coding for multiple operating system APIs.

An extra bit of work

Existing applications that work amazingly well on standard computers or servers now have to work on the new cloud model. Are they cloud-ready? In other words, can they work on the cloud as they worked in the non-cloud environment? Typically, making an application that was written without the cloud in mind utilize the cloud needs some extra bit of work. Writing a new application with the cloud in mind is a different consideration altogether, which is where the SaaS model comes in. But porting existing applications to a cloud infrastructure needs know-how on elasticity, load balancing, and auto-scaling, which come as the fruits of the cloud. Platform as a Service vendors promise a smooth deployment of your existing applications onto their platform, with little or no effort.

Saturday, March 19, 2011

Amazon ELB HTTPS Stickiness

Stickiness is needed by most complex applications when scaled up in the cloud. Say one user logs in; his session information is now attached to him. Without stickiness, each request would be routed to a random instance, which would not recognize him. Attaching a session ID provides the stickiness, so that a particular user is always forwarded to the same instance.

I tried using the Amazon ELB API commands from the terminal to enable stickiness for the Elastic Load Balancer. It gave an error when trying to apply the stickiness policy to port 443, while it worked fine for port 80.

Create Stickiness Policy
pradeeban@pradeeban:~/pem$ elb-create-app-cookie-stickiness-policy wso2cloud1-as -p my-app-cookie-lb-policy -c jsessionid -K KEY.pem -C CERT.pem
OK-Creating App Stickiness Policy

Setting the Policy to the Listener - Fails for port 443
pradeeban@pradeeban:~/pem$ elb-set-lb-policies-of-listener wso2cloud1-as --lb-port 443 --policy-names my-app-cookie-lb-policy -K KEY.pem -C CERT.pem
elb-set-lb-policies-of-listener: Service error: aws:Client.InvalidConfigurationRequest AWSRequestId:4b2c7fc3-4fbf-11e0-a778-d17858cabec6

Setting the Policy to the Listener - Works for port 80
pradeeban@pradeeban:~/pem$ elb-set-lb-policies-of-listener wso2cloud1-as --lb-port 80 --policy-names my-app-cookie-lb-policy -K KEY.pem -C CERT.pem
OK-Setting Policies

So it seemed to be an issue with HTTPS stickiness on Amazon's end. The Amazon management console did not give any error either; it simply did not allow enabling stickiness for a listener with the tcp protocol.

It works with --lb-port 80 --lb-port 443, but not with --lb-port 443 --lb-port 80 (it seems to pick only the first entry).

I tried the same with elb-create-lb-cookie-stickiness-policy (ELB-controlled stickiness) and the result was the same.

So I felt that HTTPS stickiness was not working for either application-based or Amazon ELB-based cookies, while it worked for HTTP.


But Amazon announced in October 2010 that they had started supporting HTTPS stickiness.


I reported this on the AWS forum, but later found that setting the stickiness policy does work. What we had to change was the protocol for port 443 from tcp to https. :)

--listener "lb-port=443,instance-port=9443,protocol=https"

HTTPS stickiness works fine with Amazon's ELB, but TCP stickiness does not. The TCP stickiness issue has already been discussed in another thread [1] in the AWS forums. Hence we resolved the issue by changing the protocol from TCP to HTTPS for port 443. However, we also noted that aiCache provides web application HTTPS acceleration and stickiness for TCP too.
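
Putting the fix together, here is a minimal sketch of the working sequence, reusing the load balancer name, ports, and policy name from the commands above (illustrative only, not a transcript; the existing tcp listener on 443 is removed and recreated with https first):

pradeeban@pradeeban:~/pem$ elb-delete-lb-listeners wso2cloud1-as --lb-ports 443 -K KEY.pem -C CERT.pem
pradeeban@pradeeban:~/pem$ elb-create-lb-listeners wso2cloud1-as --listener "lb-port=443,instance-port=9443,protocol=https" -K KEY.pem -C CERT.pem
pradeeban@pradeeban:~/pem$ elb-create-app-cookie-stickiness-policy wso2cloud1-as -p my-app-cookie-lb-policy -c jsessionid -K KEY.pem -C CERT.pem
pradeeban@pradeeban:~/pem$ elb-set-lb-policies-of-listener wso2cloud1-as --lb-port 443 --policy-names my-app-cookie-lb-policy -K KEY.pem -C CERT.pem

With the https listener in place, the last command succeeds instead of returning the InvalidConfigurationRequest error shown earlier.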

[1] Service error when setting LB policies of listener
[2] Elastic Load Balancing with Sticky Sessions
[3] Amazon Makes the Cloud Sticky
[4] jmeter + amazon ec2 + load balancing (elb)
[5] How can I force "non-sticky" connections for my ELB? 
[6] Load Balancer (ELB) - port forwarding on the load balancer itself
[7] "Sticky connection" on ELB for https?  
[8] Using ELB to Serve Multiple Domains Over SSL on EC2 for Giggles and Unicorns
[9] Amazon Simple Monthly Calculator

Saturday, March 5, 2011

Amazon Autoscaling ~ Issue uploading payload?

We tried to upload a payload.zip for the auto-scaled, load-balanced system, with the startup parameters for our Application Server (using the commands given below). There seems to be some issue with the current Amazon Auto Scaling API that prevents us from uploading the user-data file. We have reported the issue on the AWS forum and are awaiting their reply. :)


Creating a launch config
pradeeban@pradeeban:~/pem$ as-create-launch-config autoscalelcapp --image-id ami-xxxxxxxx --instance-type m1.large --user-data-file /tmp/payload.zip --key "keypair" --group "default" -K KEY.pem -C CERT.pem

Updating Auto Scaling Group
pradeeban@pradeeban:~/pem$ as-update-auto-scaling-group autoscleasg1 --availability-zones us-east-1c --launch-configuration autoscalelcas --min-size 1 --max-size 5 -K KEY.pem -C CERT.pem

However, we found that it works when the payload is sent as a string using the --user-data parameter instead of --user-data-file, so we used that to get the app server running, load balanced, and auto scaled.

Launch Config
pradeeban@pradeeban:~/pem$ as-create-launch-config autoscalelcas2 --image-id ami-xxxxxxxx --instance-type m1.large --user-data  "AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxx,AWS_SECRET_ACCESS_KEY=xxxxxxxxxxx,AMI_ID=ami-xxxxxxxx,ELASTIC_IP=xx.xx.xxx.xxx,PRODUCT_MODIFICATIONS_PATH_S3=s3://wso2-stratos-conf-1.0.0/appserver/,COMMON_MODIFICATIONS_PATH_S3=s3://wso2-stratos-conf-1.0.0/stratos/,PRODUCT_PATH_S3=s3://wso2-stratos-products-1.0.0,PRODUCT_NAME=wso2stratos-as-1.0.0,SERVER_NAME=appserver.cloud.wso2.com,HTTP_PORT=9763,HTTPS_PORT=9443,STARTUP_DELAY=0" -K KEY.pem -C CERT.pem
 
OK-Created launch config

Since this works, we are happy to proceed with passing the payload as a string instead of sending it as a zip file. The relevant products will be fetched from the S3 buckets and run by the startup script, producing the auto-scaled app server instances.
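
For illustration, here is a hedged sketch of how a startup script on the instance could read this comma-separated user data; the actual WSO2 Stratos startup script may work differently, and the variable names are simply the ones from the payload above. It relies on the standard EC2 instance metadata endpoint:

#!/bin/bash
# Fetch the user data passed via --user-data from the EC2 instance metadata service.
USER_DATA=$(curl -s http://169.254.169.254/latest/user-data)
# Split the comma-separated key=value pairs and export each one as an environment variable.
IFS=',' read -ra PARAMS <<< "$USER_DATA"
for param in "${PARAMS[@]}"; do
  export "$param"
done
# Values such as $PRODUCT_PATH_S3 and $SERVER_NAME are now available to the rest of the script.

The same parameters that would have gone into payload.zip are thus available on the instance, just delivered as a plain string.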

Sunday, February 27, 2011

When smartness of the word processors overtake you.. ;)

There have been two times when smart completion features have gotten the better of me. I cannot share the first experience, as it would violate someone else's privacy; so here comes the second one. This time it's a pretty boring one. :D

I was trying to get a description of a load balancer I had created. I keep all the commands written down in a file; I opened it with a word processor and copied the command to the terminal.

pradeeban@pradeeban:~/pem$ elb-describe-lbs autoscalelb –headers -K KEY.pem -C CERT.pem
elb-describe-lbs:  Service error: LoadBalancer name cannot contain characters that are not
 letters, or digits or the dash.
 AWSRequestId:6634cf73-423d-11e0-97ad-fd607d01edca

I was confused: where had I used an invalid character in the name (which was indeed just "autoscalelb")? After a few minutes, I figured it out: it was my smart word processor, which had replaced "--headers" with "–headers" (smart hyphenation!). :D

Tuesday, February 8, 2011

Auto Scaling With Amazon EC2 - II

We created an Amazon ELB as discussed here. Let's look at it in more detail now.

Appserver
The load balancer we created listens on ports 80 and 443 and forwards requests to 9763 and 9443. Say we now need to delete the listeners.

elb-delete-lb-listeners autoscalelb --lb-ports 80 443 -K KEY.pem -C CERT.pem  
    Warning: Deleting a LoadBalancer listener can lead to service disruption to
    any customers connected to the LoadBalancer listener. Are you sure you want
    to delete this LoadBalancer listener? [Ny]N
elb-delete-lb-listeners:  User stopped the execution of elb-delete-lb-listeners.
(Answering 'N' stops the action. You can proceed with deleting the listener by answering 'y'.)

You can also create more listeners:
elb-create-lb-listeners autoscalelb --headers --listener "lb-port=8280,instance-port=9763,protocol=http" --listener "lb-port=8243,instance-port=9443,protocol=tcp" -K KEY.pem -C CERT.pem
OK-Creating LoadBalancer Listener

The load balancer now listens on ports 8280 and 8243 and forwards requests to 9763 and 9443.

Now, what happens if you forcefully kill an instance initiated by the auto scaling group? It immediately creates an identical replacement, handling the failover case.


Fail over
pradeeban@pradeeban:~/pem$ as-describe-auto-scaling-groups autoscleasg -K KEY.pem -C CERT.pem

Initially,
AUTO-SCALING-GROUP  autoscleasg  autoscalelc  us-east-1c  autoscalelb  1  10  1
INSTANCE  i-xxxxxxxx  us-east-1c  InService  Healthy  autoscalelc

When we killed the instance.
AUTO-SCALING-GROUP  autoscleasg  autoscalelc  us-east-1c  autoscalelb  1  10  1
INSTANCE  i-xxxxxxxx  us-east-1c  Terminating  Unhealthy  autoscalelc
INSTANCE  i-yyyyyyyy  us-east-1c  Pending      Healthy    autoscalelc

After a few seconds,
AUTO-SCALING-GROUP  autoscleasg  autoscalelc  us-east-1c  autoscalelb  1  10  1
INSTANCE  i-yyyyyyyy  us-east-1c  InService  Healthy  autoscalelc

Sounds cool.. So how do we terminate the instances that the auto scaling group creates?


Shall we try deleting?
pradeeban@pradeeban:~/pem$ as-delete-auto-scaling-group autoscleasg -K KEY.pem -C CERT.pem

    Are you sure you want to delete this AutoScalingGroup? [Ny]y
as-delete-auto-scaling-group:  Service error: You cannot delete an AutoScalingGroup while there are instances
 still in the group.  AWSRequestId:aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa


Min = Max = 0
OK! So let's stop the auto-scaling process by setting the minimum and maximum to zero.

pradeeban@pradeeban:~/pem$ as-update-auto-scaling-group autoscleasg --min-size 0 --max-size 0 -K KEY.pem -C CERT.pem
OK-Updated AutoScalingGroup
pradeeban@pradeeban:~/pem$ as-describe-auto-scaling-groups autoscleasg -K KEY.pem -C CERT.pem
AUTO-SCALING-GROUP  autoscleasg  autoscalelc  us-east-1c  autoscalelb  0  0  0
INSTANCE  i-yyyyyyyy  us-east-1c  InService  Healthy  autoscalelc


Delete
Now let's try to delete the auto-scaling group once more -- yes, as we have set it to zero, it should be possible now.
pradeeban@pradeeban:~/pem$ as-delete-auto-scaling-group autoscleasg -K KEY.pem -C CERT.pem
  
    Are you sure you want to delete this AutoScalingGroup? [Ny]y
as-delete-auto-scaling-group:  Service error: You cannot delete an AutoScalingGroup while there are scaling activities in progress for that group.
 AWSRequestId:bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbb

oh.. Let's wait a minute for that scaling down activity to finish!
Again!
pradeeban@pradeeban:~/pem$ as-delete-auto-scaling-group autoscleasg -K KEY.pem -C CERT.pem
          
    Are you sure you want to delete this AutoScalingGroup? [Ny]y
OK-Deleted AutoScalingGroup



Done!
Now, let's check once more whether we have deleted it properly.. ;)
pradeeban@pradeeban:~/pem$ as-describe-auto-scaling-groups autoscleasg -K KEY.pem -C CERT.pem
No AutoScalingGroups found

Yes, we have deleted the Amazon auto scaling group along with the nodes it created. (We are sinners! ;))
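
As a final cleanup step (a hedged sketch, not part of the original transcript), the load balancer itself can also be removed with elb-delete-lb from the same Elastic Load Balancing API tools, assuming it is no longer needed:

pradeeban@pradeeban:~/pem$ elb-delete-lb autoscalelb -K KEY.pem -C CERT.pem

Like the other delete commands above, expect a confirmation prompt before it proceeds.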

Tuesday, February 1, 2011

Auto Scaling With Amazon EC2

Creating an auto-scaled system using an Amazon load balancer is an interesting task that I did recently. We have an Amazon EC2 image with the WSO2 Application Server installed. Creating an image with WSO2 WSAS installed is described here.

Amazon EC2 API Tools
You will need the Amazon EC2 API tools to create the image yourself. You can install them with "sudo apt-get install ec2-api-tools" on Debian-based operating systems, or you can download them from Amazon S3. These tools provide a client interface to the Amazon EC2 web service, to register and launch instances, and more.
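
If you download the tools rather than installing them via apt, here is a minimal sketch of the environment setup, assuming a hypothetical extraction path /home/pradeeban/programs/ec2-api-tools (following the same pattern used for the ELB and Auto Scaling tools below):

export EC2_HOME=/home/pradeeban/programs/ec2-api-tools
export PATH=$PATH:$EC2_HOME/bin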

Launching an Instance
pradeeban@pradeeban:~/Downloads$ ec2-run-instances ami-xxxxxxxx -K KEY.pem -C CERT.pem

Instance Details
You can get the instance details using the instance ID i-xxxxxxxx returned above.

ec2-describe-instances -K KEY.pem -C CERT.pem i-xxxxxxxx
or get the details of all the instances with:
ec2-describe-instances -K KEY.pem -C CERT.pem

[Providing the relevant private key and certificate, KEY.pem and CERT.pem.]


Load Balancing with Auto Scaling
Now we come to the interesting part: auto scaling the Amazon EC2 image with the load.

Download and set up Elastic Load Balancing API tools
Download it here, extract, and set the path up appropriately.
export AWS_ELB_HOME=/home/pradeeban/program/ElasticLoadBalancing-1.0.11.1
export PATH=$PATH:$AWS_ELB_HOME/bin

ELB Quick Reference Card
Downloading and setting up Auto Scaling API tools
Download it here, extract, and set the path up appropriately.

export AWS_AUTO_SCALING_HOME=/home/pradeeban/programs/AutoScaling-1.0.33.1
export PATH=$PATH:$AWS_AUTO_SCALING_HOME/bin



Creating a Load Balancer
pradeeban@pradeeban:~/Downloads$ elb-create-lb  autoscalelb --headers --listener "lb-port=80,instance-port=9763,protocol=http" --listener "lb-port=443,instance-port=9443,protocol=tcp" --availability-zones us-east-1c -K KEY.pem -C CERT.pem
DNS_NAME  DNS_NAME
DNS_NAME  autoscalelb-1316227031.us-east-1.elb.amazonaws.com

Describe ELB
elb-describe-lbs autoscalelb -K KEY.pem -C CERT.pem
LOAD_BALANCER  autoscalelb  autoscalelb-1316227031.us-east-1.elb.amazonaws.com  2011-01-28T09:40:54.750Z

Register instances with the load balancer
elb-register-instances-with-lb autoscalelb --instances i-xxxxxxxx -K KEY.pem -C CERT.pem
INSTANCE_ID  i-xxxxxxxx

Configuring a health check
pradeeban@pradeeban:~/Downloads$ elb-configure-healthcheck  autoscalelb --headers --target "TCP:9763" --interval 5 --timeout 3 --unhealthy-threshold 2 --healthy-threshold 2 -K KEY.pem -C CERT.pem
HEALTH_CHECK  TARGET    INTERVAL  TIMEOUT  HEALTHY_THRESHOLD  UNHEALTHY_THRESHOLD
HEALTH_CHECK  TCP:9763  5         3        2                  2


 
Creating an AutoScaled System

Creating the launch configuration that Auto Scaling uses to launch new Amazon EC2 instances.
pradeeban@pradeeban:~/Downloads$ as-create-launch-config autoscalelc --image-id ami-xxxxxxxx --instance-type m1.large -K KEY.pem -C CERT.pem
OK-Created launch config



You can choose the instance type (m1.small, m1.large, and m1.xlarge) based on the requirements.

Creating Auto Scaling Group
pradeeban@pradeeban:~/Downloads$ as-create-auto-scaling-group autoscleasg --availability-zones us-east-1c --launch-configuration autoscalelc --min-size 1 --max-size 10 --load-balancers autoscalelb -K KEY.pem -C CERT.pem
OK-Created AutoScalingGroup


Describe auto scaling groups
pradeeban@pradeeban:~/Downloads$ as-describe-auto-scaling-groups autoscleasg -K KEY.pem -C CERT.pem
AUTO-SCALING-GROUP  autoscleasg  autoscalelc  us-east-1c  autoscalelb  1  10  1


Configuring a trigger with scaling actions according to the load.
pradeeban@pradeeban:~/Downloads$ as-create-or-update-trigger autoscaletrigger --auto-scaling-group autoscleasg --namespace "AWS/ELB" --measure Latency --statistic Average --dimensions "LoadBalancerName=autoscalelb" --period 60 --lower-threshold 0.5 --upper-threshold 1.2 --lower-breach-increment=-1 --upper-breach-increment 1 --breach-duration 120 -K KEY.pem -C CERT.pem
DEPRECATED: This command is deprecated and included only to facilitate migration to the new trigger mechanism.  You should use this command for migration purposes only.
OK-Created/Updated trigger



measure
You can choose the measure based on your auto-scaling requirements, be it CPUUtilization, Latency, or another metric. You will have to choose this wisely based on the type of application, whether it is CPU-intensive, large, or time-consuming.

Notice that as-create-or-update-trigger is now deprecated. Instead, use scale-up and scale-down policies along with the CloudWatch tools, as described below!


Amazon CloudWatch API Tools
Downloading and Setting up 
Download it here, extract, and set the path up appropriately, to monitor the AWS cloud resources.

export AWS_CLOUDWATCH_HOME=/home/pradeeban/programs/CloudWatch-1.0.9.5
export PATH=$PATH:$AWS_CLOUDWATCH_HOME/bin

Now we have to define the scale-up and scale-down policies that scale the system up and down based on the load, along with the monitoring alarms.

Scale-up Policy
pradeeban@pradeeban:~/pem$ as-put-scaling-policy MyScaleUpPolicy1 --auto-scaling-group autoscleasg1 --adjustment=1 --type ChangeInCapacity --cooldown 300 -K KEY.pem -C CERT.pem
arn:aws:autoscaling:us-east-1:xxxxxxxxxxxxxx:scalingPolicy:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:autoScalingGroupName/autoscleasg1:policyName/MyScaleUpPolicy1

pradeeban@pradeeban:~/pem$ mon-put-metric-alarm MyHighCPUAlarm1 --comparison-operator GreaterThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold 80 --alarm-actions arn:aws:autoscaling:us-east-1:xxxxxxxxxxxxxx:scalingPolicy:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:autoScalingGroupName/autoscleasg1:policyName/MyScaleUpPolicy1 --dimensions "AutoScalingGroupName=autoscleasg" -K KEY.pem -C CERT.pem
OK-Created Alarm

Scale Down Policy
pradeeban@pradeeban:~/pem$ as-put-scaling-policy MyScaleDownPolicy1 --auto-scaling-group autoscleasg1 --adjustment=-1 --type ChangeInCapacity --cooldown 300 -K KEY.pem -C CERT.pem
arn:aws:autoscaling:us-east-1:xxxxxxxxxxxxxx:scalingPolicy:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:autoScalingGroupName/autoscleasg1:policyName/MyScaleDownPolicy1

pradeeban@pradeeban:~/pem$ mon-put-metric-alarm MyLowCPUAlarm --comparison-operator LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold 40 --alarm-actions arn:aws:autoscaling:us-east-1:xxxxxxxxxxxxxx:scalingPolicy:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:autoScalingGroupName/autoscleasg1:policyName/MyScaleDownPolicy1 --dimensions "AutoScalingGroupName=autoscleasg" -K KEY.pem -C CERT.pem
OK-Created Alarm

Now the auto scaling group gets an instance, as the minimum number of instances in the auto-scaled system has been set to 1:
pradeeban@pradeeban:~/Downloads$ as-describe-auto-scaling-groups autoscleasg -K KEY.pem -C CERT.pem
AUTO-SCALING-GROUP  autoscleasg  autoscalelc  us-east-1c  autoscalelb  1  10  1
INSTANCE  i-xxxxxxxx  us-east-1c  InService  Healthy  autoscalelc

Once the elastic load balancer and the alarms are set up and triggered, the system starts new nodes or removes existing ones according to the load. Following these steps, the system can be load balanced with autoscaling.


Load Balanced Instances' Health
Initially,
pradeeban@pradeeban:~/pem$ elb-describe-instance-health autoscalelb --headers -K KEY.pem -C CERT.pem
INSTANCE_ID  INSTANCE_ID  STATE      DESCRIPTION  REASON-CODE
INSTANCE_ID  i-xxxxxxxx   InService  N/A          N/A

Later, under load, you may see at least one new instance.
pradeeban@pradeeban:~/pem$ elb-describe-instance-health autoscalelb --headers -K KEY.pem -C CERT.pem
INSTANCE_ID  INSTANCE_ID  STATE      DESCRIPTION  REASON-CODE
INSTANCE_ID  i-xxxxxxxx   InService  N/A          N/A
INSTANCE_ID  i-yyyyyyyy   InService  Active Instance

After a few failed health checks, an instance will be marked as 'OutOfService' with a reason such as 'Instance has failed at least the UnhealthyThreshold number of health checks consecutively.'