Wednesday, January 19, 2011

Deploying a high-capacity ArcGIS Server Web app on Amazon EC2: A case study

Recently Esri Technical Marketing was asked to host a wildfire application for the esri.com web portal. The portal was then linked to from CNN's Tech section, and with this added user load we had to quickly increase the amount of resources devoted to the application, so we turned to the cloud. With the new ArcGIS Server on Amazon EC2 it is relatively easy to create ArcGIS Server instances that are already connected to the cloud and ready to serve content to users. In as little as 15 minutes I can create an instance that has Windows, ArcGIS Server, and ArcGIS Desktop installed and ready to use. Unfortunately, one server wasn't going to give us the capacity we felt would be needed to support this new demand, so the process would take a few more steps.
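If you prefer to script that launch step, a minimal sketch with Python and the boto3 SDK (which postdates this 2011 article) might look like the following; the AMI ID, key pair, and instance type are placeholders rather than values from our deployment:

    # Sketch: launch one ArcGIS Server instance from an Esri-provided AMI.
    # All identifiers below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder Esri ArcGIS Server AMI
        InstanceType="m1.large",          # placeholder size; pick for your load
        KeyName="my-keypair",             # placeholder key pair for RDP access
        MinCount=1,
        MaxCount=1,
    )
    print("Launched", response["Instances"][0]["InstanceId"])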

Planning the architecture


In planning this setup, we needed to accommodate a potentially large number of users, more than one system could handle at any given time. This meant we needed a load-balanced architecture that would spread the load across multiple servers all serving the same content. In addition, one of the services we were hosting on this site was an ArcSDE-based feature service that allowed users of the site to add points to the map. This SDE data would have to be viewable by all of the instances running the map service. So in addition to the load-balanced server instances, we were also going to need an SDE database that all of those instances could access at any given time.

After reviewing the architecture help for ArcGIS Server on Amazon EC2, we felt the following setup would best suit our needs:

Architecture for ArcGIS Server on Amazon EC2

Creating the staging instances


Now that we had our setup, we began staging the maps and data. We created two staging instances, one for the ArcGIS Server environment and one for the ArcSDE environment, using the two Amazon Machine Images (AMIs) that Esri provides as our starting points. On the ArcGIS Server instance we uploaded the application itself and all of the data for Web services that would not change through the life of the application. On the ArcSDE instance we created the database backing the user-editable feature service.

We did not put all of the data on the SDE instance because that instance was a single point of failure for the system: if it went down, the entire site would stop functioning properly, so we wanted as little going on there as possible. The ArcGIS Server instances, by contrast, were to be load balanced, so if any single one went down it could be removed from the load balancer and users would never notice it was gone.

With the application hosted on the staging servers and functioning the way we wanted, the next step was to create the live version of the SDE instance. To do this we simply created a custom AMI from our staging SDE server. This takes about an hour to complete, but once it was done we had an image that could be spun up as a new server in just 15 minutes. We used that AMI to generate a new instance that would serve as our live SDE server. We chose one of the larger instance types because this instance was a single point of failure, and we wanted to be sure it would remain running even under higher demand. We also created an Elastic IP for this instance so that we could reference the machine from the many ArcGIS Server instances that would need to use it, without fear of its address changing on us.
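For illustration, the imaging-and-promotion steps could be scripted like this with boto3 (all IDs are placeholders; the Elastic IP call uses the EC2-Classic style of this era, whereas VPC accounts associate by AllocationId instead):

    # Sketch: image the staging SDE server, launch the live copy, pin an Elastic IP.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a custom AMI from the staging SDE instance (takes about an hour).
    image = ec2.create_image(
        InstanceId="i-0aaa1111bbbb2222c",   # placeholder staging SDE instance ID
        Name="arcsde-live-image",
    )
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

    # Launch the live SDE server on a larger type, since it is the single
    # point of failure in this design.
    live = ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="m1.xlarge",           # placeholder "larger instance"
        MinCount=1,
        MaxCount=1,
    )
    live_id = live["Instances"][0]["InstanceId"]

    # Allocate an Elastic IP and bind it, giving the ArcGIS Server instances
    # a stable address for the SDE database.
    addr = ec2.allocate_address()           # EC2-Classic style (2011-era)
    ec2.associate_address(InstanceId=live_id, PublicIp=addr["PublicIp"])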

Setting up the load balancer


Next on the list was to get the ArcGIS Server instance ready for use with an Elastic Load Balancer and pointed at the SDE data on the live SDE instance. We updated the service that was using data from the staging SDE server to point to the new live SDE server, then created an Elastic Load Balancer in our Amazon account. Once the load balancer was created, we updated the staging machine to be aware of the load-balanced environment and added it to the load balancer itself. The application also needed to be updated so that its Web links pointed to the load balancer's DNS address instead of the machine itself.
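Scripted against the classic Elastic Load Balancing API with boto3, those two steps might look roughly like this (the load balancer name and instance ID are placeholders):

    # Sketch: create a Classic Load Balancer and register the staging instance.
    import boto3

    elb = boto3.client("elb", region_name="us-east-1")  # classic ELB API

    created = elb.create_load_balancer(
        LoadBalancerName="wildfire-app",     # placeholder name
        Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80, "InstancePort": 80}],
        AvailabilityZones=["us-east-1a", "us-east-1b", "us-east-1c"],
    )
    # The application's Web links should point at this DNS name.
    print("Load balancer DNS:", created["DNSName"])

    elb.register_instances_with_load_balancer(
        LoadBalancerName="wildfire-app",
        Instances=[{"InstanceId": "i-0aaa1111bbbb2222c"}],  # placeholder ID
    )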

At this point, we tested the application again to make sure everything was functioning properly in its new environment. To do this, we simply accessed the Web site hosted on the machine through the DNS address of the load balancer instead of the DNS address of the instance itself. Once we were confident everything was working properly, we took the staging machine off the load balancer and generated a custom AMI of our ArcGIS Server instance.
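A trivial version of that smoke test, assuming plain HTTP and the third-party requests library (the hostname and path are invented):

    # Sketch: hit the site through the load balancer's DNS name, not an instance.
    import requests

    LB_DNS = "wildfire-app-1234567890.us-east-1.elb.amazonaws.com"  # placeholder

    resp = requests.get("http://" + LB_DNS + "/wildfire/", timeout=10)
    resp.raise_for_status()   # raises if the site did not answer with 2xx
    print("Site responded through the load balancer")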

Deploying and testing the live instances


The custom ArcGIS Server AMI was finished about an hour later, and it was time to start creating our live instances. One at a time, we created and tested the new instances to make sure they were functioning properly. While all of the instances were located in the US-East Region, we spread them across the different Availability Zones within that region to ensure that if one of the data centers went down, other instances would still be running.
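As a sketch, that zone spreading can be scripted by passing a placement for each launch; the custom AMI ID and instance type below are placeholders:

    # Sketch: launch one live ArcGIS Server instance per Availability Zone so
    # a single data-center outage cannot take the whole site down.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    for zone in ["us-east-1a", "us-east-1b", "us-east-1c"]:
        ec2.run_instances(
            ImageId="ami-0fedcba9876543210",  # placeholder custom ArcGIS Server AMI
            InstanceType="m1.large",          # placeholder size
            MinCount=1,
            MaxCount=1,
            Placement={"AvailabilityZone": zone},
        )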

The first thing to do when an instance is created is to log in and make sure the machine name of the system matches what ArcGIS Server thinks it is; we found that occasionally the scripts would have difficulty updating the new instances. Once logged into the system, you can bring up the System Properties window to determine the current machine name and compare it to the machine name ArcGIS Server reports on its home page (which we viewed in Firefox). If they do not match, shut down the instance from the Amazon Management Console and restart it; this usually fixed the problem for us.

With the machine name verified, we checked that all of our services were up and running and started any that had not spun up correctly. We then added the machine to the load balancer. In our deployment we created a total of three live instances of ArcGIS Server.

Monitoring the deployment


We now had a fully functional, load-balanced environment that could support the demand placed upon it. The next thing to do was to make sure it continued to function. While the load balancer itself can monitor whether a Web page is responding and remove systems as needed, it cannot check the fine-grained functionality of ArcGIS services. To do that, we set up a monitoring script that periodically checks each service and makes sure it is responding.

If a service does not respond, the monitoring script sends us an e-mail telling us the service is down. This gives us a heads-up on any problems that might be occurring, and we can quickly respond by removing that server from the load balancer or spinning up new instances to replace it. The result is a very reliable environment that we can feel confident will remain up and running for our users.
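The article does not include the monitoring script itself, so the following is only a plausible reconstruction: poll each service's REST endpoint and e-mail an alert when one stops answering. Hostnames, service paths, and mail settings are all placeholders, and the third-party requests library is assumed:

    # Sketch: periodically check each ArcGIS Server REST endpoint and e-mail
    # an alert when a service stops responding.
    import smtplib
    import time
    from email.message import EmailMessage

    import requests

    SERVICES = [  # placeholder endpoints; f=json asks for a JSON response
        "http://server1.example.com/ArcGIS/rest/services/Wildfire/MapServer?f=json",
        "http://server2.example.com/ArcGIS/rest/services/Wildfire/MapServer?f=json",
    ]

    def alert(url):
        msg = EmailMessage()
        msg["Subject"] = "Service down: " + url
        msg["From"] = "monitor@example.com"      # placeholder addresses
        msg["To"] = "ops@example.com"
        msg.set_content("No valid response from " + url)
        with smtplib.SMTP("localhost") as smtp:  # placeholder mail relay
            smtp.send_message(msg)

    while True:
        for url in SERVICES:
            try:
                resp = requests.get(url, timeout=15)
                resp.raise_for_status()
                if "error" in resp.json():       # ArcGIS REST reports errors in JSON
                    alert(url)
            except requests.RequestException:
                alert(url)
        time.sleep(300)                          # check every five minutes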

Contributed by David McGuire of Esri Technical Marketing
"
