UPDATED 08:00 EST / MARCH 25 2016


5 ways to improve cloud application performance

Application performance is always a concern. Is the code written well? Is the database optimized? Can the application handle load spikes?

When an application runs on a public cloud platform, however, developers face unique challenges. The biggest is that the infrastructure the application runs on is shared with other applications, and high activity on one application can pull resources away from the others, a problem known as the noisy neighbor effect.

Add to that the limited monitoring and metrics that cloud service providers offer, and developers might not know there's a problem until an application fails.

Performance challenges in the cloud revolve around the unpredictability of the different components used in a cloud platform, said Owen Garrett, head of products at NGINX Inc., which provides load-balancing products for applications running in cloud environments.

Specific challenges include poor disk performance, limited CPU access on a host, running out of memory and resources and poor network performance.

“All of those things can cause performance issues for applications in the cloud, but there are good practices that address many of those issues,” Garrett said. “The challenge businesses have is learning those practices, and learning them as quickly as possible, so they can take advantage of the benefits of Amazon Web Services, Microsoft Azure or other clouds without having to go through a steep learning curve.”

One technique people with high-load services on Amazon Web Services (AWS) have been known to use is choosing the largest available AWS instances. Doing so tends to give you exclusive access to the underlying hardware, alleviating some of the noisy neighbor problems, Garrett said.

While that can help, it isn’t the best way to ensure high performance, he said. What you want to do is monitor everything and look for potential problems as they arise, whether it’s a server that’s running slowly or a network with higher latency, and be prepared to redeploy the application on a different machine. The hope is that it will end up running in an environment with better performance, Garrett said.

“The smart thing to do is to deploy it and scale out horizontally,” he said. “By that I mean you run multiple different instances of the same service—maybe it’s an application server or a logging server—and put a load balancer in front to balance the load across those instances.”
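The horizontal scaling Garrett describes is typically wired up in the load balancer's configuration. As an illustration only, here is a minimal NGINX sketch; the hostnames, port and upstream name are placeholders, not details from the article:

```nginx
# Hypothetical upstream group of identical application instances.
upstream app_backend {
    least_conn;                        # send each request to the least-busy instance
    server app1.example.internal:8080;
    server app2.example.internal:8080;
    server app3.example.internal:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend; # the public face stays the same as instances change
    }
}
```

Because clients only ever see the load balancer, instances can be added, removed or replaced behind it without changing the application's public address.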

5 steps to fast AWS performance

Load balancing is just one step Garrett said organizations should take to ensure fast performance when running applications on AWS. When you implement all five, you ensure you get all the benefits cloud platforms such as AWS offer.

1. Create a plan

Start by determining whether the architecture of the application is well suited for the cloud, Garrett said. Is the application cloud-ready, meaning it is broken into smaller components called microservices? That type of architecture allows you to deploy and scale those components independently at a moment’s notice if necessary, and it fits much better in a cloud environment than traditional monolithic applications do. It’s also a much more cost-effective architecture because you generally don’t need to oversubscribe capacity, he said.

After deciding on the architecture of the application, identify the resources you will need from the cloud provider. Also assess the data you will be accepting and delivering, the storage types you require and the network capacity you need.

Then deploy a small project and monitor the performance. Use the pilot project to build up experience and confidence in this new technology, as well as ensure the application performs well.

“By planning well, you get a good foundation and that ensures your first deployments on the cloud will have a higher chance of success,” Garrett said.

2. Use a load balancer

A load balancer will give you two things, according to Garrett: First, it will route the traffic to instances that are performing the best. Second, it can provide you with metrics so you can identify which instances are running slowly. It provides visibility, performance and scalability, he said.

“It is the core of the technology that allows you to scale out horizontally,” Garrett said. “You can add more web servers or app servers, you can move users from one generation of an application to another seamlessly. You can do all of that without changing the public face of your application.”

3. Cache static and dynamic files

Caching files has two benefits. First, the user gets the static file from their browser cache or a caching server close to them on the Internet. These are sometimes big files, so caching reduces latency. Second, it reduces the load on the server. The server has fewer requests to process.

With caching, “a large spike of traffic coming in doesn’t have to create a large spike of traffic on the application server,” Garrett said.
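To illustrate the caching idea in concrete terms, a minimal NGINX sketch might look like the following; the cache path, zone name and backend address are hypothetical placeholders:

```nginx
# Hypothetical cache configuration for static content.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location /static/ {
        proxy_cache static_cache;
        proxy_cache_valid 200 60m;                        # serve cached responses for an hour
        add_header Cache-Control "public, max-age=3600";  # let browsers cache the file too
        proxy_pass http://127.0.0.1:8080;                 # placeholder backend address
    }
}
```

With a setup along these lines, repeat requests for the same file are answered from the cache or the user's browser, so they never reach the application server.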

4. Set standards and monitor application performance

AWS provides several ways to measure performance. For example, Amazon CloudWatch gives you metrics on how the Amazon infrastructure is performing. You can also pull metrics out of your load balancer, the applications, the database servers and other components.

“The art is then correlating all of those metrics, identifying opportunities to improve performance and then working to address those opportunities,” Garrett said.

You can use application performance management tools to help correlate all of the information.
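At its simplest, that correlation step means comparing each instance's metrics against the fleet as a whole. The following Python sketch is purely illustrative, not an AWS or APM-tool API, and the instance IDs and latency samples are made up:

```python
# Hypothetical sketch: flag instances whose average latency is far above
# the fleet-wide average, e.g. candidates for noisy-neighbor redeployment.
from statistics import mean

def flag_slow_instances(latency_ms, threshold=1.5):
    """Return instance IDs whose average latency exceeds
    `threshold` times the fleet-wide average."""
    averages = {iid: mean(samples) for iid, samples in latency_ms.items()}
    fleet_avg = mean(averages.values())
    return sorted(iid for iid, avg in averages.items()
                  if avg > threshold * fleet_avg)

samples = {
    "i-aaa": [20, 22, 21],   # healthy
    "i-bbb": [19, 23, 20],   # healthy
    "i-ccc": [90, 110, 95],  # possible noisy-neighbor victim
}
print(flag_slow_instances(samples))  # ['i-ccc']
```

A real deployment would feed this kind of check from CloudWatch or an APM tool rather than hard-coded samples, but the principle, comparing each instance against its peers, is the same.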

5. Use a DevOps approach

The DevOps approach is rooted in the cloud-ready approach to application development. Application development and operations teams work together to meet technical and business goals. That, combined with the flexibility of microservices, allows organizations to quickly and easily develop, deploy and change applications.

“In the cloud, you need something that is much more flexible [than traditional application development] so you can respond more quickly, not just from a technology perspective but from a people perspective as well,” Garrett said.

So, if an application encounters a problem, the team can quickly take care of it. Or if a business owner wants to change the application, the update can be made and deployed quickly.

The key to ensuring high application performance in the cloud is to continually monitor and track issues and to build processes so you can respond to those issues quickly, Garrett said. It’s an iterative approach; don’t think you can deploy an application and forget it.

Photo credit: Krystian Olszanski via flickr
