6 ways to achieve optimal performance on AWS
When it comes to achieving optimal performance across your well-architected systems on AWS, multiple services and features can help you get there. In this article, we discuss six ways to move your workload closer to optimal performance.
Understanding the available services and resources

The first step toward performance efficiency is building a solid understanding of the extensive range of services and resources available in the cloud, then identifying the services and configuration options relevant to your workload. Start by inventorying your workload's software and architecture; this inventory gives you the insight needed to decide which components can be replaced by managed services, increasing performance and reducing operational complexity.
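The inventory step above can be sketched in a few lines. This is a minimal, hypothetical example: the component names and the mapping to managed services are illustrative assumptions, not a definitive catalogue.

```python
# Hypothetical sketch: inventory self-managed workload components and flag
# candidates that could be offloaded to an AWS managed service.
# The mapping below is illustrative, not an exhaustive or official list.

MANAGED_SERVICE_CANDIDATES = {
    "mysql": "Amazon RDS / Aurora",
    "redis": "Amazon ElastiCache",
    "kafka": "Amazon MSK",
    "nginx-reverse-proxy": "Elastic Load Balancing",
}

def suggest_replacements(inventory):
    """Return {component: managed service} for components we could offload."""
    return {
        component: MANAGED_SERVICE_CANDIDATES[component]
        for component in inventory
        if component in MANAGED_SERVICE_CANDIDATES
    }

if __name__ == "__main__":
    workload = ["mysql", "redis", "custom-billing-service"]
    for component, service in suggest_replacements(workload).items():
        print(f"{component} -> consider {service}")
```

Components with no managed equivalent (like the hypothetical billing service) simply stay in the inventory for conventional optimization.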
Defining a process for architectural choices

Architectural choices should be made according to a previously defined process. That process should encourage experimentation and benchmarking with any resources that could be used in your workload, and it should draw on both internal experience and knowledge of the cloud and external resources such as published use cases.

There are two sides to this process: selecting an architectural approach and defining performance requirements. To identify an architecture that meets your performance requirements, start by pinpointing constraints, legacy requirements, and integrations. Then focus on identifying opportunities for reuse, including refactoring; consulting other teams, reviewing architecture diagrams, and, if necessary, bringing in external resources can all help with this.

To define performance requirements, begin by identifying your most important metrics and, for each one, setting a target, a measurement approach, and a priority. Then document the performance experience you want to deliver to your customers, including how customers perceive the workload's performance. Together, these two sides of the process will greatly inform your architectural decisions.
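The "metric, target, measurement approach, priority" structure described above can be captured as a simple record. This is a minimal sketch; the field names and example values are assumptions for illustration.

```python
# Hypothetical sketch: record each key performance requirement with a target,
# a measurement approach, and a priority, as described in the text.
from dataclasses import dataclass

@dataclass
class PerformanceRequirement:
    metric: str          # what is being measured
    target: float        # e.g. milliseconds or percent
    measurement: str     # how the metric is collected
    priority: int        # 1 = highest priority

def by_priority(requirements):
    """Return the requirements sorted with the most important first."""
    return sorted(requirements, key=lambda r: r.priority)

# Example values are illustrative assumptions, not recommended targets.
requirements = [
    PerformanceRequirement("error rate (%)", 0.1, "CloudWatch metrics", 2),
    PerformanceRequirement("p99 latency (ms)", 250.0, "ALB access logs", 1),
]
```

Keeping requirements in a structured form like this makes it easy to review them with other teams and to feed them into automated checks later in the pipeline.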
Factoring cost requirements into decisions

To stay on top of your workload's operating costs, use internal cost controls to select resource types and sizes that match your predicted resource needs. Optimize workload components to reduce costs wherever possible: you can cut costs and maximize component efficiency by rightsizing workload components and enabling elasticity. As part of this, determine which of your workload components can be replaced with managed services, such as managed databases, in-memory caches, and reverse proxies.
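Rightsizing against predicted need can be sketched as "pick the cheapest option that covers the requirement". The instance specs and prices below are illustrative placeholders, not current AWS pricing.

```python
# Hypothetical rightsizing sketch: choose the cheapest instance type whose
# capacity covers the predicted resource need. The specs and hourly prices
# below are illustrative assumptions only, not real AWS pricing data.

INSTANCE_TYPES = [
    # (name, vCPUs, memory_GiB, hourly_usd) -- placeholder values
    ("m5.large",   2,  8,  0.096),
    ("m5.xlarge",  4, 16,  0.192),
    ("m5.2xlarge", 8, 32,  0.384),
]

def rightsize(needed_vcpus, needed_mem_gib):
    """Return the cheapest listed instance type meeting the predicted need."""
    candidates = [
        (price, name)
        for name, vcpus, mem, price in INSTANCE_TYPES
        if vcpus >= needed_vcpus and mem >= needed_mem_gib
    ]
    if not candidates:
        raise ValueError("predicted need exceeds the largest listed type")
    return min(candidates)[1]
```

In practice you would feed this kind of logic with real utilization data and pricing, and pair it with elasticity (auto scaling) rather than fixing a single size.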
Using policies or reference architectures

You can maximize performance efficiency by using your analysis to select services and configurations for your workloads based on internal policies and existing reference architectures. Deploy your workload by integrating the relevant services from those policies or reference architectures into your cloud deployment, then run performance tests to confirm that you still meet your own performance requirements.
Benchmarking existing workloads

Benchmark the performance of all existing workloads to understand how they perform in the cloud, and collect and analyze benchmark data to drive future architectural decisions. Benchmarking covers several aspects. Monitor performance during development by implementing processes that provide visibility into performance as your workload evolves. In your delivery pipeline, run automated tests and periodically compare their results against pre-defined key performance indicators and thresholds to make sure you continue to meet performance requirements. Test user journeys as well: you can use synthetic or sanitized versions of production data for load testing, and you can put your entire architecture to the test by replaying or pre-programming user journeys through your application at scale.
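The pipeline check described above, comparing benchmark results against pre-defined KPIs and thresholds, can be sketched as a simple gate. The metric names and thresholds here are illustrative assumptions.

```python
# Hypothetical sketch of a delivery-pipeline gate: compare benchmark results
# against pre-defined KPI thresholds and report any regressions.
# Metric names and threshold values are illustrative assumptions.

KPI_THRESHOLDS = {
    "p99_latency_ms": 250.0,   # must stay at or below
    "error_rate_pct": 0.5,     # must stay at or below
    "throughput_rps": 1000.0,  # must stay at or above (note the flipped check)
}

def check_benchmark(results):
    """Return a list of KPI violations; an empty list means the gate passes."""
    violations = []
    for metric, threshold in KPI_THRESHOLDS.items():
        value = results.get(metric)
        if value is None:
            violations.append(f"{metric}: no measurement recorded")
        elif metric == "throughput_rps" and value < threshold:
            violations.append(f"{metric}: {value} < required {threshold}")
        elif metric != "throughput_rps" and value > threshold:
            violations.append(f"{metric}: {value} > allowed {threshold}")
    return violations
```

A CI job can fail the build whenever `check_benchmark` returns a non-empty list, which keeps performance regressions from reaching production unnoticed.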
Load testing your workload

Deploy your latest workload architecture in the cloud using different resource types and sizes, and monitor the deployment to capture performance metrics that reveal bottlenecks or excess capacity. Use this information to improve your architecture and resource selection. Load testing a proof of concept helps you validate your approach and confirm that you meet your performance requirements, and AWS services are available to run production-scale environments for testing your architecture. Amazon CloudWatch is useful for collecting metrics across every resource in your architecture, and CloudWatch alarms can indicate when a threshold is breached. Because load testing uses your actual workload, it shows you how your solution performs in a production environment. Pay close attention to where your workload fails to scale, or scales in a non-linear way, and compare these metrics against your own requirements to accurately assess performance.

Useful resources:
Performance Efficiency Pillar whitepaper
Introducing The Amazon Builders' Library (DOP328)
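Spotting the point where a workload stops scaling linearly can be automated from load-test samples. This is a minimal sketch; the sample data and the 1.5x jump factor are assumptions for illustration.

```python
# Hypothetical sketch: given load-test samples of (concurrent users, p99
# latency in ms), flag the load level where latency per user jumps sharply,
# a sign of non-linear scaling. The 1.5x jump factor is an assumption.

def find_scaling_knee(samples, factor=1.5):
    """Return the load level where latency-per-user grows by `factor`
    over the previous sample, or None if scaling stays roughly linear."""
    ratios = [(users, latency / users) for users, latency in samples]
    for (_, prev), (users, cur) in zip(ratios, ratios[1:]):
        if cur > prev * factor:
            return users
    return None

# Illustrative samples: latency grows roughly linearly until 800 users.
samples = [(100, 120), (200, 250), (400, 510), (800, 2400)]
```

The returned load level tells you where to start digging, for example by correlating it with resource metrics collected in CloudWatch during the same test run.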