Cloud Design Patterns Pdf

Shanta Plansinis
Aug 5, 2024
Each pattern describes the problem that the pattern addresses, considerations for applying the pattern, and an example based on Microsoft Azure. Most patterns include code samples or snippets that show how to implement the pattern on Azure. However, most patterns are relevant to any distributed system, whether hosted on Azure or other cloud platforms.

Design patterns don't eliminate concerns such as these, but they can bring awareness of them and help you compensate for and mitigate them. Each cloud pattern has its own trade-offs. Pay more attention to why you're choosing a certain pattern than to how to implement it.


Data management is a key element of cloud applications, and it influences most of the quality attributes. Data is typically hosted in different locations and across multiple servers for performance, scalability, or availability. This can present various challenges. For example, data consistency must be maintained, and data will typically need to be synchronized across different locations.


Good design encompasses consistency and coherence in component design and deployment, maintainability to simplify administration and development, and reusability to allow components and subsystems to be used in other applications and scenarios. Decisions made during the design and implementation phase significantly impact the quality and total cost of ownership of cloud-hosted applications and services.


The distributed nature of cloud applications requires a messaging infrastructure that connects the components and services, ideally loosely coupled to maximize scalability. Asynchronous messaging is widely used and provides many benefits, but it also brings challenges such as ordering messages, poison message management, idempotency, and more.


This guide shows how to implement commonly used modernization design patterns by using AWS services. An increasing number of modern applications are designed by using microservices architectures to achieve scalability, improve release velocity, reduce the scope of impact for changes, and reduce regression. This leads to improved developer productivity and increased agility, better innovation, and an increased focus on business needs. Microservices architectures also support the use of the best technology for the service and the database, and promote polyglot code and polyglot persistence.


Traditionally, monolithic applications run in a single process, use one data store, and run on servers that scale vertically. In comparison, modern microservice applications are fine-grained, have independent fault domains, run as services across the network, and can use more than one data store depending on the use case. The services scale horizontally, and a single transaction might span multiple databases. Development teams must focus on network communication, polyglot persistence, horizontal scaling, eventual consistency, and transaction handling across the data stores when developing applications by using microservices architectures. Therefore, modernization patterns are critical for solving commonly occurring problems in modern application development, and they help accelerate software delivery.


This guide provides a technical reference for cloud architects, technical leads, application and business owners, and developers who want to choose the right cloud architecture for design patterns based on well-architected best practices. Each pattern discussed in this guide addresses one or more known scenarios in microservices architectures. The guide discusses the issues and considerations associated with each pattern, provides a high-level architectural implementation, and describes the AWS implementation for the pattern. Open source GitHub samples and workshop links are provided where available.


Design patterns that govern cloud-based applications aren't always talked about -- until companies reach a certain scale. While there are countless design patterns to choose from, one of the biggest challenges in choosing among them is dealing with scale when it becomes necessary.


Rapid growth is a blessing and a curse for any application, bringing increased revenue but also increased technical challenges. To scale better, a number of design patterns can make any cloud-based application more fault tolerant and resistant to the problems that often come with increased traffic.


Named after the partitioned compartments of a ship's hull that contain flooding, the bulkhead pattern prevents a single failure within an application from cascading into a total failure. While the implementation of this pattern in the wild isn't always obvious, it is typically found in applications that can operate under some sort of degraded performance.


An application that implements the bulkhead pattern is built with resiliency in mind. While not all operations are possible when email or caching layers go down, with enough foresight and communication to the end user, the application can still be semi-functional.


With isolated application sections that can operate independently of one another, subsystem failures can safely reduce the application's overall functionality without shutting everything down. A good example of the bulkhead pattern in action is any application that can operate in "offline mode." While most cloud-based applications require an external API to reach their full potential, fault-tolerant clients can operate without the cloud by relying on cached resources and other workarounds to ensure the client is marginally usable.
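One common way to sketch this isolation is to give each subsystem its own worker pool, so saturation in a non-critical subsystem cannot starve the critical path. This is a minimal illustrative sketch, not a reference implementation; the pool sizes and the `send_email`/`handle_request` stand-ins are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

# Bulkhead sketch: each subsystem gets its own isolated worker pool, so
# a flood of slow email tasks cannot exhaust the workers that serve the
# critical request path.
email_pool = ThreadPoolExecutor(max_workers=2)    # non-critical subsystem
request_pool = ThreadPoolExecutor(max_workers=8)  # critical request path

def send_email(to):
    # Stand-in for a call to an external email provider.
    return f"queued email to {to}"

def handle_request(user_id):
    # Stand-in for core business logic.
    return f"handled request for user {user_id}"

# Even if email_pool is saturated or its backend is down, request_pool
# keeps serving requests -- the failure is contained in its compartment.
email_future = email_pool.submit(send_email, "user@example.com")
request_future = request_pool.submit(handle_request, 42)

print(request_future.result())
print(email_future.result())
```

In a real system the same idea applies to separate connection pools, processes, or service instances per dependency, not just thread pools.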


The retry pattern, a common cloud design pattern for third-party interactions, encourages applications to expect failures. Processes that implement it are built with the ability to safely retry failed operations, producing fault-tolerant systems that require minimal long-term maintenance.


The retry pattern only works when both the sender and receiver know that failed requests can be re-sent. In a webhook scenario, for example, a unique identifier for each webhook is often provided, allowing the receiver to validate that a request is never processed more than once. This avoids duplicate processing even when the sender experiences its own errors and erroneously re-sends the same data.
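The interplay described above -- a retrying sender paired with a deduplicating receiver -- can be sketched as follows. The webhook IDs, the simulated flaky transport, and the retry count are all illustrative assumptions:

```python
processed = set()  # receiver-side record of webhook IDs already handled

def receive_webhook(webhook_id, payload):
    # Idempotent receiver: a redelivery with a known ID is a no-op.
    if webhook_id in processed:
        return "duplicate ignored"
    processed.add(webhook_id)
    return f"processed {payload}"

attempts = {"n": 0}

def flaky_transport(webhook_id, payload):
    # Simulated transport that fails on the very first delivery attempt.
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise ConnectionError("transient network failure")
    return receive_webhook(webhook_id, payload)

def deliver_with_retry(webhook_id, payload, max_attempts=3):
    # Sender side of the retry pattern: retrying is safe only because
    # the receiver deduplicates on webhook_id.
    for _ in range(max_attempts):
        try:
            return flaky_transport(webhook_id, payload)
        except ConnectionError:
            continue
    return "gave up"

print(deliver_with_retry("wh-001", "order.created"))  # processed order.created
print(deliver_with_retry("wh-001", "order.created"))  # duplicate ignored
```

A production version would add exponential backoff between attempts and persist the processed-ID set, but the division of responsibility is the same.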


Dealing with scale can be an incredibly nuanced problem in cloud-based applications, especially for processes with unpredictable performance. The circuit breaker pattern prevents processes from "running away" by cutting them short before they consume more resources than necessary.


To illustrate how this cloud design pattern works, imagine you have a web page that generates a report from several different data sources. In a typical scenario, this operation may take only a few seconds. However, in rare circumstances, querying the back end might take much longer, which ties up valuable resources. A properly implemented circuit breaker could halt the execution of any report that takes more than 10 seconds to generate, which prevents long-running queries from monopolizing application resources.
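A minimal breaker for a scenario like the report query might look like the sketch below. This is an illustrative toy, not a production implementation; the class name, thresholds, and `slow_report` stand-in are assumptions:

```python
import time

class CircuitBreaker:
    # Toy breaker: opens after max_failures consecutive failures and
    # rejects calls outright until reset_after seconds have elapsed,
    # sparing the backend from calls that are likely doomed anyway.
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: call rejected")
            # Half-open: the cooldown has passed, allow one trial call.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

def slow_report():
    # Stand-in for a report query that is currently exceeding its deadline.
    raise TimeoutError("report exceeded its deadline")

breaker = CircuitBreaker(max_failures=2, reset_after=30.0)
for _ in range(2):
    try:
        breaker.call(slow_report)
    except TimeoutError:
        pass  # the first failures still reach the backend

try:
    breaker.call(slow_report)
except RuntimeError as exc:
    print(exc)  # circuit open: call rejected
```

Enforcing the 10-second deadline itself is the job of a timeout on the query; the breaker's job is to stop sending traffic once those timeouts start stacking up.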


Queue-based load leveling (QBLL) is a common cloud design pattern that helps with scale problems as an application grows. Rather than performing complex operations at request time -- which adds latency to user-exposed functionality -- these operations are instead added to a queue that is tuned to execute a more manageable number of requests within a given time period. This design pattern is most valuable in systems where there are many operations that do not need to show immediate results, such as sending emails or calculating aggregate values.


For example, take an API endpoint that must make retroactive changes to a large dataset whenever it is executed. While this endpoint was built with a certain threshold of traffic in mind, a large burst in requests or a rapid growth in user adoption could negatively affect the latency of the application. By offloading this functionality to a queue-based load leveling system, the application infrastructure can more easily withstand the increased throughput by processing a fixed number of operations at a time.
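The shape of that offload can be sketched with an in-process queue and a single worker; in production the queue would be a durable service (e.g. a managed message queue) rather than `queue.Queue`, and the "heavy operation" here is a hypothetical stand-in:

```python
import queue
import threading

# Queue-based load leveling sketch: requests enqueue work and return
# immediately; one worker drains the queue at a manageable, fixed pace.
work_queue = queue.Queue()
results = []

def worker():
    while True:
        item = work_queue.get()
        if item is None:  # sentinel: shut the worker down
            break
        # Stand-in for the expensive retroactive update from the example.
        results.append(f"recomputed aggregates for request {item}")
        work_queue.task_done()

t = threading.Thread(target=worker)
t.start()

# A burst of requests costs only an enqueue at request time; the heavy
# work is leveled out by the queue instead of spiking request latency.
for request_id in range(5):
    work_queue.put(request_id)

work_queue.join()      # wait until the backlog is drained
work_queue.put(None)   # stop the worker
t.join()
print(len(results))  # 5
```

The key property is that request latency now depends only on the enqueue, while throughput of the heavy work is capped by the worker pool size rather than by incoming traffic.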


An alternative design pattern to QBLL is the throttling pattern, which centers on the "noisy neighbor" problem. While the QBLL pattern offloads excess workloads to a queue for more manageable processing, the throttling pattern enforces limits on how frequently a single client can use a service or endpoint, keeping one "noisy neighbor" from degrading the system for everyone. The throttling pattern can also supplement the QBLL pattern: it allows for the managed processing of excess workloads while ensuring the queue depth doesn't grow unbounded.


Looking back at the QBLL example, let's say that the API endpoint could originally handle about 100 requests per minute before the heavy work was offloaded to a queue, and that the API can now support a maximum throughput of about 10,000 requests per minute. Ten thousand is a huge jump from 100, but the queue will still only be able to process about 100 requests per minute without any noticeable impact on the end user. This means that 1,000 API requests would take about 10 minutes to fully process, and 10,000 API requests would take almost two hours.


In a system with evenly distributed requests, every user would experience slower processing equally, but if a single user sends all 10,000 requests, then all other users will experience a two-hour delay before their workloads even get started. A throttling schema that limits each user to 1,000 requests per hour would ensure that no single user could monopolize application resources at the expense of the others.
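A per-user limit like the one above is often implemented as a fixed-window counter (or, more smoothly, a token bucket). This is a minimal fixed-window sketch; the class name, limit, and user labels are illustrative assumptions:

```python
from collections import defaultdict

class Throttle:
    # Fixed-window throttle sketch: each user may make `limit` requests
    # per window; excess requests are rejected rather than queued, so a
    # noisy neighbor only hurts itself.
    def __init__(self, limit):
        self.limit = limit
        self.counts = defaultdict(int)

    def allow(self, user):
        if self.counts[user] >= self.limit:
            return False
        self.counts[user] += 1
        return True

    def reset_window(self):
        # Called once per window (e.g. hourly) by a timer in a real system.
        self.counts.clear()

throttle = Throttle(limit=3)
decisions = [throttle.allow("noisy") for _ in range(5)]
print(decisions)  # [True, True, True, False, False]
print(throttle.allow("quiet"))  # True -- other users are unaffected
```

Fixed windows allow bursts at window boundaries; a token bucket or sliding window smooths that out, but the per-user bookkeeping is the same idea.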


It can be incredibly difficult to scale a cloud-based application. Often, IT teams must choose between implementing a design pattern that can support application growth for another six months, or a design pattern that can support application growth for another six years.


In my experience, options that fall under the six-month timeline are the most cost effective. Spend a few weeks to buy yourself six months that will support the needs of the business and users. It's more effective than spending a year building a more robust system that is much harder to change.
