Chapter 4 notes the importance of a penetration test for any cloud-based application. It also stresses the need for comprehensive logging and monitoring.
For those looking for a succinct tactical introduction to cloud computing, To the Cloud: Cloud Powering an Enterprise is an excellent place to start.
The target audience of these books tends to be developers and IT professionals. When it comes to senior decision makers and CXOs, however, there are not many resources that give them the big picture of the Cloud. This is where Cloud Powering an Enterprise shines. It is a prescriptive guide for CIOs who are keen on adopting the Cloud, and it provides a comprehensive framework for decision makers to move to the Cloud systematically. Authored by senior leaders from Microsoft, To the Cloud: Cloud Powering an Enterprise reflects real-world experience of dealing with internal IT and the challenges of moving to the Cloud.
Though all the authors come with a strong Microsoft pedigree, the book offers a generic framework for embracing the Cloud on any technology stack, which makes it a credible resource for CIOs. It is short enough to be a great read during a brief flight, and each chapter is modular, making the key points easy to absorb.
Each chapter focuses on a specific milestone of the Cloud adoption lifecycle. CIOs will easily be able to relate to the terminology and nomenclature used in the book.
To the Cloud cuts through the noise and addresses the Why, What, and How of enterprise cloud adoption. Irrespective of the phase of Cloud adoption your enterprise is in, you will find this book a great reference. The book has 5 chapters, each addressing an important aspect of Cloud Computing. Here is a quick summary of each of these chapters. Chapter 1 — Explore: If a service that the application relies upon for one usage scenario goes down, other application scenarios should remain available. For example, a component that requests data from another source could include logic that asks for the data a specified number of times within a specified time period before it throws an exception.
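The retry behavior described above might look something like the following sketch; the fetch_data callable, the attempt count, and the time window are illustrative assumptions rather than anything prescribed in the book.

```python
import time

class DataUnavailableError(Exception):
    """Raised when the upstream source stays unavailable past the retry budget."""

def fetch_with_retry(fetch_data, max_attempts=3, window_seconds=10, delay_seconds=2):
    """Ask the upstream source for data a limited number of times within a time window.

    fetch_data: a zero-argument callable that returns the data or raises on failure.
    """
    deadline = time.monotonic() + window_seconds
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch_data()
        except Exception as exc:  # in real code, catch the specific transport error
            last_error = exc
            if attempt == max_attempts or time.monotonic() + delay_seconds > deadline:
                break
            time.sleep(delay_seconds)
    # Give up so that other application scenarios can keep running.
    raise DataUnavailableError("upstream data source unavailable") from last_error
```

The caller can catch DataUnavailableError and degrade only the one scenario that depends on the missing data, leaving the rest of the application available.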
To handle the occasional reboot of a cloud instance, application design should include a persistent cache so that another scale unit, or the original instance once it comes back up, can recover transactions. Using persistent state requires taking a closer look at statelessness, another design principle for cloud-based applications.
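Before turning to statelessness, here is a minimal sketch of that persistent cache, assuming a simple JSON file as the durable store; a real application would use the provider's durable storage service so the cache survives the loss of the instance itself.

```python
import json
from pathlib import Path

class PersistentTransactionCache:
    """Records in-flight transactions durably so a rebooted or replacement
    scale unit can pick up where the failed instance left off."""

    def __init__(self, path="pending_transactions.json"):
        self._path = Path(path)

    def record(self, txn_id, payload):
        """Persist an in-flight transaction before doing the work."""
        pending = self._load()
        pending[txn_id] = payload
        self._path.write_text(json.dumps(pending))

    def complete(self, txn_id):
        """Remove a transaction once it has been fully processed."""
        pending = self._load()
        pending.pop(txn_id, None)
        self._path.write_text(json.dumps(pending))

    def recover(self):
        """Called after a reboot, by this instance or another scale unit."""
        return self._load()

    def _load(self):
        if self._path.exists():
            return json.loads(self._path.read_text())
        return {}
```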
Designing for statelessness is crucial for scalability and fault tolerance in the cloud. Whether an outage is unexpected or planned (as with an operating system update), as one scale unit goes down, another picks up the work. An application user should not notice that anything happened.
It is important to deploy more than one scale unit for each critical cloud service, if not for scaling purposes, then simply for redundancy and availability. Cloud providers generally require applications to be stateless; without stateless design, many applications will not be able to scale out properly in the cloud.
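To make the statelessness point concrete, here is a minimal sketch, assuming a hypothetical shared store rather than any particular provider's API. Because nothing lives in instance memory between requests, any of the redundant scale units can handle any request.

```python
class SharedStore:
    """Stand-in for a provider-hosted table or cache service; not a real API."""
    def __init__(self):
        self._data = {}
    def get(self, key, default=None):
        return self._data.get(key, default)
    def put(self, key, value):
        self._data[key] = value

def handle_add_to_cart(store: SharedStore, user_id: str, item_id: str) -> list:
    """Stateless handler: all state is read from and written back to shared storage."""
    cart_key = f"cart:{user_id}"
    cart = store.get(cart_key, [])   # nothing held in instance memory between calls
    cart.append(item_id)
    store.put(cart_key, cart)
    return cart
```

If the scale unit running this handler disappears mid-session, the next request simply lands on another unit, which reads the same cart from the shared store.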
Consider an online auction application: toward the end of an auction, as people try to outbid each other, the bidding engine is in higher demand. One way to design such an application would be to split it into three components, as each service has a different demand pattern and is relatively asynchronous from the others.
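One hedged way to picture that decomposition: the sketch below decouples a bidding engine from two other hypothetical components (a listing front end and a notification service) with queues, so each can scale to its own demand pattern. The component names and the in-process queues are assumptions, not the book's design; a real deployment would use a durable cloud queue service.

```python
import queue
import threading
import time

bid_queue = queue.Queue()           # listing front end -> bidding engine
notification_queue = queue.Queue()  # bidding engine -> notification service

def listing_front_end(item_id, user_id, amount):
    """Accepts a bid from the UI and hands it off asynchronously."""
    bid_queue.put({"item": item_id, "user": user_id, "amount": amount})

def bidding_engine():
    """Processes bids at its own pace; scaled up near the end of an auction."""
    highest = {}
    while True:
        bid = bid_queue.get()
        if bid["amount"] > highest.get(bid["item"], 0):
            highest[bid["item"]] = bid["amount"]
            notification_queue.put({"item": bid["item"], "leader": bid["user"]})
        bid_queue.task_done()

def notification_service():
    """Tells interested users they have been outbid; its own demand pattern."""
    while True:
        event = notification_queue.get()
        print(f"Item {event['item']}: new highest bidder {event['leader']}")
        notification_queue.task_done()

# Each component would normally run on its own scale units; threads stand in here.
threading.Thread(target=bidding_engine, daemon=True).start()
threading.Thread(target=notification_service, daemon=True).start()
listing_front_end("item-42", "alice", 100)
time.sleep(0.5)  # give the background components a moment to drain the queues
```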
Most cloud providers offer persistent storage to address this issue, allowing the application to store session state in a way that any scale unit can retrieve it. Load balancing and other services inherent in cloud platforms can help distribute load with relative ease. With low-cost, rapid provisioning in the cloud, scale units are available on demand for parallel processing within a few API calls and are decommissioned just as easily.
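The "few API calls" claim can be pictured with the following hedged sketch; ScaleUnitClient and its methods are hypothetical stand-ins for a provider's service management API, not a real SDK.

```python
class ScaleUnitClient:
    """Hypothetical wrapper around a provider's service management API."""

    def __init__(self, service_name):
        self.service_name = service_name
        self._instances = []

    def provision(self, count=1):
        """Request new scale units; a real client would call the provider's API here."""
        new = [f"{self.service_name}-worker-{len(self._instances) + i}" for i in range(count)]
        self._instances.extend(new)
        return new

    def decommission(self, instance_ids):
        """Release scale units once the burst of work is finished."""
        self._instances = [i for i in self._instances if i not in instance_ids]

# Usage: spin up extra workers for a burst of parallel processing, then release them.
client = ScaleUnitClient("analytics")
workers = client.provision(count=8)
# ... dispatch work to the workers ...
client.decommission(workers)
```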
Massive parallelization can also be used for high-performance computing scenarios, such as real-time enterprise data analytics. Many cloud providers directly or indirectly support frameworks that enable splitting up massive tasks for parallel processing. For example, Microsoft partnered with the University of Washington to demonstrate the power of Windows Azure for performing scientific research.
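As a generic illustration of splitting one massive task for parallel processing (not tied to Windows Azure or any specific framework), here is a minimal map-and-reduce style sketch with made-up data; the analysis step is a placeholder.

```python
from multiprocessing import Pool

def analyze_chunk(records):
    """Placeholder analytics step: here, just sum one field per chunk."""
    return sum(r["amount"] for r in records)

def split_into_chunks(records, chunk_size):
    """Break one massive dataset into independent chunks."""
    return [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

if __name__ == "__main__":
    dataset = [{"amount": i % 7} for i in range(100_000)]   # stand-in for real data
    chunks = split_into_chunks(dataset, chunk_size=10_000)
    with Pool() as pool:
        partial_results = pool.map(analyze_chunk, chunks)   # map step, in parallel
    total = sum(partial_results)                            # reduce step
    print("aggregate:", total)
```

On a cloud platform, each chunk could just as well be dispatched to a separate scale unit instead of a local worker process.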
For cloud offerings that support auto-scaling features, engineers can poll existing monitoring APIs and use service management APIs to build self-scaling capabilities into their applications. For example, consider utilization-based logic that automatically adds an application instance when traffic spikes unexpectedly or CPU consumption reaches certain thresholds. The same routine might listen for messages instructing instances to shut down once demand has fallen back to more typical levels.
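A utilization-based self-scaling routine along those lines might be sketched as follows; the Monitor and Manager classes are stand-ins for a provider's monitoring and service management APIs, and the thresholds and poll interval are arbitrary values, not recommendations from the book.

```python
import random
import time

class Monitor:
    """Stand-in for a monitoring API; a real one would report measured utilization."""
    def get_average_cpu(self):
        return random.uniform(0, 100)

class Manager:
    """Stand-in for a service management API that adds and removes instances."""
    def __init__(self, instances=2):
        self.instances = instances
    def add_instance(self):
        self.instances += 1
    def remove_instance(self):
        self.instances -= 1

SCALE_UP_CPU = 75.0    # add an instance above this average CPU percentage
SCALE_DOWN_CPU = 25.0  # remove an instance below this average CPU percentage
MIN_INSTANCES = 2      # keep at least two scale units for redundancy

def autoscale_loop(monitor, manager, poll_seconds=60):
    """Poll utilization and adjust the instance count accordingly."""
    while True:
        cpu = monitor.get_average_cpu()
        if cpu > SCALE_UP_CPU:
            manager.add_instance()            # scale out on high utilization
        elif cpu < SCALE_DOWN_CPU and manager.instances > MIN_INSTANCES:
            manager.remove_instance()         # scale in once demand falls back
        time.sleep(poll_seconds)
```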