Why Is Sustainability Thinking Necessary in Serverless?
The scale of the cloud operation is beyond what most of us can imagine. When you consume cloud services, whether as an end user or a value-added reseller, you see only the face of those services: APIs, functionality, service limits, etc. Most of us never get to take a deeper look at how they operate.
Take, for instance, the hugely popular, highly successful, and exceptionally efficient Amazon DynamoDB—the NoSQL data store as a service from AWS. In any given Region, thousands of organizations and software engineers work with millions of DynamoDB tables to store anywhere from a single data item to trillions of items. You may work with its APIs to create, read, update, and delete items in a table—the popular CRUD operations. Beyond that, you may configure its various options for scaling, data retention, streams, replication, etc. But have you ever stopped to imagine the sheer scale of computing resources that must be available to continuously provide such a mammoth service?
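To make those CRUD operations concrete, here is a minimal sketch using the AWS SDK for Python (Boto3); the orders table, its orderId partition key, and the item attributes are hypothetical, chosen only for illustration.

```python
import boto3

# Hypothetical table and key names, purely for illustration.
table = boto3.resource("dynamodb").Table("orders")

# Create: write a new item.
table.put_item(Item={"orderId": "o-123", "status": "PLACED", "total": 42})

# Read: fetch the item back by its key.
item = table.get_item(Key={"orderId": "o-123"}).get("Item")

# Update: change one attribute ("status" is a reserved word, hence the alias).
table.update_item(
    Key={"orderId": "o-123"},
    UpdateExpression="SET #s = :s",
    ExpressionAttributeNames={"#s": "status"},
    ExpressionAttributeValues={":s": "SHIPPED"},
)

# Delete: remove the item.
table.delete_item(Key={"orderId": "o-123"})
```

From the client side, each of these calls looks trivial; behind every one of them sits the vast fleet of hardware that keeps the service available around the clock.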
Now, multiply that by the number of AWS Regions around the world (at the time of writing, there are 32, with 102 Availability Zones). Add to that other data store services available from AWS—S3, Redshift, Aurora, RDS, etc. Then, add the resources that are in operation to support the functionality of all the other services from AWS across all of those Regions—Lambda functions, message queues, event streams, machine learning, AI, logs, reporting, and so on.
And that’s just AWS. Now picture the same for every other cloud provider and the hundreds of services each one operates. That gives you a glimpse of the scale of the cloud operation that supports and influences our daily lives in so many ways. Because it’s so vast, we’ll start by breaking it down into its three main components, both to understand its impact on the environment and to equip you to think sustainably in serverless.