In our last blog post, we shared our insights on three new AWS feature announcements made at re:Invent. Indeed, there were so many announcements at the conference that we’ve continued our analysis in this second blog on AWS. Let’s start by looking at AWS Key Management Service (KMS).
AWS KMS can be thought of as two cloud security solutions merged into one. First, KMS lets you create and configure custom encryption keys that AWS can then use to encrypt data on your behalf: for example, to encrypt EBS volumes, Redshift clusters, and S3 objects. As you'd expect from a security service, KMS also provides audit reports on the use of those keys.
Second, KMS is a crypto-processing service: your applications can make API calls to KMS to have it encrypt or decrypt data on their behalf. The benefit is that you can work with encrypted data throughout your application stack without exposing the keys anywhere, reducing the risk of data compromise and simplifying key management. Additionally, KMS integrates nicely with AWS Identity and Access Management (IAM), letting administrators control which users, instances, or even other AWS accounts are allowed to use a given key to encrypt and decrypt data.
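To make the call pattern concrete, here is a minimal sketch of what crypto-processing via KMS looks like from application code. It assumes the KMS Encrypt and Decrypt operations as exposed by a boto3 KMS client; passing the client in as a parameter is our own convention (not something KMS requires), so the functions can also be exercised against a stub:

```python
import base64


def encrypt_secret(kms_client, key_id, plaintext):
    """Ask KMS to encrypt `plaintext` under the key named by `key_id`.

    `kms_client` is expected to expose the KMS Encrypt API, e.g. the
    object returned by boto3.client("kms").
    """
    response = kms_client.encrypt(KeyId=key_id, Plaintext=plaintext)
    # The ciphertext is binary; base64-encode it for storage or transport.
    return base64.b64encode(response["CiphertextBlob"])


def decrypt_secret(kms_client, ciphertext_b64):
    """Ask KMS to decrypt a base64-encoded ciphertext.

    KMS identifies the key from metadata embedded in the ciphertext
    itself, so no key id needs to be supplied here.
    """
    blob = base64.b64decode(ciphertext_b64)
    response = kms_client.decrypt(CiphertextBlob=blob)
    return response["Plaintext"]
```

Note that the plaintext never needs to leave your process unencrypted, and the key itself never leaves KMS; IAM policies on the key decide who may call Encrypt and Decrypt at all.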
We are excited to say that Scalr users can start using KMS as a crypto-processing service right away, because IAM instance profiles are already supported in Scalr. More importantly, Scalr provides Governance, allowing you to ensure that only a selected subset of Scalr users can use the IAM instance profiles that grant KMS access.
Next up are AWS's new tools for code management and deployment: CodeDeploy, CodeCommit, and CodePipeline.
As the name implies, AWS CodeDeploy is fundamentally a deployment tool: it takes an "Application Revision" (a collection of files) and ships it to a "fleet" of EC2 instances (a "Deployment Group"), which you identify through EC2 tags. Just like competitor Salt, or the newly introduced Chef Push Jobs, AWS CodeDeploy requires an agent on every target instance, which makes the benefits of using CodeDeploy somewhat unclear. For now, I'm not convinced the lock-in is worth it.
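For context, CodeDeploy drives each deployment from an appspec.yml file bundled with the Application Revision, which maps files in the revision to destinations on the instance and hooks lifecycle events to scripts. A minimal sketch (the paths and script name here are hypothetical):

```yaml
version: 0.0
os: linux
files:
  # Copy the revision's /app directory onto the instance.
  - source: /app
    destination: /var/www/myapp
hooks:
  AfterInstall:
    # Run a script from the revision once the files are in place.
    - location: scripts/restart_server.sh
      timeout: 60
      runas: root
```

The agent on each instance in the Deployment Group pulls the revision and executes these steps, which is why the agent is mandatory.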
AWS CodeCommit is a hosted Git service, and AWS CodePipeline is a hosted continuous-integration pipeline. For both products, AWS's differentiation and competitive stance are unclear. GitHub, arguably the global leader in Git hosting, announced GitHub Enterprise for AWS the day before AWS announced CodeCommit, and there are plenty of solid open-source alternatives to CodePipeline as well.
While it’s early days, tight integration with CodeCommit (to trigger the pipeline) and CodeDeploy (for continuous deployment) might be the value add here. We’ll be sure to keep an eye on the AWS “CodeSuite” as the jury is out on whether or not it will be able to rival existing best of breed software.
Almost done! Next is Amazon EC2 Container Service (ECS), which appears to be AWS's response to Google's newly announced Container Engine. Google has historically led AWS in terms of container support, so it's no surprise AWS is trying to catch up.
At a high level, Container Service is a Docker container scheduler: you operate a cluster of instances with Docker (and a Container Service agent) installed and running on them, and you can have the Container Service dispatch Docker containers to them.
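Concretely, you tell the scheduler what to dispatch by registering a task definition that describes one or more containers. A minimal sketch of one (the family, container name, and image are hypothetical placeholders):

```json
{
  "family": "hello-world",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "cpu": 256,
      "memory": 128,
      "portMappings": [
        {"containerPort": 80, "hostPort": 80}
      ],
      "essential": true
    }
  ]
}
```

Running a task from this definition asks the service to place the container on one of your cluster's instances that has enough free CPU and memory.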
Unfortunately, I can't help but feel AWS Container Service was rushed out the door. Unlike Google's Container Engine, it doesn't manage the host VMs (the ones running Docker) for you, which means you still have to maintain all of those hosts yourself.
In other words, AWS's Container Service is only a container scheduler, whereas Google's Container Engine is both a container scheduler (the open-source Kubernetes project) and a cluster manager (a part proprietary to Google). Google's long experience operating container-based infrastructure may be making the difference here, leaving AWS in a tough spot as it tries to keep up.
Finally, the last service we'll look at today is AWS Lambda. With Lambda, you start by loading code into the service, and then you connect event notifications generated by other AWS services to it. When those events fire, AWS executes your code. Your code can do whatever you want, and you pay based on execution time (in 100 ms increments); hopefully there's some form of infinite-loop protection!
In terms of event sources, only S3, DynamoDB, and Kinesis are supported at this time (note that AWS also announced new notifications for S3), but we can expect to see additional services in the near future.
As of now, this lets you plug your own logic into AWS's infrastructure, enabling workflows like "when an object changes in S3, call the CloudFront API to invalidate the corresponding cache" (even though CloudFront can't be used as a notification source, nothing prevents you from calling its API from your code).
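Sketched below is what such a handler could look like. Lambda's launch runtime is Node.js; this sketch is in Python purely to illustrate the shape of an S3 event and the CloudFront invalidation call. The CloudFront client is injectable (our convention, so the path logic can be tested without AWS credentials) and the distribution id is a placeholder:

```python
def handler(event, context, cloudfront_client=None):
    """Collect the S3 object keys from an S3 event notification and,
    when a client is supplied, invalidate the matching CloudFront paths.

    `cloudfront_client` would be boto3.client("cloudfront") in a real
    deployment; it is a parameter here so the path-extraction logic can
    be exercised with a stub.  The distribution id is a placeholder.
    """
    # Each record in an S3 event notification carries the bucket name
    # and the object key that changed.
    paths = ["/" + record["s3"]["object"]["key"]
             for record in event.get("Records", [])]
    if cloudfront_client is not None and paths:
        cloudfront_client.create_invalidation(
            DistributionId="DISTRIBUTION_ID",  # placeholder, not a real id
            InvalidationBatch={
                "Paths": {"Quantity": len(paths), "Items": paths},
                "CallerReference": context.aws_request_id,
            },
        )
    return paths
```

Because the handler is just a function of the event, the interesting part (turning object keys into invalidation paths) is trivially testable outside AWS.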
And that's it for re:Invent! Note that AWS also announced faster EBS volumes and new compute-optimized instances. Interestingly, no price cuts were announced this year.
Interested in learning more about enterprise cloud management? Check out our resource page for hybrid cloud and multi-cloud case studies.