Build service orchestration with any language! Bash, Python, Ruby, Chef, Node.js, Ansible, Salt, and most anything in between.
What if we could stop caring about individual machines (unless you really want to) and just model cloud deployments like Lego blocks?
In Ubuntu we're building a collection of services that anyone can deploy. Need a load-balanced WordPress deployment with MySQL and memcached? What if, instead of searching the web for "the perfect WordPress deployment", we built that into the operating system? Deploy it from the CLI or a web interface. Need more resources? Click a button for more instances.
Smart DevOps folks have already figured out how to deploy these services in a way that scales; what if we could encapsulate that expertise, generalize it, and let everyone deploy it the same way?
Too many people waste time learning how to deploy Hadoop and get lost in the weeds fixing config files rather than learning the actual technology. That's the problem we're trying to solve: find the experts, define a reliable, scalable, robust configuration for each service, and ship it in the operating system, all open source, peer reviewed, and tested.
And not just for complicated things like Hadoop. In Ubuntu we're making it so services can be turned into Lego blocks via a tool we call Juju. With over 100 "blocks" ready to deploy, I'm going to show you how to spend less time hunting the internet for custom AMIs and instead deploy entire stacks in as few as 4 or 5 commands.
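As a rough sketch of what that looks like, here is the WordPress stack from above deployed with the Juju CLI. The charm names and the memcached relation are assumptions based on the public charm collection, and a fully load-balanced setup would add a proxy charm on top; treat this as illustrative rather than the exact recipe from the talk.

    juju bootstrap                        # stand up an environment on your configured cloud
    juju deploy wordpress                 # pull pre-packaged service "blocks" from the charm collection
    juju deploy mysql
    juju deploy memcached
    juju add-relation wordpress mysql     # wire the services together; the charms handle the config files
    juju add-relation wordpress memcached
    juju expose wordpress                 # open the firewall so the site is reachable

Need more capacity later? One more command adds another WordPress unit:

    juju add-unit wordpress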
This talk is aimed at system administrators and web developers (Ruby, Python, Node.js, and so on) who want to save time spinning up deployments for development, testing, or production in the cloud, be it Amazon Web Services, HP Cloud, any OpenStack cloud, or even bare metal in your own datacenter.