EC2 Design Patterns (1): ExternalConsole

The first pattern in our series on EC2 Design Patterns.


Give API access to an EC2 instance created from a publicly available virtual image (AMI).


Sometimes a server deployed on the EC2 platform needs access to the EC2 API itself, for example to access persistent storage, instantiate other machine images to scale an application, reassign IP addresses, monitor other instances, change security groups, create key pairs, or create snapshots for backup. Any access to the API requires the EC2 credentials. Those credentials allow complete control over the whole platform of an Amazon account and must therefore be stored and communicated in a secure and reliable manner. Amazon EC2 provides a console via a web interface that allows users to deploy and start Amazon Machine Images, but it does not allow them to pass AWS credentials to the starting image.

Since the image is public and can be instantiated by many different accounts, the credentials cannot be stored on the image itself. They can, however, be passed as so-called user data when an image is started via the API.
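As a sketch of how an external console might package credentials as user data, assuming Python; the JSON field names are an illustrative convention, not part of the EC2 API, and the commented-out boto3 call at the end is only indicative:

```python
import json

def build_user_data(access_key, secret_key):
    """Package AWS credentials as a user-data payload.

    EC2 delivers user data verbatim to the instance; a JSON body keeps
    it easy to parse on the instance side. The field names here are an
    illustrative convention, not a fixed format of the pattern.
    """
    return json.dumps({
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    })

# The external console would then launch the AMI roughly like this
# (requires boto3 and valid console-side credentials, so it is shown
# here but not executed):
#
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.run_instances(ImageId="ami-12345678", MinCount=1, MaxCount=1,
#                     InstanceType="t2.micro",
#                     UserData=build_user_data(key_id, secret))
```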


Use ExternalConsole when

  • you want to use an AMI that needs access to the EC2 API
  • you want to simplify deployments and, in particular, avoid any manual intervention by the user who deploys the image (e.g. starting the image with the API command line tools or editing configuration files on the image)


Participants

  • External Console: a piece of software running outside Amazon EC2 with access to the EC2 credentials (by asking the user to enter them or by retrieving them from a database, for example). The main task of the External Console is to deploy a virtual image on EC2 while passing the AWS credentials to it.
  • AMI: Amazon Machine Image, i.e. a virtual image that needs access to the EC2 API.
  • AMI Instance: the actual virtual machine instance resulting from deploying the AMI.
  • Image Provider: a person or organization that creates the AMI
  • Image User: a person or organization that deploys the AMI and manages AMI instances


Solution

  • The Image User who wants to deploy an image connects to the External Console and instructs it to deploy a certain AMI
  • The External Console retrieves the AWS credentials (e.g. from a database), connects to the EC2 API, and deploys the selected AMI while passing the AWS credentials to the starting AMI instance
  • The AMI instance receives the AWS credentials as user data and stores them on local storage. It uses the credentials to call API methods (e.g. to access persistent storage). After a reboot, it retrieves the credentials from its local storage


The ExternalConsole offers the following benefits:

  • From the perspective of the image provider: flexibility. The External Console shields image users from all EC2 dependencies; it is under the control of the software provider and can be adapted transparently to API modifications or new features without any change visible to the end user
  • From the perspective of the image user: ease of use. The credentials must be entered only once

On the other hand, it may have the following drawbacks:

  • From the perspective of the image provider: additional costs to implement and run software outside EC2
  • From the perspective of the image user: the AWS credentials are handed over to an external provider and are only as safe as that provider's measures to secure access to them

Known Uses

A couple of companies specialize in managing EC2 deployments, the life-cycle of images, and ease of configuration by providing an alternative web interface to the Amazon EC2 Console. All of these companies store the AWS credentials on behalf of the user in their own database and use them for all operations involving the EC2 API. Such companies include RightScale, Enstratus, CloudKick, and Kaavo. Their approach uses the ExternalConsole pattern.

Note: this is work in progress. Any feedback, questions, or corrections are welcome and might be integrated in an updated version. If you use this pattern, let me know!

7 thoughts on “EC2 Design Patterns (1): ExternalConsole”

  1. Pingback: EC2 Design Patterns « Elastic Security

  2. Hi Shlomo, thanks for the link. I just read your article and enjoyed it a lot. I am currently working on an article that describes a pattern that I call “Trusted Gateway”. Here, the EC2 credentials would be encrypted with a locally available key and stored on the local storage so that they are available at reboot (it’s also covered by your article) – no strong security, of course.

    We were recently discussing the idea of an external credentials-service. The EC2 instance would pass the (reasonably encrypted) credentials via a request over an SSL connection to the credentials-service, which generates a token handed back to the requester (i.e. the EC2 instance). The token is then stored together with the timestamp of the request, the IP address of the requester, and the credentials. The credentials are handed back only when someone asks 1) in time, with 2) the right IP, and 3) the right token.

  3. Definitely.
    A proxy is an interesting idea. I guess you would need to hack the API tools, right?
    But how would you get it between your instances and the AWS API and where would you actually run it? If it is inside EC2, the proxy itself is not secure anymore. If you run it yourself outside AWS, you need to deal with a hardware server again. And if it is run as a general service by a third-party provider, you give away your credentials again.
    Did I miss something?

  4. No need to hack the AWS tools. You can start with the WSDL and use code generation tools to get you halfway there. You can get both halves of the implementation (both the AWS-facing consumer of the API and the client-facing server) from such tools. Then you hook them up back-to-back, adding in your own credentials handling in the middle.

    If only there were a good generic “API Proxy from XML” generator out there. There is an “API Proxy as a service”, which might be worth a look.

    Getting between the client and the AWS endpoint can be done by setting the endpoint URL.

    Where to host the server is truly an issue. If the proxy were open-source you could run it inside Google App Engine, for example. Otherwise you would need to have dedicated hardware for the purpose. Depending on the size of your deployment and your overall sophistication, you might already have these external machines – for example, your monitoring servers.

  5. Pingback: EC2 Design Patterns (2): Trusted Gateway « Elastic Security
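The token-based credentials-service discussed in the comments above (release credentials only to a caller presenting the right token, from the right IP, within a time window) could be sketched as follows; everything here – the in-memory store, the TTL value, and the function names – is an illustrative assumption, not a reference implementation:

```python
import secrets
import time

TOKEN_TTL_SECONDS = 300  # assumed validity window

# token -> (timestamp, registered IP, credentials)
_store = {}

def register(ip, credentials):
    """Accept credentials from an instance and hand back a short-lived token."""
    token = secrets.token_hex(16)
    _store[token] = (time.time(), ip, credentials)
    return token

def redeem(token, ip):
    """Return the credentials only if token, IP, and time window all match."""
    entry = _store.get(token)
    if entry is None:
        return None
    ts, stored_ip, credentials = entry
    if ip != stored_ip or time.time() - ts > TOKEN_TTL_SECONDS:
        return None
    return credentials
```

A production version would of course keep the store out of process memory, expire entries, and serve requests only over SSL, as the comment suggests.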
