The End-to-End Argument
The end-to-end principle in systems design has become famous through its successful application in the Internet architecture. It states “that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level.” Three observations argue in favor of the principle: implementing such functions in lower layers adds complexity and cost; those functions cannot be implemented in a fully reliable way in lower layers and thus need to be implemented in higher layers anyway; and they may be inefficient or even useless for certain services running on top. In the context of the Internet, this means that the network remains rather dumb (simple packet forwarding), while more sophisticated protocol functions like error detection, retransmission of lost packets, flow and congestion control, and connection management are implemented at the end-points, i.e. the servers themselves.
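As a toy illustration of this division of labor, the sketch below keeps the “network” as a function that merely forwards or silently drops packets, while the sending end-point provides reliability by retransmitting until delivery succeeds. This is a hypothetical simulation, not real networking code; all names and parameters are invented for illustration.

```python
import random

def lossy_deliver(packet, inbox, loss_rate, rng):
    """The 'dumb' network: forwards the packet or silently drops it.
    Returns True when the packet (and, implicitly, its ACK) got through."""
    if rng.random() >= loss_rate:
        inbox.append(packet)
        return True
    return False

def send_reliably(chunks, inbox, loss_rate=0.5, max_tries=100, rng=None):
    """End-point reliability: retransmit each chunk until it is acknowledged.
    The network stays simple; the sender carries the retransmission logic."""
    rng = rng or random.Random(42)  # seeded for a reproducible demo
    for seq, chunk in enumerate(chunks):
        for _ in range(max_tries):
            if lossy_deliver((seq, chunk), inbox, loss_rate, rng):
                break
        else:
            raise TimeoutError(f"chunk {seq} never acknowledged")
    return inbox
```

Even with half of all packets dropped, the receiver ends up with every chunk exactly once and in order, because the end-points compensate for the unreliable middle.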
The End-to-End Argument in Information Security
The end-to-end principle, however, does not play the same role in the information security domain. Encrypting file transfers end-to-end (encrypting the files themselves instead of the network packets) follows the end-to-end argument. However, many security functions like firewalls, network intrusion detection systems, authentication and authorization servers, or reverse proxies violate the end-to-end principle – and for good reason, because they are more effectively provided by separate components in the enterprise network than on the end-points. Even vulnerability scanning, anti-virus software, and patch management are no longer managed by the end-points, but by centralized servers backed by large databases.
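Whether the goal is confidentiality or integrity, the end-to-end pattern is the same: the protection is applied and checked only at the two end-points, independently of anything the network does in between. A minimal, hypothetical sketch using an HMAC for integrity (a real deployment would additionally encrypt with a vetted library; the function names here are invented):

```python
import hmac
import hashlib

TAG_LEN = 32  # SHA-256 digest size in bytes

def seal(data: bytes, key: bytes) -> bytes:
    """Sender end-point: append an end-to-end HMAC tag so the receiver
    can verify integrity regardless of what happened in transit."""
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return data + tag

def open_sealed(blob: bytes, key: bytes) -> bytes:
    """Receiver end-point: recompute and compare the tag. Any corruption
    the lower layers missed is caught here, at the end-point."""
    data, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("end-to-end integrity check failed")
    return data
```

The point of the sketch: no router, proxy, or link-layer checksum between the two parties needs to be trusted or even aware of the check.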
The End-to-End Argument in Cloud Security
OK – the end-to-end argument rarely holds for information security systems. But is this still true in a cloud computing setup, where servers may be distributed across heterogeneous networks and infrastructures run by different providers? Most cloud providers offer a simple provisioning API that allows clients to start and stop instances from virtual images and possibly create snapshots of running servers. They provide no firewall component in their infrastructure (Amazon EC2 is one of the exceptions with its concept of security groups), no or only rudimentary Identity and Access Management, no IDS/IPS, no vulnerability and patch management, no encryption, no data leakage prevention, no VPN layer. Most providers put the burden of assuring security explicitly on the shoulders of their clients and say: that is your responsibility, not ours – do it yourself or find someone who does it for you. In this way, they implicitly promote the end-to-end principle: why encrypt all data in memory and on storage when only a few customers need that level of protection? Why provide a firewall when every end-point can install and configure its own? Why provide sophisticated identity and access management when the users know much better what exactly they need with regard to IAM?
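What a security-group-style filter does can be sketched in a few lines. The following is a simplified, hypothetical model – default-deny ingress with a whitelist of allow rules over protocol, port range, and source CIDR – not the actual EC2 API or data model:

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class IngressRule:
    protocol: str   # e.g. "tcp"
    from_port: int  # inclusive start of the allowed port range
    to_port: int    # inclusive end of the allowed port range
    cidr: str       # allowed source network, e.g. "10.0.0.0/8"

def is_allowed(rules, protocol, port, source_ip):
    """Default-deny: traffic passes only if some rule matches,
    mirroring how a security group whitelists ingress traffic."""
    ip = ipaddress.ip_address(source_ip)
    for r in rules:
        if (r.protocol == protocol
                and r.from_port <= port <= r.to_port
                and ip in ipaddress.ip_network(r.cidr)):
            return True
    return False
```

For example, allowing SSH only from an internal network while exposing HTTP to the world takes two rules; everything else is dropped by default. The interesting architectural point is who maintains such rules: the provider in its infrastructure, or each end-point for itself.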
The danger of this attitude is that when cloud providers don’t build security services into their infrastructure, no one may do it. Many users simply won’t make the effort of finding and deploying an appropriate third-party security solution. They will use the cloud service as it is, enjoy the immediate benefits (no capex, immediate access, scaling), and postpone the security problem until later. This is somewhat understandable, since it corresponds to the division of work and responsibilities in most enterprises today: the users and developers are not the administrators, and the administrators are not the security experts. My belief is that cloud providers will be obliged to integrate more and more security services into their infrastructure (similar to Amazon EC2 security groups) and provide APIs for them – and thus adopt a system design that moves security functions away from the end-points into the cloud provider’s own network and infrastructure.