With the amount of data being generated these days, it is increasingly challenging to keep track of which data is exposed to the public and which should be controlled. And control can be imposed at various levels. It can be registration only, so that the number of views can be logged. It can be registration with verification, enabling tracking of which information each user accesses. It can be a complete restriction of access, so that certain data is available only to a small number of people, and so on.
While this is not something completely new, the ideas I want to discuss are of great importance: properly implemented, they add business value; overlooked, they can become a show stopper. There are, of course, a number of vendors on the market offering products that handle security at different levels, as well as various open source solutions, but I think that any product, however sophisticated it may be, is only half of the solution. Without a proper understanding of what is going on in the system and how things should work under the hood, you are more or less bound to get lost. I do understand that different systems have different needs, and thus there is no perfect solution for each and every case, but some concepts can still be considered cornerstones of application security. I would like to discuss some of them.
Let us consider a simple system, e.g. for an online shop, which has a database with tables for products, clients, and orders; some services that call the database; and a front end that is publicly available.
The easiest solution is just to connect components directly with each other and implement security in the front end only, but what good will that do? What if the front-end app gets compromised? Or, even worse, the internal network gets hacked, so that someone inside the system can call the IO services directly? Should we then implement security in each and every component? It can, of course, be achieved by writing a security library reused by all components, but what about updates? Say a new component is implemented using technologies that are not compatible with the existing library. In that case there is no other way than to start a new round of development, which means new tests and potential inconsistencies between components. The result is growing development budgets and slipping release dates. And no business likes that.
PDP, PEP and so on
The information security concepts I mentioned above may not be well known to developers who do not deal with security on a regular basis, but they are, to my mind, quite intuitive and self-describing. What I am referring to is a new layer between the client and server side that handles all security tasks, thus freeing the business-critical components from implementing unrelated functionality. I am talking about the Policy Enforcement Point and related concepts.
Let us first name and describe the things I want to discuss.
PEP – Policy Enforcement Point: The component that acts as a front gate for all incoming requests. Its primary task is to stop requests that are not supposed to come through. In many ways it resembles a firewall, but the difference is that the PEP is not an independent component: in order to allow or deny a request, the PEP asks the PDP whether the request should be allowed.

PDP – Policy Decision Point: The component that actually decides whether a given request is valid and communicates its decision to the PEP. The decision is made using information obtained from the PIP.

PIP – Policy Information Point: The component that contains the values the request is validated against. The simplest example is Active Directory – a database that holds user information such as username, password, group membership, access rights, and so on.
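The interaction between these three components can be sketched in a few lines of Python. Everything here is illustrative: the class and attribute names, the in-memory group store standing in for Active Directory, and the single "only admins may delete" rule are all assumptions made for the example, not part of any real product.

```python
from dataclasses import dataclass


@dataclass
class Request:
    user: str
    resource: str
    action: str


class PIP:
    """Policy Information Point: holds the attributes that decisions are
    based on, e.g. group membership (an Active Directory stand-in)."""
    def __init__(self):
        self._groups = {"alice": {"admins"}, "bob": {"customers"}}

    def groups_of(self, user):
        return self._groups.get(user, set())


class PDP:
    """Policy Decision Point: evaluates the request against attributes
    fetched from the PIP and returns Permit or Deny."""
    def __init__(self, pip):
        self.pip = pip

    def decide(self, request):
        # Example rule: only members of "admins" may delete resources.
        if request.action == "delete":
            return "Permit" if "admins" in self.pip.groups_of(request.user) else "Deny"
        return "Permit"


class PEP:
    """Policy Enforcement Point: the front gate. It never decides on its
    own; it asks the PDP and enforces the answer."""
    def __init__(self, pdp):
        self.pdp = pdp

    def handle(self, request):
        if self.pdp.decide(request) != "Permit":
            raise PermissionError(f"{request.user} may not {request.action} {request.resource}")
        return f"{request.action} on {request.resource} executed for {request.user}"


pep = PEP(PDP(PIP()))
print(pep.handle(Request("alice", "orders/42", "delete")))  # -> delete on orders/42 executed for alice
```

Note that the PEP contains no business rules at all: swap the PDP's rule set and the gate behaves differently without being touched, which is exactly the separation of responsibilities discussed here.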
The way this chain functions can be depicted like this:
And if there is a complex system with several domains separated by DMZs, several PEPs can be deployed to achieve the necessary setup and correct data flow.
Among the other functions that can be implemented in this chain are auditing, i.e. logging some or all of the requests, and integration of different technologies. For instance, if the system contains legacy applications that use an older security token format alongside newer applications with newer protocols, the PEP-PDP chain makes it possible to detect what kind of application is being called and what type of security mechanism it uses, and to perform the necessary actions to transform the incoming token into the expected one. This way it is possible to seamlessly connect various parts of the system together.
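The token-transformation idea can be sketched as follows. Both token formats here are invented for illustration (a `"LEGACY <user>"` string versus a JSON claim set); a real PEP would deal with actual formats such as SAML assertions or JWTs, but the detect-then-transform shape would be the same.

```python
import json


def detect_format(token: str) -> str:
    """Decide which kind of token arrived (illustrative formats)."""
    return "legacy" if token.startswith("LEGACY ") else "modern"


def transform_token(token: str) -> str:
    """Turn a legacy token into the JSON claims the newer services
    expect; pass modern tokens through unchanged."""
    if detect_format(token) == "legacy":
        user = token[len("LEGACY "):]
        return json.dumps({"sub": user, "token_type": "converted"})
    return token


print(transform_token("LEGACY alice"))  # -> {"sub": "alice", "token_type": "converted"}
```

Because this translation lives in the PEP, neither the legacy callers nor the new backends need to know about each other's formats.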
A bit about policies
When experts talk about policies, they often refer to XACML – the eXtensible Access Control Markup Language. It may sound pretty advanced, but many tools operate with this language under the hood, so policies can be created even by non-developers with the help of a GUI. These policies express business rules like “Application X should be available only during working hours” or “For customer A, application X must serve 50,000 requests per second; for customer B, 10,000 requests per second”. Policies are helpful not only for securing applications and maintaining the desired SLA level; they can also be used to keep track of critical KPIs.
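The two quoted rules can be expressed as a small decision function that a PDP might evaluate. This is plain Python rather than XACML, and the concrete numbers, customer names, and the 08:00–17:00 definition of “working hours” are assumptions made for the sketch.

```python
from datetime import time

# Assumed policy data: working hours and contracted per-customer rates.
WORKING_HOURS = (time(8, 0), time(17, 0))
RATE_LIMITS = {"customer_a": 50_000, "customer_b": 10_000}  # requests per second


def within_working_hours(now: time) -> bool:
    start, end = WORKING_HOURS
    return start <= now <= end


def allowed(customer: str, now: time, current_rps: int) -> bool:
    """Permit a request to application X only during working hours and
    only while the customer stays under its contracted rate."""
    return within_working_hours(now) and current_rps < RATE_LIMITS.get(customer, 0)


print(allowed("customer_a", time(10, 30), 40_000))  # -> True
print(allowed("customer_b", time(10, 30), 12_000))  # -> False (over the limit)
```

The same checks double as KPI tracking: logging every `allowed` call gives you per-customer request rates and off-hours access attempts for free.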
But to my mind, one should not get bogged down in XACML and the ways of generating policies and attaching them to contracts. What matters is the layered setup, with separated responsibilities, and an understanding of what really needs to happen during request processing.