All too often, customers are drawn in by what they see in a product demonstration, without understanding or focusing on the system's underlying architecture or design. The system looks good, but only at a superficial level.
Quite often, a small-scale system will be installed and perform well initially. However, as the end user's needs change and the security system necessarily grows to match those changing requirements, greater stresses are inevitably placed on the system's set-up. It's then that hidden design flaws and limitations become more readily apparent.
What, then, are the frequently overlooked requirements for designing a security management system in today's world? And would it be true to say that a software-centred design philosophy will win out over one that relies more heavily on hardware?
Distributed network architecture
In contrast to traditional centralised network architecture, distributed network architecture means that each hardware component of the system is designed with its own processing and decision-making capabilities. Additionally, these Intelligent System Controllers are configured as plug-compatible network modules. The access panel (or access controller) is such a device, and is the fundamental piece of hardware in a security management system.
In a true distributed network environment, all decisions are made at the access controller rather than at the host computer. Ironically, most security systems currently on the market don't satisfy this fundamental requirement and, therefore, do not have distributed network architecture. Furthermore, the vendors of such systems position the fact that their controller must communicate with the host as an advantage or a significant feature. In truth, it's a major design weakness.
If you're talking about a well-designed security management system, the access controller will be a self-contained intelligent device, with its own local processing power and sufficient capacity to store a full database of cardholder and other information required to make real-time access decisions.
Unfortunately, most of the access controllers in use today have limitations in physical memory which preclude them from storing a complete ID card database. This handicap means that, if an ID card is presented to a reader and that card's information is not within the field controller's database, the controller must communicate upstream with the central host computer in order to obtain the information needed to make the critical 'access granted' or 'access denied' decision.
In truth, this can lead to major delays, as the host computer may be busy handling other transactions. If the communication path is through many different network segments, the delay can turn out to be even longer.
Furthermore, if the network is down, the controller can't communicate with the host at all – it simply waits for a response that never arrives, while people with legitimate access rights are left waiting at the door. Controllers that hold a complete database locally, by contrast, are good 'network citizens': they do not add superfluous traffic to a customer's existing network.
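To make the contrast concrete, here is a minimal sketch in Python of the decision path described above (names such as AccessController and HostLink are purely illustrative, not any vendor's actual firmware or API): a panel holding a full local cardholder database answers immediately, while a memory-limited panel is forced into a round trip to the host that may be slow or fail outright.

```python
# Illustrative sketch only: AccessController and HostLink are hypothetical
# names and do not describe any particular vendor's firmware.

class HostLink:
    """Stand-in for the upstream connection to the central host computer."""
    def lookup(self, card_id):
        # In reality this is a network round trip that may be slow, queued
        # behind other transactions, or unavailable if the network is down.
        raise TimeoutError("host unreachable or busy")

class AccessController:
    def __init__(self, local_db, host):
        self.local_db = local_db   # full cardholder database held in the panel
        self.host = host

    def decide(self, card_id, door):
        record = self.local_db.get(card_id)
        if record is None:
            # Memory-limited panels are forced down this path: ask the host
            # and hope both the network and the host respond in time.
            try:
                record = self.host.lookup(card_id)
            except TimeoutError:
                return "access denied"   # legitimate cardholders locked out
        return "access granted" if door in record["doors"] else "access denied"

controller = AccessController(
    local_db={"1042": {"doors": {"lobby", "server room"}}},
    host=HostLink(),
)
print(controller.decide("1042", "lobby"))   # decided locally, no host traffic
```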
The controller is the most reliable piece of hardware in a security management system, with a dedicated communication channel to its downstream devices (readers, alarm input and output points). Potentially, at least, a system can generate a very large number of downstream events.
If communication between the controller and the upstream devices (the host computer, for example) is unavailable (off-line), the controller needs to be able to store all the downstream events received while off-line without losing or overwriting them until communication has been restored.
The controller must be intelligent enough to not only store such information, but to prioritise it as well. For example, events generated at certain critical alarm points might be of an extremely high priority, and would have to be transmitted to the host computer and displayed in the central station monitoring facility ahead of other alarms and events.
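By way of illustration, the sketch below (again with hypothetical names, in Python) buffers events while the host link is down and, once communication is restored, releases the highest-priority alarms first.

```python
import heapq
import itertools

# Hypothetical sketch of an offline event buffer with prioritisation.
# Lower priority numbers are transmitted first once the host is reachable.

class EventBuffer:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()   # preserves order among equal priorities

    def store(self, priority, event):
        # Nothing is lost or overwritten while the host link is down.
        heapq.heappush(self._heap, (priority, next(self._order), event))

    def flush(self):
        # Critical alarms reach the central monitoring station first.
        while self._heap:
            _, _, event = heapq.heappop(self._heap)
            yield event

buffer = EventBuffer()
buffer.store(5, "door held open, loading bay")
buffer.store(1, "forced entry, vault door")       # critical alarm point
buffer.store(3, "access granted, lobby reader")

for event in buffer.flush():    # communication with the host restored
    print(event)                # the vault alarm is transmitted first
```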
When the host computer communicates with the controller, it can send information either as single data record transactions or as a block of data in a single transaction. A well-designed communication protocol between the controller and the host system uses asynchronous, full duplex communication, and sends and receives data to and from the controller in large data blocks optimised for network packet size.
Asynchronous, full duplex protocols
By deploying an asynchronous, full duplex protocol, the host and controller can exchange blocks of data simultaneously without waiting for each other's responses. In reality, most current systems use old protocols employing synchronous, half duplex, single data record transactions. This method of communication, even over a very fast network, is inefficient and becomes a performance bottleneck. Consider this analogy: you might well be driving along a highway that has no speed limit, but if you're at the wheel of an old jalopy you can't hope to take full advantage of the road's speed potential.
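A rough sketch of the difference follows (hypothetical and deliberately simplified, in Python): the old synchronous style waits for an acknowledgement after every single record, while the asynchronous full duplex style streams records in large blocks and handles acknowledgements concurrently.

```python
import asyncio

# Hypothetical illustration of the two protocol styles. send() and recv_ack()
# stand in for real link I/O; the sleeps only mimic network round-trip cost.

async def send(block):
    await asyncio.sleep(0.001)      # one network write, whatever the block size

async def recv_ack():
    await asyncio.sleep(0.001)      # acknowledgement arriving on the return path

async def half_duplex(records):
    # Old style: one record per transaction, waiting for every acknowledgement.
    for record in records:
        await send([record])
        await recv_ack()

async def full_duplex(records, block_size=64):
    # Asynchronous style: large blocks in flight while acks are read concurrently.
    blocks = [records[i:i + block_size] for i in range(0, len(records), block_size)]
    await asyncio.gather(
        *(send(block) for block in blocks),
        *(recv_ack() for _ in blocks),
    )

records = [f"cardholder-{n}" for n in range(256)]
asyncio.run(half_duplex(records))   # 256 round trips, one after another
asyncio.run(full_duplex(records))   # 4 block transfers, overlapped with acks
```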
For many years, most security management system vendors were involved in manufacturing their own hardware. Along the way, they developed a culture in which their system design continued to emphasise hardware functionality and features. Those companies viewed software as a necessary evil. Something they had to provide – for a nominal fee – in addition to 'the box'.
This type of mentality was appropriate years ago, when systems were fairly simple in nature and non-integrated. However, modern systems are much more complex. They're far more likely to be sold as either 'integration-ready' or integrated solutions, incorporating access control, credential management, visitor management, digital video, biometrics, smart cards and other diverse functionality.
As demand for these integrated solutions has increased exponentially, the old hardware-based philosophy has hindered the ability of such systems to scale upward, integrate other components and perform as the systems grow and become more complex.
Limitations of embedded systems
Access controllers are typically embedded systems, and such systems have a number of limitations. First, they have either home-grown or highly specialised embedded operating systems and limited physical memory. Second, they have no database engine in the classical sense, so data organisation and management is very primitive. As a consequence, inserting and deleting/replacing cardholder information is a fairly slow process, particularly when there's a large amount of data.
Consequently, scalability and performance are greatly diminished – because as the system grows there's a large amount of information to manage, primarily as a result of the highly dynamic nature of data in security systems. Therefore, it might not be the network that limits system performance but the firmware in the controller itself. Occasionally, vendors overemphasise the need for high speed communication with the controller, while ignoring the fact that there's still an inherent storage management limitation in embedded systems firmware (which is the real cause of the aforementioned bottleneck).
The basic functionality of an access control system is very simple. On a fundamental hardware level, all systems do pretty much the same thing. They have one central host computer (ie one or many communication servers), one or many access controllers and many downstream devices with hundreds or thousands of input and output alarm points.
The host computer stores information for system administration, and downloads device configurations, cardholder records, etc. The controller makes access decisions, and uploads event information from the downstream readers and alarm points to monitor events.
This basic functionality of hardware/firmware is the same for every manufacturer, and there's very little room for innovation or breakthroughs in terms of new developments. For this reason, a number of forward-thinking companies have now outsourced the manufacture of their controllers. What separates these controllers? The application software each uses.
Due to the inherent limitations of the controller's firmware, the real innovation in a security management system must come from the application software. This software is really the glue that binds a complex system together and makes it work. This has been true in all other realms of computing and electronics. Once a hardware technology is in place, it's the application of that technology – through software – that realises potential and delivers the benefits. Companies that continue to preach the importance of a hardware-based system design are falling behind the norm.
The 'brains' of security
Compare the complexity of firmware on the controller level with the application software, which is on the system integration level...
A controller typically contains less than 256 Kilobytes of firmware code, while a well-written integrated solution easily contains in excess of 256 Megabytes of software code – roughly 1,000 times more software code than firmware code. This is not surprising, since the application software is really the brain of the system. It must interact with many more resources and technologies than does the firmware.
Well-written application software is designed using modern object-oriented technology. This means that many lines of repetitive code are replaced with reusable software objects which form the building blocks of the application. Programs are smaller in size due to the reusability of objects. Well-written application software also uses off-the-shelf libraries of standard objects and the latest technologies and tools available.
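As a simple illustration of that reuse (hypothetical names, in Python), a single device object can serve as the building block for readers, cameras and other hardware alike, instead of repeating the same code for each device type.

```python
# Hypothetical example of object reuse: one base class supplies the shared
# behaviour, and each device type adds only what is genuinely different.

class Device:
    def __init__(self, name, address):
        self.name = name
        self.address = address

    def report(self, event):
        return f"{self.name} ({self.address}): {event}"

class CardReader(Device):
    def badge(self, card_id):
        return self.report(f"card {card_id} presented")

class IPCamera(Device):
    def motion(self):
        return self.report("motion detected")

lobby_reader = CardReader("Lobby reader", "10.0.0.21")
dock_camera = IPCamera("Dock camera", "10.0.0.87")
print(lobby_reader.badge("1042"))
print(dock_camera.motion())
```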
Open architecture is one of the most important requirements in designing a system that works for the customer. Surprisingly, even though open systems are the way forward, many vendors still only pay lip service to this requirement: their products remain based on closed, proprietary architecture.
In other words, everyone's talking about open architecture – but to this day most security management systems still don't encompass it.
Exactly what is open architecture? Open architecture implies that every major software and hardware component of the system, every communication protocol and every interface is designed according to industry standards that allow easy integration with other systems and components.
It should be stressed that a well-designed security management system is based on an open structure that relies on current de facto industry standards in software design, operating systems, networking and databases for seamless integration with the corporate infrastructure.
Integrating with the outside world
In terms of software, the system must support multiple database standards, such as SQL Server, Oracle and DB2. It must support multiple protocols, such as TCP/IP for network communication, XML for data exchange between different applications, SSL for secure communication and LDAP for interfacing to directories and directory services.
The system should also provide a standard way of integrating with the outside world – with other systems and devices. For example, the system must offer standard application programming interfaces (APIs) for ease of integration with different devices such as access control panels, digital video recorders, IP cameras, fire panels, intrusion controllers and intercoms.
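In practice, such an API is a published interface that each third-party integration implements. The sketch below (hypothetical, in Python) shows the shape of the idea rather than any real product's API.

```python
from abc import ABC, abstractmethod

# Hypothetical integration API: every third-party device driver implements the
# same published interface, so the management system never has to deal with
# vendor-specific details.

class DeviceDriver(ABC):
    @abstractmethod
    def connect(self) -> None: ...

    @abstractmethod
    def poll_events(self) -> list[str]: ...

class DVRDriver(DeviceDriver):
    def connect(self) -> None:
        print("DVR: session opened")

    def poll_events(self) -> list[str]:
        return ["video loss, camera 3"]

class FirePanelDriver(DeviceDriver):
    def connect(self) -> None:
        print("Fire panel: session opened")

    def poll_events(self) -> list[str]:
        return ["smoke detector activated, floor 2"]

# The management system treats every integration identically.
for driver in (DVRDriver(), FirePanelDriver()):
    driver.connect()
    for event in driver.poll_events():
        print(event)
```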
End users don't want a jumble of proprietary systems that can't work together. They want integrated, total security solutions that serve their best interests. They don't want to be limited in the third party products they can use. At the same time, they desire to preserve their investment in installed field hardware.
Source: SMT
Postscript
Phil Mailes is director of Lenel Systems International's operations in the UK, Ireland and Sub-Saharan Africa