Information Asset Registers
C-TAG Information Asset Registers Foundation Paper, April 2021
Information Asset Registers are nothing new; the original concepts go back to the ICL 2900 series mainframes under VME/B and the ICL Data Dictionary model. The Data Dictionary was revolutionary because it recorded the physical real-world practices and procedures and mapped them to their logical programs and processes. This meant that business analysts could design systems and services seamlessly in the real world, then map the data attributes into data schemas and taxonomies.
We focus on confidentiality, so we are always thinking about access control and encryption. When we talk about integrity, we are thinking about the information on systems and services being accurate, not tampered with, and supporting non-repudiation. And when we think about availability, this is really about the availability of systems and services, which covers backups and disaster recovery. In pulling together asset registers, we are also concerned with network components and their manufacturers, and with things like the operating system types, versions and patch levels within them. We further need to consider how long it is since the configuration information was updated, if indeed an information asset register exists in the organisation at all.
We need to understand how long it will be until these systems are due to be replaced. The main reason for wanting this is that if we find a particular vulnerability or exploit appertaining to a certain manufacturer or type of kit, knowing who has that specific equipment, and at what patching level, will help quickly determine whether an organisation is at risk of a systems breach or compromise if attacked with that exploit. If they are adequately patched, they may not be vulnerable.
Knowing the profile of equipment deployed, the relevant patching and software versions, and how they are configured will help network defenders. Those charged with national network defence will also be able to quickly and efficiently contact affected organisations, tell them about vulnerabilities and provide actionable intelligence to help them defend their networks and infrastructure.
The main international standard for Information Security is ISO 27001, which covers a number of domains relating to Information Security. We must, however, also consider other aspects such as Information Risk Management, Assurance and Governance. Other ISO standards cover these areas, so even the standards landscape is extensive and complex. There is a hierarchy that covers elements from the physical network through to servers, operating systems, applications, data and access control. These elements are all interlinked, and it is proposed that they should not be considered in isolation. This paper proposes an approach and flags some of the core issues and questions.
This paper is also a foundation for further research in the area and explores a novel deployment of some social science research methods and approaches. The overall information system comprises all of the components (attributes) necessary for its operation; the hub of the system is the server which hosts the application. There will be a number of supporting components, including the file storage, the access control system and the supporting operating system.
Many systems today run on, and are supported by, databases. Servers themselves need to be accessed: in the old days terminals were hard-wired to servers, whereas today we generally access a server through a network. This can be a local area network (LAN) or a wide area network (WAN). Today we tend to use the Internet as an integrated part of the corporate infrastructure, by deploying VPNs (Virtual Private Networks). These VPNs then connect either to on-premises servers or to cloud services, such as Amazon, Microsoft or Google. There is an emergent theme of multi-cloud and hybrid cloud (both on-premises and Public Cloud based). When we mention Public Cloud we mean a Virtual Private Cloud (VPC).
All of the network and infrastructural components need to be identified, quantified, risk assessed and assured. This is especially important in the context of Zero Trust Computing and as an aid to Operational Cyber Resilience and Response planning and coordination.
This paper proposes an approach to identify, quantify and report on the components in an organisation's infrastructure. The proposed approach covers both the hardware (whether logical, physical or conceptual) and the software systems, to provide a heterogeneous taxonomy for planning, Cyber defence, assurance and incident management.
The reason for this type of granular consideration is to ensure all components of the system are taken into account, because attackers will try to exploit any available attack vector. Most attacks are predicated on emails, websites and direct attacks on Internet-facing servers, and good documentation and asset registers will enable rapid identification of vulnerable information assets. We refer to network and infrastructure equipment as information assets because they are configurable and can therefore be assessed against Confidentiality, Integrity, Availability and Non-Repudiation.
Asset Descriptions and Registers
Information Asset Registers have been in use for some time and are acknowledged by the Information Commissioner's Office (ICO). In the context of this paper, we propose a wider and deeper use of Information Asset Registers to annotate and record the network and infrastructure components deployed within an organisation. The concept was first explored by the author in a previous paper in 2021.
There are a number of academic research methodologies that are useful in this space, and a mixed-methods approach is being taken to undertake and understand this work. The overarching approach is to use qualitative methods within a practice-based research framework.
The actual project around the replacement of PSN (Public Sector Network) compliance is effectively a live real-world problem, requiring tools and techniques to understand, analyse and work towards solving it. An Agile approach to the process is being used; whilst not a formal research method, it does provide useful context and will foster better understanding of the constructs and issues by stakeholders, namely Local Authority compliance and security managers.
The Agile methodology is widely used and understood in central and local government in England and Wales. MHCLG Digital use agile as their delivery method, so any proposals we make will be of greater use if they interface with agile. The NCEF (NLAWARP Cyber Exploitation Framework), developed and presented at the Cyber Practitioners Conference in York in 2017, is one such conceptual framework; it acts as an aide-memoire to security architects and network defenders in an agile environment. These frameworks were developed after consulting with a number of regional WARPs (Warning, Advice and Reporting Points) from 2013-17.
Figure 1 The NCEF Framework
The NCEF framework is predicated on a number of questions to help shape and refine the infrastructure design. This approach helps walk the architect or agile product designer through the landscape to build a profile and ensure all necessary steps are covered to form a holistic approach to zero trust design from an information security / assurance perspective.
NCEF1 What does the network look like, discovery, diagrams and documentation?
NCEF2 What's happening on the network, logs and monitoring?
NCEF3 What does normal network traffic look like (SIEM)?
NCEF4 What bad stuff is out there on the network (detection)?
NCEF5 Do we have bad stuff? - Use tools for NCEF3 & Mitre Att&ck Framework
NCEF6 How do we remove our bad stuff from the network?
NCEF7 How do we keep bad stuff out of the network?
NCEF8 How do we respond to the bad stuff, through incident response (Develop playbooks)?
NCEF9 How do we report bad activity (Security Incident & Event Management (SIEM))?
NCEF10 How do we prepare and practice dealing with bad stuff (Cyber Resilience Exercising)?
© NLAWARP 2017-21
Figure 2. NCEF enquiry questions
A Conceptual Framework is a way of mapping and showing the relationships between a collection of variables, some fixed and some dynamic. In this case the variables are network components and information governance issues. Once you have identified your variables, they can be assembled, mapped and clustered together; this clustering starts to show relationships and helps the formation of categories, using Grounded Theory to produce data clusters. Management students will be familiar with the Business Model Canvas and similar canvasses and approaches. Many modern tools, such as the agile "Kanban" approach and software like Trello and MIRO, fit beautifully with Grounded Theory and conceptual frameworks. These in turn fit with Systems Thinking, Wicked Problems, Wardley Maps and weak signals, which in Grounded Theory are outlier variables. I have explored some of these issues in a paper on Horizon Scanning.
Quantification of information assets using Grounded Theory allows for the categorisation of Information Assets in the same way that Grounded Theory allows for the categorisation of issues within a community. It is hoped that the introduction of these Social Science research methodologies into areas traditionally serviced by Software Engineering and other Computer Science methodologies will prove innovative and useful to other researchers. In developing this work I have been influenced by the Deep Work approach and the Zettelkasten, which have helped to shape the structure. I believe this approach brings a whole range of qualitative Social Science tools into play in a novel and innovative way that not only helps map the landscape, but also helps to identify some of the soft cultural issues that affect information management and governance. The Covid-19 pandemic of 2020/21 has forced many organisations to work from home and to collaborate and operate in a virtual environment.
The SCRAPE Framework
It is contended that Information Asset Registers are an essential part of Cyber Security, Information Assurance and Cyber Incident Response going forward. There is anecdotal evidence that in some Local Authorities Information Asset Registers do not exist for this purpose. This view has been formed over the past few years through discussions with Local Authorities during Cyber Incidents, through online forum discussions and during Cyber Incident Response Training.
Therefore the proposal is to offer Local Authorities an approach to developing an Information Asset Register and to implementing it as part of their Cyber Incident Response Planning.
As we are advocating an approach to move from static plans to dynamic playbooks, Information Asset Registers will be a very useful planning and response tool.
Whilst thinking about this problem and a practical approach to implementation, it is worth being clear about terms. When we talk about systems in this context, we are referring to a discrete system, for instance Housing Benefits or Council Tax. A system can also be a service, such as Microsoft 365. Systems and services will be made up of a number of elements, for instance servers, operating system, database, programming language, scripting, configuration files and data files. The systems of today are very different in their composition from those of twenty years ago. The most simple Information Asset Register will comprise a series of linked records which describe the functional layout and composition of the system. This could physically be a text document, spreadsheet or database.
We must think not only about the structure and layout of the Information Asset Register, but also about how it will be constituted, stored and published. These Information Asset Registers could potentially be a valuable asset for attackers and those who wish to cause harm or disruption.
Thought must therefore be given to the creation, storage, publication and use of these Information Asset Registers.
There are a number of useful descriptors and approaches that may be of use to researchers in this field and could be the subject of further research and reporting. These include:
- Systems Thinking 
- Complexity 
- Weak Signals 
- Nudge Theory 
- Cynefin 
- Wicked Problems 
- Wardley Maps 
When the term cartography is used in this context we mean mapping, that is, the visual and textual documentation, illustration and recording of the physical, logical and conceptual layout of the information that forms the Focus of Interest, in this case the service or system being documented in the Information Asset Register. Some very useful work in this area is Domain Based Security, referred to as "DBSy", a process extensively used in the Ministry of Defence; although now thought of as a legacy approach, it is still worth reading and understanding.
Mapping complex interlinked systems is even more important as we move to a cloud-based eco-system, which can comprise a hybrid multi-cloud approach, that is, components of physical servers on premises inter-linked with public cloud services from multiple different vendors. These interconnections must be mapped and the documentation kept up to date; ideally this is done automatically through the use of metadata and automated module communication.
Many system components can be open source, and these utilise platforms and tools such as GitHub. The modern systems development process, referred to as "DevOps" in the agile world, also has a security approach called DevSecOps (Development, Security, Operations). These processes in turn mean that program code is developed, tested and deployed through a federated approach called CI (Continuous Integration). Much of this is automated, and the whole code-to-production pipeline (live running and operations) is carried out at scale and often fully automated.
There are a number of concepts and approaches that have formed the thinking around the Cartography element of this model; these are worth further investigation:
- Mind Maps
- Architectural Diagrams
- Symbols & Lexicons
- Systems Mapping
- Domain Based Security
- Security domains and mapping
Because of the federated nature of agile cloud-based systems, it is necessary to have authoritative lists of data items, some of which are fixed, for instance the recognised countries of the world used by the banks (https://bank-code.net/iban/country-list) and country prefixes for international telephone dialling. There are also registers on the .gov.uk website that are definitive:
https://www.registers.service.gov.uk. Registers are therefore an approach worth consideration in the context of Information Asset Registers. We must, however, be mindful of the security implications and the "Equity" (the usefulness to a hacker), so these register entries will need to be pseudo-anonymised. To facilitate pseudo-anonymisation, we propose a CUON (Cyber Unique Organisational Number), which would be randomly allocated to an organisation in a similar way to a private and public key.
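As a minimal sketch of one possible pseudo-anonymisation scheme: the proposal above is random allocation, but a keyed hash (HMAC) achieves a similar effect without the register operator having to store a lookup table. The secret key, the CUON layout and the function name below are all assumptions for illustration, not a proposed standard:

```python
import hashlib
import hmac

# Held privately by the register operator; plays the "private key" role
# described above. Illustrative value only.
SECRET_KEY = b"register-operator-secret"

def cuon_for(org_name: str, year: int) -> str:
    """Derive a stable pseudonym (hypothetical CUON format) for an
    organisation, without publishing the organisation's name."""
    digest = hmac.new(SECRET_KEY, org_name.encode(), hashlib.sha256).hexdigest()
    return f"CUON-{year}-{digest[:12]}"

cuon = cuon_for("Anytown Borough Council", 2021)
```

The design trade-off: a keyed hash is repeatable (the same organisation always maps to the same CUON, which registers need), yet without the key an attacker cannot link the pseudonym back to the organisation.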
Registers are also extensible, like postcodes. Once components have been declared, other organisations with the same components would be able to copy the entries; this would speed up the whole process, enabling fast and accurate database population of asset components. This would in turn lead to a standardisation of threat profiles, compensating controls and architectural patterns. It could make a huge difference to local authorities through standard threat profiles, the contention being that all Council Tax systems have the same data and asset value. Once a system has been profiled, all councils would be able to use the same profile; any variations would also be recorded, and a huge amount of effort could be saved. Defining and saving these threat profiles, and in time asset register entries, in XML or similar makes them machine-readable, and this opens the possibility of further work looking at agent- and API-based automated approaches.
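As an illustrative sketch of such a machine-readable entry, using Python's standard library XML support (the element names here are our own, not a proposed schema):

```python
import xml.etree.ElementTree as ET

# Build a minimal, machine-readable register entry. In a real register
# the schema would need to be agreed and the entry pseudo-anonymised.
entry = ET.Element("asset")
for tag, text in [("manufacturer", "Cyber Sure"),
                  ("classification", "OFFICIAL"),
                  ("lastPatched", "2021-02-12")]:
    ET.SubElement(entry, tag).text = text

# Serialise to bytes, ready to store or exchange between organisations.
xml_bytes = ET.tostring(entry)
```

Because the entry is structured rather than free text, another council with the same component could copy it verbatim, and tooling could diff or query entries automatically.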
The mapping of attributes will, it is contended, be a journey of iteration. To start with, key components will be identified to form the core of a taxonomy, for instance:
Active Directory Servers
Network Attached Storage devices
This approach is laid out in detail in the NIST SP 1800-5 document:
Taking a firewall as an example:
Location: Server Room 102b
Asset number: 21/45634
Classification Level: OFFICIAL
Manufacturer: Cyber Sure
Build level: 126.96.36.199
Last patched: 12/02/2021
IP Address or identifier: 10.3.4.56
Record Date: 210215
Record version: 1.0
Figure 3 Example Information Asset Register record format
The above is a simple example, but it means there is a definitive record for the asset.
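The Figure 3 record can be represented as a simple structured type; the Python sketch below is illustrative only, with field names of our own choosing rather than a prescribed schema:

```python
from dataclasses import asdict, dataclass

@dataclass
class AssetRecord:
    """One register entry, mirroring the fields in Figure 3."""
    location: str
    asset_number: str
    classification: str
    manufacturer: str
    build_level: str
    last_patched: str        # ISO dates would be preferable in practice
    ip_or_identifier: str
    record_date: str
    record_version: str

firewall = AssetRecord(
    location="Server Room 102b",
    asset_number="21/45634",
    classification="OFFICIAL",
    manufacturer="Cyber Sure",
    build_level="126.96.36.199",
    last_patched="12/02/2021",
    ip_or_identifier="10.3.4.56",
    record_date="210215",
    record_version="1.0",
)

# asdict() gives a plain mapping, easy to export as a spreadsheet row,
# database record or XML/JSON entry.
row = asdict(firewall)
```

Even this trivial structure enforces that every record carries the same fields, which is what makes later searching and pooling of entries possible.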
The CUON comprises: the organisation ID, the year of allocation and the unique reference number for the firewall. The key point is that a CERT or other authorised entity could search for Cyber Sure model 345/t firewalls and find all of the organisations that have them recorded. A further refined search could be on build level [188.8.131.52], which could be an old build subject to a zero-day CVE exploit.
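To illustrate the kind of query a CERT might run, here is a minimal Python sketch over hypothetical pooled register entries (the CUON values, model number and build levels are all invented for the example):

```python
# Hypothetical pooled register entries, already pseudo-anonymised to CUONs.
records = [
    {"cuon": "a1b2c3d4-2021-000045", "manufacturer": "Cyber Sure",
     "model": "345/t", "build_level": "188.8.131.52"},
    {"cuon": "e5f6a7b8-2020-000112", "manufacturer": "Cyber Sure",
     "model": "345/t", "build_level": "126.96.36.199"},
]

def find_at_risk(records, manufacturer, model, vulnerable_build):
    """Return the CUONs of organisations running the vulnerable build."""
    return [r["cuon"] for r in records
            if r["manufacturer"] == manufacturer
            and r["model"] == model
            and r["build_level"] == vulnerable_build]

at_risk = find_at_risk(records, "Cyber Sure", "345/t", "188.8.131.52")
```

Only the organisation holding the old build is returned, so notification can be targeted without contacting every organisation that owns that model.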
This would save a lot of time and effort. One option is an agent-based system, for instance HUGINN (https://github.com/huginn/huginn). The agent-based approach is a push/pull system: the updated contents of a database wait until polled for an update, and bespoke workflows are put together. This node-based store-and-forward approach could be incorporated into a CERT (Computer Emergency Response Team) or form part of a hierarchic network, for instance linking all of the Local Authorities in Wales through regional nodes. This was discussed in a CSIRT paper, referencing Cybershare as a model that could achieve this.
This type of asset register could be automated and integrated into a STIX and TAXII type infrastructure: https://stixproject.github.io. However, as previously discussed, the issues of security and pseudo-anonymisation have to be considered.
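By way of a hedged sketch, a register-derived entry could be expressed as a STIX 2.1-style JSON object, for instance an `infrastructure` SDO. The field values below are invented, and the STIX 2.1 specification should be consulted before any real integration:

```python
import json
import uuid
from datetime import datetime, timezone

# STIX objects are timestamped JSON; the id combines the type with a UUID.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
obj = {
    "type": "infrastructure",
    "spec_version": "2.1",
    "id": "infrastructure--" + str(uuid.uuid4()),
    "created": now,
    "modified": now,
    # Pseudo-anonymised owner: a CUON rather than the organisation's name.
    "name": "CUON-2021-000045 boundary firewall",
}
payload = json.dumps(obj)
```

Expressing entries this way would let a CERT push them over TAXII alongside other threat intelligence, while the CUON keeps the owning organisation pseudonymous.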
Other areas for further consideration are:
- Cataloguing functional and non-functional requirements
When we discuss patterns in this context, we propose that a pattern shows the linkages between elements of an Information Asset Register and how the individual components form a coherent system or service. The DBSy references previously discussed and the Data Dictionary reference are both good examples of elemental linkages. The rationale for needing these descriptors is that ultimately we need to follow the data. A Data Protection Impact Assessment (DPIA) may well have a diagram showing the flow of information through a system or service.
Service Transaction Mapping (https://insidegovuk.blog.gov.uk/2018/02/07/how-we-approached-service-mapping/) is a good example of how this looks in practice. We contend this is valuable in working through Cyber Resilience Planning, and it has immediate utility for Cyber Incident Response when you are making sense of what has happened after an attack.
When systems were written in house, it was possible for the programmer to understand the entire system. Today systems are far more complex and can be distributed and inter-linked. This is why documentation is so important.
There are various standards and approaches to security architecture that may be of interest for further research.
Following on from the author's December 2020 Horizon Scanning paper, which referenced the work of William Barker describing the implications of Digital Ethics, we need to consider these in the context of information assets and Cyber Security.
In the UK, official bodies such as the Office for Artificial Intelligence, the Centre for Digital Ethics and the Information Commissioner's Office are working closely with the Digital Ethics Lab, the Alan Turing Institute, the Open Data Institute and Digital Catapult in championing digitally ethical practice across the UK public sector.
Most recently, GCHQ has published an ethics strategy paper, The Ethics of AI: Pioneering a New National Security, which looks at the future ethical role of the technology in dealing with crimes such as child abuse and human trafficking, and with threats from disinformation. Similarly, in the wake of Covid-19, the NHS AI Lab is introducing the AI Ethics Initiative to ensure that AI products used in the NHS and care settings will not exacerbate health inequalities.
Taken together, we are seeing an emerging set of common core values or attributes, built upon the combined disciplines of bioethics and responsible AI (see Figure 4 below), that can inform wider digital and cyber ethical practice:
Figure 4 Ethics Framework (Barker 2020)
- Beneficence: do good. Benefits of work should outweigh potential risks.
- Non-maleficence: do no harm. Risks and harms need to be considered holistically, rather than just for the individual or organisation.
- Autonomy: preserve human agency. To make choices, people need to have sufficient knowledge and understanding.
- Justice: be fair. Specific issues include algorithmic bias and equitable treatment.
- Explicability: operate transparently, so that how systems work and their outputs can be explained.
A further exploration of Architectural patterns
Pulling this all together, the mapping of components, their inter-relationships, implementation, configuration and protective controls can all be drawn together in the form of a security architectural pattern.
One of the best ways to ensure good security practices is to observe bad ones; this is where security "anti-patterns" come in useful: https://www.ncsc.gov.uk/whitepaper/security-architecture-anti-patterns
The Information Asset Eco-System
Back in 2017, some work was undertaken to consider the key questions relating to network protection and defence. These questions were designed as an aide-memoire for Information Governance professionals to understand Information Assurance issues. This has now been developed further to help visualise what an information asset eco-system may look like.
Figure 5 Information Asset Eco-System
Lego building Blocks
This approach is very good for explaining to senior leaders and non-technical people how components link together. It can be used for Risk Management modelling and as a planning aide for Cyber Security exercises.
ISACA have also published a useful article that discusses the use of Lego models for Cyber decision making and risk management .
Implementation Approach: The 5 D's
This methodology was developed by the author and was tested by a group of London Boroughs in 2009 through the LGA. The approach takes you through Information Asset identification and classification. This helps determine the relative value of an Information Asset.
© Mark Brett 2009-21
Figure 6 The 5 D’s of Information Asset Registers
- A trawl of Information Assets – this is the difficult bit, and the SCRAPE process already discussed can help with it.
- What assets exist? You need to understand what you have, how the assets physically or logically exist, where they are, and whether they are backed up against cyber-attack.
- What are their inputs / outputs? Asset and system linkages are critical to enabling incident management and recovery. Linked assets need to be viable, that is, all of their linked parts exist and are accessible.
- What linkages exist? Without the linkages, you can't restore a working system.
- Who owns the asset? Every Information Asset must have an owner. The acid test is, who would miss it most if it were permanently destroyed?
- Who is responsible for the asset? As above; alongside the Owner there is the team responsible for its maintenance, operation and use.
- Who controls the asset? How is it delivered, through a system or a service?
- Who can authorise the processing and disclosure?
- What is the business impact level of the asset? That means if it’s lost how much “harm” would it cause? [REF] to Harm modelling….
- What is it’s Data Protection Status? Does the Asset contain Personal Data?
- Who is authorised to process the asset? Again Data Protection status.
- What protective measures are required? This is about the Information Assurance of the asset.
- Where will the asset be created, stored and processed?
- Will the asset be transmitted?
- Will the asset be copied?
- Will the asset be controlled?
- Who will process it?
- What is the compliance/monitoring/audit regime?
- Who will authorise the destruction of the asset?
- How will you know if all copies are destroyed?
- Do you need to retain a copy for legal/compliance purposes?
- How will you destroy the asset?
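The questions above can be tracked per asset with a very simple structure. In the Python sketch below, the phase groupings are our own reading of the list (Figure 6 is not reproduced here), so the labels should be treated as illustrative rather than the official 5 D's:

```python
# Lifecycle questions from the list above, grouped under hypothetical
# phase labels for illustration only.
QUESTIONS = {
    "identify": ["What assets exist?", "What are their inputs/outputs?",
                 "What linkages exist?"],
    "ownership": ["Who owns the asset?", "Who is responsible for it?",
                  "Who controls it?",
                  "Who can authorise processing and disclosure?"],
    "impact": ["What is the business impact level?",
               "What is its Data Protection status?",
               "Who is authorised to process it?"],
    "handling": ["Where is it created, stored and processed?",
                 "Will it be transmitted, copied and controlled?",
                 "What protective measures are required?"],
    "disposal": ["Who authorises destruction?", "Are all copies destroyed?",
                 "Is a retained copy needed?", "How will it be destroyed?"],
}

def open_questions(answers: dict) -> list:
    """Return the questions not yet answered for a given asset."""
    return [q for qs in QUESTIONS.values() for q in qs if q not in answers]

remaining = open_questions({"Who owns the asset?": "Revenues & Benefits team"})
```

Even a spreadsheet with one column per question achieves the same end; the point is that the register makes unanswered lifecycle questions visible per asset.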
Linking Information Risk, Information Assurance and Incident Management
These tools and techniques are part of wider Cyber Incident Management; a detailed approach is explored in the author's incident response policy primer and guide. The SCRAPE approach previously discussed provides a practical framework to facilitate the scoping and identification phase of Cyber Incident Planning. Likewise, the 5 D's provide a structured approach to augment Cyber Incident planning and management. Public Sector organisations can make full use of the National Cyber Security Centre (NCSC) Active Cyber Defence (ACD) tools and services.
Logs / Time Sources / Network Diagrams / Documentation
The SCRAPE approach above was devised to draw together the key non-functional requirements for Cyber Incident Management and Response: making artefacts unique (developing a descriptive taxonomy for asset identification, version control and management), with further applications for incident reporting. These are discussed in detail in the NIST Incident Response Guide. Once you have identified the assets and catalogued them, you can then start to evaluate the assets and their inter-relationships. All of the attributes are, as discussed, causal variables. Identifying and documenting the attributes will lead to the creation of a taxonomy (see the NIST asset implementation guide), which can then be mapped against the Mitre Att&ck Framework; this will expose the vulnerabilities and attack vectors that can be exploited through the Cyber kill chain. We mitigate these attack vectors through compensating controls.
Information Asset Registers aren't new; the Data Protection Act, the Freedom of Information Act and the work of the Information Commissioner's Office have highlighted the need for them. The ITIL framework, too, has asset registers at its heart. Many Councils claim to have them, yet they are not well understood. We believe they are highly valuable artefacts for better understanding Information Risk and Assurance and for aiding incident response. Automated discovery tools such as NMAP and Spiceworks can help make the job a lot easier.
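As an illustration of how discovery output could seed a register, the sketch below parses a tiny hand-written fragment that mimics a small subset of NMAP's XML report layout (as produced by `nmap -sV -oX report.xml`); the sample data and function name are assumptions:

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written fragment imitating part of nmap's XML output.
SAMPLE = """<nmaprun>
  <host>
    <address addr="10.3.4.56" addrtype="ipv4"/>
    <ports>
      <port protocol="tcp" portid="443">
        <service name="https" product="Cyber Sure firewall"/>
      </port>
    </ports>
  </host>
</nmaprun>"""

def hosts_and_services(xml_text: str):
    """Extract (address, [service names]) pairs to seed register entries."""
    root = ET.fromstring(xml_text)
    out = []
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        services = [p.find("service").get("name")
                    for p in host.iter("port")
                    if p.find("service") is not None]
        out.append((addr, services))
    return out

inventory = hosts_and_services(SAMPLE)
```

Discovery output like this gives a starting inventory, but it still needs the human-supplied attributes from Figure 3 (owner, classification, patch history) before it becomes a usable register entry.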
The next article will explore an approach to address the changing dynamic and the need for remote coordination and response.
Future studies may well confirm an acceleration towards cloud-provisioned software and zero trust computing services. I am also concerned with the need to review and change Cyber resilience plans; Incident Response and Crisis Management may well need to be delivered remotely rather than in the traditional face-to-face manner. There is a need to understand fast-time communications, using various channels and software applications. An approach to fast-time communications for incident response and Cyber resilience in the context of UK Local Government will be discussed in that article. This will concentrate on the formation of Cyber Technical Advisory Cells (C-TACs) and an exploration of adapting the JESIP Framework to Cyber.
References (All accessed April 2021)
Brett, M. (2021) 'An overview of current issues and practice relating to local government cyber security in England and Wales', Cyber Security: A Peer-Reviewed Journal, Vol. 4, No. 4, pp. 1-13, Henry Stewart Publications
ITIL CMDB: https://www.axelos.com/best-practice-solutions/itil/what-is-itil
Shrivastwa, A. (2018) Hybrid Cloud for Architects, Packt Publishing
Agile methodology in UK Government: https://www.gov.uk/service-manual/agile-delivery
MHCLG Cyber: https://mhclgdigital.blog.gov.uk/category/cyber/
DBSy: Katam, S., Zavarsky, P. and Gichohi, F. (2015) 'Applicability of Domain Based Security risk modeling to SCADA systems', 2015 World Congress on Industrial Control Systems Security (WCICSS), London, UK, pp. 66-69, doi: 10.1109/WCICSS.2015.7420327
Newport, C. (2016) Deep Work: Rules for Focused Success in a Distracted World, Grand Central Publishing
NIST Asset Registers: http://doi.org/10.6028/NIST.SP.1800-5