Digital Ethics and Data Handling

This discussion shows the linkages between the Digital Ethics Framework and the Data Handling Guidelines.

Background & Context

The relationship between data ethics and data handling may not be immediately obvious. However, over the last few years a lot has changed with automation, machine learning and related developments, and we have started to think about a new model that explains what this looks like. The work carried out by William Barker has developed the Digital Ethics model.

The principles of Digital Ethics:

· Beneficence: do good. Benefits of work should outweigh potential risks.

· Non-maleficence: do no harm. Risks and harms need to be considered holistically, rather than just for the individual or organisation.

· Autonomy: preserve human agency. To make choices, people need to have sufficient knowledge and understanding.

· Justice: be fair. Specific issues include algorithmic bias and equitable treatment.

· Explicability: operate transparently, so that a system's workings and its outputs can be explained.
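The five principles above lend themselves to a simple review checklist that could be asked of any new system or data set. The sketch below is purely illustrative: the prompt wording and the `ethics_review` helper are assumptions for demonstration, not part of the framework itself.

```python
# Illustrative sketch: the five Digital Ethics principles as review prompts.
# The question wording is an assumption, not taken from the framework.
PRINCIPLES = {
    "Beneficence": "Do the benefits of this work outweigh the potential risks?",
    "Non-maleficence": "Have risks and harms been considered holistically?",
    "Autonomy": "Do people have enough knowledge to make informed choices?",
    "Justice": "Have algorithmic bias and equitable treatment been assessed?",
    "Explicability": "Can the system's workings and outputs be explained?",
}

def ethics_review(answers: dict) -> list:
    """Return the principles whose review question was not satisfied."""
    return [p for p in PRINCIPLES if not answers.get(p, False)]
```

Used this way, an empty result means every principle has been positively answered; anything returned flags a principle that still needs attention before deployment.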

The latest version of the Data Handling Guidelines includes a more expansive description of this work.

There are a number of conceptual and logical aspects that need to be taken into account. Traditionally in ICT we started with the strategy, which defines the overall objectives, vision and mission; these are normally encompassed through policy.

A set of principles, or a more defined and specific policy, gives us the handrails and defines both the scope and the exclusions (red lines) relating to that policy. A policy, when properly defined, should always use action-centred language, that is verbs and actions, which can then be quantified, turned into key lines of enquiry and measured through defined metrics.

Figure 2. The new Integrated Approach © M Brett 2021

The next stage of the Data Handling Guidelines leads to the processes and procedures: the operational aspects of ICT and, finally, the tactical aspects, which are about keeping the service running and dealing with things when they go wrong. The resilience and incident response aspects pick those up.

We have considered the supporting aspects of that traditional view of strategy, policy, operations and tactics. This part of the conversation also includes information governance, which is where the Data Handling Guidelines really start. Its importance has become more and more apparent over the last couple of years with the emergence of artificial intelligence especially, and with that being embedded in a growing number of devices through the Internet of Things.

An emergent theme is that of Cyber Physical Systems (CPS). CPS reflect operational capabilities, and digital ethics are now absolutely critical to them. The Digital Ethics Principles themselves support strategic decision making through the ethics embedded in algorithms and machine learning.

The in-built decision logic or machine learning frameworks must have gone through an ethical check, as many CPSs will be deployed as "fire and forget", perhaps in service for many years, sitting in the corner doing what they do without any further thought or intervention.

We have to make sure that data is accurate, relevant and timely, which has always been supported through Information Assurance (IA). Information Assurance has always considered the confidentiality, integrity and availability of information. The ethical dimensions make sure that the information does not cause harm and actually protects the individual.

Artificial intelligence is generally encompassed in algorithms, and even the UK Data Protection Act now has specific protections for citizens, businesses and data users around the right to challenge automated decisions made on your behalf. These aspects have been enshrined in data protection law, yet their significance is often misunderstood by lay people.

To help articulate this, the Data Handling Guidelines are a good framework, in so far as their overarching principles have always been based around people, places, policies, processes and procedures. The Data Handling Guidelines were originally written back in 2008, in response to the loss of two CDs containing a huge UK-wide data set.

Figure 3. Data Handling Guidelines Over-arching domains © Mark Brett 2021


The people aspect reflects the fact that people are often the weakest link in the process, whether that's maliciously or through mistakes and errors.


The places aspect looks at physical security, and of course places now also covers smart technologies, place-based technology and the like.


Policy remains the heart and central aspect of the Data Handling Guidelines because, as the previous model shows, policy is how we actually hook the world of strategy, with its vision-driven concerns, to the world of the operational and tactical.


The processes aspect is how we engage with the exchange of information and data within systems. Systems are all woven together and interlinked. There are always data flows, and these are picked up through data schemas, information taxonomies, metadata and interfaces. Likewise, data handling and the Data Protection Act look at cloud-based systems as much as they do physical on-premise systems. So the processes aspect of the Data Handling Guidelines picks up all of the data and information flows between systems and technologies.
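One way to make those data flows concrete is to record each one as a small structured entry, carrying the schema reference and handling classification alongside it. The sketch below is a minimal assumption-laden illustration (the field names and the `OFFICIAL` marking are examples, not the official model); a data flow diagram is then just a rendering of records like these.

```python
from dataclasses import dataclass

# Illustrative only: one record per data flow between two systems,
# with the schema and classification metadata that should accompany it.
@dataclass(frozen=True)
class DataFlow:
    source: str          # system the data leaves
    destination: str     # system the data enters
    data_set: str        # name of the data set being exchanged
    schema: str          # reference to the data schema / taxonomy entry
    classification: str  # handling marking, e.g. "OFFICIAL" (example value)
    cloud_hosted: bool   # cloud and on-premise flows are both in scope

flows = [
    DataFlow("CRM", "Reporting", "customer_contacts",
             "schemas/contacts-v2", "OFFICIAL", cloud_hosted=True),
]

for f in flows:
    print(f"{f.source} -> {f.destination}: {f.data_set} [{f.classification}]")
```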


The procedures are the final, gritty part, where things get written down, actually followed, and taken on the customer journey. Under the Data Protection Act we need to have Data Protection Impact Assessments, and these DPIAs themselves trace the logical journey, the customer journey, through the data sets, so there should always be a data flow diagram.

As we said earlier, if these things are being automated through the Internet of Things (IoT), which encompasses Cyber Physical Systems (CPS), then ethics has to come in right at the start. The contention now is that data ethics is as important as information security and assurance in making sure that a system is safe, secure and fit for purpose. To facilitate this, a C-TAG paper we produced around information assets explains the "SCRAPE" framework.

Figure 4. SCRAPE Framework © Mark Brett 2021


The Systems aspect makes sure that we undertake risk assessments, understand the value of the data, and completely lay out how a system looks, in its components, at both the physical and logical level. These are known as high-level and low-level designs, and they are articulated through functional and non-functional requirements.


This leads on to Cartography, which is basically about diagramming: making sure that the whole of the data flows, the taxonomies and the information schemas are all mapped out.


Registers consider how the information is put together, and you then have orthogonal data sources that are immutable. For instance, if you have a look on the website, there is an immutable list of all of the countries that are officially recognised on the planet. That list needs to be immutable because it needs to be a final point, a single source of the truth. We need registers to say what the truth looks like, and that can then be applied against information integrity checks to make sure that nothing has been altered.
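The pairing of an immutable register with an integrity check can be sketched very simply: hold the entries in a structure that cannot be modified in place, publish a digest of them, and verify any copy against that digest. This is a hedged illustration only; the three-entry country list and the function names are placeholders, not a real register.

```python
import hashlib

# Placeholder register: a real one would carry the full official list.
# A tuple is immutable, so entries cannot be altered in place.
COUNTRY_REGISTER = ("France", "Germany", "United Kingdom")

def register_digest(entries):
    """SHA-256 over the entries in order; any alteration changes the digest."""
    return hashlib.sha256("\n".join(entries).encode("utf-8")).hexdigest()

# The digest published alongside the register becomes the integrity reference.
PUBLISHED_DIGEST = register_digest(COUNTRY_REGISTER)

def verify(entries, expected):
    """True only if this copy of the register matches the published digest."""
    return register_digest(entries) == expected
```

Any consumer of the register can then run `verify` before trusting a copy, which is exactly the "single source of the truth" check described above.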


Attributes looks at information attributes and the way that information is structured, in terms of confidentiality, integrity and availability, but the ethical dimensions need to be laid across the top of those as well.


Patterns. When we talk about a pattern, that's something like an architectural diagram or a pattern for doing something in a set way; likewise there are ways of how not to do things, and those are called anti-patterns. Under agile, putting together user stories and architectural patterns gives you a rich picture of how to design things, time and time again, to make sure that they are safe and secure, and this will be especially useful for the Internet of Things.


Ethics. Taking the ethical principles and laying those over the top of the data and information. This leads on to the asset Discovery framework, which is based around five domains:

Figure 5. The 5 D Information Asset Management Model © Mark Brett 2009-2021


Decision looks at whether the information asset needs to exist in the first place.


Discovery is working out how the information asset is going to be deployed, where it fits in, and how it is integrated.


Determination is to value it in terms of harm, ethics, and its information risk and assurance.

Deployment then is about how the information is configured and actually put into the systems, and how it will be used on a daily basis. In other words, that's looking at Data Protection Impact Assessments: who is going to have the information, where it is going to be hosted and live, what it is going to be used for, how it is going to be used, and why it exists in the first place.


We should always start with Destruction, the end game: after the necessary retention period, how will you ensure the data is safely and appropriately destroyed, including all backup copies? Remember, it may take a year or longer after destruction for all of the backups to be cycled off and destroyed.
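That backup lag is easy to overlook in retention planning, so it helps to work both dates out explicitly: the scheduled destruction date, and the later date by which the backups will finally have cycled off. The sketch below is illustrative only; the six-year retention period is a made-up example, and the one-year default backup cycle reflects the figure mentioned above.

```python
from datetime import date, timedelta

def destruction_dates(created, retention_years, backup_cycle_days=365):
    """Return (scheduled destruction date, earliest all-backups-gone date).

    backup_cycle_days defaults to a year, per the note that backups may
    take a year or longer to cycle off after destruction.
    """
    destroy_on = created.replace(year=created.year + retention_years)
    backups_gone = destroy_on + timedelta(days=backup_cycle_days)
    return destroy_on, backups_gone

# Example: data created January 2021, held under a (hypothetical) 6-year
# retention period, is not truly gone until the backups cycle off too.
destroy_on, backups_gone = destruction_dates(date(2021, 1, 15), 6)
```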

Finally, we consider the "Underpinning Cyber Aspects", which have been developed as a way of mapping a journey path through all of these different things, because it is just as important to make sure that information is properly resilient. That goes back to the tactical aspects of Cyber Resilience and Cyber Incident Response.

Figure 6. Underpinning Cyber Aspects © Mark Brett 2021


Source documents referred to in this paper are available at:

Some of these issues are also discussed in a recent article:

Last updated