
Is there something in the data?

Digitising asset management to unlock cost optimisation and more.

by Paul Bogan, Chief Digital Officer, Serco Middle East

 

We live, work and operate in an increasingly data-driven world.

With the growth of educational programmes in fields such as computer science, improvements in regulatory policies and international standards, and the commoditisation of raw computing power, technology is disrupting established ways of doing and delivering business.

“Data is the new oil.” The phrase dates back to 2006 and is credited to mathematician Clive Humby, but it gained fresh momentum after The Economist published a 2017 report titled “The World’s Most Valuable Resource is No Longer Oil, but Data”.

So is it true? It really depends on the following:

  1. Your current understanding of the complexities, limitations and applications of computing science. 

  2. The application for which you are trying to extract value from data to drive some probable improvement in an automated way.

The above also assumes that the organisation has sufficiently qualified skills and capabilities to identify ways of capturing data sets relevant to the hypothesis; and that is before considering the noise inherent in large data sets or in real-time information from systems and subsystems.

As a leader in managing public services on behalf of governments across the world, Serco has been at the forefront of integrating technologies across our physical assets to empower our people in delivering best-in-class services.

During this unexpected period of COVID-19 and especially in the UAE where we are going through a period of minimal passenger footfall, social distancing and low liquidity, asset owners are under incredible pressure to enhance asset performance and to maximise asset investments.

Many of our partners face specific challenges in understanding how to leverage technology to better manage their asset portfolio to realise new efficiencies and minimise wastage.

Digital asset management may be the key to unlocking new cost optimisation, minimising reactive maintenance, extending economic life and improving productivity, using data as the foundation for an end-to-end technology solution.

The cost of operating and maintaining infrastructure often far exceeds the available funding, especially in these times. Deciding where to spend is increasingly difficult and complex, with trade-offs between maintaining, sweating, mothballing and, in some cases, closing entire operational infrastructure during this period.

The answer to enabling optimal decisions could lie at the very heart of the information being provided by the assets themselves: the asset data. Like the vast majority of decision-making tools or digital transformation-led initiatives, the concept is fairly if not intuitively simple:

  1. Capture data sets

  2. Aggregate data sets using workflows and frameworks

  3. Analyse the data using technology

  4. Drive business insights through computing science or visualisation tools

  5. Enable better data driven business decisions
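The five steps above can be sketched as a simple data pipeline. This is an illustrative outline only, not Serco's implementation; all field names, values and the condition threshold are invented for the example.

```python
# Illustrative sketch of the five-step flow. Asset IDs, readings and the
# 0.6 condition threshold are hypothetical.

def capture(readings):
    """Step 1: capture raw data sets, dropping unusable records."""
    return [r for r in readings if r.get("value") is not None]

def aggregate(records):
    """Step 2: aggregate readings per asset via a simple workflow."""
    by_asset = {}
    for r in records:
        by_asset.setdefault(r["asset_id"], []).append(r["value"])
    return by_asset

def analyse(by_asset):
    """Step 3: analyse the data; here, a mean condition score per asset."""
    return {a: sum(v) / len(v) for a, v in by_asset.items()}

def insights(scores, threshold=0.6):
    """Step 4: derive insights by flagging assets below the threshold."""
    return {a: s for a, s in scores.items() if s < threshold}

def decide(flagged):
    """Step 5: turn insights into actionable maintenance decisions."""
    return [f"inspect {a} (score {s:.2f})" for a, s in sorted(flagged.items())]

readings = [
    {"asset_id": "AHU-01", "value": 0.9},
    {"asset_id": "AHU-01", "value": 0.7},
    {"asset_id": "ESC-12", "value": 0.5},
    {"asset_id": "ESC-12", "value": 0.4},
    {"asset_id": "ESC-12", "value": None},  # dropped at the capture step
]
print(decide(insights(analyse(aggregate(capture(readings))))))
# -> ['inspect ESC-12 (score 0.45)']
```

The point of the sketch is the shape, not the arithmetic: each step consumes the previous step's output, so data quality problems introduced early (step 1) propagate into every decision made at step 5.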

 

We at Serco have identified what we believe to be the 5 steps of success for managing physical assets. These steps are individually smart and collectively intelligent.

Reducing volume and increasing value from data parameters that define performance, safety and criticality of physical assets is the key to driving real Asset Investment Planning and Asset Performance Management strategies. 

 

The Challenge:

A large majority of asset owners are still not covering the basics: what do we have? Where is it located? What spares do we hold in inventory to support it? And finally, in what condition are our assets? Yet they are keen to issue requests for proposals (RFPs) for predictive maintenance and industrial internet of things (IoT) implementations.
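Those four basic questions amount to a minimal asset register. A sketch of what "covering the basics" looks like as a data structure; the field names and example assets are hypothetical, not any specific EAM schema:

```python
from dataclasses import dataclass

# A minimal asset register answering the four basic questions.
# Fields and sample records are illustrative only.

@dataclass
class Asset:
    asset_id: str        # what do we have?
    location: str        # where is it located?
    spares_on_hand: int  # what spares support it?
    condition: str       # in what condition is it? e.g. "good"/"fair"/"poor"

register = [
    Asset("ESC-12", "Metro Station A / Concourse", spares_on_hand=2, condition="fair"),
    Asset("AHU-01", "Terminal 1 / Roof plant room", spares_on_hand=0, condition="poor"),
]

# Before any predictive-maintenance RFP, a query like this should be answerable:
no_spares_poor = [a.asset_id for a in register
                  if a.condition == "poor" and a.spares_on_hand == 0]
print(no_spares_poor)  # -> ['AHU-01']
```

If a query this simple cannot be run reliably across the portfolio, an IoT or predictive-maintenance programme has nothing trustworthy to build on.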

To really benefit from Gartner's data flow requirements within the asset management service, a holistic approach needs to be taken and managed in a controlled way to unlock the power of what data-driven decision making can deliver for a business.

Asset Performance Management (APM) should not be confused with Enterprise Asset Management (EAM), although integration between the two is common for triggering work orders in all levels of functional capabilities. APM is designed for decision support; EAM is designed for maintenance execution.

Similarly, APM should not be confused with asset investment planning (AIP). AIP can be used at any level of the maintenance roadmap and is designed to support both short- and long-term capital investment decisions, providing a forward budget for asset replacement and informing repair-versus-replace decisions.

It takes data on asset condition, maintenance costs, criticality, budgets and risks, and then analyses it to produce capital investment plans over extended time horizons. The two solution types often use the same data and similar analytical techniques, but for different purposes.
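In its simplest form, the repair-versus-replace trade-off that AIP formalises reduces to comparing equivalent annual costs: spread each option's capital outlay over its expected life at a discount rate, then add its annual running cost. A minimal sketch; every figure and the 5% rate below is invented for illustration, and real AIP tooling layers condition, criticality, risk and budget constraints on top:

```python
# Simplified repair-vs-replace comparison of the kind AIP supports.
# All costs, asset lives and the discount rate are hypothetical.

def equivalent_annual_cost(capital, annual_opex, life_years, rate=0.05):
    """Annualise a capital cost over the option's life at a discount rate
    (standard annuity formula), then add the annual operating cost."""
    annuity = capital * rate / (1 - (1 + rate) ** -life_years)
    return annuity + annual_opex

# Option A: overhaul the existing asset, buying 5 more years of life.
repair = equivalent_annual_cost(capital=20_000, annual_opex=8_000, life_years=5)
# Option B: replace outright with a 15-year life and lower running costs.
replace = equivalent_annual_cost(capital=90_000, annual_opex=2_000, life_years=15)

print(f"repair:  {repair:,.0f} per year")
print(f"replace: {replace:,.0f} per year")
print("decision:", "repair" if repair < replace else "replace")
```

With these invented numbers the replacement wins despite its higher up-front cost, because the capital is spread over a longer life and the running costs fall; change the figures and the answer flips, which is exactly why the decision needs data rather than habit.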

APM and AIP do not overlap in purpose; they are complementary to one another.

When it comes to asset management, we have to learn to walk before we can run.

The maturity model for how this is applied to physical assets is shown below:

 

At Serco we believe that optimising business processes by applying the latest policies and procedures in the asset management space, supported by the right technology stack, whether a CMMS, CAFM, IWFM or EAMS, is essential to driving real value in how assets are managed throughout their lifecycle.

Why do we presuppose that the way we have always managed our assets is the optimal way they would, could and should be managed in the future? We have to embrace change and do things differently to improve.

I always use the analogy of a car, and it goes something like this: if you were to give a group of people the exact same car (same make, same model, same year, same specs), ask them to use it as they see fit, and have them meet in exactly 10 years to compare the cars, would they be in the exact same condition? The answer is a resounding no. The cars will be in different states of repair.

When you break it down, we all drive with completely different behaviours. Some individuals accelerate quicker, some brake harder or turn the corner faster and with more conviction. Some individuals drive on different terrain and use the cars for completely different purposes from short quick trips to long, lengthy commutes. This does not even consider the tolerances in which all materials are made.

The outcome is that some will have spent more money maintaining that car over the 10-year period and some less, even assuming that every car was serviced at its predefined intervals of X months or Y miles, and even at a comparable mileage over the same period.

The same logic applies to physical assets, whether multiple escalators in a metro station or an air handling unit in a university. Original equipment manufacturer (OEM) recommendations and policies such as SFG 20 are great baselines for driving an initial planned preventative maintenance (PPM) schedule, but do they really reflect individual asset behaviour over time?

Even assets of a similar class, such as HVAC within a single facility like an airport, can operate in completely different environmental conditions, be run at different frequencies and levels of usage, and be located in difficult-to-reach areas, all of which affects how the asset degrades over time.

The hypothesis is as follows:

  1. Real-time, historical and condition-based data can identify an asset's current state of repair and forecast its future state, provided the data can be captured and measured accurately.

  2. Capturing the data characteristics for each asset (parent, child, grandchild) that specifically drive performance, safety or criticality, and relating them to the expected PPM schedule, shows whether we can improve on the baseline.

  3. The insights deduced will indicate what intervention is required, at what time and at what cost, to retain the level of performance dictated by the operating context that the asset supports.
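The first part of this hypothesis can be illustrated with the simplest possible degradation model: fit a straight line to periodic condition readings and extrapolate to an intervention threshold to estimate remaining useful life. The readings, threshold and linear assumption below are all hypothetical; a real system would use far richer failure models per failure mode.

```python
# Minimal illustration: extrapolate condition readings to forecast when an
# asset will cross an intervention threshold. All numbers are invented.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

months = [0, 3, 6, 9, 12]          # inspection times for one asset
condition = [100, 94, 89, 83, 78]  # condition score, 100 = as new
INTERVENE_AT = 50                  # threshold set by the operating context

slope, intercept = fit_line(months, condition)
months_to_threshold = (INTERVENE_AT - intercept) / slope

print(f"degradation rate: {slope:.2f} points/month")
print(f"forecast intervention at ~month {months_to_threshold:.0f}")
```

Run per individual asset, a forecast like this is what lets the PPM interval reflect that asset's actual behaviour rather than a fleet-wide OEM default: an asset degrading faster than its siblings surfaces earlier, and one degrading slower is not serviced needlessly.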

 

Clients that do not know what assets they have, where those assets are located, how important or critical each one is to operations, or what condition it is in, will struggle to prioritise their investments, whether capex or opex, and to enhance performance across their asset portfolio in a more efficient and effective way.

We need to leverage as many data points as possible and ascertain where an asset currently sits relative to its remaining useful life, to determine when and what action (if any) should be taken.

This stage in itself holds the key to lifecycle analysis based on individual asset behaviour, and provides a baseline from which to answer those difficult and challenging questions of where and when to spend capex or opex budgets, relative to the performance of the asset in the environmental and operational context it supports.

Once the existing data sources have been maximised, then and only then should additional sensing capability be added to move the baseline towards a predictive position, and only where it complements a failure mode that cannot be derived from the data sources already in place.

Then, together, we can start to look at internet of things (IoT) strategies, digital twins and operational facilities where asset information moves logically from concept through to decommissioning, continuously driving sustainable performance towards just-in-time maintenance as a service, ensuring that the right person is undertaking the right service at the right time.

In ancient Roman religion and mythology, Janus was the god who held the key, so to speak, to the metaphorical doors or gateways between what was and what is to come. To truly understand and enable predictive maintenance in a way that optimises capex and opex management with real-time data, one needs to understand the past and capture present trends and patterns clearly and consistently, in order to foresee the intervention outcomes of the future.