Monthly Spotlight - Clarke Analytics
Data, Data everywhere, where is my insight to action…
In today’s pharmaceutical manufacturing plants, it is not uncommon for sensor readings to be taken every second, recording measurements such as alkali, antifoam, air flow, oxygen, glucose, pH, temperature, and weight, to name a few. Given that a batch may be in growth mode for 14 days, 10 different measurements recorded every second equate to 10 x 1.2 million, or roughly 12 million, data points for each batch over the 14 days. And this is just the bioreactor data. Add downstream chromatography and cross-filtration data to the mix and you are into ‘Big’ data from a volume perspective.
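The volume estimate above can be sketched as a quick back-of-the-envelope calculation. The figures below are the illustrative assumptions from the text (one reading per second, 10 sensors, a 14-day growth phase), not measurements from any specific plant:

```python
# Back-of-the-envelope estimate of bioreactor data volume.
# Assumptions (illustrative): 1 reading per second per sensor,
# 10 sensors, 14 days of continuous growth-phase monitoring.
SECONDS_PER_DAY = 24 * 60 * 60            # 86,400 seconds in a day

readings_per_sensor = 14 * SECONDS_PER_DAY  # ~1.21 million per sensor
total_readings = 10 * readings_per_sensor   # ~12.1 million per batch

print(f"Per sensor per batch: {readings_per_sensor:,}")  # 1,209,600
print(f"Per batch (10 sensors): {total_readings:,}")     # 12,096,000
```

This is why a single batch already outgrows spreadsheet tooling before any downstream process data is added.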
In the past, measurements would have been taken once a day, with the data easily held in an Excel spreadsheet. A biochemist with good quantitative skills could easily slice and dice the data to get an understanding of the quality levels in the process. With operating margins increasingly coming under pressure, that understanding needs to become even more diagnostic at a much finer level of time granularity. Hence the move to the more granular tracking of data described above.
With increasing data volumes, concepts of data architecture, storage, access, and types of analysis come to the fore. For a biochemist managing a batch’s growth, harvest, chromatography, and other process steps, these complex data skills are outside their usual skillset. These skills are more readily found in IT departments. However, even in IT departments, while data engineering and architecture skills may be available, advanced analytics skills are often missing, especially at principal or consultant level.
Clarke Analytics are helping pharmaceutical companies with the dual challenge of finding the key insights that affect upstream batch titre levels and downstream protein extraction levels, while also building the overall data analytics capability of biochemists. From Clarke Analytics’ experience, there are two fundamental components of successful data analytics projects. The first is enabling team collaboration. The second is having a strong delivery methodology or framework.
Collaboration across business, process, and data teams is key:
A collaborative innovation lab is a valuable exercise that brings resources from the relevant functions together to combine process and chemistry knowledge with data and analytics skills. This cross-functional team should communicate and collaborate regularly through stand-up status meetings, a practice common in software development. It is important that the time asked of the company’s experts is optimised and that their expertise is efficiently adopted by the teams.
Data Analytics Development Framework:
It is important that an analytics development framework is the foundation of all analytics work. Clarke Analytics’ framework is based on the well-known Capability Maturity Model Integration (CMMI) framework, adding specifics relevant to analytics software and development processes drawn from Clarke Analytics’ 30+ years of experience in the analytics space.
This data analytics development framework builds on the three key understandings that need to be developed by organisations to build the overall data literacy and analytics maturity levels of their companies.
- Understanding the Business Context
- Understanding the Data Systems Architecture Context
- Understanding the Data Value Context
Each of these understandings tends to be strongest in a different part of an organisation, whether business, process, or IT teams. The framework builds collaboration across business, process, and IT to help each function raise its overall data analytics and literacy maturity, enabling more actionable business insights.
Benefits of this approach:
- Greater understanding of the data that the enterprise has access to
- Better data-driven business decision-making
- Better enterprise cost management
- Higher revenue potential through more personalised customer relationships
- Better staff retention based on staff development in key technologies and practices