It’s no secret that the majority of ML programs never see the light of day. In our experience, though, the cause is rarely the availability of data itself, or a lack of machine learning algorithms to deal with the data that is available.
We keep hearing how much more data we, as digital beings, generate. Businesses are often structured in silos that make it difficult to utilise all of the data that’s out there, but a lack of data is rarely the core obstacle to implementing machine learning. Additionally, as machine learning has become more mainstream over the last 5-7 years, almost all modern programming languages have gained native ML libraries that implement a plethora of algorithms. Industry heavyweights such as Google and AWS also offer ML capabilities as consumable APIs on their cloud platforms, or as stand-alone software packages like TensorFlow.
This first phase of exploration, experimentation, and development of ML algorithms is a crucial milestone for a business trying to implement AI. Many businesses are able to explore and experiment with their data using in-house or consultant Data Engineers and Data Scientists, and many more take it a step further, building reliable dev environments where ML projects thrive.
However, it’s when ML projects need to enter the second phase of their existence – when businesses Integrate, Rollout, and Govern the ML algorithms created during the previous phase – that we see the majority of ML programs run out of steam.
Once a machine learning pipeline is developed and can generate insight from the data, it is integration that takes this entire dev-test pipeline and configures it to run on your production IT infrastructure (cloud, on-prem, etc.). The output of this stage is an IT and risk/compliance project that is signed off and ready to run inside a real business process. This stage typically takes longer than it should because many more stakeholders are involved: your infrastructure teams, IT teams, security teams, external penetration testing vendors, and so on. These stakeholders will have their own transformation programs running, and it takes additional effort to align everyone’s timelines and approaches.
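One small, concrete piece of that integration work is making the dev-test pipeline environment-aware, so the same code runs unchanged in dev and in production. The sketch below is a minimal illustration of that idea; the environment names, settings, and `APP_ENV` variable are all hypothetical, and a real deployment would pull these values from infrastructure config or a secrets manager rather than an in-code dictionary.

```python
import os

# Hypothetical environment-specific settings. In a real rollout these
# would come from infrastructure config or a secrets manager, not code.
ENV_CONFIG = {
    "dev":  {"data_uri": "file:///tmp/sample.csv", "workers": 1},
    "prod": {"data_uri": "s3://analytics-bucket/events/", "workers": 8},
}

def load_pipeline_config(env=None):
    """Return the pipeline settings for the target environment.

    Falls back to the (hypothetical) APP_ENV variable, then to "dev",
    so the dev-test pipeline runs identically everywhere.
    """
    env = env or os.environ.get("APP_ENV", "dev")
    if env not in ENV_CONFIG:
        raise ValueError(f"unknown environment: {env}")
    return ENV_CONFIG[env]

# Usage: the same entry point, pointed at production by configuration only.
prod_cfg = load_pipeline_config("prod")
```

Keeping the environment switch in one place like this is also what makes the later sign-off conversations with IT and security teams tractable, since they can review a single configuration surface.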
Pure-tech players often ignore this, but in reality insights are only useful when they are embedded in a business process. Integration takes care of the technical aspects, but a true rollout must also address the softer aspects of a technology rollout: people training and awareness, and any relevant updates to existing business processes via convenient job aids and the like. Business and IT stakeholders need to become fully aware of what insights the ML project generates and how their work is affected by the change to the overall business functions and processes. A good team of BAs can sort this out in a few days with the proper awareness workshops and communications, yet this is often the point where the business is most nervous about fully adopting ML and starting to reap the value.
Rollout and Govern really go hand-in-hand. The purpose of Govern is to clearly lay out how the ML insights, and the insight generation process, will be governed from a change-management point of view going forward. The insight generation process requires more technical change-management oversight, whereas the insights themselves require more business-heavy governance to make sure they continue to deliver business value. A good governance process gives business stakeholders the confidence that the insights will keep delivering value to BAU for years to come.
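One routine governance check is verifying that the insights being generated haven't drifted away from the baseline the business signed off on. The snippet below is a deliberately simple sketch of such a check; the function name, the single-metric comparison, and the 20% tolerance are all illustrative choices, not a prescribed governance standard – a production process would track many metrics and route alerts through its change-management workflow.

```python
from statistics import mean

def insight_drift_alert(baseline, recent, tolerance=0.2):
    """Flag when recent insight values drift beyond a relative tolerance
    of the baseline mean. Hypothetical single-metric check for
    illustration; real governance would monitor many such metrics."""
    base = mean(baseline)
    if base == 0:
        raise ValueError("baseline mean is zero; pick another reference metric")
    drift = abs(mean(recent) - base) / abs(base)
    return drift > tolerance

# Usage: values close to the signed-off baseline pass quietly...
assert insight_drift_alert([10, 10, 10], [10.5, 10, 9.8]) is False
# ...while a sustained shift raises a flag for the governance forum.
assert insight_drift_alert([10, 10, 10], [14, 15, 13]) is True
```

Even a check this simple gives business stakeholders an objective trigger for review, which is most of what day-to-day governance needs.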
If any of this resonates with you, email me (Apoorv Kashyap) and share your thoughts or experience with your own ML programs.