Perfect the simple, manual process first before introducing advanced tools like artificial intelligence and automation. During the trip, I learned that Amazon operated some of its physical retail stores with absolutely no AI or fancy technology. They focused first on defining their processes and making sure the baseline product worked. Then they automated the micro-process that would yield the highest benefit. I know buzzwords like machine learning, neural networks, and minute-by-minute automated testing sound great, and they might get a few nods in board meetings. However, implementing these tools without first understanding the underlying process can lead to unnecessary headaches (and, in my case, unnecessary all-nighters).
The top factories focused on where they could provide the most value and didn't waste resources trying to build everything in-house. Do what you do best and outsource the rest. Think about how you can double down on your organization's strengths and use pre-made or external tools for everything else. If data visualization is your bread and butter, maybe you can rely on open-source or external machine learning model providers. Or perhaps you can use pre-made data aggregation and ETL tools instead of trying to build everything by hand.
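To make the "buy, don't build" point concrete, here is a minimal sketch (the records and column names are invented for illustration) contrasting a hand-rolled aggregation with the same result from a pre-made tool like pandas:

```python
import pandas as pd

# Hypothetical sales records; in practice these would come from your pipeline.
records = [
    {"region": "north", "revenue": 120.0},
    {"region": "south", "revenue": 80.0},
    {"region": "north", "revenue": 50.0},
]

# Hand-rolled aggregation: easy to get wrong as requirements grow
# (grouping keys change, new metrics get added, edge cases pile up).
totals = {}
for r in records:
    totals[r["region"]] = totals.get(r["region"], 0.0) + r["revenue"]

# The same aggregation with a pre-made, well-tested tool: one line.
df = pd.DataFrame(records)
totals_premade = df.groupby("region")["revenue"].sum().to_dict()

assert totals == totals_premade  # both give {"north": 170.0, "south": 80.0}
```

The point isn't that the loop is hard to write; it's that every hand-built aggregation is one more thing your team has to test and maintain instead of focusing on its actual strengths.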
One of the smaller facilities that we visited is undergoing major improvements every week. On Fridays, the manager reviews data collected at every step of the manufacturing process to see where the major bottlenecks are. Then, during the following week, they focus solely on improving that one bottleneck. Healthy data operations have mechanisms in place to catch, measure, and prioritize data quality issues, application downtime, and process bottlenecks. You can't fix what you don't know about.
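The same weekly ritual translates directly to a data pipeline. Here is a minimal sketch, assuming a pipeline with hypothetical "extract", "transform", and "load" steps, of instrumenting each step with a timer and then ranking steps by total time to pick the one bottleneck to attack next:

```python
import time
from collections import defaultdict

# Accumulated per-step durations across runs (the "data collected at every step").
step_durations = defaultdict(list)

def timed_step(name, func, *args, **kwargs):
    """Run one pipeline step and record how long it took."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    step_durations[name].append(time.perf_counter() - start)
    return result

# Simulated pipeline run; the sleeps stand in for real work.
timed_step("extract", lambda: time.sleep(0.01))
timed_step("transform", lambda: time.sleep(0.05))
timed_step("load", lambda: time.sleep(0.005))

# The "Friday review": rank steps by total time spent to find the bottleneck.
bottleneck = max(step_durations, key=lambda s: sum(step_durations[s]))
print(f"Focus next week on: {bottleneck}")  # → Focus next week on: transform
```

In a real system you would ship these measurements to a metrics store rather than an in-memory dict, but the discipline is the same: measure every step, then spend the week on only the worst one.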