
January 12, 2026
Over the last decade, business leaders have been given a consistent message about modernization: before automation, analytics or AI can deliver value, data must be clean, complete, structured and standardized.
Nowhere has this message been more persistent than in manufacturing, where Manufacturing 4.0 initiatives have promised transformational gains built on data-driven foundations.
Yet for many organizations, this narrative has quietly become a barrier rather than a bridge.
The intent of this column – the third in a series focused on discernment in business technology decisions – is to help leaders apply discernment to data preparation itself.
Not all data in an organization is created equal.
Not all data holds the same potential for return on investment.
And importantly, the value to be unlocked from existing data is often closer than it appears.
Understanding which data problems actually matter, and which ones are inherited assumptions, allows organizations to move forward without unnecessary delay or risk.
Discernment, in this context, is not about lowering standards – it is about aligning effort with outcomes.
Appearance: When modernization feels heavier than it should
The setting is informal: a manufacturing association social, with name badges, small plates and the low hum of shop talk mixed with optimism about what comes next.
Vendors and manufacturers gather in loose circles, drinks in hand.
Conversations are friendly, forward-looking and familiar.
What it looks like
Near the bar, a vendor speaks with confidence.
Hands move as diagrams take shape in the air.
Machines feeding systems.
Systems feeding dashboards.
A clean arc from the plant floor to insight.
The vision is modern, polished and cohesive.
It mirrors the presentations executives have seen many times before.
Across the circle, a plant manager listens carefully.
They recognize the picture being painted.
What they do not see reflected are the realities they work with every day: multiple ERP versions, engineering drawings in different formats, spreadsheets built and rebuilt over years, and processes that exist because they work, not because they were designed to be elegant.
The contrast is subtle.
The vision is simplified, the reality layered, and while both are part of the conversation, only one reflects how work actually gets done.
What it sounds like
The language is reassuring and well practiced:
- “Once your data is standardized, everything becomes possible.”
- “AI really needs clean inputs to deliver value.”
- “The first step is getting all your data in order.”
The words are not confrontational.
They are offered as guidance – as lessons learned, as the responsible path forward.
A manufacturer responds with measured curiosity.
They ask about scope, timelines and what “in order” actually means in an environment shaped by years of operational decisions.
The answers remain confident, high-level and abstract.
The conversation drifts on before those questions are fully explored.
What it feels like
For the business owner listening, the reaction is not resistance – it is weight.
There is genuine interest in the future being described, paired with an intuitive sense that the path to get there feels heavier than it should.
The effort appears front-loaded, the payoff feels distant and the organization is being asked to pause momentum to prepare for momentum.
That feeling is not fear of technology – it is early discernment.
A recognition that something in the framing may be misaligned with how value is actually created.
What is the risk: When data preparation becomes the wrong gate
Understanding data is essential.
Misunderstanding how much data quality is actually required is where risk enters.
The most common risk is not working with imperfect data.
It is investing heavily in preparing all data before proving which problems are worth solving.
This risk often emerges when data quality is treated as an absolute instead of something that should be right-sized to the business problem at hand.
Data does not need to be perfect to be useful.
It needs to be sufficient to support meaningful progress.
Two dimensions help make this distinction concrete: completeness and accuracy.
Completeness: Enough to be useful
Completeness answers a practical question: Do you have enough information to start a workflow and move it forward reliably without constant intervention?
Right-sized completeness does not mean having every possible data point or handling every edge case.
It means the data can initiate a process without frequent rework or manual correction.
Consider a common manufacturing use case: ingesting PDF blueprints to extract part lists.
The drawings may vary in format, scale and annotation style.
The data does not need to be uniform.
It needs to be complete enough to consistently identify parts, quantities and basic relationships so the workflow can begin.
If the system cannot handle different blueprint types or fails to identify parts at all, the workflow breaks early.
That failure then gates any downstream automation or optimization.
Once the workflow can start and run consistently, completeness has reached a useful threshold.
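For technically inclined readers, the sketch below illustrates what a right-sized completeness check might look like in practice. The field names and the 90% threshold are assumptions made for the example, not a prescription; the point is that the gate is "can the workflow start reliably," not "is every field filled in."

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractedPart:
    """One row pulled from a blueprint; any field may be missing."""
    part_id: Optional[str]
    quantity: Optional[int]
    material: Optional[str] = None  # nice to have, not required to start

def complete_enough(parts: list[ExtractedPart]) -> bool:
    """Right-sized completeness: the quoting workflow can begin if most
    extracted rows carry a part identifier and a quantity. Edge cases get
    routed to a person instead of blocking the whole job."""
    if not parts:
        return False
    usable = [p for p in parts if p.part_id and p.quantity]
    return len(usable) / len(parts) >= 0.9  # illustrative threshold

# Example: one missing material field does not stop the workflow.
drawing = [
    ExtractedPart("BOLT-0500", 24, "steel"),
    ExtractedPart("PLATE-12GA", 2),  # material unknown -- still usable
]
print(complete_enough(drawing))  # True
```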
Accuracy: Enough to create value
Accuracy becomes relevant once a workflow is running, and like completeness, accuracy does not require perfection.
It needs to be right-sized to the decisions the workflow supports and the value it is expected to deliver.
Early accuracy may be imperfect and still valuable if it reduces manual effort, accelerates review cycles or highlights the exceptions that matter most.
Improving accuracy is justified when it produces measurable business impact, such as reduced rework, improved throughput or better decision-making.
When additional accuracy does not materially change outcomes, further investment delivers diminishing returns.
In this way, accuracy is not a prerequisite.
It is an optimization lever applied when the value is clear.
Where AI fits
AI is particularly effective in environments where data is sufficient but not perfect.
It can operate on raw, inconsistent inputs and improve them over time by extracting structure, inferring meaning and normalizing variation within active workflows.
Rather than requiring pristine data upfront, AI helps data improve in motion.
For example, AI can normalize inconsistent part descriptions such as “1/2 inch bolt,” “0.5 bolt” or handwritten notes, allowing workflows to run reliably even when inputs vary widely.
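As a rough illustration of what normalizing data in motion can mean, the sketch below maps a few inconsistent bolt descriptions to one canonical size. In practice an AI model would handle the long tail of variation (including handwriting); the patterns here are simplified assumptions that mirror the examples above.

```python
import re

# Illustrative aliases: the ways a 1/2 inch bolt shows up in real documents.
# AI extraction can propose mappings; rules keep the frequent cases cheap.
FRACTIONS = {"1/2": 0.5, "1/4": 0.25, "3/8": 0.375}

def normalize_bolt(description: str) -> str:
    """Return a canonical label like 'bolt 0.500 in' for varied inputs."""
    text = description.lower().strip()
    # '1/2 inch bolt', '1/2" bolt'
    m = re.search(r'(\d+/\d+)\s*(?:inch|in\.?|")?', text)
    if m and m.group(1) in FRACTIONS:
        return f'bolt {FRACTIONS[m.group(1)]:.3f} in'
    # '0.5 bolt', '0.50 in bolt'
    m = re.search(r'(\d*\.\d+)', text)
    if m:
        return f'bolt {float(m.group(1)):.3f} in'
    return text  # leave the rest for human review or an AI pass

for raw in ['1/2 inch bolt', '0.5 bolt', '1/2" hex bolt']:
    print(raw, '->', normalize_bolt(raw))
# All three normalize to 'bolt 0.500 in'
```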
In practice, this matters.
One Wisconsin metal fabrication shop used AI to extract part details from decades of inconsistent PDF drawings with different scales, formats and annotations.
Without waiting for full standardization, they automated the first portion of their quoting process.
Even with imperfect data, turnaround time was cut nearly in half and manual re-entry errors dropped significantly.
This approach allows organizations to prove a process first, understand where value is created and then invest in improving only the data elements that matter most.
What is better: Progress that teaches you what to fix next
Data cleansing is unavoidable if organizations want durable, scalable solutions.
Cleaning all data before beginning, however, is not a prerequisite for progress.
A more effective approach is iterative, outcome-driven and grounded in real work.
Starting where you are
Many successful automation and AI initiatives begin with inputs that would never pass a traditional readiness assessment:
- Manual documents
- Scanned PDFs
- Inconsistent spreadsheets
- Photos of whiteboards or handwritten notes
- Email-based processes
These inputs are not liabilities.
They are signals of where work actually happens and where value is already being created.
Starting here aligns automation with reality rather than aspiration.
Learning through use
When workflows operate, they teach.
Organizations learn which data elements matter and which do not.
They see where ambiguity creates friction and where human judgment remains essential.
They identify which improvements create leverage and which add complexity without benefit.
This knowledge is more valuable than a static data-preparation plan because it emerges from lived operations, not assumptions.
Speed to value
This approach changes timelines and expectations.
Instead of multi-year preparation phases, organizations see early returns.
Initial automations may be imperfect, but they deliver value while revealing what should be improved next.
This builds confidence and understanding across the organization.
AI becomes a tool that assists learning, not a gatekeeper that delays action.
Knowledge that scales with use
Over time, clarity emerges around how work is done, how data supports it and where decisions truly matter.
That understanding compounds across teams and initiatives, increasing the organization’s capability to adopt future technologies with confidence.
What you can do this quarter
Modernization does not have to begin with a large data cleanup initiative, but it does benefit from experience.
Not all tools, vendors or approaches apply these principles consistently, and early misalignment can introduce friction that slows progress later.
The goal is not to find a perfect solution, but to learn quickly where value emerges and where additional effort is justified.
Here are practical steps many organizations can take this quarter, ideally with guidance from someone who has navigated these tradeoffs before:
1. Choose one high-friction process
Start with a workflow that creates visible drag, such as quoting, scheduling, inventory updates or reporting.
Focus on where time, rework or delays are already felt.
2. Right-size the data required to run that process
Identify the minimum data needed for the process to operate consistently.
In practice, this is usually far less than expected and becomes clear quickly with experienced facilitation.
3. Test approaches that work with your current data
Evaluate tools or techniques that can operate on inconsistent inputs, such as AI document extraction, workflow automation or rules-based validation, rather than requiring full standardization upfront; a brief sketch of what rules-based validation can look like follows this list.
4. Measure practical outcomes, not theoretical readiness
Look for simple indicators of progress: fewer manual touches, fewer errors, faster cycle times or reduced handoffs.
These signals validate whether the approach is worth expanding.
5. Improve only the data that blocks progress
Invest in data cleanup selectively, based on what the workflow reveals as limiting value.
Avoid broad upstream efforts that are disconnected from measurable outcomes.
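For teams weighing step 3, the sketch below shows one possible shape a rules-based validation pass can take. It does not require standardized inputs; it simply flags the records a person should review before the workflow continues. The field names are assumptions made for the example.

```python
# A minimal, illustrative validation pass over quote-line records pulled
# from mixed sources (PDF extraction, spreadsheets, email).
def validate_quote_line(line: dict) -> list[str]:
    """Return human-readable issues; an empty list means 'good enough to proceed'."""
    issues = []
    if not line.get("part_id"):
        issues.append("missing part identifier")
    qty = line.get("quantity")
    if qty is None or (isinstance(qty, (int, float)) and qty <= 0):
        issues.append("quantity missing or not positive")
    if line.get("unit_price") is None:
        issues.append("no unit price -- route to estimator")
    return issues

quote_lines = [
    {"part_id": "PLATE-12GA", "quantity": 4, "unit_price": 18.75},
    {"part_id": "", "quantity": 2},  # flagged for review, not fatal
]
for line in quote_lines:
    problems = validate_quote_line(line)
    print(problems or "ok")
```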
When guided well, this approach quickly shifts modernization from theoretical planning to practical learning.
More importantly, it builds internal understanding of where data quality truly matters and where it does not.
Summary: Discernment changes the path forward
Most companies are not data-ready by traditional definitions – that reality is not a weakness.
Discernment allows leaders to recognize that not all data deserves equal investment and not all problems require perfect inputs.
It shifts the conversation from “Is our data clean enough?” to “Which problems are worth solving, and what data do they actually require?”
Manufacturing modernization does not stall because organizations lack perfect data – it slows when perfection is demanded before value is proven.
By right-sizing completeness and accuracy, starting workflows with real materials, and using AI to improve data in motion rather than in isolation, business owners can unlock ROI sooner while building durable capability over time.
The most effective data strategies are not defined by readiness.
They are defined by relevance.