# Data Integration Consulting: 7 Things Buyers Miss

Last updated: April 2026
Most data integration projects overrun their original budget, often by 50% or more. The cause is almost never technical. It’s the engagement model, which is the layer most consultancies don’t want you to look at too closely.
We’ve been on both sides of these projects. We’ve inherited integrations from the Big 4 that took 14 months and produced a working system but no team capable of maintaining it. We’ve also been the firm a client called when the previous integration had to be ripped out and redone. After enough of those, patterns emerge. Most of the patterns aren’t about tools.
This piece is for buyers evaluating data integration consultants. It walks through what data integration consulting actually delivers, what to look for in a partner, what red flags to watch, and a realistic engagement timeline. If you’re on the procurement side of this conversation, this is the article we wish we could send our prospects before the first sales call.
## What data integration consulting actually delivers
Data integration consulting is the work of connecting data sources (CRM, ERP, product database, third-party APIs, event streams) into a unified, governed, queryable layer that downstream teams and systems can use. A good engagement delivers four things: working pipelines, a governance model, documentation your team can follow, and a maintenance handoff that doesn’t require keeping the consultancy on retainer forever.
Most engagements deliver the first one. Some deliver the second. Few deliver the third or fourth, which is the entire reason these projects come back as rework two years later. A pipeline that works on day one is table stakes. A pipeline your team can extend, debug, and adapt without calling the consultancy back is the actual value.
The other thing a good engagement delivers is honest scope. Data integration scope balloons because every adjacent problem looks integration-shaped. Master data management, identity resolution, customer 360, AI readiness: these are all related, but bundling them into one engagement is how budgets blow up. The consultancies that scope tightly tend to deliver. The ones that scope broadly tend to overrun.
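To make the “unified, governed, queryable layer” from the definition above concrete, here’s a deliberately tiny sketch that merges customer records from two hypothetical source extracts into one table. Every field and source name is invented for the example; a real engagement does this across dozens of sources with governance and lineage on top.

```python
# Toy illustration of a unified, queryable layer: join records from a
# hypothetical CRM extract and ERP extract on a shared key. All names
# (crm_rows, erp_rows, "email") are invented for this sketch.

crm_rows = [
    {"email": "ada@example.com", "name": "Ada Lovelace", "crm_id": "C-1"},
    {"email": "alan@example.com", "name": "Alan Turing", "crm_id": "C-2"},
]
erp_rows = [
    {"email": "ada@example.com", "account_balance": 1200.0},
]

def unify(crm, erp):
    """Left-join the ERP balance onto each CRM record by email."""
    balances = {r["email"]: r["account_balance"] for r in erp}
    return [{**row, "account_balance": balances.get(row["email"])} for row in crm]

unified = unify(crm_rows, erp_rows)
```

The interesting part isn’t the join; it’s everything around it that a good engagement also delivers: what happens when the key is missing, who owns the governance policy, and whether your team can extend this without calling anyone back.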
## Why most data integration projects miss budget
The budget overrun problem is so consistent across firms and industries that it can’t be explained by bad luck. Three patterns account for most of it.
The first is discovery debt. Most engagements quote based on a 2-week discovery phase that surfaces 40% of the actual complexity. The other 60% shows up in week 7, by which point the contract is signed and the change orders begin. Firms that quote without seeing your data tend to miss what’s actually in your data: the duplicate customer records, the deprecated fields nobody removed, the upstream system that emits NULL when it should emit 0. None of that is in the slide deck.
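A deep discovery phase surfaces those issues by profiling the data directly rather than trusting the slide deck. Here’s a hedged sketch of the kind of check that catches duplicate keys and unexpected NULLs; the sample rows and column names are hypothetical.

```python
# Sketch of pre-quote data profiling. The rows and column names below are
# invented; real discovery runs checks like these against the client's
# actual extracts.
from collections import Counter

rows = [
    {"customer_id": "42", "email": "a@example.com", "discount": None},
    {"customer_id": "42", "email": "a@example.com", "discount": 0},  # duplicate key
    {"customer_id": "43", "email": None, "discount": None},          # NULL where 0 was expected?
]

def profile(rows, key):
    """Return duplicated key values and the NULL rate per column."""
    counts = Counter(r[key] for r in rows)
    duplicates = {k: n for k, n in counts.items() if n > 1}
    null_rates = {col: sum(r[col] is None for r in rows) / len(rows) for col in rows[0]}
    return duplicates, null_rates

dups, nulls = profile(rows, "customer_id")
```

An hour of this against a real extract tells you more about eventual project cost than any methodology slide.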
The second is the staffing model. Many engagements are sold by senior partners and delivered by junior consultants two months out of bootcamp. The partners’ discovery was good; the execution falls to people still learning the tools. Your project becomes their training, on your budget. This is one of the patterns that drives the consultant-dependency trap. We’ve written more about why this engagement model creates structural problems.
The third is integration scope creep. Halfway through the project, someone realizes the data model needs to support three new use cases nobody mentioned in scoping. The consulting firm is happy to expand. The change orders amount to 30% of the total budget. By the time you ship, the scope has doubled and the timeline is six months longer.
The firms that ship on budget tend to do three things differently: they discover deeply before quoting, staff senior, and aggressively defend scope. None of those are technical capabilities. They’re engagement model choices.
## Seven things most buyers don’t know to ask
The most useful screening conversations cover questions buyers rarely ask. Here are the seven that separate firms that ship from firms that don’t.
### 1. “Can we see the resumes of the people who’ll actually do the work?”
Not the team that pitched. The team that codes. If the answer involves the phrase “we’ll staff to your needs at engagement start,” you’re probably getting juniors. If the resumes show 8+ years of integration work and the same names appear in case studies, you’re probably getting practitioners.
### 2. “What’s your discovery process before you give us a fixed-price quote?”
A good answer involves looking at your actual systems, sampling your data, and writing a one-page architecture proposal that’s specific to your situation. A bad answer is “we use our standard methodology.” Standard methodologies miss the things that matter for your specific stack.
### 3. “What does the engagement look like in month 6, and who’s doing what?”
Most firms can describe months 1 and 2 (discovery and design) and month 12 (handoff). The middle months are where projects either stay on track or quietly drift. A firm that can specifically describe month 6 has shipped this work before.
### 4. “Show me a project that went sideways. What happened, and what did you do?”
Every firm with real shipping experience has a project that went sideways. The ones that don’t have one either haven’t shipped much, or aren’t being honest. The answer to this question reveals more about the firm than any case study.
### 5. “What does the maintenance handoff actually include?”
A good answer includes specific artifacts: documentation, runbooks, monitoring dashboards, and pairing time with your team. A bad answer is “we’ll provide knowledge transfer at the end.” Knowledge transfer at the end is theater. Real handoff happens throughout.
### 6. “What’s the smallest possible version of this project that delivers value?”
Firms that try to sell you the biggest possible version are optimizing for their P&L. Firms that try to sell you the smallest version that proves out the architecture are optimizing for your outcomes. The smallest version usually costs less than half of what gets quoted.
### 7. “Who on your team will my team work with directly, and how often?”
The right answer is “the same senior people every week, in working sessions, not status meetings.” If the answer involves account managers, weekly steerco, and “we’ll loop in our specialists as needed,” your team won’t actually be working with practitioners. They’ll be working with project managers.
## Red flags and green flags in your shortlist
| Signal | Red flag | Green flag |
|---|---|---|
| Sales process | Glossy deck, methodology slides, big logos in the footer. | Practitioners on the call from week one, asking about your specific data. |
| Pricing model | Fixed-price quote without seeing your data. | Discovery scoped separately, then a tight quote based on what they found. |
| Team composition | “We’ll staff appropriately at start.” | Named senior people, with hours-per-week commitments. |
| Engagement length | 12+ months baseline. | 3-6 month phases with explicit decision points between them. |
| Scope flexibility | “We can handle anything you need.” | “Here’s what we won’t do, and here’s why we recommend a different approach.” |
| References | Logos and quotes. | Phone calls with actual past clients, including ones where things went wrong. |
| Handoff | “We’ll do a knowledge transfer session.” | A handoff plan that starts in month one, not month twelve. |
If your shortlist has more than two firms in the left column, the issue isn’t your shortlist. It’s the procurement process that produced it. The firms in the right column tend not to participate in big RFPs because the format selects against them. Reaching them usually means asking your network rather than running an RFP.
## A realistic engagement timeline
A focused data integration engagement that delivers a working system and a maintainable handoff usually runs 4-7 months. Here’s the shape of one.
Weeks 1-3: Discovery. Look at the actual data, the actual systems, and the actual team. Output: a one-page architecture proposal, a scoped quote, and a list of specific decisions needed from leadership.
Weeks 4-7: Foundation. Build the ingestion layer and the storage layer. Working sessions with the client team weekly. By week 7, raw data is flowing into the platform.
Weeks 8-14: Transformation. Build the curated layer, the semantic models, and the governance policies. This is where most projects either stay on track or drift. The discipline is to stay focused on the agreed scope, not the adjacent work that looks tempting.
Weeks 15-20: Production hardening. Monitoring, alerting, runbooks, performance optimization, and security review. The unsexy work that determines whether the system survives in production.
Weeks 21-26: Handoff. Pairing time with the client team, code reviews going both directions, documentation passes, and a structured exit. The consultancy’s hours ramp down as the client team’s confidence ramps up.
The firms that try to compress this into 12 weeks tend to either skip production hardening or skip handoff. The firms that stretch it into 14 months tend to be padding bench utilization. The right shape for most engagements is somewhere in this 4-7 month range, with explicit checkpoints.
## Boutique vs Big 4 vs in-house: how to choose
The three options have different shapes, and the right one depends on your situation.
Big 4 firms (Deloitte, EY, PwC, KPMG) make sense for projects where you need 40+ people on the ground for 18 months, you have the budget for it, and the political importance of the engagement requires a brand name on the contract. The trade-off is the engagement model: lots of juniors, heavy account management, and dependency by design.
Boutiques make sense for projects where 4-8 senior practitioners can deliver in 4-7 months, you want the team to leave your organization stronger, and you’re optimizing for capability transfer rather than headcount. The trade-off is bench depth: when the boutique’s senior people are booked, you wait. They’re rarely available at 24 hours’ notice.
In-house makes sense for organizations that have already done one or two integrations with outside help, have built up a team that can handle the next one, and have the patience to take 50% longer than a consultancy would. The trade-off is opportunity cost: your engineers are doing this instead of building what differentiates your business.
The pattern that works for most enterprise teams in 2026 is a hybrid. Outside help for the architecture and the first integration, then in-house for everything that follows. We’ve seen this work at multiple Fortune 500 companies. The first engagement teaches the team how to do the next one. By engagement three, they don’t need outside help anymore. That’s the goal.
## Frequently asked questions
### How much does data integration consulting cost?
For a focused 4-7 month engagement covering 5-10 data sources with a clean handoff, expect $300K to $1.2M from a senior boutique. Big 4 engagements at the same scope often run $1.5M to $4M because of the staffing model and overhead. In-house, the cost is mostly your team’s opportunity cost plus tooling, usually 30-50% lower than an outside engagement.
### How long does a typical data integration project take?
A focused engagement runs 4-7 months. Larger transformations can run 12-18 months, but those are usually multiple smaller projects sequenced together rather than one monolithic engagement. Anything quoted at over 12 months as a single phase deserves scrutiny.
### What’s the difference between data integration and system integration?
Data integration moves data between systems into a unified, queryable layer (typically a warehouse or lakehouse). System integration connects systems so they can act on each other (a sale in your CRM creates an invoice in your ERP). The two overlap, but the skills and tools are different. Many consulting firms blur the line. The clean ones don’t.
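The distinction can be shown in a few lines. This is a hedged toy sketch, not anyone’s real architecture; the names (`warehouse`, `invoices`, `on_sale`) are invented for illustration.

```python
# Data integration lands a copy of the record in a queryable layer;
# system integration makes one system act on another's event.
# All names below are invented for this sketch.

warehouse = []  # stands in for the unified analytical layer
invoices = []   # stands in for the ERP reacting to a CRM event

def integrate_data(crm_record):
    """Data integration: land the record for downstream querying."""
    warehouse.append({**crm_record, "source": "crm"})

def on_sale(crm_record):
    """System integration: a CRM sale triggers an ERP-side invoice."""
    invoices.append({"customer": crm_record["customer"], "amount": crm_record["amount"]})

sale = {"customer": "Acme", "amount": 500}
integrate_data(sale)  # the record becomes queryable
on_sale(sale)         # the ERP acts on the event
```

Same input record, two different obligations: one is about analysis, the other about side effects. That difference is why the skills and tools diverge.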
### Should we hire a data integration consultancy or build the team in-house?
Hire a consultancy for the first one if you don’t have senior data engineers in-house. Use the engagement to build the team. By the third project, you should be doing it yourselves, with the consultancy only on the hardest pieces. If you’re three projects in and still fully dependent on the consultancy, the engagement model failed.
The right consultancy doesn’t make itself hard to replace. It builds your team while it works, leaves the playbooks behind, and stays on retainer only for the work your team genuinely doesn’t have capacity to do. Anything else is the dependency trap dressed up as partnership.
If you’re evaluating partners and want a second set of eyes on your shortlist, we’re happy to talk through it. We’ll tell you honestly whether we’re the right fit, and if we’re not, we’ll point you at firms that are.