10 minute read
Floris Schoenmakers

Replacing legacy software series, part 3: UX and redesign of business-critical systems

A government department was processing applications through a system that required 20 clicks per submission. Users couldn't navigate back to information they'd already entered. The interface had become so complex that employees needed significantly longer to complete each application. The system worked, technically. But it was slowly suffocating the department's ability to function.

This is the paradox of enterprise UX. When Netflix releases a clunky interface, users cancel subscriptions. When your internal system is terrible, employees just adapt. They develop workarounds, memorize the quirks, and build unofficial guides to help new colleagues navigate the system. Because they have no choice. But adaptation has a cost. And understanding that cost is what separates legacy replacement projects that succeed from those that fail.


Why internal tools inherit terrible UX

Think of a manufacturing business that starts with one building. Everything works perfectly at first, production flows smoothly, and the layout makes sense. Then the company grows. They need more capacity, so they build an addition against the original building. Then another. Then a specialized area for a new product line. Over years, what started as a logical facility becomes a maze where materials travel inefficient routes and workers waste time navigating between disconnected areas.

Legacy software systems evolve exactly the same way. As Wes Botman, CEO at Eli5, explains: "Eventually you have this thing that doesn't work optimal."

The critical difference between consumer software and enterprise tools shapes this evolution. Multi-tenant SaaS products face immediate market pressure. If a project management tool has bad UX, teams switch to competitors. Customer churn creates direct financial pain that forces investment in user experience. But single-tenant internal systems exist in a different reality. Employees can't switch to competitors. They must use what the organization provides.

This doesn't mean internal tools should get less UX attention. The fundamental issue is that the business case calculates differently. Without customer churn as a forcing function, executives need different metrics to justify UX investment.

The foundation decisions made years ago compound this challenge. Early architectural choices about database structure, user permissions, or workflow logic create constraints that make adding features increasingly difficult. What started as a clean system accumulates technical debt in its interface layer. Each new feature gets bolted on in whatever way the existing structure allows, not in whatever way makes most sense for users.

Calculating the business case: clunky versus bad

Not all poor UX justifies major investment. There's a spectrum from mildly annoying to catastrophically inefficient. Understanding where your system falls on that spectrum determines whether UX overhaul makes financial sense.

"There's a difference between something just being a little bit clunky and something just being bad," Wes explains. Minor inefficiencies that save one minute per task might not warrant redesign. An executive reasonably thinks, "Yes, it's clunky, but we have bigger problems." But severe cases create clear business opportunities.

The government project illustrates the severe end of the spectrum. When Eli5 evaluated the application processing system, the numbers revealed the true cost. The original workflow required 20 clicks to complete tasks that could be accomplished in 4-5 clicks with proper design. But the calculation goes deeper than click reduction.

Start with time per task. Assume each unnecessary click adds 3-5 seconds when accounting for page loads, cognitive switching, and navigation. Fifteen unnecessary clicks per application adds roughly one minute. If an employee processes 30 applications daily, that's 30 minutes of pure waste. Multiply by 100 employees and you're losing 50 hours per day to interface inefficiency.

Annual cost becomes stark: 50 hours per day times 250 working days equals 12,500 hours per year. At an average loaded cost of €50 per hour for government workers, that's €625,000 annually spent navigating a bad interface. And this calculation only captures wasted time, not the applications abandoned because the system was too difficult to complete.

The business case strengthens when including incomplete work. In the government project, employees regularly gave up on applications when they couldn't navigate back to previous sections or when the interface failed to save their progress. Each abandoned application represents both wasted effort and delayed service delivery. For a department measured on processing speed and completion rates, this creates compounding organizational cost.

At the other end, if your system is merely clunky, the math changes. Suppose the inefficiency sits in a task each employee performs only a few times a month, and a redesign would save one minute per task. That same 100-person team saves roughly 50 hours yearly. At €50 per hour, that's €2,500 in annual savings. Probably not enough to justify a major redesign project. The executive decision to leave it alone makes sense.

The calculation framework applies across industries. Time saved per task, multiplied by frequency, multiplied by number of users, multiplied by loaded labor cost. When that number reaches hundreds of thousands annually, UX investment becomes strategically obvious. When it's in the thousands, other priorities likely deserve attention.
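The framework above is simple enough to express as a small helper. The sketch below reuses the article's own illustrative numbers (15 unnecessary clicks at roughly 4 seconds each, 30 applications per day, 100 employees, €50 loaded hourly cost); the function name and parameters are hypothetical, not part of any real tooling.

```python
# Hedged sketch: the article's cost-of-clunkiness formula as a helper.
# Time wasted per task x frequency x users x loaded labor cost.

def annual_ux_waste(seconds_per_task: float,
                    tasks_per_day: float,
                    employees: int,
                    hourly_cost: float,
                    working_days: int = 250) -> float:
    """Annual cost (in currency units) of interface inefficiency."""
    hours_per_year = (seconds_per_task * tasks_per_day
                      * employees * working_days) / 3600
    return hours_per_year * hourly_cost

# Severe case: 15 unnecessary clicks at ~4 seconds each, 30 tasks a day.
severe = annual_ux_waste(seconds_per_task=15 * 4, tasks_per_day=30,
                         employees=100, hourly_cost=50)
print(f"Severe case: ~{severe:,.0f} EUR per year")   # ~625,000 EUR

# Clunky case: one minute lost on a task done only ~30 times a year.
clunky = annual_ux_waste(seconds_per_task=60, tasks_per_day=30 / 250,
                         employees=100, hourly_cost=50)
print(f"Clunky case: ~{clunky:,.0f} EUR per year")   # ~2,500 EUR
```

Plugging in your own task time, frequency, and headcount makes the "hundreds of thousands versus thousands" distinction immediately visible.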

The resistance paradox: it's not who you think

The common assumption is that end users resist new interfaces because they've mastered the old system. After years of learning workarounds and memorizing quirks, they don't want to start over. This assumption is mostly wrong.

"From the users themselves, I always get good input on how to make the product better," Wes observes. "Not saying that you need to listen to them 100% of the time, but they often give you a very good direction of what you need to do to make them happy." Resistance typically originates higher in the organization.

The real problem appears in rooms with 18 stakeholders, each representing different departments with different priorities. Everyone wants input on the new system. Marketing needs certain reports. Finance requires specific approvals. Operations demands particular workflows. IT insists on security protocols. As Wes describes it: "If you end up in those situations then you got to be very strong to avoid this mix of very average opinions. You just get a product that's just mediocre."

This creates what might be called the boardroom averaging effect. When you aggregate all opinions equally, you don't get a great product. You get a compromise that satisfies no one fully. The resulting interface tries to be everything to everyone and excels at nothing.

The workforce stability question does matter, but differently than expected. Organizations with 100 stable employees who've spent years with the legacy system need careful transition planning. These users have developed muscle memory around the old interface. Abrupt changes to fundamental workflows can temporarily decrease productivity as experienced workers relearn basic tasks. But even here, the resistance comes from comfort with the known, not from users wanting to keep a bad system.

High turnover environments flip the equation entirely. When people come and go frequently, onboarding speed becomes critical. A complex, poorly designed interface might take weeks to master. Each new employee represents training cost, reduced productivity during learning, and increased error rates during the adjustment period. In these organizations, UX investment pays back through faster onboarding and reduced training overhead.
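The onboarding argument can be made concrete the same way. This sketch uses entirely illustrative assumptions (40 hires per year, a three-week ramp-up at half productivity, €2,000 loaded weekly cost); none of these figures come from the article.

```python
# Hedged sketch: annual onboarding cost of a hard-to-learn interface.
# All figures below are illustrative assumptions, not benchmarks.

def annual_onboarding_cost(new_hires_per_year: int,
                           ramp_weeks: float,
                           productivity_loss: float,
                           weekly_loaded_cost: float) -> float:
    """Cost of reduced productivity while each new hire learns the system."""
    return (new_hires_per_year * ramp_weeks
            * productivity_loss * weekly_loaded_cost)

# 40 hires a year, 3 weeks at ~50% productivity, EUR 2,000 per week.
before = annual_onboarding_cost(40, 3.0, 0.5, 2000)
# A cleaner interface that halves the ramp-up time:
after = annual_onboarding_cost(40, 1.5, 0.5, 2000)
print(before - after)  # 60000.0 saved per year under these assumptions
```

In high-turnover organizations, this saving recurs every year, which is why the payback period for a redesign shortens as turnover rises.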

The solution balances structure and input. Involve users early so they feel ownership over the changes. As Wes notes, "talk to some people, those hundred employees, and let them weigh in on how things need to work. Then they also feel they own it a little bit." But involvement doesn't mean design by committee. The product team must interpret user input through the lens of good design principles, not simply implement every suggestion.

For non-tech-savvy user bases, the approach requires additional caution. Government workers, factory floor supervisors, or retail managers often aren't early adopters of technology. "In that case I wouldn't try anything on the cutting edge of how things can work," Wes explains. "Be a little bit more conservative." This doesn't mean accepting poor UX, but it does mean avoiding experimental interaction patterns that assume high digital literacy.

Build, buy, or something in between

If you've been running custom-built internal software for a decade, the market has probably changed dramatically around you. SaaS solutions that didn't exist when you built your original system might now cover your use case. This makes the build versus buy decision more complex than simply comparing features and price.

The first step is research. Many organizations assume their needs are too specific for off-the-shelf solutions, but that assumption was probably correct ten years ago and might be wrong today. Specialized vertical SaaS has proliferated across industries. Regulatory technology for banks, inventory management for logistics, case management for law firms. The landscape has evolved.

The fit percentage determines the decision path. If an off-the-shelf solution handles 90% of your requirements, you face a genuine trade-off. Is the missing 10% worth the cost of custom development? "If there's something out there that does 90% of what it needs to do, then you got a trade-off to make," Wes explains. "Is it worth the 10% to do something with a company like Eli5 or is it fine to just go with that off-the-shelf SaaS tooling?"

At 60% fit, the calculation usually tips toward custom. As Wes notes about typical clients, "they often have things more like 60% of what we need to do is done with this SaaS tool. But that doesn't cut it for us." The missing 40% often includes the core differentiating workflows that make your organization unique. Compromising on these usually means accepting reduced capability in the areas that matter most.
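The 90% versus 60% trade-off reduces to comparing what the gap costs you over time against what closing it would cost. The toy sketch below makes that explicit; the cost figures and the five-year horizon are illustrative assumptions, not numbers from the article.

```python
# Hedged toy sketch of the fit-percentage trade-off: live with the gap,
# or pay for custom development to close it. All figures are assumptions.

def build_or_buy(annual_cost_of_gap: float,
                 custom_build_cost: float,
                 horizon_years: int = 5) -> str:
    """Compare the cumulative cost of the missing fit against custom work."""
    gap_total = annual_cost_of_gap * horizon_years
    return "build custom" if gap_total > custom_build_cost else "buy off-the-shelf"

# 90% fit: the missing 10% is peripheral and cheap to live without.
print(build_or_buy(annual_cost_of_gap=20_000, custom_build_cost=400_000))
# -> buy off-the-shelf

# 60% fit: the missing 40% includes core differentiating workflows.
print(build_or_buy(annual_cost_of_gap=150_000, custom_build_cost=400_000))
# -> build custom
```

The hard part in practice is estimating the annual cost of the gap honestly, especially when the missing workflows are the ones that differentiate the organization.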

The sweet spot sits between fully custom and purely off-the-shelf. Building with established components in your specific vertical provides the flexibility of custom development with the speed of pre-built solutions. "You are not bound to the boundaries of SaaS and off-the-shelf but you also don't want to end up in a fully custom software project," Wes explains.

For example, if you're rebuilding regulatory compliance software for a bank, working with a firm that already has experience building similar systems for financial services means you're not starting from zero. These battle-tested components accelerate development while allowing customization for your specific compliance requirements.

But a warning about intellectual property. Many development firms have extensive component libraries but still charge as if building everything from scratch. "There's so many companies out there that have all this IP already, but they just want to charge the full amount as if they are building it from scratch," Wes observes. This creates a frustrating dynamic where the cost doesn't reflect the reduced development time.

The evaluation questions become specific. Does this agency have working components in our industry vertical? Can they demonstrate previous projects that leveraged these building blocks? What's their pricing philosophy around reused versus new code? An honest firm should offer faster delivery and potentially lower cost when leveraging existing IP, not just pocket the efficiency gains.

AI and the amplification of UX importance

The conventional wisdom suggests AI might make UX less important. If agents handle complex workflows and LLMs process natural language, perhaps interfaces become simpler by default. The reality appears to be the opposite.

Current AI tools have already transformed how product teams work. Wireframing that previously took hours can now be accomplished with text prompts. Tools like Lovable and Figma Make generate UI concepts from descriptions, dramatically accelerating the ideation phase. "Wireframing used to take quite some time. They're basically now done with text prompts. That's already a massive gain," Wes notes.

But this efficiency in creation doesn't reduce the need for thoughtful design. As Wes explains, "you still need to have a team or person that knows how to take all that input and turn it into a product that's actually very good." The 18-person boardroom problem doesn't disappear with AI. If anything, it becomes easier to generate mediocre designs by aggregating everyone's ideas through an LLM. The result is still average.

The human-in-the-loop pattern emerges as the sustainable model across enterprise applications. Consider regulatory compliance at banks. No institution would allow AI to make compliance decisions autonomously. The liability is too high, the edge cases too complex, and the regulatory requirements too specific. But AI can accelerate the process enormously when humans validate each decision point.
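Structurally, the pattern is simple: the model proposes, a human validates each decision point, and anything the reviewer doesn't accept gets escalated. The sketch below is a minimal illustration under that assumption; `Proposal`, `classify`, and the review callback are hypothetical names, not any real compliance API.

```python
# Hedged sketch of the human-in-the-loop pattern: AI proposes,
# a human validates every consequential decision. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    application_id: str
    decision: str      # e.g. "approve" / "reject"
    confidence: float  # model confidence, surfaced to the reviewer
    rationale: str     # synthesized insight shown instead of raw data

def process(applications, classify, human_review: Callable[[Proposal], bool]):
    results = []
    for app in applications:
        proposal = classify(app)       # AI does the heavy data processing
        if human_review(proposal):     # human validates the decision point
            results.append((app["id"], proposal.decision))
        else:
            results.append((app["id"], "escalated"))  # reviewer overrides
    return results

# Toy stand-ins for demonstration:
def classify(app):
    return Proposal(app["id"], "approve", 0.92, "meets all listed criteria")

def review(p: Proposal) -> bool:
    return p.confidence >= 0.9  # reviewer only waves through high confidence

print(process([{"id": "A-1"}], classify, review))  # [('A-1', 'approve')]
```

Note that the interface work lives in the `Proposal`: confidence and rationale are exactly the concentrated, synthesized information the next paragraphs argue must be presented with care.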

This pattern has direct UX implications. As more business logic and data processing moves into AI systems, the information presented to human decision-makers becomes more concentrated and critical. Users see less raw data and more synthesized insights. The interface must present exactly the right information at the right moment because users are making consequential decisions based on AI-processed outputs.

"As more and more information is processed by language models and agents, the information that you present in a screen or dashboard needs to be very on point," Wes argues. "The aesthetics and the UX in the near future will only be more and more important." When AI handles the bulk of data processing, the tip of the iceberg that users actually see must be perfectly crafted.

This manifests in enterprise systems as increasingly sophisticated dashboards that surface insights rather than data. The UX challenge shifts from organizing information to presenting intelligence. Users need to understand AI confidence levels, data sources, and decision logic without being overwhelmed by complexity. Poor UX in these AI-augmented systems doesn't just waste time, it leads to misunderstanding of AI outputs and potentially catastrophic decisions based on misinterpreted information.

What to look for when replacing legacy UX

For organizations sitting on 15-year-old systems with terrible interfaces, the path forward requires specific evaluation criteria. The partner you choose needs capabilities that many generalist development firms simply don't have.

First, verify deep understanding of legacy technology. This is rarer than it should be. As Wes notes, "most product studios don't really have that. They're more focused on building things entirely from scratch. But turning a legacy system into something modern requires a bit more than writing proper code." Legacy expertise isn't glamorous work. Developers generally don't enjoy it. This makes experienced practitioners scarce.

The questions to ask are specific. Has your team migrated systems from our technology stack before? What's your approach to understanding business logic embedded in old code? How do you handle data migration when the new UX requires restructured information architecture? A partner who blanches at these questions probably isn't equipped for the challenge.

Second, look for vertical-specific IP and building blocks. A firm that's built compliance tools for three banks will move faster on your banking compliance project than a generalist team, no matter how talented. "I would definitely look at what's the IP that you guys already have in this vertical that we can leverage," Wes suggests. The accelerated timeline and reduced risk both flow from battle-tested components.

The deeper question is about their business model. Do they view reusable components as opportunities to deliver faster and potentially charge less, or do they hide the efficiency and charge full rates? The philosophical difference matters. You want a partner who's transparent about what's being built new versus leveraged from existing work.

Third, evaluate the lead designer's approach to user input. You need someone with strong opinions loosely held. As Wes describes it: "You want someone that listens to all the input that's there, but you still need to have someone that has quite a strong opinion on how things need to be done, but kind of loosely held. You need the person to be open to all the inputs and not only go his own way."

The opposite extremes both fail. A designer who ignores user input and imposes their vision creates elegant systems that don't match how the organization actually works. A designer who simply aggregates all stakeholder requests creates the mediocre boardroom-averaged product. The right designer synthesizes input through the filter of good design principles, advocating for users even when that conflicts with stakeholder preferences.

For risk-averse organizations, start with non-critical parts of the system. Redesign a reporting module or an administrative function before tackling the core workflow. This validates the approach, builds user confidence, and reveals unexpected challenges before high-stakes migration. "Try to migrate non-critical parts piece by piece and don't touch the entire critical system until you know for sure that all these small parts that you did migrate work separately," Wes advises.
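This piece-by-piece approach is, in effect, a per-module routing decision: each non-critical module flips to the new system independently while the critical core stays on the legacy path. A minimal sketch, with hypothetical module names and stand-in handlers:

```python
# Hedged sketch of piece-by-piece migration via per-module routing flags.
# Module names and the two handlers are hypothetical stand-ins.

MIGRATED = {
    "reporting": True,       # redesigned and validated
    "admin": True,           # redesigned and validated
    "core_workflow": False,  # untouched until the small parts prove out
}

def new_system(module: str, request) -> str:
    return f"new:{module}"

def legacy_system(module: str, request) -> str:
    return f"legacy:{module}"

def handle(module: str, request) -> str:
    # Unknown modules default to the legacy path: safe by default.
    if MIGRATED.get(module, False):
        return new_system(module, request)
    return legacy_system(module, request)

print(handle("reporting", {}))      # new:reporting
print(handle("core_workflow", {}))  # legacy:core_workflow
```

Each flag flip is independently reversible, which is what keeps the critical system out of harm's way until every migrated piece has worked separately.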

The investment calculation comes down to severity

The ROI case for UX redesign exists, but it calculates differently than consumer software investment. Without customer churn as the driving metric, you're measuring employee efficiency, onboarding speed, error reduction, and work completion rates. These metrics are real and quantifiable. They just require different analysis.

Severe cases, where employees waste hours navigating terrible interfaces or regularly abandon work due to system limitations, create obvious business cases. Calculate the time waste, multiply by employees, multiply by loaded labor cost. When that number reaches hundreds of thousands annually, UX investment becomes strategically necessary, not aesthetically nice to have.

Clunky but manageable systems present harder decisions. A few minutes saved per task might not justify major overhaul, even if it would make employees happier. This is where the workforce stability question and the AI future both factor in. High turnover or increasing AI integration might tip moderately poor UX into the "needs fixing" category.

The manufacturing facility metaphor remains useful. Just as factories eventually need comprehensive redesign to optimize material flow and production efficiency, software systems periodically need foundational rethinking. Incremental additions and patches eventually create something that can't be optimized further without structural change. The question isn't whether to redesign but when the accumulated inefficiency justifies the investment.

As AI systems handle more backend complexity, the front-end presentation becomes more critical, not less. What users see must be precisely crafted because they're making consequential decisions based on processed intelligence rather than raw data. This trend suggests that UX investment in enterprise systems will become more important over the coming years, not something that can be deferred indefinitely.


Floris Schoenmakers
Chief Venture and Growth Officer