
Lessons from a Legacy Enterprise Monolith

February 10, 2026 · 5 min read

When people hear "legacy enterprise monolith," they usually think "old code." They're right, but they're imagining the wrong thing. A system with 148 JSPs, 55 servlets, and 308 entity files isn't just a lot of code — it's a fundamentally different kind of engineering. The problems change. The solutions change. The way you think about software changes.

I spent years working in one of these systems at a federal agency. Here's what I learned.

What Scale Actually Looks Like

The repository had hundreds of modules. The dependency graph was a tangle that no single person fully understood. A clean build took the better part of an hour. The EJB (Enterprise JavaBeans) architecture linked multiple applications together, so a change in one system could break another.

This wasn't a startup codebase that grew too fast. It was a system that had been serving critical national infrastructure for years, accumulating layers of business logic, regulatory compliance, and integration points with dozens of other government systems.

The EJB architecture was the backbone. Heavy, container-dependent, hard to test — but it worked. It had been working for years. That fact matters more than most engineers realize.

Lesson 1: Incremental Over Revolutionary

The temptation with legacy systems is always the same: "let's rewrite it." I've never seen that work at this scale. The codebase isn't just code — it's encoded business knowledge. Every weird conditional, every seemingly redundant check, exists because something went wrong in production once and someone fixed it.

The Strangler Fig pattern became our primary strategy. You don't replace the old system — you grow a new system around it. One service at a time, one endpoint at a time. The old EJB session beans continued running while new Spring Boot services took over their responsibilities gradually.

This is slower than a rewrite. It's also the only approach that actually works. You maintain continuity, you can roll back, and you never have a "big bang" migration day where everything has to work perfectly.
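The routing decision at the heart of the Strangler Fig pattern can be sketched in a few lines. This is a minimal, hypothetical illustration — the class and path names are invented, and in practice the routing usually lives in a reverse proxy or gateway rather than application code — but it shows the core idea: a growing set of migrated prefixes, with everything else falling through to the monolith.

```java
import java.util.Set;

// Illustrative strangler-fig router (names are hypothetical): requests whose
// path prefix has been migrated go to the new service; everything else still
// hits the legacy EJB application. The prefix set grows one endpoint at a time.
public class StranglerRouter {

    public enum Backend { LEGACY_EJB, NEW_SERVICE }

    private final Set<String> migratedPrefixes;

    public StranglerRouter(Set<String> migratedPrefixes) {
        this.migratedPrefixes = migratedPrefixes;
    }

    /** Decide which backend should handle the given request path. */
    public Backend route(String path) {
        for (String prefix : migratedPrefixes) {
            if (path.startsWith(prefix)) {
                // Already strangled: handled by the new Spring Boot service.
                return Backend.NEW_SERVICE;
            }
        }
        // Not yet migrated: fall through to the monolith.
        return Backend.LEGACY_EJB;
    }

    public static void main(String[] args) {
        // Pretend /reports has been migrated and /admin has not.
        StranglerRouter router = new StranglerRouter(Set.of("/reports"));
        System.out.println(router.route("/reports/incidents")); // NEW_SERVICE
        System.out.println(router.route("/admin/users"));       // LEGACY_EJB
    }
}
```

Rolling back a migrated endpoint is just removing its prefix from the set, which is exactly the reversibility the pattern buys you.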

Lesson 2: Build Systems Matter More Than Code

When your build takes 45 minutes, every failed build costs nearly an hour before the developer even sees the error. At that scale, optimizing the build pipeline has more impact than almost any feature work.

We focused on Maven build optimization: incremental compilation, parallel module execution, strategic use of build caching. Getting the CI pipeline from 45 minutes to 12 minutes didn't ship a single new feature, but it made every developer on the team meaningfully more productive every single day.

This is the kind of work that doesn't show up on a sprint demo. Nobody celebrates a faster build. But the compound effect on team velocity is enormous.

Lesson 3: Security Is Architecture, Not a Feature

Implementing OIDC (OpenID Connect) and OAuth 2.0 across 15+ internal applications sounds like a ticket. It's not. It's an architectural migration.

The old system used session-based authentication with its own user management. Moving to a centralized identity provider meant touching every application's authentication flow, session handling, and authorization logic. It meant coordinating across teams who owned different applications, each with their own release cycles and priorities.

We designed it as an IAM layer — a shared infrastructure component that every application would integrate with. This meant defining clear contracts, building migration guides, and supporting each team through their transition. The technical work was straightforward. The coordination was the hard part.
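To make "clear contracts" concrete, here is a sketch of the kind of interface such an IAM layer might expose. Every name here is hypothetical — a real integration would use an OIDC library and the identity provider's token endpoints — but the shape is the point: applications stop validating sessions themselves and instead ask one shared component whether a token is valid and who it belongs to.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the contract an IAM layer exposes to applications.
// TokenIntrospector and Identity are illustrative names, not real agency APIs.
public class IamContractSketch {

    /** The identity established for a request, as reported by the IAM layer. */
    public record Identity(String subject, String issuer) {}

    /** The contract every application codes against. */
    public interface TokenIntrospector {
        Optional<Identity> introspect(String bearerToken);
    }

    /** In-memory stand-in for the central identity provider, for illustration only. */
    public static class FakeIntrospector implements TokenIntrospector {
        private final Map<String, Identity> validTokens;

        public FakeIntrospector(Map<String, Identity> validTokens) {
            this.validTokens = validTokens;
        }

        @Override
        public Optional<Identity> introspect(String bearerToken) {
            // A real implementation would call the identity provider here.
            return Optional.ofNullable(validTokens.get(bearerToken));
        }
    }

    public static void main(String[] args) {
        TokenIntrospector idp = new FakeIntrospector(
                Map.of("token-123", new Identity("alice", "https://sso.example.gov")));
        System.out.println(idp.introspect("token-123").isPresent());     // true
        System.out.println(idp.introspect("expired-token").isPresent()); // false
    }
}
```

The value of a narrow interface like this is that each team can migrate on its own release cycle: swap the implementation behind the contract without touching the applications that call it.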

The result was a single sign-on experience across all internal applications and a massive reduction in security surface area. But it took months of careful, incremental work.

Lesson 4: Legacy Is Not a Dirty Word

There's a tendency in our industry to treat legacy code with contempt. "Technical debt." "Spaghetti code." "It should have been written differently."

Maybe. But these systems run critical infrastructure. The EJB monolith I worked on helped manage national security incident reporting. It processed compliance data. It was part of the system that keeps organizations secure and accountable.

The engineers who built it made reasonable decisions with the tools and constraints they had. EJB was the standard. The application server architecture was best practice. The choices that look dated now were correct when they were made.

The real skill isn't judging the past — it's modernizing without breaking what works. Migrating to Spring Boot while maintaining 99.99% uptime. Upgrading security protocols while keeping every application functional. Improving build times without changing the development workflow so drastically that the team can't adapt.

Respecting what exists while improving it incrementally — that's the work. It's unglamorous. It's never going to trend on Hacker News. But it matters.

The Engineers Who Do This Are Rare

Most developers want to build greenfield. They want to pick their stack, design their architecture, start fresh. That's understandable — it's more fun.

But the world runs on legacy systems. Banks, governments, healthcare, transportation — the systems that matter most are often the oldest. They need engineers who can operate at scale, who can modernize without breaking things, who can respect what exists while building what's next.

If you find yourself in one of these codebases, don't rush to escape it. The lessons you learn operating at that scale, managing that complexity, and navigating that kind of modernization are some of the most valuable in the industry.

The work is hard. The work is unglamorous. The work matters.