Enterprise Software Projects killed the Software Developer

Posted by Tim Zöller on August 30, 2021 · 12 mins read

This post was inspired by a comment on HackerNews that I am not able to find anymore. The gist of it was “While architecture is often over-engineered, code itself is often under-engineered”. If somebody recognizes the author, I will gladly attribute them. As a disclaimer, this text describes experiences I had in the last 10 years working as a consultant. There might be frameworks and methodologies to counter the issues I am about to describe, but they were either not applied or applied badly.

What is Enterprise Software Development?

The term is not standardized and often used differently in different contexts. My understanding of enterprise software is heavily influenced by Martin Fowler’s definition in “Patterns of Enterprise Application Architecture” (Addison-Wesley, 2002).

  • Developed for internal use in an organization
  • Works with complex, persistent data from multiple sources
  • Works with complex business rules
  • Usually multi-user systems (meaning concurrency)
  • Often integrated into a bigger system environment

When talking to other devs, work on enterprise applications is often belittled. Sure, these projects won’t develop exciting new tech, won’t write new streaming codecs or work with Blockchains (is this still a thing?). But while looking trivial on the surface, enterprise software development tries to solve very hard problems regarding data. Developers have to work with a lot of data from different sources. Each of those sources might define the same entity slightly differently in its own domain, making transformations between those domains quite complex. Additionally, developers are required to consider a ton of complex business rules in a system which is growing constantly.
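To make that point a bit more concrete, here is a minimal sketch in Java. All names and fields are hypothetical: two source systems describe the same customer slightly differently, and the application has to reconcile them into its own domain model.

```java
// Hypothetical example: the same "customer" as two upstream systems might define it.
// All names and fields are made up for illustration.
import java.time.LocalDate;

public class CustomerMapping {

    // CRM system: splits the name, uses an ISO country code, knows a start date.
    record CrmCustomer(String firstName, String lastName, String isoCountry, LocalDate since) {}

    // Billing system: single display name, free-text country, no start date.
    record BillingAccount(String displayName, String country, boolean active) {}

    // The application's own domain model has to reconcile both views.
    record Customer(String displayName, String isoCountry, LocalDate customerSince, boolean active) {}

    // Merging the two sources forces decisions: which name wins, which country
    // representation is authoritative, what happens when one source lacks a field.
    static Customer merge(CrmCustomer crm, BillingAccount billing) {
        String displayName = crm.firstName() + " " + crm.lastName();
        return new Customer(displayName, crm.isoCountry(), crm.since(), billing.active());
    }

    public static void main(String[] args) {
        CrmCustomer crm = new CrmCustomer("Ada", "Lovelace", "GB", LocalDate.of(2015, 3, 1));
        BillingAccount billing = new BillingAccount("A. Lovelace", "Great Britain", true);
        System.out.println(merge(crm, billing));
    }
}
```

Multiply this tiny example by dozens of entities and source systems, and the transformation logic alone becomes a substantial part of the codebase.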

The issue with software development projects

I was part of many enterprise software projects as an external developer and consultant and would often see project methodologies interfering with the goal of the software. In my opinion, projects are not a suitable environment for developing great software. The core attributes of a project are:

  • Projects are limited in timeframe and budget
  • Project teams are only composed for a limited time

The task of a project manager is to make sure that a project’s goals are completed on time and within budget. If these goals are somewhat met, the project is considered a success. The budget is often fixed, and the set of goals to complete is, too. The first fundamental flaw for me is the fixed ending, which implies that a piece of software is “done” at some point. Often this is “solved” by separating the development work into the “project phase” and the “maintenance phase”. When the project is finished, the development team is often disbanded and the software is passed on to a “maintenance team” (usually with a high share of junior developers). A benefit of this approach is that the project team is required to thoroughly document the software before it is handed over (in theory, that is; sometimes that is the part that gets cut because the budget runs out). This leads us to the second flaw in a project setup: when handing over the software from one team to the other, a lot of knowledge is lost in the process. To mitigate the boundaries of project work, many companies try to apply an agile framework in these projects, mostly SCRUM. Personally, I have never seen SCRUM or any agile approach work in a project setup. I am biased, though, because a company that truly lives agile values won’t do software development in a project setup, much less with external consultants. They will realize that their applications are in fact products which are improved continuously.

Standardization

Considering the flaws listed above, one of the main concerns in the whole development lifecycle is to avoid “surprises” in the code. If the code is picked up by a new project team or the maintenance team, it should be easily understandable, so that new features or bugfixes can be added with the lowest possible effort. The main tool for this is standardization. In bigger enterprises, there is often an “Enterprise Architecture” team which sits above all software projects and standardizes the development process. Which language is used, in which version? Which frameworks and libraries are allowed? Which tech stack is required, which testing approaches? Which architecture guidelines have to be considered? This macro-architecture is often prescribed in amazing detail. And to some degree, this makes a lot of sense: if your software developers have to work in a project setup, they will have to work on a multitude of codebases over time. The maintenance teams might be responsible for many different applications at the same time. If developers “feel at home” in all of those applications, because they recognize the structure and technology instantly, they can pick up work much faster.

The hard part is to avoid setting up too many rules. I have worked for clients who pinpointed the exact Spring Boot version to be used. If the version was upgraded by the enterprise architects, the whole company had to upgrade at once. I have worked for clients who prescribed using Hibernate with Hibernate Query Language (HQL) only, forbidding the use of native SQL, other frameworks or projection onto POJOs. On the pro side, the developers did not have to be familiar with SQL, and the way of accessing the database was standardized. Of course this led to subpar performance in most of the applications. Also, the proprietary features of the very expensive Oracle Enterprise Database underneath all of the applications could not be leveraged. But even when these standardizations led to huge issues in the software projects, the enterprise architects would not be argued with. The pesky problems of the developers are no reason to tamper with the architecture, are they?
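A small, hedged sketch of what such an HQL-only rule means in practice. The Order entity, its fields and the “orders” table here are hypothetical, not taken from any real client project; the point is only the difference between what each API allows.

```java
// A minimal sketch of the HQL-only restriction described above.
// The Order entity and the "orders" table are hypothetical.
import javax.persistence.EntityManager;
import java.util.List;

public class OrderQueries {

    // Allowed under the rule: portable HQL/JPQL, but limited to what the
    // query language and the entity mapping expose.
    static List<?> latestOrdersForCustomer(EntityManager em, long customerId) {
        return em.createQuery(
                "select o from Order o where o.customer.id = :id order by o.createdAt desc")
            .setParameter("id", customerId)
            .setMaxResults(10)
            .getResultList();
    }

    // Forbidden under the rule: native SQL that could, for example, use a window
    // function to fetch the newest order per customer in a single round trip.
    static List<?> newestOrderPerCustomer(EntityManager em) {
        return em.createNativeQuery(
                "select * from (" +
                "  select o.*, row_number() over (partition by o.customer_id order by o.created_at desc) rn" +
                "  from orders o" +
                ") ranked where rn = 1")
            .getResultList();
    }
}
```

Under the restriction, the second query either turns into several HQL queries or into logic that filters in application memory, which is exactly where the performance problems mentioned above come from.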

Layered architecture

The gold standard of standardization is the layered architecture. Taking the definition from “Patterns of Enterprise Application Architecture” by Fowler again, layering is a technique to break apart a complicated software system. In theory, separating persistence, domain logic and presentation into separate layers makes reasoning about the application easier. Each layer only needs to know about the layers below it: the domain layer knows about the persistence layer, but it does not know whether it is called by a REST service, a web application, a rich client or a terminal interface. This not only structures the application in an easily discoverable way, it also makes it harder to intertwine different concerns which should be separated. But it also chops the application’s domain logic into separate sub-domains and can have an impact on performance, as data needs to be transformed between these sub-domains while it is carried through the application.
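A minimal sketch of this layering, with hypothetical names: the controller only knows the service, the service only knows the repository, and neither of the lower layers knows who is calling it.

```java
// Minimal illustration of the layering described above; all names are hypothetical.
public class LayeredExample {

    // Persistence layer: knows how to load and store data.
    interface InvoiceRepository {
        InvoiceRecord findById(long id);
    }
    record InvoiceRecord(long id, long customerId, long amountCents) {}

    // Domain layer: business rules. It depends on the persistence layer, but has
    // no idea whether it is called from REST, a web UI, a rich client or a batch job.
    static class InvoiceService {
        private final InvoiceRepository repository;
        InvoiceService(InvoiceRepository repository) { this.repository = repository; }

        boolean requiresApproval(long invoiceId) {
            // hypothetical business rule: invoices above 10,000.00 need approval
            return repository.findById(invoiceId).amountCents() > 10_000_00;
        }
    }

    // Presentation layer: depends on the domain layer only, and translates its
    // results into whatever the outside world expects (JSON, HTML, ...).
    static class InvoiceController {
        private final InvoiceService service;
        InvoiceController(InvoiceService service) { this.service = service; }

        String approvalStatus(long invoiceId) {
            return service.requiresApproval(invoiceId) ? "NEEDS_APPROVAL" : "AUTO_APPROVED";
        }
    }
}
```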

Why does all this lead to under-engineered code?

The more standardized the environment for a software developer is, the more under-engineered their code will be. Under-engineered code, for me, is code that could achieve its goal with better performance, less verbosity or less duplication. Sticking to the examples above: if developers are prohibited from writing native SQL, they will be limited by HQL, writing slower queries or needing multiple queries from their application for one task. If the exact language, framework and libraries are prescribed in every detail, developers sometimes need to bend these tools to solve their requirements instead of using the right tool for their problem. If the layered architecture is followed zealously, 50% of your code will be the mappers between the layers. If you always have to consider the dreaded hand-over of the code after the project is done, your programming style will stay on the safe side, ignoring much better solutions because they might be a little harder to understand.
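The mapping overhead is easy to picture. A hypothetical sketch: the same customer data is copied field by field between per-layer representations, none of which adds any behaviour of its own.

```java
// Illustration of the per-layer mapping overhead mentioned above; names are hypothetical.
public class MappingOverhead {

    record CustomerEntity(long id, String firstName, String lastName, String email) {} // persistence layer
    record CustomerDto(long id, String fullName, String email) {}                      // domain layer
    record CustomerResponse(String fullName, String email) {}                          // presentation layer

    // Pure transport: no logic, just copying fields from one shape into the next.
    static CustomerDto toDto(CustomerEntity entity) {
        return new CustomerDto(entity.id(), entity.firstName() + " " + entity.lastName(), entity.email());
    }

    static CustomerResponse toResponse(CustomerDto dto) {
        return new CustomerResponse(dto.fullName(), dto.email());
    }
}
```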

The consequences for the developers

Developers who work in such a restricted environment for a longer period of time, writing under-engineered code as their daily job, will lose motivation and stop building skills. I was able to observe this in myself: when tasked with a new feature, I would often opt for the simplest possible code, neglecting more efficient ways, always afraid that the more complicated code would not be understood by the “maintenance team” that would soon pick up the project. Whole pull request discussions revolved around whether there was really a need for abstraction in this feature, whether the pattern used was too exotic, whether the SQL query was too complex, whether your code deviated from the standard too much. In the end it makes you stop caring whether there is a better way; you automatically pick the most naive one. You stop growing and become a bad developer who writes code that is the lowest common denominator for everybody. You become lazy. In such enterprise projects, “elegant” or “clever” code is a veiled insult, one that I have used myself after some time. But there is nothing wrong with writing such code. If it solves the problem just right, it is okay if another developer has to think their way through it. You can document it to make it easier for them 😉

How to improve the situation?

Treat your software as products, not projects. Then you can remove excessive detail from your macro-architecture. If the team working on a piece of software stays consistent over a longer timeframe, they can “risk” specializing in their technology more, choosing appropriate tools and architecture styles for the problem they have to solve. They can “risk” writing more complex algorithms, leveraging the lesser-known parts of their language. The knowledge will stay in the team, as the team itself will stay. Although microservices are now way behind the peak of the hype cycle, they can be a very helpful tool to achieve just this. This does not mean that it’s okay to have one team running Go applications on Heroku, one running PHP applications on their own hardware and one running Java apps on Kubernetes; the teams will have to agree on some common ground. But that common ground is not as big as you might think it needs to be.