In the real world, you can't always keep software simple. After all, application features that provide business value tend to require complex code. But applications carry a lot of accidental complexity too, in software design, code practices, deployment and other areas.
Essential complexity is worth having. Without complicated code, the software would lack necessary capabilities or be unreliable, or its quality would fall short.
Accidental complexity is not beneficial. It often arises when coding work happens as an afterthought, or in a compromise to keep legacy systems running. Software development teams can also introduce complexity when they adopt innovative new technologies. And accidental complexity can occur as IT shifts to new tools, change management and deployment models.
"We should boil things down to just what is truly needed to solve the user or business problem," said Bill Rials, associate director and professor at Tulane University's School of Professional Advancement. That's the extent of essential complexity. Any additional complexity in the software serves no value to the product or team.
Accidental complexity is often not visible to the product's user, Rials said, but it's still not acceptable. The underlying code, infrastructure and architecture can hinder a software product's success. Unintentional complexity can:
- hold up development work on new functionalities that must integrate with existing designs, causing developers to fall behind on feature releases;
- negatively affect productivity, as new software developers require extensive onboarding time; and
- make the codebase difficult to understand.
Developers in these situations struggle to roll out updates, whether to components of the overall design or to modular features.
Developers and testers add accidental complexity to software builds when they misunderstand the user or the project scope. They might implement misguided coding or testing practices. Accidental complexity also stems from sources outside the code, such as dependencies and toolchain issues. And no architecture is safe: Both microservices and legacy designs are prone to issues.
In software design, accidental complexity often arises as a mismatch between UI design and how the user actually behaves. This mismatch contributed directly to the worst nuclear accident in U.S. history, at the Three Mile Island plant in Pennsylvania. Its complex control panel design was to blame for operators not realizing a valve was stuck open, argues Jason Buhle, professor in the online Master of Science in Applied Psychology program at the University of Southern California.
The panel featured hundreds of switches and indicator lights, essential complexity to run the plant. But the interface design was confusing -- it did not clearly distinguish between working valves and open valves. As a result of this accidental complexity, a critical valve stayed improperly open for many hours during the accident, releasing radioactive material.
Accidental complexity in software design isn't always so severe, but even in mild scenarios it annoys users and wastes time, Buhle said. Website sign-up flows, for example, ask for too much information, turning away frustrated prospects before they finish the form. This bad UX usually appears because the company keeps thinking of more questions it would love to have answered, even though the answers don't affect how the user will experience the site.
Another common example occurs in software menu systems. Rather than create menus that match how users think, designers derive ideas from internal business units and create menus accordingly, Buhle said. You don't want your menu to match your org chart. Confused users will fail to find critical functions when they need them.
To avoid this kind of UI complexity, test products with real users. User acceptance testing is not always as easy as it sounds. Professional UX researchers should analyze how users interact with prototypes and existing products. Designers, engineers and especially startup founders do a terrible job in this role: They are invested in the products and lack training on how to conduct unbiased tests, Buhle said.
Stick with what users need, rather than designing for perfect software.
Design from the perspective of business needs, not nice-to-haves. Saad Malik, CTO of Spectro Cloud, a Kubernetes infrastructure tools provider, recalls an engineering brainstorming session for a UI framework in a past role. The question arose whether to paginate the data. The engineers spent hours debating how pagination would behave if a customer wanted to load all the data at once or if the internet connection dropped.
"It's not worth thinking about these questions unless there is a clear business need for it," Malik said.
Engineers waste their effort when they try to design the "perfect" system or inadvertently create complex architecture that is difficult to refactor.
Poor coding practices
Bad coding approaches can lead to accidental complexity. Common sources are:
- development team structures
- super developers
- exciting and new techniques
- Agile programs gone awry
Some management teams pit development teams against each other to gamify the development process. "Two creative units could be architecting the solution in two completely different ways," Rials said. They could even be programming in different languages.
Super, 10X, rockstar -- these are names for developers who produce a lot of code or work in many development areas. Such developers sometimes build a reputation -- and an ego. "These super developers usually make their code unnecessarily complex just because they can," Rials said. Other team members struggle to comprehend and edit the complex code. These colleagues might even compliment the super developer about the code's complexity, which encourages more of it. Businesses should empower junior developers to take a fresh look at code.
Software developers like to keep up with the latest languages and tool sets. But what's good for their resumes isn't always good for production code. Often there is little architectural, engineering or design justification to use newer options over tried-and-true methods.
Agile is a framework to guide software development with focused, rapid changes. Although Agile has many benefits, the methodology can also introduce new accidental complexity at each iteration cycle. Speedy time-to-market can cut down on QA efforts, compared with Waterfall development. To keep up, Agile teams might push underlying code problems to future releases, adding complexity over time. Developers should build code clean-up tasks into the software release lifecycle when following Agile approaches.
Rigid test requirements
Overly simplistic and rigid testing requirements introduce complexity in the software development process.
Overly rigid testing policies around code changes can precipitate development problems, said Dan Belcher, a leader at test automation tools provider Mabl. For example, some teams require all regression tests to pass before a code change can proceed to the next stage in development. The practice can hinder developers who could verify their changes and address related issues another way. Instead, let developers identify the critical tests for a given code update and leave full regression testing to later stages.
"This more nuanced approach does require teams to spend some time upfront defining which tests exercise which features. But that relatively small investment pays off many times over in terms of reducing unnecessary complexity for developers," Belcher said.
Dependencies often impede efforts to improve software quality.
Third-party code dependencies are a substantial source of accidental complexity, said James Burns, developer advocate at Lightstep, an observability tool vendor. Third-party code, such as libraries, accelerates software development; developers only write code from scratch where it adds business value. However, most libraries have dependencies of their own, which often have still more dependencies, Burns said.
If an update could break core functionality of the application, developers often hold off, even for years. With regular updates, the difference between old and new versions and their dependencies is normally small. But wait years, and the risk of problems grows.
Burns recommends that teams plan and enforce a regular update schedule for dependencies. By requiring dependencies to be at the currently supported version, the team can find and fix issues with newer versions quickly, or change to alternatives with fewer issues or safer updates. For more peace of mind, do partial or canary deploys on updates, affecting a small subset of customers before the full rollout.
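One way to enforce the update schedule Burns recommends is an automated staleness check in CI: flag any pinned dependency that has fallen too far behind its latest release. The sketch below uses hypothetical package names and hard-coded version data; a real check would query the package registry.

```python
def parse_version(version):
    """Split a 'major.minor.patch' string into a tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def stale_dependencies(pinned, latest, max_major_lag=1):
    """Return dependencies whose pinned major version lags too far behind."""
    stale = []
    for name, current in pinned.items():
        newest = latest.get(name, current)
        lag = parse_version(newest)[0] - parse_version(current)[0]
        if lag > max_major_lag:
            stale.append(name)
    return sorted(stale)

# Hypothetical project state: pinned versions vs. latest releases.
pinned = {"libfoo": "2.4.1", "libbar": "1.0.0", "libbaz": "5.2.0"}
latest = {"libfoo": "2.9.0", "libbar": "4.1.0", "libbaz": "6.0.3"}
print(stale_dependencies(pinned, latest))  # libbar is three majors behind
```

Failing the build on a nonempty result keeps version drift small, so each update is the low-risk kind Burns describes rather than a multi-year jump.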
Complexity isn't always in the software product. Tom Petrocelli, research fellow at Amalgam Insights, sees teams adopting ever larger toolchains as they attempt to automate CI/CD practices. CI/CD is supposed to be an automated workflow that moves code from creation to deployment in step with Agile development velocity. But end-to-end CI/CD platforms are fairly new, and many teams construct a pipeline of disparate tools.
"This is problematic since there are a lot of steps in a typical CI/CD pipeline," Petrocelli said.
The CI/CD pipelines end up complex and, hence, error-prone. "I've seen CI/CD pipelines that use 20-plus products, all of which are integrated [by hand]," he said. Such a mess is often the norm, not the exception. All these tools handle different aspects of the job.
Many companies end up hiring more people to look after the very CI/CD systems that were meant to remove manual complexity from development and deployment processes.
Microservices gained popularity as a software architecture to eliminate complex monolithic applications. The approach has advantages for code update velocity and application resiliency. But microservices gave rise to a new source of accidental complexity.
"In practice, mission-critical enterprise applications have ended up with a lot of parts that need to be assembled into a complicated platform," Petrocelli said.
Monolithic applications rely on centralized services for networking, storage and logging. While each microservice is relatively simple individually, a microservices architecture requires distributed components that must be deployed with every application, and this requirement creates complexity in the connections among all the different components.
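A back-of-the-envelope calculation, not from the article, illustrates why those connections add up: with n independently deployed services, the number of potential pairwise interactions, each a network path to secure, monitor and debug, grows quadratically.

```python
def pairwise_interactions(n_services):
    """Number of distinct service-to-service pairs: n * (n - 1) / 2."""
    return n_services * (n_services - 1) // 2

# A monolith with 3 internal modules has few seams; 50 microservices
# have over a thousand potential interaction paths.
for n in (3, 10, 50):
    print(n, "services ->", pairwise_interactions(n), "potential paths")
```

Even if only a fraction of those paths are actually used, each one is a place where accidental complexity can hide.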
Enterprises can address distributed application complexity with container-based deployment and a Kubernetes control plane to orchestrate it. Supported, enterprise-geared distributions, such as Red Hat OpenShift from IBM and VMware Tanzu, as well as engineered platforms like Cloud Foundry, hide some of this complexity from developers and ops teams.
While new architectures create complexity, old ones are no panacea. Rials sees many enterprises grapple with various forms of accidental complexity that IT pros introduce in efforts to keep legacy systems afloat.
These problems often arise when a business attempts to fit a new technology or licensing model in with a preexisting product. The technology wasn't designed to operate this way, so layers of unnecessary, accidental complexity build up to make everything work together.
Legacy applications undergo many patches and break fixes over their long history. Maybe there's an employee or team with accumulated institutional knowledge to keep the system working. The scenario means there's a lot of accidental complexity that only a limited number of individuals understand, Rials said. What happens when those people leave?
In a common example, a business's primary back-end database is over a decade old, and its design and specifications dictate every other piece of technology -- or technology upgrade -- the organization uses. These databases usually start out small and grow over time. Eventually, the central database becomes too important and too risky to change, in the company's assessment. The IT team struggles to add newer technologies because they're incompatible with the legacy data storage.
How do you unravel this much accidental complexity? "Sometimes, it is simply more cost-effective to simply decommission legacy solutions and re-architect based on current technology," Rials said.