Do you call your agile development process “fragile,” “hybrid waterfall,” or “fake agile”? Is your agile backlog more like a request queue or a task board? More specifically, are the agile development teams exhibiting any of these 15 signs that you’re doing agile wrong?
Maybe your agile development process isn’t that bad, and teams are sprinting, releasing, and satisfying customers. Perhaps teams have matured agile methodologies, formalized release management, established agile estimating disciplines, and developed story writing standards. Hopefully, they’ve partnered with operations teams, and their agile tools integrate with version control, CI/CD (continuous integration/continuous delivery) pipelines, and observability platforms.
Chances are, the teams in your organization fall somewhere between these two extremes. Although many agile organizations have an ongoing process to mature and improve agile practices, at times the development process must change. Some organizations utilize agile KPIs (key performance indicators) and devops metrics to acknowledge progress and signal when changes are required. But some organizations may not have formal metrics in place and rely on people and processes to indicate if and where adjustments are needed.
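For teams without formal dashboards, even a lightweight script can turn sprint history into a signal. The sketch below is a minimal, hypothetical example (the velocity numbers are illustrative, not from any real tool) that flags a declining velocity trend as a prompt to discuss process changes.

```python
# Hypothetical sketch: flag when a team's velocity trends downward over
# recent sprints. Velocities are completed story points per sprint.

def velocity_slope(velocities):
    """Least-squares slope of velocity over sprint index; negative means declining."""
    n = len(velocities)
    mean_x = (n - 1) / 2
    mean_y = sum(velocities) / n
    num = sum((i - mean_x) * (v - mean_y) for i, v in enumerate(velocities))
    den = sum((i - mean_x) ** 2 for i in range(n))
    return num / den if den else 0.0

recent = [34, 31, 29, 24, 22]  # illustrative: last five sprints
if velocity_slope(recent) < 0:
    print("Signal: velocity is declining; raise it in the next retrospective.")
```

A trend like this isn't a verdict on the team; it's a cue to investigate root causes before mandating changes.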
Here are five indicators that the agile development process must change and my recommended adjustments.
There’s a shallow backlog and insufficient planning
Agile teams figure out fairly quickly that polluting a backlog with every idea, request, or technical issue makes it difficult for the product owner, scrum master, and team to work efficiently. If teams maintain a large backlog in their agile tools, they should use labels or tags to filter the near-term versus longer-term priorities.
An even greater challenge is when teams adopt just-in-time planning and prioritize, write, review, and estimate user stories in the days leading up to the sprint start. It’s far more difficult to develop a shared understanding of the requirements under time pressure. Teams are less likely to consider architecture, operations, technical standards, and other best practices when there isn’t sufficient time dedicated to planning. Worse, it’s hard to accommodate downstream business processes, such as training and change management, if business stakeholders don’t know the target deliverables or medium-term roadmap.
There are several best practices to plan backlogs, including continuous agile planning, Program Increment (PI) planning, and other quarterly planning practices. These practices help multiple agile teams brainstorm epics, break down features, confirm dependencies, and prioritize user story writing.
Sprints and releases fall short of commitments
There are times I recommend Scrum and other instances where Kanban has advantages, but I am a strong proponent of agile development teams committing to the work they accept. The commitment signals to product owners and stakeholders that there is a shared understanding of who is doing the work, why it matters, and what is required, and it obliges agile teams to define an implementation plan.
Commitments represent a forecast, and expecting teams to meet or exceed targets consistently is not realistic. When agile development teams commit to getting user stories done, it’s often in the face of several unknowns around the implementation, team dependencies, and technology assumptions.
When agile teams consistently miss commitments, it may be time to consider changes and improvements. Committing to fewer stories may look like an easy answer, but it won’t help if the real problem is coordinating work to meet requirements within the duration of a sprint or release.
The best self-organizing teams recognize misses in meeting expectations, use retrospectives to diagnose root causes, and commit to improvements.
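One way a retrospective can ground that diagnosis in data is to look at commitment reliability over several sprints rather than at any single miss. This is a hypothetical sketch with made-up numbers, assuming the team tracks committed and completed story points per sprint:

```python
def commitment_reliability(committed, completed):
    """Share of committed story points actually finished in each sprint."""
    return [done / planned if planned else 0.0
            for planned, done in zip(committed, completed)]

planned = [20, 22, 21, 20]  # illustrative commitments per sprint
done = [18, 15, 14, 12]     # illustrative completions per sprint
ratios = commitment_reliability(planned, done)
# A consistent downward drift (here from 0.90 toward 0.60) is the
# retrospective trigger, not one missed sprint in isolation.
```

Because commitments are forecasts, a ratio near but below 1.0 is normal; it's the sustained trend that warrants process changes.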
Sprints end without well-attended demos
The purpose of the sprint review meeting is to demo completed user stories to the product owner and stakeholders and gather early feedback. Sprint reviews should be well attended, and teams should have a lot to showcase.
The best agile teams that I’ve had the privilege of working with treat sprint reviews like theater. They discuss how to demo the story, who should lead it, when to sequence it, and what types of feedback to capture. A master of ceremonies ensures that sprint reviews run on schedule, feedback gets captured, and lengthy discussions are parking-lotted to follow up afterward.
Subpar reviews may point to several issues:
- Stories aren’t written from a user’s perspective, making them more challenging to demo.
- Developers are concerned about showing a user experience that’s a work in progress.
- Teams work until the last hours before the review and are not prepared to run a good show.
- Product owners set unrealistic expectations with stakeholders and leave their teams high and dry during the demo.
- Stakeholders don’t see the value in attending because of previous poor performances, or they feel no one’s listening to their feedback.
Sprint reviews should be times to celebrate a team’s progress. Weak or unattended performances can lead to team morale issues.
Increasing defects are found in production
Many agile development teams automate testing, configure CI/CD pipelines, and deploy infrastructure as code to improve the reliability of releases and deploy production changes more frequently. The more advanced organizations employ shift-left testing strategies and mature devops to include security early in the development process.
The prevailing wisdom is that frequent deployments lead to greater user satisfaction and fewer technical dependency issues. In the 2020 State of Devops report, 45 percent of high-evolution engineering-driven companies claim an on-demand deployment frequency, and 38 percent have less than one day’s lead time for changes. More conservative, operationally mature, and governance-focused companies have lower percentages.
Frequent deployments make sense until they don’t. A clear indicator that agile development teams are deploying too frequently is if a growing number of defects are found in production.
Production defects may impact business performance and are highly problematic when organizations develop reputations for deploying buggy software. It is also challenging when development teams must respond to major production incidents, schedule emergency break-fix releases, or prioritize fixing defects over other work.
Teams finding increasing defects in production should discuss root causes and find solutions. In many cases, planning backlogs earlier, improving requirements, investing in test automation, increasing the variety of test data, or instrumenting continuous testing are all steps that can help reduce production defects.
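When deployment frequency rises, raw defect counts can mislead; normalizing escaped defects per deployment shows whether quality is actually slipping. The sketch below uses hypothetical monthly figures, assuming the team records production defects and deployments per month:

```python
def escaped_defect_rate(defects_by_month, deploys_by_month):
    """Escaped (production) defects per deployment, month over month."""
    return [d / n if n else 0.0
            for d, n in zip(defects_by_month, deploys_by_month)]

defects = [3, 4, 6, 9]    # illustrative production defects per month
deploys = [10, 12, 12, 11]  # illustrative deployments per month
rates = escaped_defect_rate(defects, deploys)
# Rising per-deployment rates (here roughly 0.30 climbing toward 0.82)
# suggest quality gates need attention, not just deployment cadence.
```

If the per-deployment rate is flat while raw defect counts grow, the issue is likely cadence; if the rate itself climbs, the fixes above (earlier planning, test automation, continuous testing) are the better levers.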
Agile teams or their stakeholders aren’t happy
The most important factor for considering changes is whether the agile team or their stakeholders aren’t happy.
Missing a sprint or even a release shouldn’t be cause for alarm, but leaders should define approaches to capture feedback formally. One-on-one dialogues are helpful, but larger organizations should consider customer satisfaction and agile teammate surveys.
Look for teams reporting blockers caused by issues outside their control. If there are too many dependencies between agile teams, or if people, skills, technology, or vendors impede their ability to execute, then prolonged issues will likely impact team happiness.
Unhappy stakeholders are equal cause for concern. Dissatisfaction may stem from overly high expectations, poor delivery quality, or just the working realities outside their collaboration with agile teams. In my experience, happy agile teams correlate with stakeholder satisfaction. When people are frustrated, it’s time to listen and prioritize appropriate changes.
One best practice is for agile teams to seek and prioritize incremental adjustments to their process, principles, collaboration, and standards. Agile organizations that seek smaller modifications can avoid harder pivots. Isn’t that what agile is all about?