VP Engineering Playbook

A practical guide for new leaders in software development

Category: Process

Agile Revisited

We recently celebrated the 15-year anniversary of the publication of the Agile Manifesto. As you’ll recall, in February 2001, 17 thought leaders in software development gathered in Utah to discuss better ways to build software. They established a set of values and principles that provide guidance for planning, building and releasing software. As agile evolved, several methodologies emerged to provide specific practices for applying it, such as Scrum, XP, Lean and Crystal. These methodologies can be opinionated, but they do give companies ready-made blueprints for structuring their agile programs.

At a few past companies, I have had the opportunity to craft software development processes that align with agile principles. In these cases, I haven’t applied a single methodology strictly. Rather, I have tried to mold the agile principles themselves into a set of practices that best match the culture of the target organization. Collaborating with other product stakeholders usually results in a unique interpretation of agile for that company. Since agile espouses introspection and continuous improvement, this kind of adaptation is natural.

Given the 15-year milestone, I thought it would be useful to revisit the values behind the Agile Manifesto and how they can be applied to a modern software delivery organization. This may be rudimentary for experienced agile practitioners, but it might offer insights for new software development leaders charged with being “agile”.

Individuals and interactions over processes and tools

One of the core shifts introduced by agile was the recognition that great software is produced by empowered individuals working closely together on small teams. This ran counter to the approach used by large organizations attempting to coordinate activities across hundreds of developers. To manage enormous teams, they championed tools that let disparate teams schedule project work into sequences of events (Gantt charts) through desktop applications. With this move to formalized project management, teams stopped discussing the software in person and focused their effort on managing a tool.

Agile shifts this mentality back to individuals and their personal interactions. It encourages practices like the ones below.

  • Limit team size. Agile teams should be small (fewer than 10 people) and should include representatives from each function required to address the project work. For Internet apps, this means the product manager, UX/UI designer and engineers from each functional area (front-end, back-end, services, devops, etc.). The key is that the team doesn’t have a major dependency on anyone outside the core team. There can be other influencers, such as marketing or customer service, who participate as extended team members.
  • Colocation. The core team should sit together. Physical proximity encourages high-bandwidth collaboration. I realize that newer communication methods like IM and group chat facilitate communication without speaking, but as humans, we are wired to rapidly broadcast information and gather feedback from non-verbal cues. As one of the agile principles states, “the most efficient and effective method of conveying information to and within a development team is face-to-face conversation.” I am also a big proponent of fluid seating arrangements, where individuals can shift their seating based on their current agile team assignment. They may still have a “permanent” desk, but for the duration of their team assignment, they sit with the other members of the agile team.
  • Encourage informal discussions. I love to see the following situation: one member of an agile team has a question about a requirement, and one or more team members suggest they grab a conference room to discuss it. The group heads to the room, closes the door and can be seen feverishly working out the problem on a whiteboard. This behavior stems from the agile idea that informal, in-person communication should occur frequently during a project. These discussions should not be treated as interruptions or discouraged.
  • The team should have the proper environment, tools and support needed to accomplish their mission. To be self-empowered, the team should have full control over their work environment, the tools they use and the support they need from other teams. This is where engineering leadership can contribute to the team’s success. If you are a member of leadership (director, VP Eng, etc.), a big part of your job is to monitor your teams’ updates and identify any outside circumstances that might be creating impediments. These could be dependencies on other teams, difficulty getting a work environment set up or lack of access to the right tools. Look for these obstacles and clear them.
  • Trust the team. If the team is staffed with empowered individuals armed with a clear direction, they should be able to accomplish their goals. As a member of leadership, you should trust that they can do this and get out of the way. Resist the temptation to tweak or guide. As long as expected outcomes are clear, maintain your role as an observer.

Agile anti-patterns:

  • Heavy emphasis on updating tools. On a functioning agile team, you should see a lot of informal discussion. The tools used to manage the agile process are useful for reflecting status, but are not a replacement for personal interactions.
  • Rushed meetings. Over the last 10 years, I have seen a growing resistance to holding meetings. The observation has been that many companies load up employee schedules with so many meetings that they don’t have time to actually perform work. The knee-jerk reaction has been to question the value of all meetings or rush them in order to “get some time back”. Meetings, which I will simply define as conversations between two or more people at a pre-scheduled time, have a purpose in an agile environment. They allow for necessary team member interactions to answer questions, review designs, plan work, or do anything that requires agreement. Allowing these discussions to occur at a scheduled time simply makes it easier for individuals to plan their day. So, I think meetings are fine, as long as they are limited to necessary participants and produce an outcome. If you see meetings avoided or rushed, step back and determine whether the team has a mechanism available to foster debate and make decisions.

Working software over comprehensive documentation

Another key tenet of agile development is to focus the team on delivering software rather than waiting for a full set of requirements. Documentation is encouraged in agile; however, agile doesn’t gate the start of development on the completion of documentation. I still see this practice in some shops that profess the use of agile. If developers wait until the requirements document is complete before they start coding, it serializes the project and extends the end-to-end delivery schedule. Also, complete documentation is a farce: it is impossible for a product manager to anticipate every facet of a new feature. Many design decisions can be made at the time of implementation.

An emphasis on working software provides other benefits in the form of agile practices for structuring work.

  • Break up large projects into incremental deliverables. Agile’s focus on delivering working software is the foundation for the convention of a sprint. A sprint’s purpose is to time-box project work, forcing segmentation of the overall project. Sprints should always include a release to production. This release can be made up of many small features or portions of a very large feature. Even if a large feature isn’t fully “live” at the end of the sprint, it is still important to conduct the release. Access to the feature can be limited to internal users by a feature flag (a small sketch follows this list). Each release forces integration and testing, spreading that cost over more cycles.
  • Track progress. Working software is the currency of a project – it provides a clear means of measuring progress. Work items in an agile sprint are usually organized into a burn down chart. This provides an advancing view of completed versus remaining work items. Remaining work should approach zero as the sprint proceeds. Delivering working software throughout the sprint provides the clearest way to validate progress.
  • Lower technical risk. As engineers plan their work items, they should address the technically hardest deliverables first. This allows the team to clear the biggest obstacles early in the project. Addressing these items in working software proves the feasibility of the solution.
  • Apply the simplest approach. Organizing project work into discrete deliverables released frequently encourages product and technical designs to be simple. In product design terms, this is often referred to as the minimum viable product. The value of an MVP is well understood: until users provide real feedback, it is difficult to estimate the potential success of a new feature idea. Similarly, for infrastructure design, simplicity prevents over-engineering. While sound technical design is critical to scalability and extensibility, trying to address future scenarios with extra technology capabilities can become wasted effort if those requirements never materialize. As one of the agile principles states – “simplicity–the art of maximizing the amount of work not done–is essential.”
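
To make the feature flag idea above concrete, here is a minimal sketch in Python. The flag names, user list and in-memory flag store are all hypothetical; a real implementation would read flags from configuration or a flag service so they can change without a deploy.

    # Minimal illustration of gating an unfinished feature behind a flag.
    # The flag store is a hard-coded dict here; a real team would back it
    # with configuration or a flag service so it can change without a deploy.

    INTERNAL_USERS = {"alice@example.com", "bob@example.com"}  # hypothetical allow-list

    FEATURE_FLAGS = {
        "new_checkout_flow": {"enabled": False, "internal_only": True},
    }

    def feature_enabled(flag_name, user_email):
        """Return True if the given user should see the feature."""
        flag = FEATURE_FLAGS.get(flag_name)
        if flag is None:
            return False
        if flag["enabled"]:
            return True
        # Dark for the public, but internal users can exercise the new code path.
        return flag["internal_only"] and user_email in INTERNAL_USERS

    # In the request handler, branch on the flag instead of holding back the release.
    if feature_enabled("new_checkout_flow", "alice@example.com"):
        pass  # render the new checkout flow
    else:
        pass  # render the existing flow

The point is that the code for the large feature still ships to production with each release, while the flag controls who can see it.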

Agile anti-patterns:

  • Perfecting the design spec before sharing with engineering.  Teams may sometimes delay showing the design spec for a project to engineering until it is fully baked and signed off. The reasoning is that the team doesn’t want to waste engineer time by reviewing an incomplete spec. There is some credence to this, but I also think there is value in sharing a spec before it is complete. This allows the engineers to begin processing the future design – conceiving of improvements and planning their technical approach. Waiting until the spec is fully baked usually creates schedule pressure that limits feedback or negotiation of items that would lower delivery cost.

Customer collaboration over contract negotiation

Agile imbues the software development process with a strong emphasis on the end customer. For consumer applications delivered over the Internet, the end customer represents the millions of users of a company’s apps. Since it isn’t feasible to involve all these users in the software development process, we rely on the product manager to represent their feedback. In this model, the product manager is the customer. There are other customers to an agile project – like outside stakeholders. Even the core team members can be viewed as customers of agile.

Promoting a customer-centric view is critical to the agile process.

  • Conduct regular demos. Outside of the release at the end of a sprint, the team should strive for frequent, informal demos of their work. When a task is completed that can be “shown”, the responsible developer should grab the product manager and do a quick demo. The purpose of this is to solicit feedback as early as possible. Any time delay will increase the cost of making changes. Also, I like to see agile teams demo to each other periodically. One team I managed conducted “Demo Fridays”, where each team member showed what they had built the week before.
  • Speak in terms of customer goals. When the team conducts sprint planning, the product manager should lay out their objectives for the sprint in terms of end user benefits. Focusing the sprint outcomes on the customer allows everyone to align their tasks with what will have the most impact on the business.
  • Maintain a feature backlog that includes tech debt. Agile teams should construct a ready backlog of work items. These usually include feature ideas and product enhancements, prioritized in terms of expected impact. The feature backlog should also include tech debt tasks.  If you aren’t familiar with the term, tech debt represents work items that make future feature delivery faster, easier or more reliable. These are generally items sponsored by engineers that don’t contribute to the product’s feature set. They are important nonetheless, as they represent an investment in future delivery of the product. Teams can determine the appropriate mix for their situation, but I like to see at least 25% of available cycles in a sprint allocated towards tech debt.
  • Radiate information. Agile advocates public display of information related to the project. This could be UI designs, the burn down chart, database schema, system diagrams, or anything else that provides meaningful information to members of the team. This public display is generally done on the walls surrounding the team’s work area, using print-outs or monitors. Information radiators also have the added benefit of providing a convenient mechanism for presenting the team’s work to any visitors. Some teams I have managed publicized periodic “office hours”, encouraging other employees to drop by.

Agile anti-patterns:

  • Formal sign-offs. If you find members of the team requesting formal “sign-off” of delivered items, then the team may have a trust issue. Usually, this occurs at hand-off points in the process, like presenting product requirements or validating completed work items. If sign-off is requested, then the team likely is struggling with their ability to respond to change.
  • Only demoing UI features. Some teams think that only features with a user interface are worth demonstrating. In the spirit of delivering working software, any completed component of a project should be demoed. Even a back-end service with no interface or a database design can be presented. The key is that the developer shows the work to another person.

Responding to change over following a plan

In Internet time, the product landscape is constantly shifting. Yesterday’s great feature idea may no longer be viable today. Teams need to be able to adapt quickly and shift priorities to capitalize on new business opportunities as they emerge. The agile preference for responding to change allows this flexibility. While a plan is important, the team should orient around an expectation for change. As one of the agile principles states – “agile processes harness change for the customer’s competitive advantage.”

  • Prioritize sprint tasks based on expected impact. As agile teams plan their sprint work, they should schedule the highest priority items first. This also implies that if the priority of an active item changes, then its position on the sprint plan will be adjusted as well.
  • Daily stand-ups. One practice to ensure that team members are kept updated on changes to the schedule is the daily stand-up. I like to keep these focused on the standard set of updates: what each team member accomplished yesterday, what they plan to address today and whether they are blocked. These updates give the rest of the team a sense of each member’s progress and, most importantly, where the schedule may be deviating from plan. Any issues raised during the individual updates can be addressed in smaller break-out discussions following the stand-up. The stand-up meeting should be short, no more than 15 minutes. However, enough time should be allowed for sufficient updates and treatment of any follow-up items. Occasionally, I have observed stand-ups being rushed, as if finishing a stand-up in under 5 minutes were an accomplishment.
  • Mid-sprint reviews.  As team members complete sprint tasks, these should be marked off in the sprint tracking tool. This progress against the schedule will be reflected in the burn down chart for the sprint. While progress can be tracked daily, I think it is a good practice for the team to formally review sprint progress at least once during the sprint. I like to do these check-ins mid-sprint, either at the end of a daily stand-up or in a scheduled meeting. The team should display the burn down chart on a shared monitor and discuss progress. Ideally, half the work items are completed at this point.  If not, the team should explore why and make adjustments to the plan for the second half of the sprint.
  • Retrospectives.  Another theme of agile is the notion of introspection and continuous improvement. The process by which the team works should be constantly reviewed using a reliable feedback loop. Retrospectives are formal meetings at the end of each sprint in which the team reflects on the sprint’s activities.  Discussion revolves around what went well and what needs improvement. This review should be conducted with an objective, blameless tone. For the items that need improvement, the team should identify actionable tasks to address them. Those tasks are then added to the backlog for future scheduling.

Agile anti-patterns:

  • No changes to the sprint plan as the sprint progresses. If the sprint plan always proceeds exactly as expected, then there is likely something wrong. The team should take advantage of the fluidity of agile to ensure that the most important work items are being addressed first.
  • Retrospectives with no improvement items. I have observed retrospectives for some teams narrow to a list of compliments between team members. Recognition is important, but retrospectives should also result in improvements. I like to see a 2:1 ratio of items that went well versus those that need improvement. This ensures there is space for recognition, but enough critical thinking to drive the team towards continuous improvement.

Leveraging Automated Testing

Testing your application is a necessary part of the software development process. Each application has a set of behaviors that it is expected to perform consistently. As developers make code changes, established behaviors must be verified before the application is updated in the production environment. This verification of prior behaviors has traditionally been referred to as regression testing. This is contrasted with acceptance testing, which is conducted on new features. In both cases, an individual will run a set of tests and match the results against expected outcomes. Any discrepancies are logged as a bug for developers to fix.

Regression testing in particular can be very resource-intensive when performed manually by your QA team. As an application matures, the number of regression tests grows, sometimes resulting in hundreds (or thousands) of behaviors that need to be verified. It can become impractical to run these manually before each release. Usually, the team will adapt by delaying regression testing and large releases until the end of a sprint.

Of course, the downside to this delay is that it increases the time between code completion and feature release to users. Also, bugs are reported relatively long after the developer has made the code changes. In order to address a bug, the developer has to refresh themselves on the changes made at that time.

A solution is to reduce the time it takes to conduct regression testing through automation. On a modern, UI-driven internet application, much of this manual testing can be replaced by automated tests. Automated tests can be parallelized and completed far faster than if a human ran them. In this post, we will explore a couple of mechanisms available to automate regression testing and how you can measure your testing effectiveness.

Unit Testing

The first step in reducing the overhead of regression testing starts with unit tests. These are the code level tests that developers create to exercise individual functions or classes within a software component. A unit test provides a written contract that the code will satisfy. It specifies discrete inputs and an expected output in the form of an assertion.
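
To make this concrete, here is what a small unit test might look like in Python using the standard unittest module. The discount function and its rules are made up for this example; what matters is the shape of the contract: discrete inputs and an expected output asserted by the test.

    import unittest

    def apply_discount(price, percent):
        """Return the price after a percentage discount (example function under test)."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_standard_discount(self):
            # Discrete inputs, expected output expressed as an assertion.
            self.assertEqual(apply_discount(100.00, 20), 80.00)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.00, 150)

    if __name__ == "__main__":
        unittest.main()

Running this file executes both assertions and reports any failures, which is the same signal a CI server (discussed below) publishes back to the team.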

Unit tests can be executed by each developer in their development environment before checking code into a shared repository. This catches logic errors created by that developer’s own changes. To ensure that one developer’s changes remain compatible with the changes other developers make, we utilize continuous integration. Continuous integration (CI) is a practice in which developers integrate code into a shared repository frequently. A CI server monitors the shared repository for changes and then kicks off a “build”: it executes the code’s build process, deploys the result into a test environment and then runs the set of unit tests. The output of the unit test run is published to the development team. Popular CI servers are Jenkins and Bamboo.

As it relates to reducing the overhead of regression testing, a comprehensive set of unit tests can go a long way. They can catch simple logic errors before they ever reach the QA team. Developers should write unit tests as they code, check them in regularly and not postpone them until after release. Also, you should measure unit test coverage to ensure that the important logical code paths are exercised. A tool like Clover will examine your code base and provide a report of test coverage. How much coverage to target is debatable. More is usually better, but 100% is not realistic. Depending on the maturity of your test program and development team, somewhere between 50% and 90% should work as a target. If your team is just starting out with unit testing, increasing coverage can be a continuous improvement exercise.

Also, it is important that unit test failures surfaced by the CI builds are addressed in a timely manner. Some development shops prevent any new commits until a broken build is fixed. This is one way to ensure that unit test failures are fixed quickly. However, it blocks work for all developers. On the other hand, if unit test failures are ignored, their count will quickly increase until the continuous integration process is no longer useful. Each team can establish their own standard for fixing unit test failures, based on their culture and maturity. I think a reasonable guideline is to fix unit test failures by end of day.

Functional Testing

Functional testing generally involves testing interactions between the application’s user interface and its back-end logic. Similar to unit testing, inputs are sent to the application and expected outputs are verified. Testing of user interfaces will span all devices which the application supports – web, mobile, desktop, etc.

Because the user interface device is the test medium, this type of testing is initially performed manually by QA engineers. For acceptance testing, manual testing makes sense, but for large regression test suites, it can be unwieldy. Fortunately, functional testing can be automated by using a testing tool that simulates interactions with the particular UI device, directly making inputs and capturing responses. Test frameworks exist for each device type. For the web, a test tool can be used that simulates actions in a browser.  The most popular open source tool is Selenium, providing support for most modern browsers.  For mobile devices, ideally you would use a test framework that supports “cross platform” automated testing.  In this case, the test framework exposes a single API for interactions with iOS, Android and mobile web. A popular open source tool for this is Appium (created at Zoosk, incidentally).
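
As a sketch of what an automated browser test looks like, here is a short Selenium script in Python. The URL, element IDs and expected page title are placeholders for this illustration; a real suite would target your own application and typically run under a test framework against a grid of browsers.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Placeholder URL and element locators; substitute your application's values.
    driver = webdriver.Chrome()
    try:
        driver.get("https://www.example.com/login")

        # Simulate the same inputs a manual tester would make.
        driver.find_element(By.ID, "username").send_keys("test-user")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()

        # Capture the response and compare it to the expected outcome.
        assert "Dashboard" in driver.title, "login did not land on the dashboard"
    finally:
        driver.quit()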

A role within your QA team, called the test automation engineer, is usually responsible for maintaining the automated functional tests. Requirements for a test automation engineer are more advanced than for a manual black box tester.  An automation engineer needs the ability to code, as they will write test scripts within the automation framework.  Also, it is a good idea for an automation engineer to possess some devops skills, as they will likely own the test environment. Their role will be to create new automated tests as functionality is added to your applications and to update tests as business logic is changed.

Automating functional tests will significantly reduce the overhead of each regression test pass. Still, since they interact through a user interface, a nontrivial amount of time is required to run them. A set of a few hundred Selenium web tests, for example, can take a few hours to execute. Even so, it is feasible to run these several times a day, or at least nightly. The output should be distributed to the team for investigation. Usually, a QA engineer will check test failures manually first, and then file bugs for verified code issues.

Service Interface Testing

Stand-alone services do not have a user interface. These are typically fronted by a RESTful API that facilitates interaction with a back-end application.  Services usually encapsulate a set of related application functions, involving substantial business logic and interactions with data stores. These types of interfaces are easy to test through automation. Automated testing of a service involves calling each API endpoint individually with standard inputs. The test then checks the response of the API for an expected value. A test automation engineer will script these tests, using documentation describing the service interface. Some open source tools that provide the capability to automate service interface testing are SoapUI and PyRestTest.
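
The tools above provide ready-made harnesses for this, but the underlying pattern is simple enough to sketch by hand. Here is a hypothetical example in Python using the requests library; the endpoint, fields and expected values are placeholders.

    import requests

    BASE_URL = "https://api.example.com"  # placeholder service endpoint

    def test_get_user_returns_expected_fields():
        # Call a single endpoint with standard inputs...
        response = requests.get(BASE_URL + "/v1/users/42", timeout=5)

        # ...and check the response against expected values.
        assert response.status_code == 200
        body = response.json()
        assert body["id"] == 42
        assert "email" in body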

By moving your application functionality to stand-alone services with open interfaces, you will be able to automate more of your regression testing. This reduces the amount of behavior that must be verified through a UI interaction. Automated regression tests against an API generally run faster than tests driven through a user interface. I highlighted the advantages of a service-based architecture in a prior post.

Measuring your Test Automation Program

Creating and maintaining your test automation suites will represent a major resource investment. Like any allocation of resources, it is important to track the cost and benefit associated with that investment. You should collect a set of metrics which represent the amount of effort put into the test automation program and its relative success. Your QA manager should own this data collection. They should summarize the data periodically and present it to the team. These metrics will generate insights and provide feedback for additional changes to the automation program as it evolves.

Here is a list of sample metrics that are useful to collect (a small aggregation sketch follows the list):

  • Time spent on automated test creation. Track the amount of time each test automation engineer spends creating new automated tests. This is usually required when functionality is added to the application. These numbers can be aggregated on a per sprint basis.
  • Time spent updating automated tests. Track the time spent updating existing automated tests. Updates to existing tests are necessary when the business logic for an existing feature changes.
  • Bugs identified by automation. When automated tests are run, review the output. If a test failure results in filing a bug for a developer to fix, record this event. Aggregate the number of bugs caught by automation on a per sprint basis. These bugs represent the primary benefit of test automation.
  • Bugs missed by automation. After code is pushed to production, issues will be reported by users. If the issue report results in logging a bug to a developer, then make note of the bug. This represents a test failure that automation missed. Examine how testing didn’t capture the bug.
  • Time spent on manual regression testing. Hopefully, as more testing is automated, this time will go down.
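
Here is a small sketch of the kind of per-sprint aggregation your QA manager might produce from these metrics. The record format and numbers are invented for illustration; in practice the raw data would come from your issue tracker or time tracking.

    from collections import defaultdict

    # Hypothetical raw records: (sprint, metric, value). In practice these would
    # come from your issue tracker or a spreadsheet kept by the QA manager.
    records = [
        ("sprint-14", "hours_creating_tests", 18),
        ("sprint-14", "hours_updating_tests", 6),
        ("sprint-14", "bugs_caught_by_automation", 9),
        ("sprint-14", "bugs_missed_by_automation", 2),
        ("sprint-14", "hours_manual_regression", 12),
    ]

    def summarize(rows):
        """Aggregate metric values per sprint for the periodic review."""
        summary = defaultdict(lambda: defaultdict(float))
        for sprint, metric, value in rows:
            summary[sprint][metric] += value
        return summary

    for sprint, metrics in summarize(records).items():
        caught = metrics["bugs_caught_by_automation"]
        missed = metrics["bugs_missed_by_automation"]
        catch_rate = caught / (caught + missed) if (caught + missed) else 0.0
        print(sprint, dict(metrics), "catch rate: {:.0%}".format(catch_rate))

The automation catch rate (bugs caught divided by bugs caught plus bugs missed) is a simple summary number to watch trend upward over time.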

These metrics should be reviewed periodically. Ideally, you will see the number of bugs caught by automated testing increase and the amount of manual testing decrease.

Expectations for Automation Test Coverage

Over time, your metrics will give you a strong sense for the effectiveness of your test automation program. Pay particular attention to the number and types of bugs that are and are not caught by your automated functional regression testing. In my experience, automation of functional testing will not catch every bug. Some set of manual regression tests for critical functionality in your application is still advisable. You can include these for major functions like user registration, search, product pages, shopping cart, etc. Your product managers should be able to help identify the areas of the application that would have major business impact if they didn’t work. For these, your QA team can craft a short list of functional tests that can be run manually before each major release. While often redundant to the automated tests, these provide a good balance to an evolving test automation program. Once you have more confidence in your automated testing, you can cut back on these manual tests.

Also, as you collect data on your test cycles, your QA manager should share that data with other engineering leads and product managers. This data can be summarized monthly or quarterly and presented at group meetings. Sharing this type of data will generate productive conversations about the state of testing and what improvements can be made. The data should help address questions about the investment being made into the automated test program and its benefit.

Technology Selection

A big part of your job as VP of Engineering will be to oversee technology selection for your team. Some examples might be a programming language, an application framework or a particular data storage technology. When your company is first starting, there are many of these decisions to make. As your technology stack matures, they become less frequent. However, your technology choices should be revisited regularly to ensure they are still meeting your needs and perform better than the alternatives.

Your role in technology selection is to establish the process and to shepherd the team through the steps.  This means determining who will be involved in the evaluation, establishing evaluation criteria, tracking the team’s progress and ultimately guiding the team to the final decision.  This blog post provides some guidance on how to structure a technology selection process.

Approach

Whether you are building your technology stack for the first time, or making an incremental addition, it’s a good idea to establish a process for your evaluation.  The process should be documented, ideally in a public forum or shared document, like a wiki.  That way, all engineers can understand the rationale for the technology choice.  This is particularly useful long after the choice has been made, so that team members can understand the requirements and assumptions that existed at that point in time.

Here are some items to consider as you construct your team’s evaluation process:

  • Problem Statement.  What is the problem you need to solve or capability that you are seeking to add with the technology choice? Try to narrow this down to a single statement, or at least a short list of capabilities to add.
  • Requirements. For your problem statement, what criteria would indicate a successful choice? If there are benchmarks that must be met, like queries per second, list those. For a technology choice associated with development, what outcomes might indicate success? Perhaps time to learn, developer productivity, compatibility with other frameworks, etc.
  • Timeline.  How much time are you allowing to make the technology decision?  If you have an immediate problem to solve, this may be short.  Ideally, you can allocate a couple of sprints (2-4 weeks) to the process.
  • Evaluation Plan.  What are the rough steps the team will take to evaluate the technology?
  • Participants.  Who will participate in the evaluation? Who will be part of the group making the final decision? Evaluation and decision making participants should be kept small.
  • Support. Do you need support from other departments, like marketing or analytics, in order to conduct your evaluation? This support should be committed before the evaluation starts.

In terms of the process, most teams start with a list of all possible technology choices. Then they narrow the candidate list based on their criteria, trying to stack rank the choices. Usually, teams will create a scoring function for each criterion and then add up the totals. They work towards identifying the top three choices and discard the rest. With the top three choices, they conduct some sort of proof of concept, building a basic prototype of a feature with each one. The proof of concept provides real-world feedback on how the technology works within their environment.
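
As a sketch of that scoring step, here is a small Python example. The candidates, criteria and weights are invented for illustration; the useful part is agreeing on weights before scoring so the ranking reflects your priorities rather than the loudest voice in the room.

    # Hypothetical criteria weights and per-candidate scores on a 1-5 scale.
    WEIGHTS = {
        "solves_problem": 0.4,
        "team_knowledge": 0.2,
        "popularity_momentum": 0.2,
        "tooling_and_frameworks": 0.1,
        "total_cost": 0.1,
    }

    candidates = {
        "Java":  {"solves_problem": 4, "team_knowledge": 5, "popularity_momentum": 4,
                  "tooling_and_frameworks": 5, "total_cost": 4},
        "Scala": {"solves_problem": 5, "team_knowledge": 2, "popularity_momentum": 3,
                  "tooling_and_frameworks": 4, "total_cost": 4},
        "Go":    {"solves_problem": 4, "team_knowledge": 3, "popularity_momentum": 4,
                  "tooling_and_frameworks": 4, "total_cost": 4},
    }

    def weighted_score(scores):
        """Combine per-criterion scores into a single comparable total."""
        return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

    # Stack rank the candidates; the top few proceed to a proof of concept.
    ranked = sorted(candidates, key=lambda name: weighted_score(candidates[name]), reverse=True)
    print(ranked)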

Other Possible Evaluation Criteria

While you want to confirm that the technology choice will solve the problem you have, or deliver the capabilities you need, it’s also important to consider some other criteria that can give an indication of its maintainability over time.  Here are some suggestions.

Knowledge within the Team

One big influence on your technology choice is how many of your team members have worked with the technology in the past. If most of your team is adept with the chosen technology, your time to becoming productive with it will be shorter. If most of the team hasn’t worked with the technology, the ramp-up will be longer. You can train the team in the new technology. That kind of training will be viewed as a professional development benefit, contributing to employee satisfaction and retention. Just keep in mind that training takes time and extends the period to becoming productive. For example, if you are choosing a programming language for server-side processing, candidate technologies might be Java, Scala and Go. Check the proficiency of your existing engineers in these languages. Most will be familiar with Java. Scala proficiency is likely lower, and Scala can have a steep learning curve, so you should expect a fair amount of time between making the choice and having the team be productive. However, Scala has some capabilities that might outweigh the time delay. These considerations should be taken into account as you make your choice.

Popularity and Momentum

For each technology, consider how “popular” it seems to be.  Popularity can be measured by interest and activity within the broader development community.  Some ways to gauge popularity:

  • Searches for the technology term on Google
  • Activity on developer sites, like StackOverflow.  Also, examine the support forums associated with the technology for an indication of maturity. Look at the posted questions from developers and see how many are answered.
  • Frequency of commits and releases.  For an open source technology, check the project’s site.  Look for active commits and regular releases.  If a commercial product, how often is the vendor providing updates?  Also, check the completeness of documentation.
  • Conferences focused on the technology. Have whole conferences been dedicated to the particular technology?  Do these seem to be well attended?
  • Podcasts.  Do a Google search for “[technology] podcast”.  If you get back a lot of results, check a few.  Are the podcasts getting updated regularly, or did they die off a few years ago?
  • Choices by other technology companies.  Check the engineering blogs for other companies in your space. What technologies are they using for the problem or capability?  Most engineering teams at progressive internet companies have an active blog.  See Uber or Spotify for examples.

Popularity is important because it is an indicator for how much support you can expect and how complete the implementation will be over time.  Also, developers want to broaden and maintain their exposure to relevant skills, so having an opportunity to work on a popular technology will help with recruiting and retention.

Momentum is another dimension to consider.  Momentum represents the change in a technology’s popularity over time.  Does popularity seem to be growing or fading? Also, momentum encompasses how long a technology has been in place.  That will generally be an indicator for how much longer you can expect it to be popular.  An established database technology, like MySQL, that has been popular for a long time, will likely continue to have support.

Tooling and Frameworks

Related to popularity is the state of tool and framework support for your technology choice. If you are selecting a new programming language, how many IDEs support it? Similarly, have development frameworks been built to apply the language to your use case? A simple example is PHP – there are tens, if not hundreds, of frameworks for building web applications with PHP. Also, you should check for libraries available to integrate with the other technologies in your infrastructure. For example, if you have Cassandra data stores, does your language choice have a client library that connects to Cassandra?
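
As an example of that kind of integration check, here is roughly what connecting to Cassandra looks like from Python using the open source DataStax driver. The contact point, keyspace and table are placeholders; the point is simply to confirm that an idiomatic, maintained client library exists for the language you are considering.

    # Requires the open source DataStax driver: pip install cassandra-driver
    from cassandra.cluster import Cluster

    # Placeholder contact point, keyspace and table for illustration only.
    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect("product_catalog")

    rows = session.execute("SELECT id, name FROM products LIMIT 5")
    for row in rows:
        print(row.id, row.name)

    cluster.shutdown()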

Costs

Vendor-supplied or proprietary technologies will have an obvious cost associated with licensing and support fees. However, don’t assume that the cost of open source is zero. There can be many costs associated with an open source roll-out, such as support and training. For example, I have established support contracts for MySQL (with Percona) and Cassandra (with DataStax) in the past. These are necessary because your team will sometimes reach a point where they can’t solve a particular problem with an open source technology quickly. Or, more simply, it’s more efficient to get support from the open source project committers than to dig through the source code. It is also useful in the heat of troubleshooting a major issue to have the ability to reach out to one of these vendors for help; many offer 24/7 support for outage resolution. Finally, getting your engineers trained on a new technology can accelerate adoption. These same support vendors often offer training classes, either at a central location or on-site at your office. So, you should budget for support and training costs and include them in your comparison.

The Decision

After you have collected all the information necessary for your technology choice and completed your proofs of concept, you and your team need to make a decision. My recommendation is to make this decision quickly, and not deliberate indefinitely.  While it is important to go through the process, you can change technologies in the future.  Granted, there will be a switching cost, but there is also an opportunity cost for your decision delay.  Keep in mind that the requirements for your problem statement will likely change over time, pushing you to re-evaluate the technology choice later.

Once you have made the decision, share it with the team.  Make the documentation associated with the evaluation process available to everyone.  As work with the technology progresses initially, keep track of your assumptions and ensure that the technology is delivering in  the ways you expected.  Patterns for utilizing the technology will begin to form during these initial implementations, so it’s important to get these initial patterns correct.  And, if the technology isn’t doing what you expected, examine that immediately.
