VP Engineering Playbook

A practical guide for new leaders in software development



I was recently asked about my leadership style. This got me thinking about what leadership traits are exercised as a VP of Engineering. My leadership style is heavily influenced by my early experience as an officer in the U.S. Army.  The military invests a lot of effort in the training of its young officers. This is because one’s first job as a second lieutenant in the Army is to lead a platoon of up to 50 soldiers. This requires you to develop and hone your leadership style very quickly. The leadership skills I learned in the Army carried forward to my professional career.  I have since refined these and added some new ones over time.

As this blog is targeted at individuals in the VP of Engineering role, I thought it would be useful to share the leadership traits I have leveraged. This should hopefully provide guidance to other developing engineering leaders. Keep in mind that there are many approaches to leadership and no right answers. I offer this perspective to spark your thinking about what’s important for you.


Skills represent the primary practices you will focus on as a VP of Engineering. Executing these will occupy over 90% of your time. These practices should influence outcomes for your role over the long term.

  • Project a vision. A clear vision will provide direction for every decision that your organization will make. It should align teams and reduce friction. To form your vision,  contemplate how the organization can leverage technology to meet its business objectives. Are your product offerings taking full advantage of what is technically possible? Is there a better infrastructure architecture that allows your service to be built faster or delivered more reliably? Are your developers equipped with the optimal tools and frameworks to do their jobs? As you think through these questions, you should form a vision of the ideal state of your operation and what would be required to achieve it. Describe this to your team. Does it excite them? It should. Compare your vision to the current state.  Form a plan to move towards the vision. Break it down into measurable steps. Consider the risks to your plan and mechanisms to mitigate them. Armed with your plan, communicate it to the team and drive towards the vision. Encourage team members as they execute. Advocate for them to ensure they are properly resourced and supported.  Make adjustments to the plan as you move forward. The key to vision is that you identify improvements made possible by technology and lead your team towards them.
  • Build teams. As a VP of Engineering, one of your main responsibilities is to build a highly productive team of motivated engineers who are able to deliver an industry-leading product. Building teams starts with selecting the right people. You should assess their individual strengths and align those with the right roles. You are also responsible for establishing the organization’s culture. I talked about how to build culture in a past blog post.  Culture basically comes down to how you build success into your team. Employees want to be part of a winning team – your culture provides the practices and behaviors that allows them to win. Team building also involves structuring the engineering group into small, nimble teams that can operate autonomously. Identify young leaders and mentor them into team lead roles.  Promote continuous improvement and introspection.
  • Establish systems. As your teams grow, it will become increasingly difficult to maintain standards for how they perform work. This is where systems come into play. Systems represent the processes and practices your organization uses repetitively to design, schedule, implement and verify software development deliverables. Working with your peers in the product organization, you should establish the processes for this. You can borrow from agile methodologies. In establishing agile software development practices in past roles, I haven’t embraced a single method (Scrum, XP, etc.) wholeheartedly, but rather borrowed the principles themselves and applied them to the unique characteristics of the organization. Examples of agile practices cover many areas – gathering requirements, planning work, scheduling tasks, tracking progress, providing updates, etc. Tactically, these can represent everything from daily stand-ups to retrospectives.
  • Infuse accountability. While we all strive to build self-directed, independent teams, I think that individuals function best when lines of ownership and expectations are clear. In the Army, this is ingrained from the division level all the way down to the squad. Leadership identifies who is in charge of what and how success will be measured. This allows an organization of a million people to function cohesively. Even with all of this top down control, there is still plenty of room for individuals to take the initiative and be creative in the execution of their roles. The same approach can be applied to a software engineering organization. Define who is responsible for each function and how performance will be measured. Success should be cast in the form of expected outcomes, allowing individuals to determine for themselves how to achieve their goals. Additionally, as the leader of the team, you should provide the example for personal accountability. This means you take responsibility when the team falls short of objectives. After you demonstrate this approach to accountability, you will notice that your attitude trickles down to the rest of your leadership team. Accountability doesn’t just apply to negative outcomes. You should praise team members when they succeed, as well.
  • Practice diplomacy. The skills presented to this point have been primarily focused on the engineering group’s internal operations. It is also important to build productive relationships outside of engineering with other executives. This allows engineering to contribute its part to achieve common company goals and initiatives. Most people define diplomacy as the behaviors two leaders exhibit during formal meetings. They think of social graces, like being polite. I think that diplomacy is deeper than this and is grounded in the research that occurs well before the interaction. This research involves a full understanding of the other leader’s organization and what they are trying to accomplish. Similarly, you should build a thorough understanding of how the engineering group fits into the overall company. What are the other departments, how do they function and what are their key objectives? What are the interaction points between engineering and other departments? In a technology company, peer departments are often product management, marketing, customer service and operations. Similar to the exercise you conducted internally for engineering, figure out the vision, structure, systems and accountability measures for these other departments. Then, strive to build relationships with the leaders of these teams. Know their challenges, hopes and dreams. This perspective will help you negotiate the inevitable disagreements that arise as your teams work together. This knowledge is the key to finding common ground and surfacing “win-win” outcomes.


In addition to leadership skills, there are a set of behaviors that should guide you. These are less about achieving a particular long-term outcome and more about shaping your actions day to day.

  • Decisiveness. Make decisions quickly with the information available. As the VP of Engineering, you will be presented with many decisions. If you delay them all for more input, your team’s execution will slow down. You will never get all the information you want to make a 100% accurate decision. Have the confidence to evaluate options and make a call. Trust your gut. You can change your mind and alter direction, when new information presents itself.
  • Integrity. Integrity goes far beyond just being honest. It means doing what you say you will. Provide an example for others to emulate. Given the high expectations you set for your organization, integrity requires that you hold yourself to the same standard. Don’t expect your team to do something you wouldn’t do yourself. If you ask them to work late to meet a deadline or address an outage, then you had better be there too.
  • Composure. The VP of Engineering role can be very stressful. As the leader, how you act under stress trickles through the organization.  Being the VP doesn’t give you the license to act differently. If you are calm and collected during difficult circumstances, then the rest of the organization will feel your confidence and act accordingly.
  • Dedication. In a start-up, you are pushed to obtain exceptional results. You have to be willing to put in the time and effort necessary to achieve your goals. This can require you to get your hands dirty and do tasks sometimes outside of your scope. It can also require your attention when you least want to give it. Granted, we can all try to “work smarter”, but there will be situations where just grinding through a problem is required. Dedication provides the grit to get through these situations.
  • Drive. In this context, drive represents a strong determination to continuously improve and raise the bar of performance. It’s about maintaining a “can do” attitude and projecting the energy that makes everyone strive to push themselves that much more.
  • Curiosity. Have an insatiable appetite for learning. You always want to understand how solutions work and if there is a better way. You should be an active gatherer of information about your trade – regularly consume blogs, podcasts, books, tech talks, etc. I have found podcasts to be very helpful – you can listen to them at the gym or during your commute. As an example, I have been listening to Software Engineering Daily recently.
  • Proactivity. Take the initiative to solve a problem, seize an opportunity or address an issue. As a member of your company’s executive team, you will rarely be told what to do. You have to fill in the blanks. This means understanding what the company wants to accomplish and taking the initiative to direct your team towards what is needed.

Hopefully, this article provided some insight into how to approach the VP of Engineering role.  If you have other suggestions, please post them in the comments.

Relationship Building Tips for Product Managers and Engineering Leads

In past roles, I have spent a lot of time structuring relationships between product and engineering teams. This primarily focuses on establishing practices for planning, scheduling and communicating development work. As teams interact, individual relationships between product and engineering team members can sometimes break down. In these cases, I find myself listening to each side’s perspective, digging into problem areas and coaching people to more productive interactions. Given this experience, I thought it would be useful to share some tips for maintaining a healthy relationship between product and engineering leaders. To be most practical, I’ll focus on product managers and engineering leads at the small team level, where the business context is delivering software products over the Internet at a rapidly growing company.

For the Product Manager

  • Address why – not just what. When you are asking your engineering team to build a new product feature, take the time to explain why it is important. What are your goals for the feature and how do these fit into the overall objectives for the product? Explain how the feature will improve the user experience. If you haven’t reviewed key product tracking metrics with your engineering team, step back and schedule some time to do this. Understanding the business objectives for a product enhancement provides two benefits for engineers. First, they can use their judgment to answer small design questions for themselves, obviating the need to ping you incessantly about the product spec. Second, engineers like to solve problems. If a user engagement opportunity is presented as a problem (we would like to increase the user registration rate by 20% by adding Facebook auth integration), then engineers will be more aligned with how their work contributes to the product’s success. After launch, they can track the impact of the new feature and help optimize performance based on data.
  • Design the minimum viable product. This means investing the smallest amount of engineering effort necessary to get sufficient feedback from users to make a decision about a product feature. MVP may be a well-known product design concept, but I sometimes see it failing in practice. Product managers and UX designers naturally get excited about the potential for a new feature that is bound to delight users. This can lead to a tendency to over-design. Once the feature is launched, however, users may not engage with it and it is promptly turned off. This reaction is often spun positively as “failing fast”. Failing fast is fine, but if the failed feature required multiple sprints to build, then there may have been wasted engineering cycles. Engineers get frustrated by waste and dislike writing a lot of “throw away” code. Building and testing an MVP can minimize the discarded effort. As an example, if the new feature idea is to provide users with free use of the product in exchange for friend referrals, think about the easiest way to determine if users would be willing to make this exchange. I have been involved in elaborate implementations of a friend referral feature, only to find out after launch that users aren’t interested in referring their friends to some product types. Instead of building out the full referral feature, perhaps sufficient feedback could be collected by just displaying a button describing the offer. While the button would need to link to some sort of “coming soon” landing page, it would quickly indicate interest in that kind of offer.
  • Share design changes as early as possible. After you deliver a spec for a new product feature, engineering will begin the process of conducting their technical design and implementation. From that point forward, engineers subconsciously expect the spec to be fixed. This is understandable, but in a rapidly evolving product space, new information about the feasibility of a product direction manifests every day. If new information will alter the design for the product feature, you should bring it to your engineering lead as soon as possible. Even if you haven’t determined the extent of the changes, it’s best to still share the possibility of a change. Take the time to explain the new information and your rationale for the re-design. This allows the engineering lead to assume ownership for it with their team. With the anticipated change, the lead can adjust the feature implementation steps appropriately. Best case, the design change won’t impact the delivery schedule because the lead is able to re-order implementation tasks around the changing part.
  • Understand implementation schedule trade-offs and don’t always pick the option with the shortest delivery time. Your engineering lead should prepare a couple of implementation trade-offs as they scope out your product feature request. Usually, these trade-offs involve technology choices and software design decisions, where there is a “fast way” and a “right way”. In most cases, the right way invests effort now to make future development of the product feature easier. Examples are moving some supporting code into a shared library, or building the feature on a different technology. If not addressed now, these items will be addressed in the future and collect as “tech debt”. As a product manager, your tendency may be to always pick the implementation option that results in the shortest delivery time. However, if the schedule will allow for some cycles to address the tech debt items now, addressing them will generally be faster than it would be later. This is because the team is already in context and won’t need to incur refresh cycles on past code. Regardless of your decision, ensure that you fully understand and evaluate the implementation trade-offs. You and your engineering lead should collectively agree on the best option based on your product objectives. If tech debt is generated by your decision, ensure that you backlog it and offer to schedule it during a future sprint planning exercise.
  • Share your product roadmap and solicit feedback from engineering. If you maintain a product roadmap, share this with your engineering team. Granted, your roadmap’s planning window may be short, like a couple of months. That’s okay – even seeing a backlog of feature ideas is helpful. Sharing the product roadmap allows engineers to make technology design decisions with your future direction in mind. When done well, this can reduce implementation effort for new features, because an engineer is armed with the foresight to build extensions into a related code module or service. Also, as you share the feature backlog, it is productive to solicit feedback from your engineering team. As builders, and often users of the product, they may have additional insights into usability. Engineers can be a great source of product ideas, or suggest tweaks to an existing design that may result in a better outcome.

For the Engineering Lead

  • Think of your product manager as a customer. While your product manager technically is a peer, it is productive to consider them a customer of your engineering team’s services. In essence, they are generating a product design request and hiring your team to build it. An analogy would be if they outsourced development to a consulting company. If you were the lead consultant, how would that change your interactions with the product manager? Would you be more collaborative, responsive or proactive? While not perfect, I have found that this kind of relationship modeling helps align the engineering team with the product manager’s expectations.
  • Understand the business and your product’s key measures. Given that your company is inherently a business, it is crucial that you understand how the company generates revenue from its products. This is usually translated by the product team into a set of key metrics that indicate the success of the product. As the engineering lead, it is important that you understand these metrics and how your team can impact them. Fortunately, your product manager is well versed in these metrics and bases their decisions upon them. Make an effort to learn about these metrics and get your hands on whatever reporting system the product manager uses to track them. This way, you can understand what the product manager is trying to accomplish with feature requests. Additionally, you can quickly identify when key metrics are negatively affected by a change. Aligning with your product manager on this shared scoreboard will streamline decision making for you both.
  • Do not obfuscate implementation schedules. When presenting a proposed delivery schedule for a project to your product manager, be specific. Explain the details of your plan. Review your assumptions and highlight risks to project milestones. If you have added time to allow for unknowns, delineate these items and share your methodology.
  • Provide implementation trade-offs. An implementation plan will invariably have some trade-offs. Usually, these take the form of work items that could be deferred in order to reduce the timeline for launch of the feature. Granted, these deferred items will need to be addressed eventually, but it’s also possible that the feature may not resonate with users and be discarded. Therefore, it is important to include these trade-offs in your implementation proposal. Trade-offs should address both the anticipated schedule reduction and the “cost” of deferring the work. For example, if conducting a load test of the new product feature is part of your plan, you can describe the risks of launching without it. Alternately, if you wanted to bundle some of the business logic into a new micro-service, explain the benefits (future maintenance cost reduction) of that work. Then, review your implementation plan and trade-offs with your product manager. If work items are deferred, make sure these are added to your tech debt backlog.
  • Explain the technology. Educate your product manager on the technology that your solution utilizes. Don’t assume they won’t understand. Many product managers (like at Google) have an engineering background. This education can be done in one-on-ones, at lunch-and-learn sessions, or during monthly group all-hands meetings. The more your product manager knows about the technology, the better equipped they will be to appreciate the benefits of non-feature development work.  
  • Share bad news early and in person. If there is a new circumstance that would be considered “bad” by your product manager, share that as soon as possible. Bad news could represent a setback in the implementation plan, an undiscovered bug, a team member change, or anything else that might affect your product manager’s goals. This news should be discussed in person. Don’t postpone it for a team planning meeting, or include it in an email. A big part of relationship building is interpersonal communication. In your discussion, share the update and options to address it. If you and the product manager are not in the same office, communicate via a phone call or better yet, a video conference.

Danger Areas

  • Lack of trust. Loss of trust between a product manager and an engineering lead can be devastating. Like with any relationship, trust is foundational. Think of your product manager or engineering lead as a significant other. Do not say or do anything that would undermine this relationship. Examples include prevaricating, withholding information, disparaging or double-dealing. Evidence of lack of trust might be a product manager who has little confidence in delivery schedules, or an engineering lead who openly criticizes feature choices.
  • Poor communication. The product manager and engineering lead should be joined at the hip. They should be communicating frequently and informally. Ideally, they sit together. They should both be comfortable in talking to each other, without the need to be guarded. Strained or sporadic communication should be flagged.
  • Timidity – inability to push back. Like a marriage, it is not healthy if one partner dominates all interactions and always gets their way. Both the product manager and engineering lead should have the confidence and fortitude to push back if they take issue with the other’s actions. Conflict is healthy and usually leads to better outcomes.

I’m sure there is much more advice on this topic, but I will try to keep this article reasonably short. If you have additional suggestions, please post them in the comments.

Leveraging Automated Testing

Testing your application is a necessary part of the software development process. Each application has a set of behaviors that it is expected to perform consistently. As developers make code changes, established behaviors must be verified before the application is updated in the production environment. This verification of prior behaviors has traditionally been referred to as regression testing. This is contrasted with acceptance testing, which is conducted on new features. In both cases, an individual will run a set of tests and match the results against expected outcomes. Any discrepancies are logged as a bug for developers to fix.

Regression testing in particular can be very resource intensive, when performed manually by your QA team. As an application matures, the number of regression tests will grow, sometimes resulting in hundreds (or thousands) of behaviors that need to be verified. It can become impractical to run these manually before each release. Usually, the team will adapt by delaying regression testing and large releases until the end of a sprint.

Of course, the downside to this delay is that it increases the time between code completion and feature release to users. Also, bugs are reported long (relatively) after the developer has made the code changes. In order to address the bug, the developer has to refresh themselves on the code changes made at that time.

A solution to this is to reduce the time it takes to conduct regression testing through automation. On a modern, UI-driven internet application, much of this manual testing can be replaced by simulated tests run by computers. Computerized tests can be parallelized and completed faster than if a human ran them.  In this post, we will explore a couple of mechanisms available to automate regression testing and how you can measure your testing effectiveness.

Unit Testing

The first step in reducing the overhead of regression testing starts with unit tests. These are the code level tests that developers create to exercise individual functions or classes within a software component. A unit test provides a written contract that the code will satisfy. It specifies discrete inputs and an expected output in the form of an assertion.
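As a minimal sketch of that contract, the test below supplies discrete inputs and asserts the expected outputs. The `apply_discount` function and its rules are hypothetical, invented purely to illustrate the shape of a unit test:

```python
import unittest

# Hypothetical function under test: applies a percentage discount to a price.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # Discrete input, expected output expressed as an assertion.
        self.assertEqual(apply_discount(80.00, 25), 60.00)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.00, 150)
```

A developer would run this with `python -m unittest` before checking code in; any behavior change that breaks the contract surfaces immediately as a failed assertion.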

Unit tests can be executed by each developer in their development environment before checking code into a shared repository. This catches logic errors created by that developer’s own changes. To ensure that one developer’s changes are compatible with all the other developers’ subsequent changes, we utilize continuous integration. Continuous Integration (CI) is a practice in which developers frequently integrate their code into a shared repository. A CI server monitors the shared repository for changes and then kicks off a “build”. The server executes the code’s build process, deploys it into a test environment and then runs the set of unit tests. The output of the unit test run is published to the development team. Popular CI servers are Jenkins and Bamboo.

As it relates to reducing the overhead of regression testing, a comprehensive set of unit tests can go a long way. These can catch simple logic errors before they get to the QA team. Developers should write unit tests as they code. These should be checked in periodically and not be postponed until after release. Also, you should measure unit test coverage, to ensure that all logical code paths are covered. A tool like Clover will examine your code base and provide a report of test coverage. How much coverage to target is debatable. More is usually better, but 100% is not realistic. Depending on the maturity of your test program and development team, somewhere between 50% to 90% should work as a target. If your team is just starting out with unit testing, increasing coverage can be a continuous improvement exercise.

Also, it is important that unit test failures surfaced by the CI builds are addressed in a timely manner. Some development shops prevent any new commits until a broken build is fixed. This is one way to ensure that unit test failures are fixed quickly. However, it blocks work for all developers. On the other hand, if unit test failures are ignored, their count will quickly increase until the continuous integration process is no longer useful. Each team can establish their own standard for fixing unit test failures, based on their culture and maturity. I think a reasonable guideline is to fix unit test failures by end of day.

Functional Testing

Functional testing generally involves testing interactions between the application’s user interface and its back-end logic. Similar to unit testing, inputs are sent to the application and expected outputs are verified. Testing of user interfaces will span all devices which the application supports – web, mobile, desktop, etc.

Because the user interface device is the test medium, this type of testing is initially performed manually by QA engineers. For acceptance testing, manual testing makes sense, but for large regression test suites, it can be unwieldy. Fortunately, functional testing can be automated by using a testing tool that simulates interactions with the particular UI device, directly making inputs and capturing responses. Test frameworks exist for each device type. For the web, a test tool can be used that simulates actions in a browser.  The most popular open source tool is Selenium, providing support for most modern browsers.  For mobile devices, ideally you would use a test framework that supports “cross platform” automated testing.  In this case, the test framework exposes a single API for interactions with iOS, Android and mobile web. A popular open source tool for this is Appium (created at Zoosk, incidentally).

A role within your QA team, called the test automation engineer, is usually responsible for maintaining the automated functional tests. Requirements for a test automation engineer are more advanced than for a manual black box tester.  An automation engineer needs the ability to code, as they will write test scripts within the automation framework.  Also, it is a good idea for an automation engineer to possess some devops skills, as they will likely own the test environment. Their role will be to create new automated tests as functionality is added to your applications and to update tests as business logic is changed.

Automating functional tests will significantly reduce the overhead in each regression test pass. Since they interact through a user interface, a nontrivial amount of time is still required to run them. A set of a few hundred Selenium web tests, for example, can take a few hours to execute. Even so, it is feasible to run these several times a day, or at least nightly. The output should be distributed to the team for investigation. Usually, a QA engineer will check test failures manually first, and then file bugs for verified code issues.

Service Interface Testing

Stand-alone services do not have a user interface. These are typically fronted by a RESTful API that facilitates interaction with a back-end application.  Services usually encapsulate a set of related application functions, involving substantial business logic and interactions with data stores. These types of interfaces are easy to test through automation. Automated testing of a service involves calling each API endpoint individually with standard inputs. The test then checks the response of the API for an expected value. A test automation engineer will script these tests, using documentation describing the service interface. Some open source tools that provide the capability to automate service interface testing are SoapUI and PyRestTest.

By moving your application functionality to stand-alone services with open interfaces, you will be able to automate more of your regression testing. This will reduce the amount of testing that must be verified through a UI interaction. Automated regression tests against an API generally run faster than through a user interface. I highlighted the advantages of a service-based architecture in a prior post.

Measuring your Test Automation Program

Creating and maintaining your test automation suites will represent a major resource investment. Like any allocation of resources, it is important to track the cost and benefit associated with that investment. You should collect a set of metrics which represent the amount of effort put into the test automation program and its relative success. Your QA manager should own this data collection. They should summarize the data periodically and present it to the team. These metrics will generate insights and provide feedback for additional changes to the automation program as it evolves.

Here is a list of sample metrics that are useful to collect:

  • Time spent on automated test creation. Track the amount of time each test automation engineer spends creating new automated tests. This is usually required when functionality is added to the application. These numbers can be aggregated on a per sprint basis.
  • Time spent updating automated tests. Track the time spent updating existing automated tests. Updates to existing tests are necessary when the business logic for an existing feature changes.
  • Bugs identified by automation. When automated tests are run, review the output. If a test failure results in filing a bug for a developer to fix, record this event. Aggregate the number of bugs caught by automation on a per sprint basis. These bugs represent the primary benefit of test automation.
  • Bugs missed by automation. After code is pushed to production, issues will be reported by users. If the issue report results in logging a bug to a developer, then make note of the bug. This represents a test failure that automation missed. Examine how testing didn’t capture the bug.
  • Time spent on manual regression testing. Hopefully, as more testing is automated, this time will go down.
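As a sketch of the lightweight aggregation your QA manager might do over these records, bug counts can be rolled up per sprint by source. The event data here is fabricated sample data, and the field names are my own invention:

```python
from collections import Counter

# Fabricated sample events: each bug notes the sprint it was filed in and
# whether it was caught by automation or reported from production.
bug_events = [
    {"sprint": "2024-S1", "source": "automation"},
    {"sprint": "2024-S1", "source": "automation"},
    {"sprint": "2024-S1", "source": "production"},
    {"sprint": "2024-S2", "source": "automation"},
    {"sprint": "2024-S2", "source": "production"},
    {"sprint": "2024-S2", "source": "production"},
]

def bugs_per_sprint(events, source):
    """Count bugs per sprint for one source ('automation' or 'production')."""
    return Counter(e["sprint"] for e in events if e["source"] == source)

caught = bugs_per_sprint(bug_events, "automation")   # caught by automation
missed = bugs_per_sprint(bug_events, "production")   # missed by automation

for sprint in sorted(set(caught) | set(missed)):
    print(f"{sprint}: caught={caught[sprint]}, missed={missed[sprint]}")
```

Even a simple roll-up like this, presented per sprint, makes the trend you want (caught rising, missed falling) easy to see.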

These metrics should be reviewed periodically. Ideally, you will see the number of bugs caught by automated testing increase and the amount of manual testing decrease.

Expectations for Automation Test Coverage

Over time, your metrics will give you a strong sense for the effectiveness of your test automation program. Pay particular attention to the number and types of bugs that are and are not caught by your automated functional regression testing. In my experience, automation of functional testing will not catch every bug. Some set of manual regression tests for critical functionality in your application is still advisable. You can include these for major functions like user registration, search, product pages, shopping cart, etc. Your product managers should be able to help identify the areas of the application that would have major business impact if they didn’t work. For these, your QA team can craft a short list of functional tests that can be run manually before each major release. While often redundant to the automated tests, these provide a good balance to an evolving test automation program. Once you have more confidence in your automated testing, you can cut back on these manual tests.

Also, as you collect data on your test cycles, your QA manager should share that data with other engineering leads and product managers. This data can be summarized monthly or quarterly and presented at group meetings. Sharing this type of data will generate productive conversations about the state of testing and what improvements can be made. The data should help address questions about the investment being made into the automated test program and its benefit.
