
How I build my product engineering teams

The toughest part of software development is human communication. If we group together words like specification, validation, experience, and exposure, they all represent communication. It is the centerpiece of your development process. If we dig deeper into these words for the essence of why software development is not a fully controllable discipline like many other branches of engineering, we realize that we have idea-communication problems.

When we sit together with a newly formed product development team, this is the first thing I stress: what you build is your idea of what their idea of what the customers' idea of reality is. And often even our sense of reality itself is incomplete, let alone its articulation and understanding. This realization makes our developers more humble (especially the ones on a HackerRank high), the project plans realistic, and the leadership vigilant. But how do you make sure that, like many mission statements postered on the wall, appended to email signatures, or set as messenger statuses, this doesn't become the most ignored and most often broken statement? Read on.

The hard aspects of product engineering teams

Product development is hard. Hard in the sense that some things are computationally hard. A typical services project assignment masks a lot of complexity and preparatory work from your team. The idea justification and realization might have happened already, and you get a curated feature list. User experience studies, focus group discussions, technology stack debates, prototyping, sizing, and many other "other than programming" tasks might be hidden from you to varying degrees. But one can always go back and read about them once the project starts.

Enter products. A piece of commissioned software, whether in-house or outsourced, might have a large exposure area in terms of users, features, and run conditions. When you do products, this happens to be your baseline case. Add to it the complexity of providing features to the customers of your product's customers. It changes the game from being a good housewife (or househusband) to teaching people the art of courtship. In particular, it:

  1. Changes your requirements from a list to a range (where wise choices need to be made)
  2. Exposes you to comparison with known and unknown alternatives (some of which may not even be software!)
  3. Stretches the boundaries of user behavior, run conditions, and regulations by multiple sigmas (to use a normal-distribution analogy)
  4. Forces you to design for evolution (no longer just a design ideal)
  5. Challenges your notion of the software's purpose (when customization needs are considered, a specialized product can easily start looking like a general-purpose platform for that domain)
  6. Slowly introduces version infidelity (a cocktail of customer pressure, delivery delays, competitive threats, and bad design can entice teams into breaking the baseline into customer-specific releases; welcome to product management, service guys 😛)

And this is not a complete list, but you get the idea. We need to keep our teams ready for this ride. How?

I do it with what I call complexity simulation.

Complexity Simulation

The first and foremost mindset item we need to instill in the product team is that "programming language, data structures, frameworks, and libraries are choices, and you don't always have to make all of them." So is architecture (but that needs lots of caveats, so I will skip it here). Between the time I started my career in products and now, great books and talks have appeared on this topic. So these days I can simply ask them to read books like Code Complete or The Clean Coder, or ask a senior member to run a session on them (the many code retreats that happen these days also help a lot).

The main task of complexity simulation is to give them miniaturized product assignments. The idea is to give them a specification that:

  1. Is not an imaginary app but represents a real-world usage scenario
  2. Is loose enough to allow choices but clear in scope (so shared learning can happen)
  3. Has at least one occurrence of each QoS concern [auth, logging, performance, and some failure scenario] (a minimal sketch follows this list)
  4. Is evolutionary (prompt for a monolith and then graduate to something modular, or similar)
  5. Has some arbitrary and conflicting specifications (like demanding that the design move to REST midway, or demanding configurability just before the project is about to end)
  6. Encourages lots of buzzwords and the latest hype (and then adds requirements that expose their boundaries)
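
To make item 3 concrete: even a toy slice of the assignment can be specified so that auth, logging, and a failure scenario each show up at least once. Here is a minimal, purely illustrative Python sketch; the names (fetch_balance, FAKE_TOKENS) and the banking framing are my own, not from any real assignment spec.

```python
# A hypothetical "one slice" of such an assignment: a tiny balance lookup
# that forces the team to touch auth, logging, and a failure path at least once.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("mini-bank")

FAKE_TOKENS = {"token-42": "alice"}   # stand-in for a real auth provider
ACCOUNTS = {"alice": 1250.0}          # stand-in for persistence

class AuthError(Exception): pass
class UpstreamDown(Exception): pass   # the mandated failure scenario

def fetch_balance(token: str, simulate_outage: bool = False) -> float:
    user = FAKE_TOKENS.get(token)
    if user is None:                  # QoS: auth
        log.warning("rejected request with invalid token")
        raise AuthError("invalid token")
    if simulate_outage:               # QoS: a failure scenario to handle
        log.error("ledger service unavailable for user=%s", user)
        raise UpstreamDown("ledger unreachable")
    log.info("balance served for user=%s", user)   # QoS: logging
    return ACCOUNTS[user]

if __name__ == "__main__":
    print(fetch_balance("token-42"))  # happy path
```

The point is not the code itself but that the spec deliberately plants these concerns so that no one can finish the assignment on the happy path alone.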

A common thread in this approach was that we would always have one external person walk in on a few occasions. They would sometimes question the approach, change some requirements, and make some "suggestions" with the intent of disturbing the team's rhythm.

Many times we started with one-person assignments, and one of the disruptions was to then merge everyone's code into a single repo. It was fun to see how long developers struggled to agree on a common structure and design (despite all the training they had gone through).

In some cases, we would deliberately split the work across layers or components. This was to expose them to interfacing problems and also to the problems caused by differences in individual development speed (and what they did about them). In some brave cases, we even suggested a code or module swap to simulate takeover and maintenance.
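
When the work is split across layers, much of the learning happens at the seam. A minimal sketch of what an agreed-upon contract might look like, assuming a Python assignment and a hypothetical AccountStore boundary between a "service" sub-team and a "storage" sub-team:

```python
# Hypothetical seam agreed up front between the "service" and "storage" sub-teams.
# Each side proceeds at its own pace against this contract; the slower side can be
# stubbed out (as InMemoryAccountStore does) without blocking the other.
from abc import ABC, abstractmethod

class AccountStore(ABC):
    @abstractmethod
    def get_balance(self, account_id: str) -> float: ...

    @abstractmethod
    def deposit(self, account_id: str, amount: float) -> None: ...

class InMemoryAccountStore(AccountStore):
    """Throwaway stub so the service layer is never blocked on storage."""
    def __init__(self) -> None:
        self._balances: dict[str, float] = {}

    def get_balance(self, account_id: str) -> float:
        return self._balances.get(account_id, 0.0)

    def deposit(self, account_id: str, amount: float) -> None:
        self._balances[account_id] = self.get_balance(account_id) + amount
```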

Outcomes and learnings

The learnings were immense and broad-based on both sides. It was interesting to notice that when such projects were more functional (rather than purely technical), most developers would bring some sort of domain-driven structure to their module. In technology-centric assignments, there would be an obsessive overdose of patterns. At times they would suggest modular deployment in the first draft, and it was fun to see the reactions (and later learnings) when I would push for a monolith for the first version. The gradual evolution of their design later on made a lot of them smile.

The sanctity of interfaces would always break! Most good developers applied some sort of logging, say a log4 variant or at minimum console logs, but forgot to introduce meaningful traceability. Similarly, overlapping functional requirements, like five types of banking accounts with their own special features and four types of roles that can act on them with varying authorization, threw a lot of the theory they had read off balance (REST, OOAD, proto links, schemas, reuse).
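
By meaningful traceability I mean something as simple as carrying a correlation ID through every log line, so that a single user action can be followed across modules. A minimal sketch of the idea in Python (the names are illustrative, not from any of our assignments):

```python
# Minimal sketch of log traceability: every log line carries a correlation id,
# so one user action can be followed across modules.
import logging
import uuid
from contextvars import ContextVar

correlation_id: ContextVar[str] = ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.correlation_id = correlation_id.get()  # attach the current id
        return True

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(correlation_id)s] %(name)s: %(message)s",
)
log = logging.getLogger("accounts")
log.addFilter(CorrelationFilter())

def handle_request(account_id: str) -> None:
    correlation_id.set(uuid.uuid4().hex[:8])   # one id per incoming request
    log.info("opening account %s", account_id)
    log.info("account %s opened", account_id)  # same id ties the lines together

if __name__ == "__main__":
    handle_request("SAV-001")
    handle_request("CUR-002")
```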

We would also schedule periodic peer reviews, and I would particularly probe them on the places where Google's help was sought and how the borrowed pieces fit in.

The biggest learnings, however, came in the experiment where we asked for individual repos to be merged into a single codebase.

The team would invariably run into chaos over whose version was better, but natural leaders would emerge and the teams would finally self-organize. By natural I don't mean the first vocal person to occupy the mind space (who often turned out to be bad at teamwork); mostly it was the people on standby who emerged as leaders and guided the team to a conclusion effortlessly. Tech jargon and buzzwords caused most of the friction; it was only after we introduced a few common principles and a shared vocabulary that consensus happened. I could go on detailing various such observations and how they changed as we started getting millennials on teams. But as an objective of complexity simulation, this helped prepare our future teams for the real thing.

In terms of experience, these teams could be a mix of anyone from freshers to, say, tech leads. Sometimes they would be new to the technology; sometimes they would already be trained on the languages and frameworks we used.

In my stints with product teams, we have done various runs of this. We built web-based chat applications (to demonstrate lots of protocol and design issues) and stock market apps (for RWD, grids, and especially the spider web of two-way communicating components); in some scenarios we wrote our own version of MapReduce, a toy file-based SQL engine, and an ESB engine of our own. And a few more which I can't write about lest I leak some company-specific designs.
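
To give a flavour of how small these assignments can stay while still carrying the lesson, here is an illustrative single-process skeleton of the word-count shape a toy MapReduce exercise might take (this is not the engine we built, just a sketch of the starting point teams would grow from):

```python
# Illustrative single-process skeleton of a toy MapReduce-style word count.
from collections import defaultdict
from typing import Callable, Iterable

def map_reduce(records: Iterable[str],
               mapper: Callable[[str], Iterable[tuple[str, int]]],
               reducer: Callable[[str, list[int]], tuple[str, int]]):
    # Map phase: emit (key, value) pairs for each record.
    groups: dict[str, list[int]] = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)        # shuffle/group by key
    # Reduce phase: fold each key's values into one result.
    return [reducer(key, values) for key, values in groups.items()]

def word_mapper(line: str):
    return [(word.lower(), 1) for word in line.split()]

def count_reducer(word: str, counts: list[int]):
    return word, sum(counts)

if __name__ == "__main__":
    lines = ["the quick brown fox", "the lazy dog", "the fox"]
    print(map_reduce(lines, word_mapper, count_reducer))
```

The later "disruptions" (distribution, failure handling, configurability) are exactly the requirements that expose the limits of a skeleton like this.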

The main outcome

One might call this exercise training. To be fair, much of the training in software is focused on teaching the nuances of a technology or framework and can't afford to dilute that focus with the above aspects. Nor can hackathons (which are output-oriented, as opposed to our approach of increasing depth).

The biggest and not-so-obvious objective of my exercises was the team aspects. Getting help, cooperating, keeping commitments, electing leaders, and asking lots of clarifying questions are key skills. More importantly, managing conflicts, arriving at beneficial conclusions (as opposed to mere consensus), responding to late-stage changes and design rework, and negotiating your way out of arbitrary requirements or time pressure are hard skills.

It is (also) this complexity that needs to be simulated for working product teams (and that can't come just from agile or methodology training). Many of my experiments ran for 2-4 weeks, but with more self-aware, well-read, and hands-on team members joining our teams these days, we also did shorter runs, or a-few-hours-a-day side assignments spread over a month. In a few cases it would be a one-person run (high-potential development, you see 🙂). And there was a different version of this reserved especially for tech leads and budding architects (more on it in some other post).

The only pitfall was that personal equations, hype, or self-doubt could seep in silently. But these were expected. And these were my teams, so we took care to confront them later on. How? In future posts …

Do tell me what you think, and share your experiences as a technology leader in charge of building product engineering teams.
