
API first for Products

Lead question: Why API first?

Ask a product manager what the difference is between made-to-order software and a product. He will tell you how a product is general purpose for a given domain (though not generic). How it captures a broad set of use cases from the domain. How it is flexible in adapting to different combinations of the rules included in those use cases. How it is future proof and ready for future changes in the domain. How it benefits from the vast experience we have in the domain, with customers and in consultancy. It goes on.

Yet when it comes to the day in the life of a developer, he is often made to code first: code just what he has been asked for, and ship it. It is obvious that a startup will only put coding effort into what sells, but should that rule apply to the very definition of the product as it is conceived? Is defining a product/service/roadmap the same as coding it? Such gaps between what is "seen" by the product and what developers are made to do are solved by evolution-ready software design. API first is a prime example of being comprehensive, and thus evolution ready, yet riding the demand wave frugally.

Business scenarios for API first

  • Integration/Consumption: Software needs to be integrated with by different parties as success happens. Would you like to struggle to meet the demands this success brings your way? Or would you like to be ready for it with some top-up effort? The answer lies in thinking API first. Once you think in terms of well-thought-out APIs, gluing them becomes easy and future successes come faster. This could mean integrating in or integrating out, i.e. consuming (and monetization).
  • Aggregation/Federation: The internet economy of aggregation, made famous by Amazon and Uber, has made a clear case for software cooperating with each other. This increases market reach and revenue. However, the aggregation economy is volatile and competitive. One needs to be ready at shorter notice than typical integration projects allow. The integrations also need to be robust for the parent party (and their customers) to trust you. Another flavor could be federation of services among equals. A well-thought-out API on both sides makes such integration easy, fast and reliable.
  • Extension: Acquisition, SaaS enablement, in-house aggregation (called integration) or even customization are typical cases of the base capability of software being enhanced by extension. We may think of moving from web to mobile to chatbots/Alexa. We may also consider taking base software and enabling it as a multitenant offering. While these extensions are value positive for everyone, they need to be very liberal in capability but restrictive in scope. This helps us keep the sanctity of the base software yet allows creative but reasonable value add. An API layer with clearly laid out extension points helps.

All of these drivers point toward one fact: businesses built on well-defined contracts stand to benefit from these opportunities, as opposed to software that is inward focused and needs extra work for any opening up. While this discourse underlines that API first helps us with readiness for external forces, the same game plays out within teams too. It is just that the inter-team play is seen in the light of the following themes.

Advantages of API first approach

  1. Domain fitment: The tendency of business analysts and developers is to grab a piece of functionality under discussion and "just code it". This carries a huge chance of teams creating code that is a few layers too high in the domain, missing the fundamental building blocks. An API-first approach forces teams to think bottom up inside the domain, where they are made to think of the "build-up" of the functionality. This ensures that the domain of the software is considered in totality before the team chooses to pick up one slice of it and code it.
  2. Parallelization: APIs also stand for clear contracts. So they allow teams to do more parallel work, as API first minimizes the risk of large-scale integration of parallel work streams. Productivity and time-to-market gains follow. Agility, in effect.
  3. Testing: Since APIs are laid out clearly, the functional and security test teams get more than enough time for deeply thought-out testing approaches at all levels of test coverage.
  4. Functional resilience: Since an API commits to a final contract, it gives a robust functional capability, which is a strength. At the same time, many internal implementation details remain open to amendment, which adds to overall resilience. Think of a case where an old rule-based API is replaced with a new machine-learning-based API: the API is the same, but the quality of the algorithm is smarter (a minimal sketch follows this list).
  5. Evolution: A well-laid-out contract also makes future evolution easy. It could be a new version, or a customized or localized flavor, which offers changes in the same neighborhood as the original, older version of the API (hence evolution). This, however, needs better governance put in place.
  6. Tech debt repayment: A well-defined contract also allows decoupling between the team producing it and the teams consuming it. This allows the team to keep changing the internal technical details of their API, keeping it healthy and repaying tech debt without disruption.
  7. Other-than-code needs: There are many other-than-code needs, like performance, build, documentation, availability, monitoring, audit and security, which get into motion after the software is committed to the repo. API first lays out a clear shape of the software-to-be, allows these teams to think things through early on, and facilitates these other-than-code needs at a much deeper level of engagement.
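To make point 4 concrete, here is a minimal sketch with hypothetical names throughout (not from any real system): a stable contract behind which a hand-written rule gives way to an ML-backed score, with no caller ever noticing.

```java
// Illustrative contract: consumers code against this and nothing else.
public interface FraudCheck {
    boolean isSuspicious(String account, double amount);
}

// Version 1: the old hand-written rule.
class RuleBasedFraudCheck implements FraudCheck {
    public boolean isSuspicious(String account, double amount) {
        return amount > 10_000;
    }
}

// Version 2: an ML-style score behind the very same contract.
class MlFraudCheck implements FraudCheck {
    // Stand-in for a trained model; a real one would be loaded from storage.
    private final java.util.function.ToDoubleFunction<double[]> model =
            features -> Math.min(1.0, features[0] / 100_000);

    public boolean isSuspicious(String account, double amount) {
        return model.applyAsDouble(new double[] { amount }) > 0.8;
    }
}
```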

Pitfalls and background of API First

API first is often confused with web services. This often leads teams into thinking that if they are not exposing any URL-style web services to anyone, they are free of it. A better term for API first could be contract first (but that's very technical-world, so API is used). This means that no matter whether one is creating a piece of software consumed by the outside world or by other teams/developers in your company, so long as there is interdependency, one needs to commit to a clear contract. And in order to see the gains in agility and so on, one needs to commit to this contract first. This also means that at all layers of software, from the external-facing UI and web services, through the components and frameworks of the middle layer, down to the data/integration layer, one needs to think of defining clear contracts. To that effect, every component is an API that is producing or consuming other APIs. Even so-called cross-cutting layers like logging are a matter of laying out clear-cut APIs.
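As a small illustration of "every component is an API", even a cross-cutting logging layer can be contract-first. The names below are illustrative, not from any real library:

```java
// The contract the rest of the codebase depends on.
public interface AuditLog {
    void recordAccess(String userId, String resource);
    void recordChange(String userId, String resource, String changeSummary);
}

// One possible implementation behind the contract; a DB- or queue-backed
// one can replace it without touching any consumer.
class ConsoleAuditLog implements AuditLog {
    public void recordAccess(String userId, String resource) {
        System.out.printf("ACCESS user=%s resource=%s%n", userId, resource);
    }
    public void recordChange(String userId, String resource, String changeSummary) {
        System.out.printf("CHANGE user=%s resource=%s diff=%s%n", userId, resource, changeSummary);
    }
}
```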

The concept of defining clear interfaces is not new to programmers. We often talk of specifying clear interfaces and programming to interfaces. However, this good practice starts at the individual program and ends at design patterns. Maybe it is seen as something only server-side programmers do. But as we approach systems that are more layered, more distributed (teams as well as software), more integrated, more reusable (think open source) and more componentized, the value of clearly laid out contracts between these seams becomes apparent. API first is thus a mainstream realization of this fact, though people confuse API with web services only.

More reading: Bezos' Amazon API manifesto and a link to Yegge of Google's rant: https://gigaom.com/2011/10/12/419-the-biggest-thing-amazon-got-right-the-platform/


A Salary Alternative to Layoffs in Recession, and the Leadership Burden

The Upcoming Recession

With the coronavirus pandemic in 2020, news of layoffs and cost-cutting is happening again. For my generation this will be the third such global recession: the 2000 dot-com bust, the 2007-08 global financial crisis and now the coronavirus lockdown recession. In times when business stops and revenue dries up, are mass-scale layoffs the only option? This post argues that they are not, and goes on to propose an alternative, plus new leadership paradigms to avert them in the future.

The Current Salary and Layoff Model

In my LinkedIn reply to Dan Price of Gravity Payments (2020), I suggested the concept of an alternative recessionary salary (ARS). If you look at the current salary structure, it has the basic/guaranteed part, including the statutory/legal. Then there are commissions, bonuses, stocks and some performance/result-based salary. The latter are the optimistic components, based on growth. At times these growth components are applied to people who don't really have any realistic control over growth. But as part of the norm, it remains. In the Indian IT setup, variable pay even at junior levels is in practice, and some CFOs have even touted it as a lever available for quarterly results!

Alternative Recessionary Salary (ARS)

The ARS that I suggest is an alternative salary structure offered to people in recession times. It has three components. One is the salary cut that everyone takes because the revenue is down, say one third. The second is the mandatory pay that everyone gets, say another third. The last one is the salary credit, say the remaining third. The salary credit is accumulated (accrued) in the person's account for future payment. Say a year later, or sometime in the future when a certain percentage of revenue is back, one can vest it. But it is a legal entitlement so long as the business remains alive. Even better would be to allow employees to decide their own percentages of cut and credit, as opposed to the thirds I suggest above.
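For illustration (the numbers here are assumed, not prescriptive): on a monthly salary of 90,000, the employee forgoes 30,000 as the cut, receives 30,000 as the mandatory pay, and accrues 30,000 as salary credit. Over a 10-month downturn that credit grows to 300,000: an investment in the company that vests once the agreed trigger, say 80% of pre-recession revenue, is reached, and that survives as a legal entitlement so long as the business is alive.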

Now the boundaries. First, it should not be seen as a loyalty program. Many times loyalty programs allow deadwood to stay in companies. That remains undetected, as it is such a non-CEO thing to talk about skill obsolescence while retaining good PR on loyalty. The model above is then a sacrifice, an investment made by employees in the companies they wish to continue working for. The finance people don't like to accumulate future claims on their books, so this model can kick in for a year only, after which layoffs can happen. But the main question is what to do with these employees when the workload is less. The answer is efficiency. At one-third pay, it is reasonable to put them on all the improvement and innovation tasks you always wanted done. If you are an IT shop, putting them to work on open source will be a great investment. If nothing else works, putting them to work on community/environmental issues is always possible. If this model becomes the norm, some governments might include it in their tax incentive systems!

Threats of misuse of ARS

The threat that this ARS becomes the main salary structure and doesn't remain an alternative is real. But your HR people have lots of wisdom to offer here :). Recessions don't last; once one is over, if the ARS is used as an exploitative practice, the talent will simply move on or change its productivity. There are even more ways in which the workforce might retaliate. The aspects of the social bargain are well known, and it would be a detour for this post to repeat them.

Why hasn't ARS happened?

I can't say for sure that ARS or a similar model hasn't happened. But it's not mainstream. (Read the following as opinion.)

Modern leadership is leading by finance! Quarterly results weigh much more than social good, nurturing employees or even national priorities! Even terms like innovation are not valued unless they result in profit or cost optimization. [One example: in India, Flipkart started using air-inflated bags for packaging as opposed to thermocol, a huge innovation on the environmental front; will it count on Wall Street?] The current corona crisis is a test of the resilience of our social structures. This includes organizations too. Nations are made to think about the sufficiency and efficiency of their health systems and aid programs. Some have even started to question their supply lines and their independence in fundamental tech. Even matters of trust and cooperation between larger structures such as the EU or the G20 are surfacing. This will result in a sort of reboot and redesign at many levels of society and companies (which might make the recovery very fast).

It is a test of structural resilience. But it is not as mainstream as the matters of solvency, optimization and profit are. To that effect, modern business leadership is dictated more by the personal ambition and bonus payout of some aggressive hedge-fund guy than by thinking about the social structure. That has become such a norm that buybacks during the recession are not even questioned by anyone. Whereas it is the social structure, of different kinds, that supports/incentivizes/benefits/suffers the organization. This is the basis of capitalism, which has given way to stockmarketilism. The invisible hand of capitalism is cuffed. It is high time the CEOs are freed from the burden of quarterly increases, and newer metrics of growth, benefit and wisdom emerge.

Notes for further reading :

  • Harari on the world after: https://www.ft.com/content/19d90308-6858-11ea-a3c9-1fe6fedcca75
  • Ben Thompson on the Compaq moment: https://stratechery.com/2020/compaq-and-coronavirus/
  • Adam Smith's invisible hand: https://en.wikipedia.org/wiki/Invisible_hand
  • HBR article on compensation alternatives: https://hbr.org/2018/07/7-compensation-strategies-for-cash-strapped-startups
  • Amar Bhide on capitalism mishaps: https://hbr.org/1994/11/efficient-markets-deficient-governance
  • Tim Urban on why things are the way they are: https://waitbutwhy.com/2019/12/political-disney-world.html
  • The world after, a survey of experts: https://foreignpolicy.com/2020/03/20/world-order-after-coroanvirus-pandemic/

Calling out the BS on Full stack recruitment

IT recruitment in 2020 is living in a Full (stack) paradise. If you are in the junior ranks, i.e. anywhere less than 8 years of experience, then full stack is your zone. At higher levels it might not be part of the designation, but it's always implied. At that level, though, you are way better "experienced" to handle the job and career. So this post focuses on True Full Stack Developers (TFSD, a term which, if not coined already, might become reality soon).

The ask: Full Stack Developer (FSD)

The most popular description of an FSD is a software engineer who can do backend and frontend, both. An alternative description is that FSDs do client-side and server-side development in addition to database work. Specifically, you are the one who knows how to:

  • Work on UI  
  • Work on services 
  • Work on data storage 

Based on your IT stack, this could mean web/mobile/desktop/low-end IoT displays for the UI, some combination of web services/streams/security backing it, and at minimum a database/cache/session working across.

Benefits of FSD 

For startups, this offers a sweet spot. We can work with smaller teams of talented engineers who can do the "whole thing". Often FSD also leads to complete ownership of the "whole thing" by a developer. This is why most training plans for entry-level engineering talent are full stack. In fact, they also expand to some orientation on OS, UX, networking and so on. Rightly so. Good breadth, both in training and work exposure, is good for engineering organizations. Except that it is a means and not an end.

From Full Stack to Fool's Stack

Proficiency, expertise and words like them are used in job descriptions (JDs) for software engineering to imply a good quality of depth. Except that here they are attached to Full Stack. That's asking "to have great depth across the breadth of the technology stack (we use)". This is not asking for specialist developers with good breadth. This is asking for a specialist in everything!

We even have cute words coined for them, LAMP-MEAN-MERN and so on, that signal this very thing. And it works.

The hidden fallacies 

Developing a linear flow of requirements fits naturally with how we think. We start doing one thing and can finish it well. The same linear flow can be coded full stack, end to end. Except that software development doesn't remain linear, functionally as well as technically. The lines cross, merge-diverge, conflict and suspend. This needs specialization. If your org is working well with FSDs, then you are sure to have these specialists hidden (and operating and sustaining your FSDs).

Some examples. On an agile task board, the UI development might look like a diagram that needs to be drawn on a device/browser. But it also involves lots of animations, offline ability and support for accessibility norms. We also have to make sure that the components don't make too many server round trips, and they need to handle clickjacking and lots of web security stuff. And your business user also wants a responsive web design, where the page supports text entry in its full form but falls back to a toggle on smaller ones. And yes, we need to support integration with the camera. And yes, there are lots of browser types to support. And yes, it would be great if somehow we could also make it work (render) on an Alexa Show or your watch.

These are not very unusual or problematic requirements. But they are so specific that one cannot handle them without going deep into core issues like HTML/CSS/HTTP/devices, or delegating to a specialist.

The same can happen on your back end. We can start with some combination of REST microservices. Then you also need to work with autoscaling design, support distributed commits, comply with tracing, and use some queuing. And we also need to support headless mode for some peer-to-peer calls. And some batches too. And some of the data can be binary. And we need to use an API gateway/service registry. And also support multiple service versions.

We could go on writing the same expansion of depth for the data side as well. All of this is not unreasonable. But taken together, this is a lot. And we also expect the developers to understand and model the domain wisely (along with their Tinder and Insta 🙂).

Yet we see neither FSDs complaining nor projects failing. For three reasons, I say.

  1. People are more talented than the slice of intelligence we pay them for. So they pull it off.
  2. This pull-off comes at the cost of some other super developer, or of their own passion for something else.
  3. The technical debt and bugs it creates are not accounted for in the delivery/shipping-based criteria of the project's success.

Why FSD is hard

At the time I started my career, we used to call them technical architects. These were somewhat experienced people who happened to work across the application and managed to retain the skills they acquired. It took them time. But they had a stronger grip and wiser intuition of it all.

Most of the JDs we see for FSDs ask for 2 to 5 years or similar experience. But FSD is hard due to multiple factors that play out.

We are coding to frameworks

If you read the FSD JDs, they are full of frameworks. Frameworks are great at abstracting problems and offer a great productivity boost. But they don't eliminate the problems. That's the nature of an abstraction. In IT projects, the fundamentals of technology, as well as design, show up unannounced. This increased workload is not factored into the FSD world. Not only does it increase the workload of junior developers, but it also kills their opportunity to develop a detailed understanding of how the framework builds on top of the underlying technology, the choices it made, the problems it solves and the ones it skips.

You mistake layering for the stack 

If we carefully analyze the full stack, it closely aligns with how the project's architecture diagram looks. Often the full stack is a vertical slice of this diagram. What is missing, however, is that your diagram skips lots of techno-framework details due to its 2D drawing nature. Security, monitoring, deployment, packaging, scaling, performance, etc. are part of the work in equal measure and can't be completely kept separate from a junior developer's work package. And did I mention design principles, process, testing and documentation?

While FSD-style hiring can give initial relief on staffing, we are greatly missing out on maturity aspects.

Your stack is not alone

Oh yes. Unless you are a startup building the next Uber/Google/Amazon or whatever it is, your stack is not alone. There are always some enterprise systems that we need to integrate with. Some SaaS-based products, a BPM, an ERP, a rule engine. Some schedulers, or some in-house "stuff" that is recommended as a norm. Or at least AI is included 🙂. New hires are lucky if they are told about this with the JDs. Often this is missed and needs to be paid for later, somehow.

Developers have inclinations

Each developer has their own inclination toward the soft aspects of technology. A UI developer, often called a frontend developer, can intuitively appreciate the event-based nature of the work, the aesthetic pleasure of laying things out, and the experiential nuances (UX), and pursue them. A server- or DB-side developer might like the notion of event sequence and the time-space nature of their work, and hover there. And so on. Maybe younger developers are not able to verbalize it well, but when an able programmer says he enjoys his work, the reason is not comfort zone but the sense of joy he gets.

The problem FSD creates for developers 

We can go on about how FSD structurally misses lots of aspects of the actual work conditions, and how organizations need to pay for that later. But the developers who manage to pull off the FSD tag for years have a lot waiting for them too.

Stack Stress

The frameworks change very fast. It's very common to get a new version of your tool every 6 months. Sometimes the changes are also breaking. Taken across the full stack, this is a lot to cope with. On top of it, your project might not even choose to use the new thing, in which case the individual needs to figure out how to keep updated.

Dwarf experts 

The world is full of great brains. Brains that can pull off all-stack development, and brains that are truly masters of all parts of the stack. I love to work with such brains.

But are they so common? If not, then as an industry we are pushing the teens to war without letting them develop a deeper understanding of tech.

Resume driven development 

How ready are you to recruit a great Swift developer for a Node.js project? Or an Angular developer for React? Or a hands-on Unix admin for Kubernetes?

The answers we get from the hiring community and managers are not encouraging. Everyone wants a perfect fit for the project, without realizing that we are breeding disposable developers. But the developers know this intuitively. And they fight back. With resume-driven development.

The greatest and shiniest thing has to be on one's resume, or we are willing to change the tech stack/project/company. (If only we were paying them much higher for such disposable hiring.)

Joyless Work 

Given that the target of FSD hiring (and of this post) is developers below 30, this is a major factor. The FSD hiring movement doesn't talk of development plans, career paths, mentoring models or code retreats. We instead have hackathons, reskilling, upgrading. We have even stopped talking of reading long-format books on our work area. The buzz in skills development is all about online courses. If we ask other arts professionals, the fundamentals give them joy, and the latest styles and trends let them keep their jobs. The FSD movement in software misses the technology fundamentals part.

And I am not even talking about why they also need to code algorithms and data structures in their first round (what a nuisance of a proxy it is for talent).

The Way out

The way out is to train them across the pyramid, and to be candid if FSD means disposable recruitment.

For all the talk about how fast the landscape is changing for technology professionals, that is not the reality. The underlying technologies that drive our tech stacks change over spans of multiple years. It's only the stylistic solutions based on them that change. Even then, we often see these frameworks use designs and techniques from the same neighborhood. A very terse list: [(System V to VMs to containers), (make-maven-npm), (taglib-web components), (grid computing-bigdata), (DW-ML)].

My approach was always to train them (or make them self-train) on how the framework build-up happened. This means knowing the basic technology in its bare form. Knowing the significant earlier attempts in that framework's space. And then training them on the given framework. The same approach was used for the so-called tech stacks we had in our projects. We also exposed them to the wider QoS needs that the frameworks might miss or only partially cover. And to the standard design approaches in software. This allowed them to see through the stack. It not only gave them a great grip but also meant that new changes in these frameworks/spaces did not cause surprise.

I have been working in architecture teams. The examples I give here took anywhere between 3 to 5 years to train a good developer across the layers and components of the products. These were also full-time direct reports. (Often our mentor relations would continue across companies.)

Chances are that the developer you are recruiting as an FSD has taken this or a similar approach on his/her own. (S)he has also done the hard work of reading a lot and doing hands-on work. And will have to continue doing so.

But we must pay them well for this hard work and the tough job of keeping up to date. We must also realize these are the folks that seek joy in their work along the lines of craftspeople, and have a different self-esteem than usual. Handle them well, recruiters…

(And to these folks… I will welcome you all to the technology architect community soon, unicorns 🙂)


Generic Solution fallacy in IT

If we take a word cloud of all the design discussions a technical architect participates in (or is made to participate in), "generic solution" would stand out as one of the key terms. Be it a waterfall process or an agile one, or even appraisals.

The claim is simple. There are multiple things that need to be done in an IT scenario, and there is a generic solution that takes care of them all. It is often also a direct claim to some efficiency or effort/cost saving, which makes it very appealing in meeting rooms and often makes technical people trip (the verb). Often it is not the case, though.

Examples of Generic solution claims 

In the era when NoSQL DBs were all the buzz, I was reviewing a generic solution that would allow connection to all types of DBs. What the solution claimed was that between relational and the different NoSQL DBs, the way to connect differed, so a generic solution was built to handle it. To experienced eyes, this was actually a common wrapper over various DBs. Internally there were factories forced under a common interface. And not to mention a lot of IF-ELSE around a DB_TYPE flag and so on. It wasn't objectionable as a proposal, except that it claimed a lot of savings and fell apart when DB-specific constructs were to be handled. And it reinvented (against DRY) what was already in the field.
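A hedged reconstruction of what such a review typically uncovers (all names below are hypothetical): factories forced under one common interface, with the DB_TYPE branching the "generic" claim hides.

```java
// The "generic" contract everything is forced under.
interface GenericDbConnection {
    Object query(String statement);
}

class RelationalConnection implements GenericDbConnection {
    public Object query(String statement) { return "result via JDBC"; }
}

class DocumentConnection implements GenericDbConnection {
    public Object query(String statement) { return "result via document API"; }
}

class GenericDbFactory {
    // The IF-ELSE around DB_TYPE; it falls apart the moment a DB-specific
    // construct (say an aggregation pipeline) must surface through the
    // common interface.
    static GenericDbConnection create(String dbType) {
        switch (dbType) {
            case "RELATIONAL": return new RelationalConnection();
            case "DOCUMENT":   return new DocumentConnection();
            default: throw new IllegalArgumentException("Unsupported DB_TYPE: " + dbType);
        }
    }
}
```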

Another example came when we were doing REST. This was more than a decade back, and lots of legacy code existed with us. Again, a generic solution. This was, hold your breath, a generic REST service. The claim was that it was more effort efficient! This generic REST service would expect the service name and all parameters in the header, plus a generic service message body. One needed to then just write specific handlers and, bingo, lots of effort saving. It was a long struggle to convince those involved that this solution folded the URLs into headers and that the effort to write service-related code was still there. In fact, the enforcement of the generic had increased the coding work. Not to mention it had killed REST.
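Again as a hedged reconstruction with hypothetical names, the generic REST service roughly looked like this; the comments mark where the claimed savings evaporate.

```java
import java.util.Map;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GenericServiceController {

    interface Handler {
        Map<String, Object> execute(String params, Map<String, Object> body);
    }

    // Every "service" still needs a hand-written handler, so the promised
    // effort saving never materializes.
    private final Map<String, Handler> registry = Map.of(
            "getAccount", (params, body) -> Map.of("status", "ok"));

    // One opaque POST: URLs, verbs, status codes and caching -- everything
    // REST builds on -- are folded into headers and lost.
    @PostMapping("/api/generic")
    public Map<String, Object> handle(@RequestHeader("X-Service-Name") String serviceName,
                                      @RequestHeader("X-Service-Params") String params,
                                      @RequestBody Map<String, Object> body) {
        Handler handler = registry.get(serviceName);
        if (handler == null) {
            throw new IllegalArgumentException("Unknown service: " + serviceName);
        }
        return handler.execute(params, body);
    }
}
```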

I have seen multiple variations of this. A generic integration toolkit for service integration. A generic authentication manager. A generic message parser, a generic webpage, generic deployment tools (cloud + bare metal), a generic build tool, a generic log. Invariably it would be some combination of factories-command-strategy that missed the essential complexity of the specific processing and claimed to be a faster way to project success.

How to evaluate such claims?

Given that most such occurrences missed the same points, I used the following line of probing when I was to approve or review the designs.

  1. How did it handle the differences? (Of whatever was summarized as generic, viz. different formats, protocols, delivery mechanisms, etc. The answer would invariably be to write code to handle them.)
  2. How would it communicate the different types of errors? (Many times this would be a total miss, or some generic error wrapper that the calling party needs to parse.)
  3. How did it cater to the conditions that apply to the execution flow? (It would either be missed, or one needed to write code, or sometimes a domain-specific language, i.e. a DSL, was to be used.)
  4. Does it have to maintain some sort of state or stack in order to finish the execution? (This would expose deeper design flaws.)
  5. How much effort would it take to add a completely new variation of the specific to this solution? (Again, a very fair question that tests the robustness of the design.)

I would specifically not ask whether such a solution already exists, or whether we could open-source it. The fact that you are in this situation means the point is lost. I would also not ask which design patterns apply, which, when answered, would offer false proof of validity. The intent was to do a fair assessment and check whether the generic solution discounted the specifics.

Invariably the generic solutions had missed the complexity of the specific and claimed that, because it had been wrapped, it didn't arise (and hence the savings). Gradual questioning like this would uncover the effort and processing complexity. This often led to the generic solution giving way to rightly estimated solutions that represented the real depth of the complexity.

The positive side 

But there is also a positive possibility. Sometimes there is a real fit for such adapters (a very broad range I cover with that word). An expanded version of the above questioning makes the design specifications deeper and allows the team to correct course. The example in the above list where a DSL was proposed to handle all the common tasks applicable to our problem space was indeed a good solution that offered effort savings and also empowered the developers. We must also note that in such cases the correct term would be general purpose, as opposed to generic. It conveys the trade-offs neatly and the effort aspects transparently.

In another case mentioned above, a generic webpage solution was offered. This was basically a wrapper of HTML and JS that could handle a number of UI-related needs we had. One could argue (and I also had my reservations) on the merits of creating a markup language for markup. In the era where componentized micro frontends are proving to be practical, we could have dismissed this solution. In my case, we stressed to the team that this was not a generic solution. It was, in fact, a templatized design solution. This made the team aware of the enormity of the claim, but it also allowed the respective scrum teams to make their own choice as to whether it really fit their scenario.

In cases where I had more control over the teams, we also used all such submissions of generic solutions as a training opportunity for the architects. It would often involve contrasting their work with existing mainstream tools and frameworks, and whiteboarding how their solution could evolve to a professional level.

But that's about the positive spin. The appeal of generic solutions in effort discussions has not reduced… I hope this post helps you with that somewhat.


How I build my product engineering teams

The toughest part of software development is human communication. If we group together words like specification, validation, experience and exposure, they all represent communication. It is the centerpiece of your development procedures. If we dig deeper into these words for the essence of why software development is not a fully controllable discipline like many other branches of engineering, we realize that we have idea-communication problems.

When I sit together with a newly formed product development team, this is the first thing I stress. It is your idea of what their idea of what the customer's idea of reality is. And often even our sense of reality itself is incomplete, leave alone its articulation and understanding. This realization makes our developers more humble (especially the ones with a HackerRank high), the project plans realistic and the leadership vigilant. But how do you make sure that, like many other mission statements postered on walls, at the end of email signatures or as messenger statuses, this doesn't become the most ignored and most often broken statement? Read on.

The hard aspects of product engineering teams

Product development is hard. Hard in the sense of how some things are computationally hard. A typical services project assignment masks a lot of complexity and preparatory work from your team. The idea justification and realization might have happened already, and you get a curated feature list. User experience studies, focus group discussions, technology stack debates, prototyping, sizing and many "other than programming" tasks might be hidden from you to varying degrees. But one can always go back and read about them once the project starts.

Enter products. A commissioned software system, whether in-house or outsourced, might have a large exposure area in terms of users, features and run conditions. When you do products, this happens to be your baseline case. Add to it the complexity of providing features to the customers of your product's customers. It changes the game from being a good housewife (or husband) to teaching people the art of courtship. In particular, it:

  1. changes your requirements from a list to a range (where wise choices need to be made)
  2. exposes you to comparison with known and unknown alternatives (some of which could even be non-software!)
  3. stretches the boundaries of user behavior, run conditions and regulations by multiple sigmas (normal distribution analogy)
  4. forces you to design for evolution (no longer just a design ideal)
  5. challenges your notion of the software's purpose (when customization needs are considered, a specialized product can easily start looking like a general-purpose one for that domain)
  6. slowly introduces version infidelity (a cocktail of customer pressure, delivery delays, competitive threats and bad design can entice teams into breaking the baseline into customer-specific releases (welcome to product management, service guys 😛))

And this is not a complete list, but you get the idea. And we need to keep our teams ready for this ride. How?

I do it with what I call => complexity simulation.

Complexity Simulation

The first and foremost mindset item we need to instill in the product team is that "programming language, data structures, frameworks and libraries are choices, and you don't always have to make all of them". So is architecture (but then I need to add lots of caveats, so I skipped it). Between the time I started my career in products and now, great books and talks have appeared on this topic. So these days I can simply ask them to read books like Code Complete or The Clean Coder and their like, or ask a senior member to do a session on them (the many code retreats that happen these days also help a lot).

The main task of complexity simulation is to give them miniaturized product assignments. The idea is to give them a specification that:

  1. is not an imaginary app but represents a real-world usage scenario
  2. is loose enough to allow choices but clear in scope (so shared learning can happen)
  3. has at least one occurrence of each QoS concern [auth, logging, performance and some failure scenario]
  4. is evolutionary (prompt for a monolith and then graduate to modular or alike)
  5. has some arbitrary and conflicting specifications (like demanding the design move to REST midway, or demanding configurability just before the project is to end)
  6. encourages lots of buzzwords and the latest hype (and then puts in requirements that expose their boundaries).

A common thread in this approach was that we would always have one external person walk in on a few occasions. He would sometimes question the approach, change some requirements and make some "suggestions" with the intent to disturb their rhythm.

Many times we started with one-person assignments, and one of the disruptions was to then merge everyone's code into a single repo. It was fun to see how long developers struggled to agree on a common structure and design (despite all the training they went through).

In some cases, we would deliberately split the work across layers or components. This was to expose them to interfacing problems, and also to the problem of differing speeds of individual development (and what they did with that). In some brave cases, we even suggested code/module swaps to simulate takeover/maintenance.

Outcomes and learnings

The learnings were immense and broad-based on both sides. It was interesting to notice that when such projects were more functional (than purely technical), most developers would apply some sort of domain-driven structure to their module. In technology-related assignments, there would be an obsessive overdose of patterns. At times they would suggest modular deployment in the first draft, and it was fun to see the reactions (and later the learnings) when I would push for a monolith for the first version. The gradual evolution of their design later on made a lot of them smile.

The sanctity of interfaces would always break! Most good developers applied some sort of logging, say log4j or at minimum console logs, but forgot to introduce meaningful traceability. Similarly, overlapping functional requirements, like 5 types of banking accounts with their own special features and 4 types of roles that can act on them with varying authorization, threw a lot of the theory they had read off balance (say REST, OOAD, proto links, schemas, reuse).
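On the traceability point, a minimal sketch of the kind of thing that was missing, using SLF4J's MDC (the filter and key name are illustrative): a per-request correlation id that, with %X{traceId} in the log pattern, lets all log lines of one request be correlated.

```java
import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.slf4j.MDC;

public class TraceIdFilter implements Filter {
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Stamp every incoming request with an id visible to all log calls below it.
        MDC.put("traceId", UUID.randomUUID().toString());
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("traceId"); // don't leak the id to the next request on this thread
        }
    }
}
```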

We would also schedule periodic peer reviews, and I would particularly probe them on places where Google's help was sought and how the findings were fitted in.

The biggest learnings, however, came in the experiment where we asked for individual repos to be merged into a single code base.

The team would invariably run into chaos as to whose version was better, but natural leaders would emerge and finally the teams would self-organize. By natural I don't mean the first vocal person to occupy the mind space (who often turned out to be bad at teamwork; mostly the people on standby emerged as leaders who would guide the team to a conclusion effortlessly). Tech jargon and buzzwords caused most frictions; it was only after we introduced a few common principles and a common vocabulary that consensus happened. I could go on detailing various such observations and how they changed as we started getting millennials in teams. But as an objective of complexity simulations, this helped prepare our future teams for the real thing.

In terms of experience, these teams could be a mix of freshers up to, say, tech lead levels. Sometimes they would be new to the technology, or be trained on the languages/frameworks that we used.

In my opportunities with product teams, we have done various runs of this. We built web-based chat applications (to demo lots of protocol and design issues) and stock market apps (for RWD, grids and especially the spider web of two-way communicating components); in some scenarios we wrote our own version of MapReduce, a toy file-based SQL engine and an ESB engine of our own. And a few more which I can't write about lest I leak some company-specific designs.

The main outcome

One might call this exercise training. To be fair, much of the training in software is focused on teaching the nuances of a technology or framework and can't afford to dilute that focus with the above aspects. Nor can hackathons (which are output-oriented, as opposed to our approach of increasing depth).

The biggest and not so obvious objective of my exercises was the team aspects. Getting help, cooperating, keeping commitments, electing leaders and asking lots of clarifications are key skills. More importantly, managing conflicts, arriving at beneficial conclusions (as opposed to consensus), responding to later-stage changes and design rework, and negotiating your way out of arbitrary requirements or time pressure are hard skills.

It is (also) this complexity that needs to be simulated for working product teams (which can't come just from agile or methodology training). Many of my experiments ran for 2-4 weeks, but with more self-aware, well-read and hands-on team members joining our teams these days, we also did shorter runs, or a few hours a day for a month as a side assignment. In a few cases it would be a one-person run (high-potential development, you see 🙂). And there was a different version of this reserved especially for tech leads and budding architects (more on it in some other post).

The only pitfall was that personal equations, hype or self-doubt could seep in silently. But these were expected. And these were my teams, so we took care to confront them later on. How? In future posts…

Do tell me what you think, and your experiences as a technology leader in charge of building product engineering teams.


Meet the sibling of PoC: RnD claims in IT appraisals

Once you have got past the PoC claims in IT appraisals, it's time to meet their sibling, RnD. While the PoC claims are easy to handle emotionally, the RnD claims might get on any techie's nerves :).

Research and development, shortened to RnD, is probably the most important activity that corporations do. It could be the hard research that is done, an innovative solution, or a marked improvement in whatever service your corporation deals in. That's about the formal definition. Much like how innovation, strategy, vision, etc. are used in a very diluted fashion, we have also got used to RnD as a loosely used word in work life. The misuse of the word is hardly the problem, though.

RnD claims in IT appraisals are probably the largest claims that can be made for top ratings. The cases are often presented with crisis + void vs. heroism as the formula. Sample the following.

  • The team was completely new to Flutter development and (s)he did lots of RnD in solving the critical issues.
  • As we moved from local machines to prod (say, cloud) we had many unforeseen bugs to close (!!) and (s)he did lots of RnD…

At times the claims are even sweeping …

  • Nobody in the company had worked in AR/VR or IoT or blockchain and (s)he did lots of RnD…
  • (S)he had no background in this technology (say a .NET person on a Java project) and (s)he did lots of RnD…
  • There were many critical performance/security/deployment/usability issues and (s)he did lots of RnD…

If you see the structure of these claims, it follows crisis + void vs. heroism. We must admit that the person in question has put in lots of effort (and must be given credit/reward for that). And that's about a fair job done, as far as the performance assessment of an individual is concerned.

It's the "why" of this structure that has a lot to reveal. The RnD claims made in appraisals hint at the following.

  1. The team is clueless about the technology
  2. The team was not offered adequate hands-on training
  3. The intended structure of knowledge flow/review/mentoring is not working
  4. The estimation/design/team composition is not correct or clear
  5. People are clueless about the complexity that enterprise scale or products bring in (the discovery of it is overwhelming, and then someone does the RnD)
  6. In cases where a project takeover/maintenance is happening, it might signal a lack of documentation of various forms.

Beyond this list, there are many people-related factors that might be at play: naïve but disillusioned developers, a (pretending) architect in the team, a panic-driven style of project management or even a deep organizational culture issue. All of these are complicated issues to solve, and appraisals are not the time and place for them to be taken head-on. But if you have some influence or control over the techno-people aspect of your teams, the above list has two good uses.

First, of course, we can take this line of questioning and find out the extent to which we should honor these RnD claims. The list above serves as a simple and neutral line of probing.

Second, this is your TODO list. In my experience as a product architect, which was also an in-charge capacity, we used these claims as inputs for:

  1. Team training and individual training (hands-on assignments, pair identification, and so on)
  2. Team recompositions
  3. Process improvements
  4. Documentation, sample code snippets, bootstrap code projects
  5. and a big session on how to get help and where to reach out

This is a very broad list. In some cases where we could do a deep dive into issues, we also realized that our training needed some fine-tuning. For e.g., my freshers always struggled with ng serve and exposing their apps on machine-specific IPs. In another case, few people could fathom that their Angular app and REST code could be packaged and run as a single deployable (because Angular and Spring Boot were two different trainings). In other cases, we realized that promises were not covered as part of training (and we needed them), and that REST was oversold to them. But not everything can be so obvious and clear. Our rule of thumb for the team was: "If you are struggling with something for more than 30 minutes, please reach out to me." In our case, I was the old man in charge of tech troubles in the team. But this 30-minute timeout helped people a lot with burnout and avoided future RnD claims.

The session on "How to get help and where to reach out" is obvious but not intuitive. Read on. Between the time I spent in products and services, we got a new assignment of rescuing a project. Between me, our PM and the leads, we were clear on what design we needed to do and how much effort it would take. We even had a detailed plan ready, but the time criticality mandated that we get it right the first time. We needed a team that was skilled with the technology and would quickly reach out when stuck. And we changed our approach to team formation and working.

We did not take new people from an offered Excel list and then call for a "discussion". We instead called, say, 50 people in batches. We gave them a very diluted version of a piece from our design and asked them to code something around it. Yes, something that they could reason out. Googling was allowed, and so was copy-paste.

And then each one of us would pretend that we were secretly helping them pass the assignment, and offer lots of suggestions and debate points. The focus was not on how well they could code a service, create a table and query it, or write multi-threaded sorting. The focus was on seeing whether they got the solution construct right, whether they could understand when we told them to follow, or ignore, a Stack Overflow answer, and the choices we prompted them to make. The experience was enlightening.

People who did not know the technology struggled to find answers to their tasks online (in which case we would tell them to forget the task and code something and explain it, as a sort of fresh start/second chance). People who understood the technology could differentiate between applicable/relevant Google/Stack Overflow answers, and could also spot the correct but not applicable ones. People who were skilled at their craft, the ones who understood our tasks at the first sentence, would go to their favorite tutorial directly or could quote from a book where they had seen it. All this while we were prompting them with different ideas; it was good to learn that the ability to seek and receive help is a personal trait (and not directly related to your skill level). We not only rescued the project, but it also rocked (I had moved on by the time it finished). Most importantly, we did not slog, there was no burnout, and 7 years later our team members stay in touch fondly.

I am sure that these insights are not novel; there would be some study done on these aspects and a lot written. But that people didn't know how to deliver code, and didn't quite get how to get help, was not known to be this major a problem. This also made me create a long session on "How to get help and where to reach out". And I end my sessions with "if you struggle with something for 30 minutes, please reach out to me".

And yes, there are still many topics, techniques and libraries where I, and we as a team, won't have first-hand knowledge. We call these exploration items in our sprints. Either experts handle them, or we go hands-on. The millennials also loved this growing-into-technology experience…

But none made an RnD claim, and no brownie points were asked for in appraisals…

Was your appraisal experience this lucky?


The PoC scam in IT appraisals

One of the responsibilities that comes your way when you are a mid-career techie is annual appraisals. Some of these are for your own team members, and in a lot of them you participate as an external voice.

Your opinion as a technical person or subject expert is valued highly when it comes to top performers.

These top performers on the list have typical attributes: they have done lots of work in the project, often they have worked long hours and done "critical" work, won lots of "appreciation", everybody (read: the bosses) likes them, and they have done lots of PoCs!

And what about skills? you may ask. Of course, there are skilled people: the ones on the floor that you have known as good with their craft, the ones whose peers go to them for their opinion (and not for help, which is an overloaded term for outsourcing the hard mental work under the guise of help) and the ones whose code is elegant and designs thoughtful. But these are not the traits that get highlighted often.

A manager representing his cases invariably talks in terms of hard work + critical tasks + star of the team who has done the work excellently (yes, there is a word like that, an adjective (one I had to swallow many times)). One might also sprinkle in innovation as an adjective, which somehow again goes back to PoCs!

But first, let's be fair to PoCs and their rightful place in software; later we can talk of their misuse in annual appraisals.

PoCs are great and helpful if they aim to validate something. For example: in a mobile banking app, can we skip showing the real-time account balance and put in a Facebook-like refresh button? By 2020 this technique had become mainstream, but when we first encountered it 5 years back, it was a first-class case for a PoC on pull-based UI interactions for bank use cases. Another, more software-ly example: can we use actor patterns for fulfilling bill payments, and will it spoil the user as well as IT expectations (read: service guarantees)? Such pointed questions of "will it work?" or "how can we make it work?" are often good candidates for a PoC.

If the question is how it will feel, whether in user experience or as a piece of running software, it is better called a prototype. (One might be keen to use the MVP concept, but the revenue and funding imperatives of startups are way different from a typical IT setup.) Ah yes, we can also call it a demonstration, plain and simple, without the decorative armor of "concept".

But all this is to redeem PoCs as a rightful technique in evolutionary software development. This is also a diluted, non-exhaustive overview of PoCs.

Back to appraisals. The list of PoCs cited as great points in favor of candidates goes like this: I did a PoC on XML to JSON conversion in Java (RIP Jackson). I did a PoC on predicting monthly account balance based on spending patterns using NumPy (regression! what have we proved here? (anybody for Twitter sentiment analysis? (facepalm))). I did a PoC on ESLint in our CI pipeline (ah, not already using it?).

One can take something from Docker, AWS, SHA-256, some Spring integration scenario, message mapping, or whatever DB or queue is the shiny new thing, and formulate a sentence about a PoC we did.

Often these are the hands-on work someone did, upon which his boss is marveling, and you are expected to put a stamp of approval on it. Given that it's appraisal time and you are an external reviewer, giving a blunt judgment might not help.

In my conversations with appraisal review committees, I used a gradual probing approach. It helps the representing manager see the real depth of the claim made, and it also saves you from the potential pitfall of missing some finer point due to a rushed judgment. In simple language, the questioning goes like this:

1)    Is that an established fact in the industry/community? (No, I am not suggesting asking what the benefit of the PoC is, which is already a rehearsed narrative.)

2)    Does it help other/newer team members as a reference/sample piece to use? (In which case it's a demo piece, respectable work, but outside the greatness claim.)

3)    Why did we need it to be proved? (In case the first question is not feasible, this helps nail the motivation behind the work.)

And assuming that the case presented wasn't trivial work…

4)    What are the boundaries we have considered for this PoC? (The intention is not to ask for assumptions, which tend to be arbitrary, but to probe the limits chosen from the available feature spectrum.)

5)    What is the level of stubbing done here? (This one is a crucial criterion; often the solution that makes it this far is heavily stubbed across layers, and that is not told honestly.)

6)    What will it take to detail this concept out into a complete version? (This is the fairest chance given; purpose- and utility-driven work will give a neat list of the evolutions that have to happen.)

7)    Can we put it on GitHub? (It might not be feasible, but this is one shorthand question to expose petty projects; also, it can be asked in any sequence (another variation: can we patent this?).)

We could always split these lines of probing further for a better assessment of the work presented, and see if it is really something amazing that has been done, or another case of ADD => Appraisal Driven Development.

Most often our frustration with the PoCs presented is not that they are a redo of some well-known pattern/technique/sample repo. That much is fine, and I believe it's natural, for people to claim the mundane as awesome, which one can gently counter in appraisal discussions.

PoCs mentioned in IT appraisals are often a demonstration of hands-on work! That itself is such a wonderful thing in skill development that the misuse of PoC is frustrating (and hence this post).

If you have experience with the Spring or MEAN stack, where my viewpoint rests, you will identify with this feeling easily. We get team members, with a fair number of trained freshers. They have a good general grasp of the framework, say Angular or Spring. But the moment you move away from mainstream coding scenarios like MVC (in the respective frameworks), the seams start to come loose (be happy if your team is better 🙂).

An engineered demonstration of hands-on work works like a wonder in such a situation (I hope your schedule/org/budget allows this; I have been lucky, though). First, it allows your team to try things first hand and be better equipped. It also helps with more reasoned discussions with talent managers when team member choices are made.

But the real benefit is in what I call complexity simulation. Enterprise software or product development is not an easy development task. There are various quality-of-service criteria that apply. It could easily be the ability of the design to evolve, traceability requirements, cloud-native surprises, design compliance, or legal requirements on privacy and security. At times one also needs to choose between competing frameworks and libraries. And if you are in products, there are cross-version feature movement, refactoring, tech debt payments, targeted customization, design compliance and a long list of "work" that is not obvious and throws developers and schedules off balance.

Such situations present a fantastic opportunity for architects to give targeted demo assignments that simulate real work complexity for the developers. One recent assignment I gave my team was to write a chat application using Spring Boot, H2 DB and Angular. Absurd? Read on. In version one they were asked to build one-on-one chat. Later we evolved it to one-to-many chat. Further still, we made chat history a mandatory feature (exit H2, enter Mongo, or even MySQL in some cases). And yes, all of this with REST endpoints documented via Swagger.

By now the browser would crash, so I let them use a 1-second refresh interval. This later evolved into login, OAuth, WebSockets, user registration, Docker and so on. We completed this exercise in about three weeks' time.
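To make the assignment concrete, here is a minimal sketch of what the 1-second polling step could look like on the Angular side. It is illustrative only: the /api/messages endpoint and the Message shape are my assumptions, not the exact code the team wrote.

```typescript
// A sketch only: '/api/messages' and the Message shape are assumed for illustration.
import { Component, OnInit, OnDestroy } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Subscription, interval } from 'rxjs';
import { switchMap } from 'rxjs/operators';

interface Message {
  from: string;
  text: string;
}

@Component({
  selector: 'app-chat',
  template: `<div *ngFor="let m of messages">{{ m.from }}: {{ m.text }}</div>`
})
export class ChatComponent implements OnInit, OnDestroy {
  messages: Message[] = [];
  private poll?: Subscription;

  constructor(private http: HttpClient) {}

  ngOnInit(): void {
    // Poll the REST endpoint every second; a later iteration of the
    // assignment replaced this crude refresh with WebSockets.
    this.poll = interval(1000)
      .pipe(switchMap(() => this.http.get<Message[]>('/api/messages')))
      .subscribe(ms => (this.messages = ms));
  }

  ngOnDestroy(): void {
    this.poll?.unsubscribe(); // stop polling when the component is torn down
  }
}
```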

Of course, the chat web app that everyone wrote was independent, and thus different in design and form. None of it was production-ready. But I had a team with a more nuanced idea of development, design, and delivery. It was a good proof of capability for a team that would go on to work on chatbot projects (yes, we also added CoreNLP to the equation).

After these projects or assignments were done, the team members got allocated to different projects, and they did well there. Before that, I made it a point to let their new bosses know about the projects they had done and the exposure they had had with me/my project.

It helped them get a realistic idea of their abilities. Many of the design choices, new items, etc. that they worked on in my assignments flowed into the new projects without getting morphed into PoCs and claimed in appraisals. (Of course, all of them had better, concrete claims to make in their appraisals, minus the PoCs :) )

Was your appraisal experience this lucky?

Categories
Software

AngularJS, Polymer, Vue, React: An architectural analysis on fitment

Client-side JavaScript has seen rapid churn in the last 3-4 years. There are many libraries and frameworks available. Some, like Angular, try to span the complete framework space. Others, like ReactJS, maintain their focus on the single aspect of rendering fragments. In this blog we will try to give a summary evaluation of how these libraries fit as an architectural choice. We will refrain from X-versus-Y feature comparisons, as there are many posts available on the internet for that.

While the MVC/MVC2 pattern has been around for decades, there is a marked difference when web frameworks try to adopt it. The traditional Windows or Mac OS native application frameworks had established a couple of things clearly:

  1. Need to define clear models for application data
  2. Event lifecycle and catalog
  3. Orchestrations
  4. Resource handling

This is not a definitive or complete summary of elaborate frameworks. Moreover, it is oversimplified, but it helps us underline the fact that any SDK that tries to cater to MVC will evolve in a similar fashion.

The web, however, imposes a new constraint on the OS/native framework model. This includes the dual problem of fetching resources from the internet, typically over HTTP, and rendering them in the browser.

The browser as a runtime for rendering and interaction, especially, poses two important problems. First, the laid-out, mostly document-like nature of the (hyper)text and media. Second, the impedance mismatch between the layout structure and the structure of the data fetched from the server. There were three clear movements in the J2EE/JEE world over the last decade to solve this problem.

1. Introduce a near-native runtime: This is the applet era, where code would travel from server to client browser and a JRE plugin would serve as an additional runtime. The add-on runtime provided the ability to give native-OS-like rendering and more programmatic control over data handling. As a philosophy this continued with the rewritten JavaFX and, to some extent, Flash as well. The key part is side-stepping the browser as the rendering engine.

J2EE MVC, by Libertyernie2 – Own work, CC BY-SA 3.0

2. Pure server-side rendering: The mainstream of JEE applications, and also PHP/.Net, but we will limit the discussion to JEE due to my limited exposure to the latter. The flag bearers of this were/are the servlet specifications. The many versions of servlets/filters, and the frameworks based on them such as Spring MVC and Apache Struts, attempted to organize the MVC-style fitment of components around the spec. The page, however, remained the active Java Server Page, a phrase that denotes server-side code execution with better control over the data lifecycle but markup output on the server side. At times a page could be a curious mix of HTML/CSS/JS, Java code, and stringified HTML/CSS/JS, the stringified code being the programmatic server-side generation of markup that would then execute in the browser (we skip other template engines here).

JSP tag library lifecycle © Apache.org

An attempt to systematize and eliminate this stringified server code was made using tag libraries/JSTL. The taglibs introduced a component-like boundary for markup and a lifecycle around code execution, so that inter-component and pure server-side code could exchange data in a defined manner.

3. Ajax-driven semi-server rendering: The sudden resurrection of AJAX resulted in more interactive applications. Due to the ability of AJAX to facilitate isolated, i.e. targeted, rendering of fragments, the server-side page definitions became less and less relevant. A mix of only the essential server-side pages, pure HTML, and direct access to services for data via AJAX resulted in more interactive, jazzy web applications. The fame of Gmail/jQuery/Dojo etc. belongs to this era.
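The core idea of that era, fetch just the data and re-render one fragment, reads in today's syntax roughly as below; the endpoint and element id are made up for illustration:

```typescript
// The AJAX idea in modern syntax: fetch only the data, re-render only
// one fragment. '/services/orders' and '#orders' are illustrative names.
async function refreshOrders(): Promise<void> {
  const res = await fetch('/services/orders'); // direct access to a data service
  const orders: { id: number; status: string }[] = await res.json();

  // Targeted rendering: only this fragment of the page changes,
  // no server-side page definition involved.
  const list = document.getElementById('orders')!;
  list.innerHTML = orders.map(o => `<li>#${o.id}: ${o.status}</li>`).join('');
}
```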

Portlet 2 spec ©JCP/Oracle

JSF Lifecycle © Oracle

There was also an attempt in the pure JEE space to provide server-side frameworks with an elaborate lifecycle representing both the server- and client-side stages of rendering and control, JSF (Java Server Faces) and portlets being the main ones. However, it was an awkward marriage to begin with, due to the mixture of events as a phenomenon within a linear execution model, where the developer had to juggle a lot of code to handle events arising out of browser interactions.

It is then natural that Ajax-driven, semi-server-controlled rendering would morph into pure client-side rendered applications. The various jQuery contribution libraries and Backbone represent such transitions.

Due both to the increasing power of browsers and to demanding UX requirements, it was imperative that pure client-side, better called browser-native (as opposed to OS-native, as with VB), web frameworks evolve. Even the web standards specifications evolved rapidly in support of this. When we talk of Angular, React/Vue or Polymer web components, we talk of this era.

While Angular popularized the terms SPA, i.e. single page application, and MVVM, a clone of MVC, it is important to note that the design inspirations of such libraries trace back to how the JEE stack frameworks evolved in the early 2000s.

A typical web-page MVC framework will try to solve the following problems:

  1. Navigation, routing and subsequent data passing
  2. Rendering, events and Interactions
  3. State management, preliminary security and application organization

The first generation of JS frameworks, the likes of jQuery, Dojo and Prototype, solved problem #2 nicely, i.e. rendering, events and interactions. The attempt at a pure client-side MVC application mandated that navigation and state also come under the purview of the framework.

AngularJS was the leading framework to provide a complete MVC-style framework with elaborate sub-frameworks for each layer, sometimes adding its own complexity. At the same time, the ever-evolving nature of UX, devices and browser support added the following forces to the choice of framework:

  1. Focused rendering and state management at component level, sometimes reactive
  2. Collaborative components that can declaratively exchange data, on top of the HTML spec
  3. Ability to deliver multi-device and constrained-device interactions (hybrid apps are taken for granted)

This resulted in almost all frameworks, including Angular, Vue, React and Polymer, adopting a template-driven component design with one/two-way data binding, the components having an elaborate lifecycle that allows them to fetch data, prepare markup and manage lifecycle transitions neatly. Component and lifecycle being the commonality, the following table describes these frameworks as fitment for your architecture evaluation and choice. We exclude aspects like testability, custom syntax and helper modules, isomorphic rendering and packaging. We assume that vendor support, community support, developer enthusiasm etc. are considered by most architects during open-source framework evaluation. It must also be noted that a contribution module can completely alter a given criterion, so we stick to the standard offering as the basis for our evaluation.
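To anchor the comparison, here is a minimal sketch of that shared shape in Angular terms: a template, bound data flowing in and out, and lifecycle hooks. The component and property names are illustrative, not from any particular app.

```typescript
// A sketch of the common component shape the table assumes, in Angular
// terms. Names like PriceTagComponent and price are illustrative.
import { Component, Input, Output, EventEmitter, OnInit, OnDestroy } from '@angular/core';

@Component({
  selector: 'app-price-tag',
  // One-way binding renders state into markup; the event binding pushes
  // changes back out, giving two-way flow overall.
  template: `<span (click)="bump()">{{ label }}: {{ price }}</span>`
})
export class PriceTagComponent implements OnInit, OnDestroy {
  @Input() label = '';                                // data flows in from the parent
  @Input() price = 0;
  @Output() priceChange = new EventEmitter<number>(); // and changes flow out

  ngOnInit(): void {
    // fetch data / initialize state once the bindings are ready
  }

  bump(): void {
    this.priceChange.emit(this.price + 1);
  }

  ngOnDestroy(): void {
    // release subscriptions/resources before the fragment is torn down
  }
}
```

A parent can then write `<app-price-tag [label]="'Total'" [(price)]="total"></app-price-tag>`; the input/output pair is what gives the two-way binding the table talks about.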

Angular4 Lifecycle via Angular.io

| Area of responsibility | Sub area | Angular (mostly v4) | React | Polymer (mostly v1) | Vue |
|---|---|---|---|---|---|
| Navigation, routing and subsequent data passing | Model realization | Component properties/metadata | | properties | |
| | App-wide menu navigation | Routes, route components | | Component focused; not available | |
| | Inter-fragment navigation | Route / routerLink | | iron-pages component | |
| | Navigational data exchange | Route parameters, other contribution modules | | Component focused, hence data binding | |
| Rendering, events and interaction | Fragment definition | Components (1.5 onwards) | | Web Components | |
| | Fragment lifecycle | Yes: OnInit/OnDestroy, DoCheck and After* methods | | Yes: created/ready/attached/detached | |
| | Isolated rendering | Not in architecture | | Shady DOM | |
| | Templated rendering | Component template assisted by directives | | Component template assisted by template elements (as directives) | |
| | Cross-fragment communication | Data binding, event binding (two-way) | | Data binding and listeners | |
| State management, app initialization and application organization | Cross-fragment app-level data exchange | Services | | Component focused, so custom built | |
| | Convenience | Dependency injection, built-in directives, pipes | | Behaviors, custom elements | |
| | Styles | External | | Custom CSS properties | |
| | Sponsor | Google | Facebook | Web standard / Google | |

A separate note must be given about Polymer. In terms of its ability to address componentized and templatized building blocks that can be used to compose a UI, Polymer relies on native browser support as opposed to vendor libraries. All the building blocks that the Web Components spec relies on are part of the W3C spec (true for both Polymer v1 and v2). This eliminates the polyfills as newer browser versions arrive. It also minimizes Angular-2-like major breaking changes or React-like code mashups.
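To see what "native browser support" means in practice, here is a sketch of a bare custom element using only the Web Components APIs from the spec, no library at all; the tag name and markup are illustrative:

```typescript
// A plain custom element: the same building blocks Polymer wraps
// (custom elements, shadow DOM, templates), with no library involved.
class UserCard extends HTMLElement {
  constructor() {
    super();
    // Shadow DOM gives the isolated rendering Polymer v1 shims as "Shady DOM".
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>span { font-weight: bold; }</style>
      <span></span>
    `;
  }

  // Lifecycle callback from the custom elements spec, the native
  // counterpart of Polymer's created/ready/attached hooks.
  connectedCallback(): void {
    const span = this.shadowRoot!.querySelector('span')!;
    span.textContent = this.getAttribute('name') ?? 'anonymous';
  }
}

customElements.define('user-card', UserCard);
// Usage in markup: <user-card name="Ada"></user-card>
```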

At the same time, given the component focus of this library, one needs to code the building blocks for page structure, app shell and app-wide idiosyncratic needs like central HTML storage or cross-use-case data passing. Much of this is available in Polymer's element catalog, which gives the feel of an application shell, but these are still components contributed by Google outside the specification. So Polymer is best suited for applications where componentization, isolation and reuse matter most. This is one area where React/Vue and the partner contribution modules you choose can score. Angular 4, having acquired a component nature on top of its elaborate framework layers, can cater to demanding application interactions and data lifecycles, but this comes at the cost of complexity, maintainability and reuse.

The real question to ask when evaluating Polymer/ReactJS/Vue versus Angular 4 as an architecture choice is how much fragment wiring by virtue of data binding will be good enough. Seasoned web developers who have experienced event/callback hell as a design parallel know the answer :). If your requirements are sorted out and support clean componentization, then React/Vue or Polymer is a good fit. When modularization and reuse are more important, Polymer scores over React/Vue. And when the requirements guy can be unpredictable, Angular offers a better trade-off between design ideas and a (complex) design arsenal… opinionated, that is :)

Edit: between the time this was written and now, the industry has gained plenty of experience with this tool set.

This link https://www.matuzo.at/blog/2023/single-page-applications-criticism/ and this one https://infrequently.org/2023/02/the-market-for-lemons/, along with the sub-references in these posts, carry the thoughts I would have posted today. Anyone following my page can benefit from reading these two.

Categories
Software

The case against chatbots

So, 2016 was the year of chatbots. See the graphic at the end (it's a big image) and we see so many mainstream companies having built their bots. A GitHub repo search reveals a similar story. The buzz chatbots are creating is huge, so much so that people are claiming chatbots will soon kill websites and mobile apps. A similar buzz was created a decade ago when mobile apps and app stores became mainstream. The mobile app wave was also met with incredulity, so it is natural to be more welcoming towards the chatbot wave. But the similarity does not hold beyond the English sentence you just read.

The move from website to mobile was actually a reshaping of the form factor of the computing device. It is only natural, albeit in hindsight, that content and its delivery fit themselves into the new form. In the mobile wave too, the mobile and the app were used interchangeably. But while the mobile represents the shift, the app merely represents engagement. This difference is vital to analyzing the chatbot buzz.

Matt Schlicht of Chatbot Magazine defines: "A chatbot is a service powered by rules that a user interacts with using a chat interface. The service can be any number of things, ranging from form to function and live in any major chat product". One might agree with this definition as is, or differ with it in parts. While chatting itself is not a new paradigm, nor is the concept of a daemon process accomplishing a task, it is the combination that matters: an always-running agent, the bot, facilitating the chat. The daemon could be assigning your chat requests to human beings, as in a typical support center scenario. It might be reading your sentences and applying pattern rules to serve a preconfigured response. And because the state of the commoditized art allows us to programmatically interpret human sentences (either typed or spoken), a so-called NLP engine can be plugged into the chatbot to enhance the precision of the intent inference.

Most NLP-styled chatbot guides mention NLP and intent-action mapping in the same breath, but this is a false connection. Intent-action mapping is what it is, whether the inference is made via NLP, regex or rulesets. One might also throw machine learning into the stack to further enhance the inference via correlation and other techniques. Well, that is the short summary of what we get to read first in the chatbot buzz. What often goes missing is the true nature of the chatbot for the end user, i.e. the conversational quality.
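A small sketch makes the decoupling obvious: the intent-action mapping below is a plain lookup, and the regex-based inference could be swapped for an NLP engine or an ML classifier without touching it. All names here are illustrative.

```typescript
// Intent-action mapping as a plain lookup, independent of how the
// intent was inferred. Intents and handlers are illustrative.
type Intent = 'book_flight' | 'pay_bill' | 'unknown';

// One possible inference strategy: crude regex rules...
function inferByRegex(utterance: string): Intent {
  if (/\b(flight|fly|ticket)\b/i.test(utterance)) return 'book_flight';
  if (/\b(bill|pay|payment)\b/i.test(utterance)) return 'pay_bill';
  return 'unknown';
}

// ...which could be replaced by an NLP engine or ML classifier
// without touching this mapping at all.
const actions: Record<Intent, (utterance: string) => string> = {
  book_flight: () => 'Starting the flight-booking flow…',
  pay_bill: () => 'Fetching your outstanding bills…',
  unknown: (u) => `Sorry, I could not understand: "${u}"`,
};

function handle(utterance: string, infer: (u: string) => Intent = inferByRegex): string {
  return actions[infer(utterance)](utterance);
}

console.log(handle('I want to fly to Pune')); // Starting the flight-booking flow…
```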

It is this conversational quality of the interaction that needs deeper scrutiny, because it represents an alternative to the laid-out quality of our interfaces.

It is fine to say that language is the most natural interface humans understand, and that this is the interface bots use, but not to miss that human language has an intent-explain-infer-confirm cycle embedded in it.

This model is very powerful when the "range" of the expression is huge, like asking for qualified advice amidst multiple factors. But it is very lengthy when the expressions are straightforward.

A typical chatbot sample shown to us will be either a flight-booking bot, a bill-pay bot or an e-commerce bot. All these interactions are well modeled in the human mind, and the laid-out selection model fits better there. In fact, it is a liberal (read: open) model for both parties to explore more options alongside the intended interaction.
Whereas a personal assistant that can suggest a song based on the weather conditions, travel duration, earlier playlist usage and so on is the right case for chat (voice, text or gesture) to digest the complexity of the intent-explain-infer-confirm cycle.
Thus the argument that, because end users are on the move and increasingly available on chat platforms, business processes should also move there, will only add to chat fatigue once the novelty fades.
We need to trust the end user to decide on the more convenient and time-efficient model of interaction, and offer them that. Hence the conversational quality, used in qualified places while retaining the laid-out quality of the presented content, is the right blend. A jump into chat via bots will be short-lived for most mainstream businesses. In fact, the power of ML or AI, as we like to call it, implies that the business understands and serves (or is ready to serve) us better even before we start interacting. To transfer this whole responsibility onto AI-assisted chat interfaces via chatbots is laziness.
In the end, there is the Google search experience to the rescue. It lays out a nice search box for us to express ourselves, then does huge work in the background to make the best sense of what we intended, yet subtly suggests alternatives and corrections, like a human conversation would, if the confidence in its inference wasn't high.
It is a good case of fitment-driven software rather than buzz-driven software, and that is the case against chatbots, AI or no AI.

Here are some samples of how subtly Google lays out the conversations, courtesy: littlebigdetails.
1. If you search the word "recursion" on Google, it will suggest "recursion". If you click on the suggestion, it will suggest "recursion" again, creating a recursive search. And don't miss the spell-correction prompt "did you mean".

2. Google Chrome displays some search results in the suggested input area.

3. When searching for an upcoming movie, the Knowledge Graph box shows the release date and asks if you'd like to create a reminder.

4. The O'Reilly report:

Bot landscape, © O'Reilly

Categories
Software

Digital transformation: don't get confused

So every organization and their sales guy is pitching you on the wave of digital transformation, essential to your organization and to being ready for the future. It is a must-do for CxOs and their architects if their business has to stay agile and thus relevant.

Quite true, isn’t it? There is indeed a great sense is taking all your process online, enhancing your digital presence, offer deeper engagement to your end users and grow your business, except that under the noise of these right sentences, lies a misuse, overuse and confusion of the underlying terms which, if true, can lead to different path than intended.

Earlier attempts

The fact that businesses need to go online, i.e. be present on the same electronic plane their customers live on, and then offer interactions on that plane, is something we all came to realize in the dot-com era. A business should rightfully move in sync with its customers' likings. This used to be called, for a long time, e-everything; e as in electronic. We had e-commerce, e-payments, e-medicine and so on, overlapping with the twin buzzwords web (as in website) and online (as in online store). What this wave started was indeed a digital view of the world, except that for the most part it was a conversion of an existing paper-based, human-facilitated way of doing business. Of course, it resulted in gains by sheer virtue of the expansion of reach it produced.

The digital transformation wave, by contrast, is about expansion of depth: the depth to which the end user can interact with a business and everything around it.

Genesis of digital transformation

The human mind is a double-edged sword. At one end it can offer rich contextual insights about everything, which computers are still struggling to match. At the other end it can create huge roadblocks due to the linearity with which it sees the world. Imagine a loan application: while you are filling up the forms, can one start estimating your loan worth in parallel? Can it eliminate your need to sign in multiple places, or your need to photocopy and attach supporting documents? We also need to consider that the application process is designed with the average human being in mind, on both sides of the interaction, mind you; so we introduced a sequentiality to the process.

It is this human intervention, both apparent and embedded, that the digital transformation wave intends to eliminate. Eliminate, so that the depth to which the end user can interact with the business is enhanced by an order of magnitude. It is apt to call this process digitization: digitization of all human intervention, obvious as well as implicit, so that the interaction process has the characteristics below.

Characteristics of useful digital transformation

1. Instant: Instant as in immediate. Because we eliminate the human aspect of the service, we also eliminate the queued nature of our business interaction. This is not the same as the number of simultaneous users, but the ability of the system to react immediately and lead the user to the next logical step, or even to fulfilment of the process itself.

2. Always on: Always on is not the traditional availability, which is a service indicator essential to any digital business. If, say, your payment is waiting for a payment network to come up, or your application is waiting for an approver and appraiser to process it, that is not always on.

3. Resumable: Though we enable the business process to be always on and instant, the same constraint does not apply to the end user. This calls for resumability of the business process at a later point. Resumability is not a mere memory of the user's unfinished actions, but also updating those actions for the business context that might have changed in the (larger) time lapse that might have occurred.

4. Simplified: As we digitize the actors and processes in our system, we can also eliminate many constraints, allowing us to simplify the whole user experience of the process. However, simplification is not a call for giving up prudence, but for shifting it to a later phase or into the background, so that the regular flow in the system is far shorter, easier and clearer. This might even call for coordination between different systems. A case in point is the use of a precomputed credit score to allow an on-demand consumer loan.

5. Parallelized: Removal of the human view of the business opens up many opportunities to parallelize the tasks in the process, so as to shorten as well as smoothen the overall journey (see the sketch after this list).

6. Inferential: As we digitize different aspects of our business, we also enable addressability of all these entities. This allows us to infer lots of data points, decision making and exception handling in our system, to everyone's benefit. An easy example is the use of a QR code, face recognition or even a user segment to accelerate the overall process.
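As promised in point 5, here is a small sketch of the parallelization idea, using the loan application example from earlier; the check functions are illustrative stand-ins for real services:

```typescript
// A sketch of parallelized assessment: the human-designed flow runs these
// checks one after another; once digitized, the independent checks can
// run side by side. All function names and the threshold are illustrative.
async function assessApplication(applicantId: string): Promise<boolean> {
  const [creditScore, kycOk, incomeVerified] = await Promise.all([
    fetchCreditScore(applicantId),
    verifyKyc(applicantId),
    verifyIncome(applicantId),
  ]);
  return creditScore > 700 && kycOk && incomeVerified;
}

// Illustrative stubs so the sketch is self-contained.
async function fetchCreditScore(_id: string): Promise<number> { return 720; }
async function verifyKyc(_id: string): Promise<boolean> { return true; }
async function verifyIncome(_id: string): Promise<boolean> { return true; }
```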

Note that "simplified" is often misquoted as frictionless, and "inferential" is often misunderstood as big data analytics.

It is also important to stress that the earlier attempt at e-fication sometimes resulted in partial achievement of the above goals. But due to the generational nature of transformation waves, it often merely converted existing processes into electronic form. Case in point: though e-payments were established early on, it took a long time to allow e-cheques into the system, whereas the main aim of digitization should have been to eliminate the need for cheques as a means of payment as well as a legal record.

What then becomes obvious is that digital transformation will need a lot of effort to question the business practices, to re-imagine the world in a digitized manner, and then to do it, i.e. digitally transform. For which, most often, that CMS platform, MBaaS suite, service gateway and so on is not a precondition. It might, at most, be an eventual fitment.

Good luck