At TWOSENSE.AI, we build enterprise products that implement continuous biometric authentication. Our core technology relies on Machine Learning (ML) to learn user-specific behaviors and automate the user's authentication workload. Although this idea has been around for a while, the challenge is executing on it. An Agile approach was absolutely necessary, both to execute at speed and to get our product into the hands of customers early. Our goal was to combine Agile Software Engineering (SWE) with Machine Learning Research (MLR) to create what we call Agile Machine Learning Engineering (MLE). To do this, we had to change our existing software engineering process on a tactical level (Process), on a strategic level (Paradigm), and in how we see ourselves in the process (People).
There are two schools of thought on how people fit into the MLE process (we’ve looked at this before). The first is to have distinct roles and tracks, usually built around ML Research or Data Science, ML Engineering, and Data Engineering. This approach has several advantages, such as simplified hiring, since each role maps to a more common candidate profile. The downside is that strong separation and compartmentalization of roles can create execution gaps between research and engineering. Porting researchers' solutions to production then becomes a recurring source of friction, because those solutions are developed without the constraints of the production system in mind. On the other side is the concept of a homogeneous team of full-stack MLEs, where everyone has the skills of an ML researcher and is also a proficient software engineer. This point of view is espoused by many of the largest and most reputable firms with a track record of successful execution on ML visions, and is therefore very attractive to us. The upside here is that product and research are de facto in lockstep, and the execution gap between research and production is minimal and closes at the end of each task. The problem is that full-stack MLEs are extremely expensive and extremely rare, and demand far outstrips supply.
Initially, we pushed for the full-stack MLE approach but quickly ran into problems. First, we slowed to a crawl. After a few retros of trying to work through the kinks, we identified the problem. Under the full-stack MLE assumption, each person was pulling tickets in both MLE and research. Each ticket had been estimated collaboratively by the team at less than 3 days of work. In reality, however, for a team member who is much stronger in engineering than in research, a 3-point research ticket ends up taking much more time than a 3-point engineering ticket. The effect was that everyone got bogged down on tickets in areas where they were weak, and we were not moving forward. We did see individuals closing their skill gaps toward full-stack MLE, which was great, but we were just too slow.
Larger companies can afford to have more junior engineers ramp up to full-stack skill levels on the company's dime, and we clearly see the benefit of a full-stack MLE team. However, at an early stage of any startup, the existential risks are short-term. We can’t take that hit right now, so we needed to be pragmatic in both how we operate and how we hire.
The first decision we made was to play to individual strengths, but without creating hard distinctions between research, MLE, and data engineering. We do this by scoping tickets to the strengths of the engineer who will work on them. For example, an engineer who is strong in research will pull broadly scoped research tickets and narrowly scoped production tickets. This way we get the benefits of working toward a full-stack team (although we’re still not sure if our long-term goal is to be cookie-cutter on this), while avoiding bogging down right now and keeping product engineering and research in lockstep.
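The sizing rule above can be sketched as a simple heuristic. This is a hypothetical illustration only (the area names and point caps are assumptions, not our actual planning tooling): tickets in an engineer's strong area get a larger point cap, and tickets outside it stay small.

```python
# Hypothetical sketch of per-engineer ticket sizing: broader scope in an
# engineer's strength area, smaller scope elsewhere. Caps are illustrative.
STRONG_AREA_CAP = 3   # up to ~3 days of work in the engineer's strength
OTHER_AREA_CAP = 1    # keep tickets small outside it

def max_points(engineer_strength: str, ticket_area: str) -> int:
    """Return the story-point cap for a ticket assigned to this engineer."""
    return STRONG_AREA_CAP if ticket_area == engineer_strength else OTHER_AREA_CAP

# A research-strong engineer pulls a small production ticket but a broad research one.
assert max_points("research", "production") == 1
assert max_points("research", "research") == 3
```

The point of the heuristic is that everyone still works across areas (so skills keep converging toward full-stack), but nobody gets stuck on a large ticket in a weak area.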
Interestingly, this still provides some of the hiring-side benefits that would come with stricter roles. Full-stack MLEs are both incredibly rare and incredibly expensive, which creates problems in the hiring funnel. The main difference is in our expectations of ourselves and our new hires. Now we are optimizing to fill a skill gap on the team with a person who is strong in a specific area, but has at least basic skills across data engineering, research, and software engineering.
An unintended negative consequence is that planning requires more effort, since tickets must be thought through on a per-engineer level, and the process is somewhat brittle to changes and delays because tickets are personalized. This works for now but could become a much larger problem as we grow. Hopefully, by the time we get there, we’ll be closer to a full-stack MLE organization and can afford the post-onboarding ramp-up time for new team members to build full-stack skills.
We’re always hiring. If this sounds like an environment you’d like to work in please reach out!