We embrace the scientific method in our approach to assessing companies, the job roles they need to fill to fuel their growth, and the skills candidates need to flourish in those roles. But we are not so arrogant as to claim it is a science. If we’re still around in a decade or two, maybe we’ll be a bit bolder then. For now, here’s how we think about it:
- General → Taxonomy
- Taxonomy → Roles
- Roles → Skills and subskills
- Subskills → Theoretical and applied
1. From general muddiness to the SkillMirror™
Our first hypothesis is that “data science” has become a marketing term used to describe many different buckets of activities. That’s nothing original; we hear it from everyone we speak to, whether they work in industry or are searching for a job in the field. So we applied the tools of the field itself: we examined the scope of work described in a large body of expert-written data science job descriptions to see whether discernible clusters of activity emerged. No, the image below is not a representation of our actual data; we have to keep some “secret sauce,” after all. For illustrative purposes, though, it is very useful.
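To make the idea concrete, here is a toy sketch of clustering job-description text. Everything in it is invented for illustration: the descriptions, the cluster count, and the use of TF-IDF with k-means are assumptions, not kaam.work’s actual pipeline or data.

```python
# Toy sketch: do distinct activity buckets emerge from job-description text?
# All descriptions and parameters below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

job_descriptions = [
    "build dashboards and reports in SQL for marketing stakeholders",
    "run A/B tests and communicate insights with visualizations",
    "train and tune deep learning models for production inference",
    "deploy and monitor machine learning models at scale",
    "design ETL pipelines and maintain the data warehouse",
    "orchestrate batch jobs and optimize warehouse workloads",
]

# Represent each description as a TF-IDF vector, then cluster.
vectors = TfidfVectorizer(stop_words="english").fit_transform(job_descriptions)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

# Each label is a cluster assignment; similar descriptions share a label.
print(kmeans.labels_)
```

On real data the interesting question is how many stable clusters appear and which vocabulary dominates each one.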
In brief, those clusters represent the differing work and output domains of (a) data analysts, (b) data scientists, (c) machine learning engineers, and (d) data engineers. Pushed a bit further statistically, each cluster yields both its principal components (the key skill attributes) and the relative importance of those attributes in predicting a successful outcome. We do not mean to suggest this is wholly original; mostly it corresponds well to what veterans of the analytics field (as it was once known) would tell you. But we do argue that our derived weighting of skills allows us to predict very accurately whether a candidate will thrive in the role you’re hiring for. And there are some surprises there.
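As a minimal sketch of that second step: within one cluster, principal component analysis surfaces which skill attributes carry the most variance, a rough proxy for relative importance. The skill names and the synthetic matrix below are invented; this is an illustration of the technique, not the actual model.

```python
# Toy sketch: PCA on a (candidates x skills) matrix inside one cluster.
# The data is synthetic; "sql" is deliberately given the largest variance.
import numpy as np

rng = np.random.default_rng(0)
names = ["sql", "viz", "stats", "python"]
skills = rng.normal(size=(50, 4))
skills[:, 0] *= 3.0  # make the hypothetical "sql" attribute dominate

# PCA via SVD on the mean-centered matrix.
centered = skills - skills.mean(axis=0)
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

# The first principal component's loadings rank the attributes.
loadings = np.abs(components[0])
weights = loadings / loadings.sum()
print(dict(zip(names, weights.round(2))))
```

Here the inflated "sql" column dominates the first component, which is the sense in which loadings act as relative skill weights.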
This brings us to the guts of kaam.work’s SkillMirror™ tool for assessing data science, machine learning, analysis, and data engineering talent.
Each candidate on our platform undergoes a battery of role-specific tests. Each test assesses both theoretical grasp of a given role’s domain knowledge and mastery of the tools typically used to apply that knowledge in real-world situations, i.e. on company data sets. For a data analyst, for example, we test theoretical knowledge of core mathematics, probability, visualization, and data-wrangling concepts. In the abstract, the theory test asks whether the candidate understands how to handle the range of situations they’ll encounter in manipulating data. The applied test then looks at their ability to use that knowledge in coding contexts via SQL and Python: can they take the theory and relentlessly translate it into clean code that delivers the theoretically valid output?
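One way to picture how theory and applied sub-scores might combine into a single role score is the sketch below. The subskill names, scores, and 40/60 weighting are all invented for illustration; the real rubric is kaam.work’s own.

```python
# Toy sketch: blend per-subskill theory and applied scores into a role score.
# Weights and data are hypothetical.
from dataclasses import dataclass

@dataclass
class SubskillResult:
    name: str
    theory: float   # 0..1, grasp of the concepts
    applied: float  # 0..1, clean working code (e.g. SQL/Python tasks)

def role_score(results, theory_weight=0.4, applied_weight=0.6):
    """Average a weighted blend of theory and applied mastery."""
    per_skill = [
        theory_weight * r.theory + applied_weight * r.applied
        for r in results
    ]
    return sum(per_skill) / len(per_skill)

analyst = [
    SubskillResult("probability", theory=0.8, applied=0.7),
    SubskillResult("visualization", theory=0.6, applied=0.9),
    SubskillResult("data wrangling", theory=0.7, applied=0.8),
]
print(round(role_score(analyst), 3))  # 0.76
```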
The SkillMirror™ is a novel view in that it maps out only the candidates’ verified abilities. We then superimpose those abilities onto clusters of subroles best served by those strengths. Some data analysts, for example, make great decision support for marketing teams because of their ability to marshal data and visualize it in simple, digestible ways that highlight business patterns. Others may not be as strong at visualizing but are superb at manipulating data across complex table hierarchies (often the kind of data systems one finds at legacy companies with lots of historical M&A activity) and ensuring clean output. Those strong in both capacities we think of as Swiss Army Knives; unsurprisingly, experience comes into play here, as they’ll have seen a variety of data situations and business contexts.
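The routing from a verified skill profile to a subrole bucket can be sketched as a simple rule, though the thresholds, skill names, and labels below are purely hypothetical:

```python
# Toy sketch: route a verified skill profile to a subrole bucket.
# Thresholds and labels are invented for illustration.
def subrole(profile):
    viz_strong = profile["visualization"] >= 0.75
    wrangle_strong = profile["wrangling"] >= 0.75
    if viz_strong and wrangle_strong:
        return "swiss army knife"
    if viz_strong:
        return "marketing decision support"
    if wrangle_strong:
        return "complex-data specialist"
    return "generalist"

print(subrole({"visualization": 0.9, "wrangling": 0.6}))
```

A production system would learn these boundaries from outcomes rather than hand-pick them, but the shape of the mapping is the same: verified strengths in, subrole cluster out.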
We apply similar logic, with different skills and subskills, to break apart more sophisticated theoretical domains like data science and machine learning, where we test skills ranging from linear algebra to natural language processing, deep learning, and model scalability. We then reduce the outputs to simple, easily navigated pockets of talent in one place, in effect allowing companies to zero in on a bucket of extremely well-calibrated candidates almost immediately. With a few clicks, candidates can be scheduled for video interviews so companies can form their own opinions. In the best case, a few more clicks will have that candidate starting work within another week. On that timescale, our candidates can be adding transformative value to your business by the time a traditional data science hiring process is just getting to a shortlist. Hiring goes from months to days.