The method used to build the HR Technologies Database followed an iterative, participatory process grounded in real-world recruiting practices and practitioner feedback, combined with an in-depth analysis of the HR tech market.
The TARAI Index's design began with our research team conducting over 100 semi-structured interviews with HR and recruiting professionals across several countries between 2021 and 2025. These interviews focused on work processes, technology use and experiences, frustrations with AI systems, and the transparency and competencies practitioners wanted regarding the AI used in HR technologies. The research team documented the HR tech products professionals discussed in interviews and compared this list with exhibitors from major HR tech industry conferences, ultimately selecting 113 unique products based on overlap and frequency of mention.
Key data for each HR technology product included in the TARAI Index was sourced directly from company websites, treating marketing claims and product documentation as the primary materials practitioners would typically encounter. Additional company details (such as location, size, and mergers) were supplemented from business databases such as Pitchbook and Crunchbase. The research team manually reviewed and cross-verified the gathered product information to ensure accuracy in each entry. The collected data was cleaned, sorted by hiring funnel stage, tagged with AI and generative AI features, and translated into recruiter-friendly terms.
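The cleaning and tagging step can be sketched as a simple pipeline. The code below is a minimal, hypothetical illustration: the funnel stages, field names, and tagging rules are assumptions for demonstration, not the TARAI Index's actual taxonomy or tooling.

```python
from dataclasses import dataclass, field

# Hypothetical hiring-funnel stages used for ordering (illustrative only,
# not the TARAI Index's actual taxonomy).
FUNNEL_STAGES = ["sourcing", "screening", "interviewing", "selection", "onboarding"]


@dataclass
class Product:
    name: str
    stage: str                                  # hiring funnel stage
    ai_features: list = field(default_factory=list)
    tags: set = field(default_factory=set)


def clean_and_tag(products):
    """Clean product names, tag AI / generative-AI features,
    and sort entries by hiring funnel stage."""
    for p in products:
        p.name = p.name.strip()                 # basic cleaning
        if p.ai_features:
            p.tags.add("AI")                    # any AI feature present
        if any("generative" in f.lower() for f in p.ai_features):
            p.tags.add("GenAI")                 # generative AI specifically
    return sorted(products, key=lambda p: FUNNEL_STAGES.index(p.stage))


# Example with made-up product entries:
products = [
    Product("  ResumeRank ", "screening", ["ranking model"]),
    Product("ChatRecruit", "sourcing", ["generative chat assistant"]),
]
cleaned = clean_and_tag(products)
print([(p.name, p.stage, sorted(p.tags)) for p in cleaned])
# → [('ChatRecruit', 'sourcing', ['AI', 'GenAI']), ('ResumeRank', 'screening', ['AI'])]
```

In practice this kind of tagging lets the database filter products by funnel stage and surface whether, and what kind of, AI is involved in each tool.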
Click here to learn more about each data field within the TARAI Index.
To ensure usability and relevance, we conducted three rounds of workshops involving recruiters, HR executives, data scientists, and technologists. Workshop participants evaluated database prototypes, provided feedback on features, and helped shape navigation interfaces for both practitioner and researcher audiences. Additionally, we used personas representing key user groups (recruiters, procurement professionals, AI auditors) to guide scenario-based feedback. Subsequent database iterations based on this data increased the number of detailed products, improved the tagging and filtering system, and refined the ranking of AI clarity for each tool.