We scaled Cisco’s design mentorship program to 191 participants across time zones, cut matching time from three weeks to three days, and achieved 85% satisfaction by using an internal LLM for data cleaning, clustering, and match scoring with human review.
Challenge
- Small committee of organizers attempting to dramatically scale a design mentorship program
- Encouragement to use a new internal LLM (BridgeIT) while maintaining data governance and responsible AI practices
- Keep the program and the connections deeply human
Approach
- Use AI to process unstructured data and support internal tooling
- Software tees up the best possible matches
- Humans-in-the-loop make final decisions with institutional context
Outcomes
- Participants: 191
- Growth: +115% year-over-year
- Time-to-match: 3 weeks → 3 days
- Satisfaction: 85% (survey-based)
Watch the full case presentation from the Cisco d.zone conference, or skim the highlights below.
Framing
Frame the problem
During the program, AI supported us in ways both expected and unexpected. It was crucial for evaluating candidate approaches, cleaning up messy data, and surfacing insights that informed our decisions.
Personal note: as a former developer and a late-diagnosed adult with ADHD, AI helped me re-enter complex contexts quickly. It reduced the cognitive tax of stop-start work so I could focus attention on higher-order judgment.
Reduce the cognitive load
Early wins came from cleaning, sorting, and managing survey data to reduce manual effort and context switching for organizers.
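As a minimal sketch of that clean-up step, assuming survey exports land in a pandas DataFrame (the `skills` and `timezone` column names are illustrative, not our actual schema):

```python
import pandas as pd

def clean_survey(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize raw survey exports into a consistent shape."""
    df = df.copy()
    # Turn free-text skill lists into deduplicated, lowercase tokens.
    df["skills"] = (
        df["skills"]
        .fillna("")
        .str.lower()
        .str.split(",")
        .apply(lambda toks: sorted({t.strip() for t in toks if t.strip()}))
    )
    # Standardize timezone labels and drop rows missing one.
    df["timezone"] = df["timezone"].str.strip().str.upper()
    return df.dropna(subset=["timezone"])
```

Automating passes like this is what freed organizer attention for the judgment calls later in the flow.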
Bring the right things into focus
We evaluated Gale–Shapley matching against a baseline human-in-the-loop flow. The algorithm improved assignment stability but underperformed on mentorship fit in niche domains, so we retained human review and kept algorithmic scoring as decision support.
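For reference, here is a compact, mentee-proposing version of the Gale–Shapley algorithm we benchmarked. The names and preference lists are toy data, not program records, and the sketch assumes complete preference lists on both sides:

```python
def gale_shapley(mentee_prefs: dict, mentor_prefs: dict) -> dict:
    """Mentee-proposing stable matching; assumes complete preference lists."""
    free = list(mentee_prefs)                  # mentees awaiting a match
    next_pick = {m: 0 for m in mentee_prefs}   # next mentor each will try
    engaged = {}                               # mentor -> mentee
    rank = {m: {e: i for i, e in enumerate(p)}
            for m, p in mentor_prefs.items()}

    while free:
        mentee = free.pop()
        mentor = mentee_prefs[mentee][next_pick[mentee]]
        next_pick[mentee] += 1
        if mentor not in engaged:
            engaged[mentor] = mentee
        elif rank[mentor][mentee] < rank[mentor][engaged[mentor]]:
            free.append(engaged[mentor])       # displace the weaker match
            engaged[mentor] = mentee
        else:
            free.append(mentee)                # rejected; try next choice

    return {mentee: mentor for mentor, mentee in engaged.items()}

# Toy data only: two mentees, two mentors.
mentees = {"ana": ["sam", "kim"], "bo": ["kim", "sam"]}
mentors = {"sam": ["bo", "ana"], "kim": ["ana", "bo"]}
print(gale_shapley(mentees, mentors))  # ana pairs with sam, bo with kim
```

Stability here only means no mentor and mentee would both prefer each other over their assigned match; it says nothing about fit in a niche skill area, which is where organizer review earned its keep.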
Feedback
Survey responses were cleaned and normalized, clustered on skills and goals, and scored with constraints such as timezone and seniority. High-score pairs were reviewed by organizers, who made final assignments and recorded rationales to improve the scoring over time.
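The scoring itself can stay simple once the data is structured. Below is a hedged sketch of a constrained pair score; the `Profile` fields, the timezone threshold, and the Jaccard-style soft score are assumptions for illustration, not our production logic:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    skills: set          # what this person can offer
    goals: set           # what this person wants to grow in
    tz_offset: int       # UTC offset, in hours
    seniority: int       # e.g. 1 = junior ... 5 = principal

def match_score(mentor: Profile, mentee: Profile, max_tz_gap: int = 4) -> float:
    # Hard constraints first: a workable timezone overlap and a
    # seniority gap so the mentor outranks the mentee.
    if abs(mentor.tz_offset - mentee.tz_offset) > max_tz_gap:
        return 0.0
    if mentor.seniority <= mentee.seniority:
        return 0.0
    # Soft score: Jaccard overlap between mentor skills and mentee goals.
    union = mentor.skills | mentee.goals
    return len(mentor.skills & mentee.goals) / len(union) if union else 0.0

mentor = Profile({"design systems", "accessibility"}, set(), -5, 4)
mentee = Profile(set(), {"accessibility", "research"}, -8, 2)
print(round(match_score(mentor, mentee), 2))  # 0.33
```

Organizers reviewed the top of this ranking rather than auto-assigning, and the rationales they recorded fed back into the scoring between cohorts.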
85% of participants reported satisfaction with their assigned mentor or mentee, citing relevant skill sets and interests.
Iteration
For the following cohort, we implemented improvements to strengthen outcomes and maintain quality at scale:
- Allocate time for targeted recruiting of leadership
- Refine registration to collect more structured and actionable data
- Build out content and guides to support both roles