Last Updated on March 22, 2022
“Talent is everywhere, opportunity is not.”
At 100mentors, we democratize opportunity through our mentoring technology.
When discussing mentoring programs with the world’s top organizations, we have identified these core problems standing in the way of our mission:
- Measure learning outcomes
- Develop skills and prove it
Every month we select one of these problems to discuss with global thought leaders in mentoring and with our stakeholders at 100mentors, e.g. partner organizations, universities and companies that run mentorship programs, EdTech investors, and our technologists.
This month our focus is on measurability.
The problem: Mentoring is the most powerful and longest-lasting concept in learning, yet even today it is still considered a luxury. Typically, a “luxury” is anything that makes us feel good while we do it but whose instant, tangible benefits we cannot measure: art, a meaningful conversation, or anything that helps us connect more deeply with others and ourselves.
In particular, although organizations are good at measuring Output/Engagement metrics (e.g. # of participants, # of mentoring hours), they struggle to measure Learning Outcomes (e.g. skills development and progress).
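The distinction above can be made concrete with a small sketch. The session records, field names, and the skill-score scale below are hypothetical illustrations, not 100mentors data: output metrics are simple counts, while a learning outcome requires tracking each learner over time.

```python
# Hypothetical session records (made-up data for illustration only).
sessions = [
    {"learner": "a", "minutes": 30, "skill_score": 0.4},
    {"learner": "a", "minutes": 45, "skill_score": 0.6},
    {"learner": "b", "minutes": 30, "skill_score": 0.5},
]

# Output/Engagement metrics: trivially countable from raw activity.
participants = len({s["learner"] for s in sessions})
mentoring_hours = sum(s["minutes"] for s in sessions) / 60

# Learning Outcome: per-learner change in skill score over time,
# which is only measurable if skill is scored at all.
def skill_progress(records, learner):
    scores = [r["skill_score"] for r in records if r["learner"] == learner]
    return scores[-1] - scores[0]

# participants == 2, mentoring_hours == 1.75
# skill_progress(sessions, "a") is approximately 0.2
```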
Typical 100mentors KPI reporting, per week/month/quarter (Corporate Internal Mentoring)
On October 21st, we are bringing together Machine Learning and Natural Language Processing experts, along with our Inquiry-Based Learning researcher, to share our approach to measuring mentoring and to extend the dialogue to our stakeholders and to all of you.
Our Mentoring Measurability function is based on automated question evaluation, a project with a family of features already in place or scheduled for release over the coming months.
The Question Evaluation presentation will be given by:
1. Haris Papageorgiou, Director of Language Processing and Machine Learning, Athena Research Lab.
Haris will walk through what we have already built: how the text of each question is processed, and how the algorithm a) evaluates every transcribed question and b) assigns a score based on our three inquiry criteria.
2. Yannis Vlassopoulos, Researcher in unsupervised language models & tensor networks.
Yannis will explain the equation and the weights assigned to its variables in calculating the score of every question on the 100mentors app. He will also address a brief list of open language processing challenges and questions.
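A weighted-score equation of this kind can be sketched in a few lines. The criteria names come from the article; the weight values, the 0-to-1 scale, and the function name below are assumptions for illustration, not the actual 100mentors equation:

```python
# Hypothetical weights for the three inquiry criteria.
# The real equation and weights are what Yannis will present.
CRITERIA_WEIGHTS = {
    "relevance": 0.4,
    "feasibility": 0.3,
    "learning_potential": 0.3,
}

def question_score(criteria_scores: dict) -> float:
    """Combine per-criterion scores (each assumed in [0, 1]) into one score."""
    return sum(
        CRITERIA_WEIGHTS[name] * criteria_scores[name]
        for name in CRITERIA_WEIGHTS
    )

score = question_score(
    {"relevance": 1.0, "feasibility": 0.5, "learning_potential": 0.8}
)
# 0.4*1.0 + 0.3*0.5 + 0.3*0.8 = 0.79
```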
3. Pepy Meli, Head of Research, 100mentors.
Pepy will cover the learning theory and research behind the scoring criteria of Relevance, Feasibility, and Learning Potential. She will also discuss how we assess our learners’ progress in inquiry skills, and how this connects to measuring mentoring and the development of three soft skills.
4. Miltiadis Zeibekis, CTO, 100mentors.
Miltos will share the roadmap for the gradual integration of the automated scoring mechanism and how it is reflected in the user experience. He will also cover which features are already available and tentative release dates for the upcoming ones.
5. Yiorgos Nikoletakis, CEO, 100mentors.
Yiorgos will discuss use cases, the current mentoring software industry, and what is next for measurability, scalability, and skills assessment and development in mentoring.
We will begin with our own tech approach next week and continue by sharing corporate and academic mentoring best practices, taking a closer look at the measurability, scalability, and skills development methodologies applied at top mentorship programs we admire.
We are on this mission together with hundreds of educators and L&D and HR experts from the corporate world and academia, who work to empower all learners, even the least experienced, to enjoy instant gratification and easily realize the return on their mentoring investment.