AI (Measuring/Coaching)

EDUM AI

The Edum AI algorithm is the core technology of Edum services, utilizing state-of-the-art artificial intelligence techniques such as deep learning and Large Language Models (LLMs). It suggests personalized learning guidance tailored to each learner's learning behavior and psychological state, and provides training and rewards accordingly.

1) Facial Recognition AI: This technology uses deep learning algorithms to detect the learner's movements via the smartphone camera and analyze their learning style, determining learning time and concentration level. Rewards are provided based on the level of concentration during learning, thereby improving learning efficiency.

2) BIO-LLM AI: BIO-LLM technology applies state-of-the-art Large Language Models (LLMs) to analyze learners' learning behaviors and psychological states by measuring biological signals such as brainwaves (EEG, electroencephalography), pulse waves (PPG, photoplethysmogram), and movements. It is the world's first bio-signal-based BIO-LLM technology, connecting personalized learning behavior guidance, training, and rewards with this analysis.

Face recognition AI

The "Learning Measurement AI Algorithm," developed in collaboration with KAIST, offers more sophisticated learning measurement based on facial recognition and provides learners with reasonable rewards.

Using the camera on a smartphone or tablet, it detects learners' movements to analyze and quantify their learning styles, enhancing study effectiveness and concentration.

The facial recognition AI algorithm uses person re-identification to verify whether the same individual is present, learning from the initial user data and analyzing the video with tube masking technology. At the same time, it analyzes various movements and sitting postures together with facial comparison analysis to further improve accuracy. It applies the VideoMAE deep learning algorithm, which exploits temporal redundancy for efficient reconstruction, to precisely analyze learning patterns by discerning the learner's learning information.
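The tube-masking idea behind VideoMAE can be sketched as follows. The frame count, patch count, and 90% masking ratio below are illustrative defaults, not Edum's actual configuration:

```python
import numpy as np

def tube_mask(num_frames: int, num_patches: int, mask_ratio: float = 0.9,
              seed: int = 0) -> np.ndarray:
    """VideoMAE-style tube masking: the same spatial patches are hidden
    in every frame, so the model cannot simply copy pixels across time."""
    rng = np.random.default_rng(seed)
    num_masked = int(num_patches * mask_ratio)
    # Choose one spatial mask and extend it along the time axis (a "tube").
    masked = rng.choice(num_patches, size=num_masked, replace=False)
    frame_mask = np.zeros(num_patches, dtype=bool)
    frame_mask[masked] = True
    return np.tile(frame_mask, (num_frames, 1))  # shape: (frames, patches)

mask = tube_mask(num_frames=16, num_patches=196, mask_ratio=0.9)
```

Because the masked positions repeat across frames, reconstruction forces the model to reason about motion rather than interpolate from neighboring frames, which is why temporal redundancy makes this pretraining task effective.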

Action recognition uses two algorithms, BEVT and UniFormerV2, to learn various movements and improve measurement accuracy. The BEVT algorithm combines modeling of image and video data: it first performs masked image modeling on image data and then jointly performs masked video modeling on video data. The UniFormerV2 algorithm resolves spatiotemporal redundancy and dependency by comparing short-term similarity between video frames. Because it is based on the ViT (Vision Transformer) architecture, it can also learn long-term video dependencies. Moreover, it compares and analyzes motion states against trained data to improve the accuracy of sitting-posture measurement, achieving over 90% accuracy in Deepface similarity.
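A facial comparison check of the kind described above typically reduces to a similarity threshold over face-embedding vectors. A minimal sketch, assuming cosine similarity over precomputed embeddings; the 0.9 threshold mirrors the 90% figure in the text but is a hypothetical value here:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                   threshold: float = 0.9) -> bool:
    # A production system would tune this threshold on validation data.
    return cosine_similarity(emb_a, emb_b) >= threshold
```

In practice the embeddings would come from a face-recognition backbone (e.g. the Deepface library the text alludes to); only the comparison step is shown here.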

The integration of artificial intelligence technologies and automated data processing allows rapid performance improvements and continuous enhancement through automated validation. The collected user data is protected through techniques such as de-identification, anonymization, data reduction, and aggregation. Furthermore, the data is encrypted into a form that is not human-readable and cannot identify an individual without being combined with other information, ensuring privacy protection.
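Two of the privacy techniques mentioned, de-identification and data reduction/aggregation, can be illustrated as follows. The salted-hash pseudonymization and daily-total aggregation are generic sketches, not Edum's actual pipeline:

```python
import hashlib

def pseudonymize(user_id: str, salt: str) -> str:
    """De-identification sketch: replace a direct identifier with a salted
    one-way hash. Without the salt, the original ID cannot be recovered
    or linked back to the person."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()

def aggregate_minutes(records):
    """Data reduction sketch: keep only per-user totals of study minutes,
    discarding the raw event-level records."""
    totals = {}
    for user, minutes in records:
        totals[user] = totals.get(user, 0) + minutes
    return totals
```

The aggregation step is what makes the stored data useful for analytics while no longer describing any single study session.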

BIO-LLM AI

  • The biometric recognition device: The biometric recognition device consists of earbuds equipped with EEG (brainwave) and PPG (pulse wave) sensors, as well as accelerometers and gyro sensors capable of detecting eye blinks and head movements. These sensors measure the learner's psychological and behavioral responses.

  • Preprocessing of biosignal data: The analog signals from the earbuds are converted into digital form, then amplified and filtered. The refined data is cleaned to extract the information required by the AI algorithms, transformed into a mathematical form (a process known as embedding), and stored for further analysis and use.

  • LLM Algorithm: Brainwave, pulse wave, and movement data are combined and their patterns analyzed to identify factors influencing learners' psychological stability, concentration, and learning outcomes. Based on these results, personalized learning guidance and coaching information are delivered through the app service.
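The digitize-clean-embed preprocessing described above can be sketched as follows. The ADC parameters (gain, reference voltage, bit depth) and the window length are assumed values for illustration, not the earbuds' actual specification:

```python
import numpy as np

def adc_to_microvolts(raw, gain=24.0, vref=4.5, bits=24):
    """Convert raw ADC integers from the sensor into microvolts.
    gain/vref/bits are illustrative values, not the real hardware spec."""
    scale = (vref / gain) / (2 ** (bits - 1)) * 1e6
    return np.asarray(raw, dtype=float) * scale

def embed_windows(signal, window=256):
    """Cut the cleaned signal into fixed-length, z-scored vectors:
    a minimal stand-in for the embedding step before storage."""
    n = len(signal) // window
    chunks = np.asarray(signal[: n * window]).reshape(n, window)
    mean = chunks.mean(axis=1, keepdims=True)
    std = chunks.std(axis=1, keepdims=True) + 1e-8
    return (chunks - mean) / std
```

Each row of the output is a numeric vector of uniform shape, which is the property downstream AI algorithms need from an embedding.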

EEG Data Learning Algorithm

  • To enable transfer learning between emotional states and cognitive load, we use EEG signals and EEG datasets associated with psychological states.

  • The cognitive load classification algorithm uses a self-supervised masked auto-encoding approach to pretrain the AI model, then fixes the weights to enable transfer learning.

Biometric Data Embedding

Converting raw EEG signals into tokenized sequences of EEG features

  • Pre-Processed EEG Signal: The input data is filtered to remove noise and non-EEG artifacts by applying a second-order Butterworth bandpass filter with a passband of 1-75Hz.

  • Feature Extraction: Power Spectral Density (PSD) and Differential Entropy (DE) are the two key features. PSD measures the signal power across frequency components and is computed using Welch's method.

  • Tokenization: The signal is divided into small segments and a window function is applied to each. The Discrete Fourier Transform (DFT) of each segment is then computed, along with the average of the squared magnitudes.

  • Masking: The results are masked so that no information is exposed.
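The filtering and feature-extraction steps above can be sketched with standard signal-processing tools. The 1-75Hz second-order Butterworth passband and Welch's method follow the text; the 250 Hz sampling rate, segment length, Gaussian assumption for DE, and masking ratio are assumptions made for the sketch:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

FS = 250  # assumed sampling rate in Hz; not stated in the text

def bandpass_1_75(eeg):
    """Second-order Butterworth bandpass with a 1-75 Hz passband."""
    sos = butter(2, [1, 75], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, eeg)

def psd_and_de(eeg, seg=256):
    """Welch PSD, plus Differential Entropy under a Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * variance)."""
    freqs, psd = welch(eeg, fs=FS, nperseg=seg)
    de = 0.5 * np.log(2 * np.pi * np.e * np.var(eeg))
    return freqs, psd, de

def tokenize_and_mask(features, mask_ratio=0.5, seed=0):
    """Split a feature sequence into tokens and zero out a random subset,
    as in masked pretraining."""
    rng = np.random.default_rng(seed)
    mask = rng.random(len(features)) < mask_ratio
    tokens = np.where(mask, 0.0, features)
    return tokens, mask
```

Welch's method is itself the segment-window-DFT-average procedure the tokenization bullet describes, which is why a single `welch` call covers both steps here.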

Transformer Architecture

  • Feed the masked sequences to the model pre-trained with an L1 loss in the masked auto-encoder (MAE) framework

  • To investigate the effectiveness of the pre-trained model, two scenarios are considered: (1) freeze the Transformer block and train only the prediction head; (2) train the Transformer block along with the new prediction head

  • In both cases, the model is trained with a Binary Cross Entropy loss
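The Binary Cross Entropy loss used in both scenarios, and the frozen-backbone distinction between them, can be sketched as follows. This is a generic illustration, not the actual training code:

```python
import numpy as np

def bce_loss(y_true, y_prob, eps=1e-12):
    """Binary Cross Entropy: the loss used in both fine-tuning scenarios."""
    p = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def trainable_params(backbone, head, freeze_backbone: bool):
    """Scenario (1): only the prediction head receives gradient updates.
    Scenario (2): the Transformer backbone is trained along with the head."""
    return head if freeze_backbone else backbone + head
```

Freezing the backbone tests how transferable the pretrained EEG representations are on their own, while full fine-tuning shows the ceiling when the whole model adapts to the cognitive-load labels.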

Use EDUM AI

The learning patterns and biometric data acquired from the Edum service can revolutionize the edutech industry through Edum's AI system, "Athena." By leveraging AI to understand learning patterns and requirements, it enhances the efficiency and accessibility of education for learners, and it also drives improvements in content, systems, and various other services.

  • Personalized Learning: AI algorithms optimize learning behaviors, types, and speeds according to each user's learning style, maximizing engagement and learning effectiveness.

  • Outcome Prediction: Based on learners' various learning data, it predicts academic achievement and admission prospects. This can be utilized in products such as Jinhak's entrance exam prediction services and offered in various bundled product forms.

  • Automated Assessment and Feedback: AI automatically analyzes and evaluates various learning behaviors, providing personalized feedback tailored to each individual. This helps students identify their strengths and weaknesses and improve their learning.

  • Mentoring and Learning Assistance: Real-time monitoring of students' studying processes allows for personalized mentoring to address challenges, motivate them, and offer strategies for achieving goals.

  • Learning Trend Analysis: Analyzing large-scale data to understand learning trends, individual learner characteristics, and the effectiveness of learning content.

In Edum's APP & DEVICE services, both structured and unstructured pattern data, such as study habits, brainwaves, and concentration, are collected. These are processed and analyzed into learning data such as brainwaves, willpower, concentration, PPG, EEG, patterns, and self-direction. Using AI technologies, the analyzed data enables detailed customer understanding, prediction of future behavior, and user-centric strategy setting. This facilitates service enhancement and the introduction of various new services.

Use Case #1 _ Personalized Coaching

Edum's edutech leverages AI systems for sophisticated and advanced analyses, clustering and classifying data for user behavior prediction and intent analysis. It analyzes various data collected from Edum's APP and devices to provide personalized learning behavior services.
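The clustering step mentioned above can be illustrated with a minimal k-means over user-behavior feature vectors. The feature choices and the simplistic initialization are assumptions made for the sketch; a real system would use a tuned library implementation:

```python
import numpy as np

def kmeans(points, k=2, iters=20):
    """Minimal k-means for clustering user-behavior feature vectors
    (e.g. study time, concentration score). Illustrative only."""
    pts = np.asarray(points, dtype=float)
    # Simplistic init: evenly spaced samples (k-means++ would be better).
    centers = pts[np.linspace(0, len(pts) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each user to the nearest cluster center.
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned users.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels, centers
```

The resulting cluster labels are what downstream steps (behavior prediction, targeted coaching) would consume as user segments.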

With Jinhak's user base of over 4 million and Catch's 150,000 daily active users (DAU), Edum has access to a powerful user pool focused on education and growth. Leveraging this high-quality data enables prediction of user behavior, optimization of business strategies, and delivery of customized services. For partners facing challenges in acquiring new members, securing content, or managing marketing costs, Edum offers various business opportunities through user acquisition, content exchange and sales, and channeling, all enabled by effective use of data.

Use Case #2 _ Bundle Products

Understanding users' learning patterns allows for the creation of tailored products or the expansion of services into various package offerings. For instance, standardizing the EEG and concentration data of top-performing users can enable the sale of that content to users with relatively lower concentration levels. Additionally, packages combining learning NFTs and EEG content can be created. This servitization, which combines Edum DEVICE and NFT services with data, not only increases direct revenue from Edum DEVICE and NFTs but also expands services into derivative or bundled products, creating greater added value.