AI Learning Innovation: Knowledge Distillation Using OpenAI's 'o1' Model

A new study shows that data generated by OpenAI's reasoning model 'o1' can significantly improve the training performance of other AI models. Major research organizations such as DeepMind and OpenAI say that 'knowledge distillation' is becoming a key strategy for developing AI reasoning models.

▲ [Korea Today] Photo unrelated to the article. Source: FREEPIK. © Reporter Byun A-rong

DeepMind announced on the 5th (local time) that 'test-time compute' can overcome the data-exhaustion problem of large language models (LLMs) and improve reasoning performance. The technique divides a query into several stages and solves them one by one, producing a chain of thought (CoT) that works through the problem step by step. The model moves to the next stage only after solving the current one, allowing it to derive more sophisticated responses.
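The staged procedure can be sketched in miniature. In the sketch below, `solve_step` is a hypothetical stand-in for a call to a reasoning model (the function name and the toy arithmetic evaluator are illustrative assumptions, not DeepMind's actual implementation); the point is the control flow: each stage is solved before the next begins, and earlier results are carried forward as context.

```python
def solve_step(step: str, context: list[str]) -> str:
    """Stand-in for a reasoning-model call: here, a toy arithmetic evaluator.

    A real system would prompt an LLM with `step` plus the prior context.
    """
    result = eval(step, {"__builtins__": {}}, {})  # toy only; never eval untrusted input
    return f"{step} = {result}"

def chain_of_thought(steps: list[str]) -> list[str]:
    """Solve each stage in order, appending each result to the running context."""
    context: list[str] = []
    for step in steps:
        context.append(solve_step(step, context))
    return context

# Decompose "(2 + 3) * 4" into two stages and solve them sequentially:
trace = chain_of_thought(["2 + 3", "5 * 4"])
```

The resulting `trace` is exactly the kind of step-by-step record that can later be harvested as synthetic training data.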

DeepMind researchers used this technique to generate synthetic data and fed it into a knowledge-distillation process. With OpenAI's o1 serving as the 'teacher' model, they generated new training data that greatly improved the reasoning performance of a smaller 'student' model.
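A toy version of that teacher-student loop is sketched below. The `teacher` function stands in for a large model such as o1 (an illustrative assumption, not the researchers' actual setup): it labels a batch of synthetic inputs, and a much smaller linear "student" is then fit to those labels by gradient descent.

```python
def teacher(x: float) -> float:
    """Hypothetical teacher model: the behavior the student must imitate."""
    return 3.0 * x + 1.0

# 1. Generate synthetic training data by querying the teacher.
inputs = [i / 10 for i in range(-50, 50)]
dataset = [(x, teacher(x)) for x in inputs]

# 2. Train the student (y = w*x + b) on the teacher's outputs
#    by minimizing mean squared error with plain gradient descent.
w, b = 0.0, 0.0
lr = 0.01
for _ in range(500):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in dataset) / len(dataset)
    grad_b = sum(2 * (w * x + b - y) for x, y in dataset) / len(dataset)
    w -= lr * grad_w
    b -= lr * grad_b
# After training, (w, b) closely approximate the teacher's (3.0, 1.0).
```

In a real distillation pipeline the inputs would be prompts, the teacher's outputs would be reasoning traces or token distributions, and the student would be a smaller neural network, but the structure (generate labels with the teacher, fit the student to them) is the same.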

Microsoft CEO Satya Nadella recently called this approach a 'new scaling law,' saying that combining pre-training data with test-time sampling can produce more powerful AI models. OpenAI co-founder Ilya Sutskever likewise said, "By utilizing the output data of the o1 model, we can supplement the pre-training data and continuously improve model performance."

DeepSeek, a Chinese AI company, trained its own model, 'DeepSeek-V3', on output data from the o1 model. DeepSeek-V3 is regarded as the best-performing open-source reasoning model and has reached a level comparable to GPT-4o.

However, the research team noted that while test-time compute is effective in domains such as mathematics, where the correct answer is clear-cut, its applicability to creative or writing tasks that have no single correct answer remains uncertain. They suggested that further research is needed to explore the technique's limitations and possibilities across different areas of AI.

This research opens up new possibilities for training AI models, but it also poses a major challenge to AI researchers and companies: the technical and ethical issues that must be resolved before it can be put into practice.

© The Korean Today – All Rights Reserved
