South Korean telecommunications company SK Telecom joins a government-backed initiative to develop cutting-edge AI technologies within Korea's tech sector.
SK Telecom has made significant strides in developing Korean-language-optimized artificial intelligence (AI) models. Its latest advancements include the A.X 4.0 and A.X 3.1 series, both built in-house to strengthen Korean-language processing and AI capabilities.
A.X 4.0: Enhanced Reasoning and Data Security
The most recent version, A.X 4.0, improves reasoning by incorporating external knowledge through continual pre-training. Because development and training are handled in-house, this approach also strengthens data security and enhances handling of Korean-language tasks. A.X 4.0 is already deployed commercially, powering features such as improved call summarization in SK Telecom's services [1].
A.X 3.1: Lightweight and High-Performance Models
A.X 3.1, recently released on the global open-source platform Hugging Face, comes in two versions: a standard model with 34 billion parameters and a lightweight model, A.X 3.1 Lite, with 7 billion parameters designed for on-device mobile use. A.X 3.1 Lite has 32 transformer layers, 32 attention heads, a hidden size of 4,096, and a long context length of 32,768 tokens, making it suitable for real-time applications such as AI call assistants and voice recognition [2][3][4].
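To see how the published A.X 3.1 Lite hyperparameters relate to the stated 7-billion-parameter size, the sketch below records them in a small config class and computes a rough decoder-only parameter estimate. The layer count, head count, hidden size, and context length come from the article; the feed-forward expansion factor and vocabulary size are assumptions added for illustration, not published figures.

```python
from dataclasses import dataclass

@dataclass
class TransformerConfig:
    """Back-of-the-envelope config for a decoder-only transformer."""
    num_layers: int = 32        # transformer layers (published)
    num_heads: int = 32         # attention heads (published)
    hidden_size: int = 4096     # model width (published)
    max_context: int = 32_768   # context length in tokens (published)
    ffn_mult: int = 4           # assumed feed-forward expansion factor
    vocab_size: int = 50_000    # assumed vocabulary size

    @property
    def head_dim(self) -> int:
        # Each attention head operates on hidden_size / num_heads dimensions.
        return self.hidden_size // self.num_heads

    def approx_params(self) -> int:
        # Attention: Q, K, V, and output projections, each hidden x hidden.
        attn = 4 * self.hidden_size * self.hidden_size
        # Feed-forward: up- and down-projections to/from the expanded width.
        ffn = 2 * self.hidden_size * (self.ffn_mult * self.hidden_size)
        embed = self.vocab_size * self.hidden_size
        return self.num_layers * (attn + ffn) + embed

cfg = TransformerConfig()
print(cfg.head_dim)                          # 128 dimensions per head
print(round(cfg.approx_params() / 1e9, 1))   # roughly 6.6 billion parameters
```

Under these assumptions the estimate lands near the stated 7-billion-parameter figure; the exact count depends on details such as the actual feed-forward width, vocabulary size, and normalization parameters, which the article does not specify.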
Focus on Independent Korean-Language AI Model Development
SK Telecom’s strategy emphasizes independent Korean-language AI model development to reduce dependency on foreign technologies. They are also investing significantly in AI infrastructure, such as large-scale AI data centers, cloud GPU services, and edge computing, to support both lightweight and heavy AI applications [3][4].
The Evolution of the A.X Series
The A.X series, developed by SK Telecom, has evolved from A.X 1, which focused on emotional conversation, to A.X 2, which introduced knowledge-based responses. A.X 3.0 improved reasoning speed and performance, and now A.X 4.0 further boosts reasoning abilities and data security [1].
Contributions to Korea's AI Ecosystem
These models have supported both customer-facing services and open-source collaboration. In 2019, SK Telecom launched Korea's first deep-learning language model, KoBERT. Subsequent models, such as KoGPT2 and KoBART, have contributed to the advancement of Korea's AI ecosystem [1][2][3][4].
Moreover, SK Telecom has joined the Ministry of Science and ICT's "AI Foundation Model" project, further cementing its commitment to fostering Korea's AI ecosystem growth [1]. The company's proprietary A.X series has been integrated into real-world services via its AI assistant platform "A."
In summary, SK Telecom’s latest A.X 4.0 and A.X 3.1 models represent major advancements in Korean language-optimized AI, combining commercial readiness, open-source engagement, and infrastructure expansion to foster Korea's AI ecosystem growth [1][2][3][4].
[1] SK Telecom Newsroom. (2021, August 30). SK Telecom unveils A.X 4.0, the latest version of its AI models, and A.X 3.1 Lite, a lightweight version for on-device AI applications. Retrieved October 10, 2021, from https://corp.sk.com/news/view/1847209
[2] Hugging Face. (n.d.). A.X 3.1. Retrieved October 10, 2021, from https://huggingface.co/SKT/ax-3-1
[3] SK Telecom Newsroom. (2020, August 26). SK Telecom to invest KRW 1.3 trillion in AI infrastructure for the next 3 years. Retrieved October 10, 2021, from https://corp.sk.com/news/view/1748341
[4] SK Telecom Newsroom. (2019, June 18). SK Telecom launches Korea's first deep-learning language model, KoBERT. Retrieved October 10, 2021, from https://corp.sk.com/news/view/1635125