TAIPEI (Taiwan News) — The Taiwan Mixture of Experts (Project TAME) large language model (LLM) was released to the public on Monday (July 1).
Jointly funded by Pegatron, Chan Chun Group, and Unimicron, Project TAME passed university entrance exams, bar exams, and traditional Chinese medicine exams, and beat OpenAI's GPT-4o in some tests, per Tech News.
The Taiwanese LLM harnesses the power of Nvidia’s AI supercomputer, Taipei-1, per CNA. The project uses domain data contributed by companies and trains on nearly 500 billion tokens to develop a traditional Chinese LLM. A token is a basic unit of text, such as a word or a punctuation mark.
In 39 comprehensive evaluations covering nearly 3,000 questions, Project TAME scored higher than all other models, with an accuracy rate 6.8% higher than the second-place Claude-Opus model.
Project TAME leader and Pegatron Associate Vice President Andrew Hsiao (蕭安助) said creating the model took 350,000 GPU hours, 31 engineers, and 1,285 work hours.
Pegatron said that through Project TAME it hopes to accelerate the application of generative AI in industries via innovative, open cooperation, per UDN. The company added that it hopes to combine data from various fields with the expertise of academic LLMs.