On Jan. 4, 2026, KoreaTechDesk reported that Sionic AI CEO Ko Suk-hyun issued a public apology to Korean startup Upstage, retracting earlier claims that Upstage's Solar Open 100B model copied China's ZhipuAI GLM‑4.5‑Air. Ko acknowledged that LayerNorm cosine similarity alone was insufficient grounds for alleging model weight reuse.
This episode is a micro‑drama with macro implications. As open and "sovereign" models proliferate, the risk of public accusations over weight copying and training-data misuse rises. The Sionic AI CEO's retraction, which explicitly concedes that LayerNorm cosine similarity is not enough to prove plagiarism, amounts to an industry‑level acknowledgement that much stronger methodological standards are needed before making claims about model provenance.([koreatechdesk.com](https://koreatechdesk.com/korea-upstage-controversy-sionic-ai-ceo-apology))
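To see why a single cosine-similarity figure is weak evidence on its own, consider a minimal, hypothetical sketch (not drawn from either company's analysis): LayerNorm gain vectors are initialized at 1.0 and typically remain tightly clustered around it after training, so two entirely unrelated models can still score near-perfect cosine similarity on that parameter alone.

```python
import torch
import torch.nn.functional as F

# Hypothetical illustration: cosine similarity between LayerNorm gain (gamma)
# vectors of two *independently* trained models. Because gains start at 1.0 and
# stay positive and clustered, a high similarity score by itself says little
# about weight reuse.
torch.manual_seed(0)
hidden = 4096  # assumed hidden size, for illustration only

# Simulate trained LayerNorm gains as small perturbations around 1.0
gamma_model_a = 1.0 + 0.05 * torch.randn(hidden)
gamma_model_b = 1.0 + 0.05 * torch.randn(hidden)  # unrelated "model"

sim = F.cosine_similarity(gamma_model_a, gamma_model_b, dim=0)
print(f"cosine similarity of unrelated LayerNorm gains: {sim.item():.4f}")  # ~0.99+
```

Credible provenance claims would instead need converging evidence, for example across many parameter groups, training logs, and data documentation, rather than one statistic from one layer type.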
For the AGI race, that matters because trust in open ecosystems is now strategic. South Korea has made sovereign AI a pillar of its industrial policy, and Upstage's Solar Open 100B is part of that push. If key domestic players can be publicly undermined by weak or misinterpreted forensic signals, it chills investment and cooperation. Conversely, the fact that this dispute was resolved with a formal apology and an emphasis on academic rigor suggests the Korean ecosystem is maturing: reputational norms and informal governance are emerging in parallel with state regulation.
Longer term, expect more work on cryptographic attestations, watermarking and transparent training documentation to reduce room for ambiguity. As weights and data circulate more freely, the ability to credibly prove “this is our model, trained this way” will become as important a competitive asset as raw benchmark scores.
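Such provenance tooling could take many forms; none of the specifics below come from the reporting. As one hedged illustration, a lab might content-hash its released weight shards into a manifest alongside training metadata, then sign and publish that manifest so third parties can verify that the weights they download match the documented training run. Paths, file names, and metadata fields here are hypothetical.

```python
import hashlib
import json
from pathlib import Path


def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large weight shards never sit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(weights_dir: str, metadata: dict) -> dict:
    """Record one hash per weight shard alongside training metadata.

    The resulting manifest can be signed and published; verifiers re-hash the
    shards they obtained and compare against the published values.
    """
    shards = sorted(Path(weights_dir).glob("*.safetensors"))
    return {
        "metadata": metadata,
        "shards": {shard.name: sha256_file(shard) for shard in shards},
    }


if __name__ == "__main__":
    # Hypothetical directory and metadata, for illustration only.
    manifest = build_manifest(
        "checkpoints/solar-open-100b",
        {"provenance": "trained from scratch", "release_date": "2026-01-04"},
    )
    print(json.dumps(manifest, indent=2))
```

Hash manifests alone do not prove how a model was trained, only that the published artifact is the one being discussed; pairing them with signed training documentation and watermarking is what would narrow the room for ambiguity.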



