Newsroom
Explore our latest news and insights
Featured
A Latency Processing Unit: A Latency-Optimized and Highly Scalable Processor for Large Language Model Inference
Published in: IEEE Micro (Volume 44, Issue 6, Nov.-Dec. 2024)
Authors: Seungjae Moon; Jung-Hoon Kim; Junsoo Kim; Seongmin Hong; Junseo Cha; Minsu Kim
Abstract: The explosive arrival of OpenAI’s...
HyperAccel Latency Processing Unit (LPU™): Accelerating Hyperscale Models for Generative AI
The Latency Processing Unit is the world’s first hardware accelerator dedicated to end-to-end inference of LLMs.
[CEO Interview] ‘World’s First LPU AI Chip’ Achieved by Korean Startup: “2.4 Times Better Performance Than Conventional GPUs” [Future of K-Semiconductors ①]