Media Coverage
Global media spotlight HyperAccel’s innovation in AI acceleration.
[CTO Interview] HyperAccel bets LPU to cut LLM inference costs and challenge Nvidia in Korea
Korean startup’s LPU chip targets cheaper LLM inference with DRAM and Samsung 4nm backing. “The core goal is to bring...
AI computing shifts from training to inference; heterogeneous architectures go mainstream
As generative AI continues to advance, its capabilities and application scenarios are rapidly expanding, driving structural changes in computing infrastructure....
[CEO Interview] Korean startup targets Nvidia-dominated AI inference market with 2027 chip launch
HyperAccel CEO Kim Joo-young poses for a photo with the company’s first AI chip, codenamed Bertha 500, during an...
Korean Startup Takes On Cost and Latency With LLM-Specific Chip
HyperAccel is also working with LG on an SoC version for edge appliances and robots. South Korean AI chip startup...
HyperAccel and Advantech Sign MOU for AI Infrastructure Technology Collaboration
AI semiconductor startup HyperAccel announced on the 3rd that it has signed a memorandum of understanding (MOU) with Advantech, a...
[CEO Interview] ‘World’s First LPU AI Chip’ Achieved by Korean Startup: “2.4 Times Better Performance Than Conventional GPUs” [Future of K-Semiconductors ①]
Kim Joo-Young, CEO of HyperAccel · Specialized chip for advanced LLM operation · 2.4 times the performance of AI GPUs · 50% faster processing speed · Primarily used...
South Korean startup to launch Samsung 4nm-process LPU chip, at one-tenth cost of Nvidia’s H100
South Korean startup HyperAccel—an IC design firm focusing on AI semiconductors for large language models (LLM)—will launch a language processing...
HyperAccel Enters On-device AI Market, Targeting Robots and Home Appliances
Startup Aims to Compete with NVIDIA in LLM Processing