Efficient Computing Framework for LLMs Driven by Knowledge Density

September 13

10:30 - 11:00

Location: Venue 7 - B02

As large language models and their training data continue to scale, the tension between model performance and resource consumption has become increasingly pronounced. Traditional optimization approaches focus on architectural improvements or hardware acceleration but overlook a more fundamental question: how knowledge is distributed and utilized within these massive parameter spaces. This presentation introduces a computing framework driven by knowledge density analysis, which uses computational resources more efficiently by identifying and prioritizing the most knowledge-rich components of a large language model during both training and inference.
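The abstract does not describe the framework's internals, but the core idea of ranking model components by an estimate of how much knowledge they carry and allocating compute accordingly can be sketched in a few lines. The following Python snippet is a minimal, hypothetical illustration only: it scores synthetic per-layer weight matrices with a simple proxy (entropy of the weight-magnitude distribution) and splits a compute budget in proportion to those scores. The proxy metric, the `knowledge_density` and `allocate_budget` helpers, and the proportional budgeting rule are assumptions made for illustration, not the speakers' actual method.

```python
import numpy as np


def knowledge_density(weight: np.ndarray, bins: int = 64) -> float:
    """Proxy score: entropy of the weight-magnitude distribution.

    Treating higher entropy as 'richer' knowledge is an illustrative
    assumption, not an established or speaker-confirmed metric.
    """
    magnitudes = np.abs(weight).ravel()
    hist, _ = np.histogram(magnitudes, bins=bins)
    hist = hist[hist > 0]
    probs = hist / hist.sum()          # normalize counts to probabilities
    return float(-(probs * np.log(probs)).sum())


def allocate_budget(layer_weights: dict[str, np.ndarray],
                    total_flops: float) -> dict[str, float]:
    """Split a compute budget across layers in proportion to their scores."""
    scores = {name: knowledge_density(w) for name, w in layer_weights.items()}
    total_score = sum(scores.values())
    return {name: total_flops * s / total_score for name, s in scores.items()}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for the per-layer weight matrices of a small model.
    layers = {f"layer_{i}": rng.normal(scale=0.02 * (i + 1), size=(512, 512))
              for i in range(4)}
    for name, flops in allocate_budget(layers, total_flops=1e12).items():
        print(f"{name}: {flops:.3e} FLOPs")
```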

Speakers