Plenary Lecture
Dr. Minakshi Pradeep Atre
Savitribai Phule Pune University, India
Title: Beyond the Accuracy Trap: Navigating the Transition from Red AI to Green AI in the LLM Era
Abstract: While the global proliferation of Large Language Models (LLMs) such as ChatGPT, Claude, Gemini, and Perplexity has transformed digital life, the staggering energy footprint required to generate every individual token remains a critically neglected dimension in both public perception and standard benchmarking. Current industry standards primarily emphasize accuracy and output quality, often treating tokens as "free" to multiply, despite the fact that inference now accounts for over 90% of the total energy and compute cost of LLM deployments. This "Red AI" paradigm, focused on pursuing peak performance through massive scaling, has led to significant environmental costs. This paper highlights the critical shift toward Green AI by examining methodologies to quantify and optimize the energy footprint of token generation. The discussion underlines the principle of treating tokens as a finite resource rather than as "free" to multiply. By utilizing hardware-agnostic benchmarks like OckBench, research can now map models onto an "accuracy-efficiency plane" to identify Pareto-optimal solutions that balance performance against decoding token count. This paper further explores "green-in AI" optimization strategies that enable semantic adaptivity at runtime. Frameworks such as QuickSilver demonstrate that up to 39.6% FLOP reduction can be achieved through techniques like dynamic token halting and contextual token fusion, which eliminate redundant computation for stabilized representations without altering model weights. By aligning computational effort with linguistic structure and semantic salience, this research underscores a viable path toward computationally self-aware and environmentally responsible AI systems.
Bio: To be announced soon.