SplitFrozen: Making LLMs Work on Edge Devices

Efficient fine-tuning for resource-constrained environments

SplitFrozen is a split learning framework that enables efficient fine-tuning of large language models (LLMs) on resource-constrained edge devices by strategically partitioning the model between the device and a server.

  • Device-side freezing keeps the layers deployed on end-user devices frozen, so the device only runs forward passes and never stores gradients or optimizer state, sharply cutting its compute and memory cost
  • Server-side tuning centralizes parameter-efficient fine-tuning on the server, which holds all trainable parameters and runs backpropagation (see the sketch after this list)
  • Heterogeneous compatibility handles device diversity by adapting the partition point to each device's available resources
  • Privacy preservation keeps raw user data on-device, sending only intermediate activations to the server, while still enabling personalization

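The division of labor can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the toy linear layers, the split index, the LoRA-style adapters on the server side, and all dimensions are illustrative assumptions. It only shows the core mechanic, which is frozen, gradient-free forward passes on the device and parameter-efficient updates on the server.

```python
# Minimal sketch of the SplitFrozen idea. The layer sizes, split point, and
# LoRA-style adapters are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank (LoRA-style) update."""
    def __init__(self, dim, rank=4):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.weight.requires_grad_(False)   # frozen pretrained weight
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(dim, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, dim))

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a) @ self.lora_b

dim, n_layers, split = 64, 8, 4   # hypothetical model sizes and split point

# Device side: early layers, fully frozen -> forward pass only, no gradients.
device_layers = nn.Sequential(*[nn.Linear(dim, dim) for _ in range(split)])
for p in device_layers.parameters():
    p.requires_grad_(False)

# Server side: remaining layers carry trainable low-rank adapters.
server_layers = nn.Sequential(*[LoRALinear(dim) for _ in range(n_layers - split)])
optimizer = torch.optim.AdamW(
    [p for p in server_layers.parameters() if p.requires_grad], lr=1e-4
)

# One training step: raw data never leaves the device, only activations do.
x = torch.randn(2, dim)                 # private on-device input
with torch.no_grad():                   # device does cheap inference only
    smashed = device_layers(x)          # intermediate activations sent to server

out = server_layers(smashed)            # server computes the remaining layers
loss = out.pow(2).mean()                # placeholder task loss
loss.backward()                         # gradients stay on the server side
optimizer.step()
optimizer.zero_grad()
```

Because the device-side layers are frozen, no gradients need to flow back to the device; the device only ever runs forward passes, which is what keeps its workload small.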
This approach significantly reduces entry barriers for personalized AI on edge devices, opening possibilities for smarter IoT systems, personalized mobile assistants, and embedded AI applications without requiring high-end hardware.

SplitFrozen: Split Learning with Device-side Model Frozen for Fine-Tuning LLM on Heterogeneous Resource-Constrained Devices
