Efficient LLM Fine-Tuning at the Edge

Optimizing language models for resource-constrained devices

This research introduces FedSPZO, a novel approach that enables efficient fine-tuning of Large Language Models on edge devices while preserving data privacy.

  • Reduces memory and computational requirements by using zero-order optimization, which estimates gradients from forward passes alone (see the sketch after this list)
  • Implements a split-perturbation strategy that speeds up convergence by 1.8-2.5x
  • Achieves accuracy comparable to standard fine-tuning while using only inference-level memory
  • Enables privacy-preserving model improvement on resource-constrained devices such as smartphones
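
To make the memory claim concrete, the sketch below shows a generic zeroth-order (SPSA-style) update: the gradient is approximated from two forward passes at theta + eps*z and theta - eps*z, with the noise z regenerated from a shared seed rather than stored, so no backward pass or activation memory is required. This is a minimal single-device illustration with placeholder names (zo_sgd_step, loss_fn), not the FedSPZO split-perturbation algorithm itself.

    import torch


    @torch.no_grad()  # forward-only: no backward pass, inference-level memory
    def zo_sgd_step(model, loss_fn, batch, eps=1e-3, lr=1e-6, seed=0):
        """One zeroth-order (SPSA-style) parameter update."""

        def perturb(scale):
            torch.manual_seed(seed)          # same seed -> identical noise z each call
            for p in model.parameters():
                z = torch.randn_like(p)
                p.add_(scale * eps * z)

        perturb(+1)                          # evaluate at theta + eps*z
        loss_plus = loss_fn(model, batch).item()
        perturb(-2)                          # move to theta - eps*z
        loss_minus = loss_fn(model, batch).item()
        perturb(+1)                          # restore the original parameters

        # Finite-difference estimate of the directional derivative along z
        grad_scale = (loss_plus - loss_minus) / (2 * eps)

        torch.manual_seed(seed)              # regenerate the same z for the update
        for p in model.parameters():
            z = torch.randn_like(p)
            p.add_(-lr * grad_scale * z)

        return (loss_plus + loss_minus) / 2

Here loss_fn is assumed to take the model and a data batch and return a scalar loss. In a federated setup, each client would run steps like this locally on its own data and share only model updates with the server, so raw user data never leaves the device.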

This advancement matters for engineering because it makes sophisticated AI capabilities accessible on everyday devices without compromising user data privacy or requiring expensive hardware upgrades.

Efficient Zero-Order Federated Finetuning of Language Models for Resource-Constrained Devices
