Paper:

LoRA: Low-Rank Adaptation of Large Language Models

Github: https://github.com/microsoft/LoRA

Project: Technology

Members: Nguyễn Thành Phát, Edward Nguyen, Trang Pham, Nguyễn Bùi Ngọc Hân

Category: AI Model Application

Timeline: 28/05/2023-28/06/2023

Meeting link: https://meet.google.com/ybz-evms-fer


"LoRA: Low-Rank Adaptation of Large Language Models"

LoRA is an efficient method for adapting large language models to specific tasks: it freezes the pretrained weights and injects trainable rank-decomposition matrices into each layer of the Transformer architecture. This greatly reduces the number of trainable parameters and the GPU memory required, while matching or exceeding the quality of full fine-tuning on popular models such as RoBERTa, DeBERTa, GPT-2, and GPT-3, with higher training throughput and no additional inference latency. The accompanying LoRA package integrates with PyTorch models and provides implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2.
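
To make the idea concrete, below is a minimal sketch of a LoRA-style linear layer in PyTorch. It is not the official loralib implementation from the repository above; the class name, rank r, and scaling factor alpha are illustrative choices. The frozen pretrained weight W0 stays untouched, and a trainable low-rank update B·A (scaled by alpha/r) is added to the output, with B initialized to zero so training starts from the pretrained behavior.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Sketch of a linear layer with a LoRA low-rank update (illustrative, not loralib)."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Pretrained weight W0: frozen during adaptation.
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        # Low-rank factors: A is small random, B is zero, so B @ A = 0 at initialization.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = x W0^T + (alpha / r) * x A^T B^T
        return x @ self.weight.T + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Only the LoRA factors require gradients, so the optimizer sees far fewer parameters.
layer = LoRALinear(768, 768, r=8)
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # 2 * 8 * 768 = 12,288 vs. 589,824 for the full weight
```

Because the update B·A can be merged into W0 after training (W = W0 + (alpha/r)·B·A), inference uses a single matrix multiply and incurs no extra latency, which is the property the paper highlights.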

[Figure: lora_Picture4.png]


SUMMARY