Fine-Tuning Large Language Models for Enterprise Applications: A Comparative Study
Abstract
This paper provides an in-depth look at today's leading methods for fine-tuning large language models (LLMs) in enterprise settings. We review fifteen studies covering fine-tuning approaches, parameter-efficient adjustments, and domain-specific enhancements, all aimed at making LLMs more adaptable and scalable for business applications. Key techniques include Retrieval-Augmented Generation (RAG), Low-Rank Adaptation (LoRA), and specialized conversational frameworks built for areas such as customer service and product recommendation. Our analysis outlines the strengths, limitations, and practical use cases of each approach, offering a guide to efficient tuning strategies and real-world implementation. This study aims to give enterprises a clear foundation for leveraging LLMs effectively, focusing on solutions that are both scalable and practical for diverse operational needs.
Keywords - Large Language Models (LLMs), Fine-Tuning, Enterprise Applications, Retrieval-Augmented Generation (RAG), Low-Rank Adaptation (LoRA)