No-Code Fine-Tuning of Phi3: A Step-by-Step Guide with LlamaFactory

Hello! I'm Tommy, and today I'm excited to show you how to fine-tune the powerful Phi3 model without writing any code. Whether you're a software developer, AI enthusiast, or just someone curious about machine learning, this tutorial will guide you through the process using the intuitive LlamaFactory interface.

Fine-tuning models like Phi3 might seem complex, but with LlamaFactory, it's straightforward and accessible. In just a few steps, you'll be able to customize Phi3 to fit your specific needs, all through a simple, no-code platform. Let's get started and unleash the potential of AI together!

Prerequisites

Before we begin, ensure you have the following:

  • A Google account to access Google Colab
  • Basic understanding of LLMs and fine-tuning concepts
  • Familiarity with Hugging Face (optional, for model export)

Setting Up the Environment

Google Colab

To get started, open Google Colab and create a new notebook. Enable GPU support for faster training by navigating to Edit > Notebook settings and selecting T4 GPU as the hardware accelerator.

Installing Dependencies

Run the following commands in your Google Colab notebook. First, confirm that the notebook is attached to a GPU:

!nvidia-smi  # Check for GPU connection
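Next, fetch LlamaFactory and install it. The commands below follow the LLaMA-Factory README; the bitsandbytes extra enables the 4-bit quantization used later in this guide:

!git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git  # Fetch the LLaMA-Factory source
%cd LLaMA-Factory
!pip install -e ".[torch,bitsandbytes]"  # Install with quantization support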

Adding Your Own Dataset to LlamaFactory

To customize the Phi3 model with your own data, you'll need to add your dataset to LlamaFactory. Here's how you can do it, whether your data is stored locally or on the Hugging Face Hub.

Navigate to the Data Folder

In the LlamaFactory repository, locate and open the LLaMA-Factory > data directory. This is where you'll define and register your datasets for use within the LlamaFactory UI.

Adding a Local Dataset

If you have a dataset stored locally, save it in the data folder. The file should be named in a format like name-of-dataset.json. Next, open the dataset_info.json file within the same data folder. Add an entry for your dataset using the following format:

{ "name": "dataset-name", "path": "name-of-dataset.json" }

Adding a Dataset from the Hugging Face Hub

If your dataset is hosted remotely on the Hugging Face Hub, you can also link it through the dataset_info.json file. Add an entry for your dataset using the following format:

{ "name": "dataset-name", "path": "huggingface-hub/dataset-name" }

Initialize Your Dataset

Once you've added your dataset to dataset_info.json, it will be initialized and available for selection within the LlamaFactory UI.
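Putting both cases together, a dataset_info.json with one local and one Hub dataset might look like the sketch below. The optional columns block, supported by LLaMA-Factory, maps your field names onto the prompt, query, and response roles; the dataset names here are placeholders:

{
  "my_local_dataset": {
    "file_name": "name-of-dataset.json",
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "response": "output"
    }
  },
  "my_hub_dataset": {
    "hf_hub_url": "huggingface-hub/dataset-name"
  }
}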

Start the LlamaFactory Web UI

After installing the necessary dependencies, run the code snippet below to start the LlamaFactory web UI:

!GRADIO_SHARE=1 llamafactory-cli webui  # Launch the LlamaFactory web UI with a public Gradio link

A public URL will be generated after running the snippet above. Click the URL to get into the LlamaFactory interface, where we will fine-tune our Phi3 model.

Fine-Tuning the Phi3 Model

Upon opening the Public URL, you'll find several sections. We will go through the steps to start training our Phi3 model.

Step 1: Select the Phi3 Model

Click on the Model Name dropdown and select the Phi3-4B-4k-Chat model. The Model Path updates automatically once the Model Name is selected.

Step 2: Setup Advanced Configurations

In this section, select 4 from the Quantization bit dropdown, set the Quantization method to bitsandbytes, and set the Prompt template to phi. For the Booster, choose Unsloth for faster, more memory-efficient training; if you encounter issues during testing, switch it to Auto.

Step 3: Configure the Train Section

Next, click on the Dataset dropdown and select the dataset of your choice or the one you added. In this example, alpaca_gpt4_en is used to fine-tune the Phi3 model. Feel free to adjust any of the other parameters to suit your needs.

Step 4: Setup LoRA Configuration

Set the LoRA Rank higher if you're using smaller models like Phi3. In my case, I set the rank to 64, but feel free to experiment based on your specific use case.

Step 5: Start the Training Process

You can change the Output dir and the Config path, which store the training checkpoints and the saved arguments respectively. Click Start to kick off the training process. In my run, training took approximately 20 minutes. During training, a line graph of loss versus training step is displayed, giving insight into how the fine-tuning is progressing.
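If you later want to reproduce the run without the UI, LLaMA-Factory exposes the same options through its command line. The sketch below mirrors the settings chosen above; the base-model identifier (microsoft/Phi-3-mini-4k-instruct) and the output path are assumptions, so adjust them to match your UI values:

# Headless equivalent of the UI run above (a sketch; verify flags against your LLaMA-Factory version)
!llamafactory-cli train \
    --stage sft \
    --do_train \
    --model_name_or_path microsoft/Phi-3-mini-4k-instruct \
    --dataset alpaca_gpt4_en \
    --template phi \
    --finetuning_type lora \
    --lora_rank 64 \
    --quantization_bit 4 \
    --output_dir saves/train_phi3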

Testing the Fine-Tuned Phi3 Model

Now that our model has been successfully fine-tuned, it is time to test the model. Follow the steps below to test the fine-tuned model:

Step 1: Select the Checkpoint Path

Under Checkpoint Path, select the output directory name you set earlier (train_phi3), then click on the Chat subsection.

Step 2: Load the Fine-Tuned Model

With the defaults untouched, click on Load Model to begin testing the fine-tuned model. Loading may take 2-5 minutes depending on the GPU in use.

Step 3: Test the Fine-Tuned Model

After the model has loaded, you can test it with different prompts and gauge whether the responses suit your needs.
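If you'd rather test from a terminal instead of the web UI, LLaMA-Factory also provides a command-line chat. A sketch, assuming the adapter was saved under the output directory used above:

# Terminal chat with the fine-tuned adapter (a sketch; paths are assumptions)
!llamafactory-cli chat \
    --model_name_or_path microsoft/Phi-3-mini-4k-instruct \
    --adapter_name_or_path saves/train_phi3 \
    --template phi \
    --finetuning_type lora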

Exporting the Fine-Tuned Phi3 Model

Having tested the fine-tuned model, you can export it locally or push it to the Hugging Face Hub. Select the Export subsection beside Chat to start the export process. Change the Export dir to where you want the model saved locally. To push the model to the Hub, add your Hugging Face Hub ID to HF Hub ID. Click Export to start the process. Once the export is complete, you can find the model in LLaMA-Factory > saves > Phi3-4B-4k-Chat > lora.
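The export step also has a command-line equivalent, which merges the LoRA adapter into the base model. A sketch, assuming the same paths as above; export_hub_model_id is only needed when pushing to the Hub, and the identifiers here are placeholders:

# Merge the LoRA adapter into the base model and export (a sketch)
!llamafactory-cli export \
    --model_name_or_path microsoft/Phi-3-mini-4k-instruct \
    --adapter_name_or_path saves/train_phi3 \
    --template phi \
    --finetuning_type lora \
    --export_dir exported_phi3 \
    --export_hub_model_id your-hf-username/phi3-finetuned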

Practical Tips

  • Start Small: Begin with a smaller subset of your dataset for initial fine-tuning. This allows you to quickly test and iterate on configurations without long wait times.
  • Optimize LoRA Rank: For smaller models like Phi3, experiment with higher LoRA ranks. A LoRA rank of 64 worked well during testing, but feel free to adjust based on your dataset size and GPU capacity.
  • Use T4 GPU Wisely: Leverage the T4 GPU's 16 GB of memory efficiently by adjusting the batch size and learning rate to prevent out-of-memory errors. Monitor your GPU usage to optimize performance (see the snippet after this list).
  • Booster Settings: Use Unsloth for training to maximize speed and efficiency. However, if you encounter issues while testing, switch to Auto to ensure smooth operation.
  • Dataset Integration: When adding your dataset, double-check the format in dataset_info.json to ensure it appears correctly in the LlamaFactory UI. Consistent naming and paths are key to avoiding integration errors.
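For the GPU tip above, a quick way to check memory headroom from a Colab cell, using standard nvidia-smi query flags:

!nvidia-smi --query-gpu=memory.used,memory.total --format=csv  # Report current GPU memory usage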

Conclusion

Great job! You've successfully fine-tuned the Phi3 model using LlamaFactory's no-code interface. From adding your dataset and training the model to testing and exporting it, you've mastered each step. Now you can harness the power of a customized Phi3 model tailored to your specific needs.
