In this video, I'll show you the easiest, simplest, and fastest way to fine-tune Llama 2 on your local machine with a custom dataset! You can also use this tutorial to train/fine-tune any other Large Language Model (LLM). In this tutorial, we will be using autotrain-advanced.
AutoTrain Advanced GitHub repo: [Link]
Steps:
Install autotrain-advanced using pip:
- pip install autotrain-advanced
Setup (optional, but required on Google Colab):
- autotrain setup --update-torch
Train:
autotrain llm --train \
  --project_name my-llm \
  --model meta-llama/Llama-2-7b-hf \
  --data_path . \
  --use_peft \
  --use_int4 \
  --learning_rate 2e-4 \
  --train_batch_size 12 \
  --num_train_epochs 3 \
  --trainer sft
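Since --data_path points at the current directory, you'll need your training data there first. As far as I'm aware, the sft trainer in autotrain-advanced looks for a train.csv with a single "text" column; the exact prompt template is up to you. Here's a minimal sketch of building that file (the example rows and the "### Human / ### Assistant" template are just illustrative assumptions, not a required format):

```python
import csv

# Hypothetical example rows; replace these with your own dataset.
examples = [
    {"instruction": "What is the capital of France?",
     "response": "The capital of France is Paris."},
    {"instruction": "Summarize: The quick brown fox jumps over the lazy dog.",
     "response": "A fox jumps over a dog."},
]

# Write a train.csv with a single "text" column in the current directory,
# which is what --data_path . points the trainer at.
with open("train.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["text"])
    writer.writeheader()
    for ex in examples:
        # One common single-column prompt format; adjust to your own template.
        writer.writerow(
            {"text": f"### Human: {ex['instruction']} ### Assistant: {ex['response']}"}
        )
```

After this, running the autotrain command above from the same directory should pick the file up.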
If you are on the free version of Colab, use this model instead: [Link]. This is a smaller, sharded version of llama-2-7b-hf by Meta.
Please subscribe and like the video to help keep me motivated to make awesome videos like this one. :)
My book, Approaching (Almost) Any Machine Learning Problem, is available for free here: [Link]
Follow me on:
Twitter: [Link]
LinkedIn: [Link]
Kaggle: [Link]
Tags
machine learning, deep learning, artificial intelligence, kaggle, abhishek thakur, llama finetuning, llama v2 finetuning, how to finetune llama, llama finetuning on local machine, llama v2 instruction finetuning, finetune llama on custom dataset, llama google colab, finetune llama on google colab, how to finetune llm, falcon finetuning, llama fine tuning, llm finetuning, how to train llm, llm training custom dataset, autotrain llm training, autotrain llm, autotrain advanced