I’m new to the field of large language models (LLMs) and I’m really interested in learning how to train and use my own models for qualitative analysis. However, I’m not sure where to start or what resources would be most helpful for a complete beginner. Could anyone provide some guidance and advice on the best way to get started with LLM training and usage? Specifically, I’d appreciate insights on learning resources or tutorials, tips on preparing datasets, common pitfalls or challenges, and any other general advice or words of wisdom for someone just embarking on this journey.

Thanks!

  •  Zworf ( @Zworf@beehaw.org ) · 12 days ago

    Training your own model from scratch will be very difficult. You would need to gather an enormous amount of data just to get a model with basic language understanding.

    What I would do (and am doing) is take something like llama3 or mistral and add your own content using RAG techniques (rough sketch at the end of this comment).

    But fair play if you do manage to train a real model!
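
    Roughly, the RAG idea looks like this. A minimal sketch, not a recipe: it assumes the sentence-transformers package for embeddings and a local Ollama server with llama3 pulled, and the example documents are invented.

    ```python
    # Minimal RAG sketch: embed documents, retrieve the closest matches,
    # stuff them into the prompt, and let a local model answer.
    # Assumes: pip install sentence-transformers numpy requests
    # and an Ollama server running locally with llama3 pulled.
    import numpy as np
    import requests
    from sentence_transformers import SentenceTransformer

    documents = [
        "Interview P3: participants described onboarding as confusing.",
        "Interview P7: the reporting dashboard was praised as intuitive.",
        "Field notes: most complaints centered on the password reset flow.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vectors = embedder.encode(documents, normalize_embeddings=True)

    def retrieve(question, k=2):
        """Return the k documents most similar to the question."""
        q = embedder.encode([question], normalize_embeddings=True)[0]
        scores = doc_vectors @ q  # cosine similarity (vectors are normalized)
        return [documents[i] for i in np.argsort(scores)[::-1][:k]]

    def ask(question):
        context = "\n".join(retrieve(question))
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3", "prompt": prompt, "stream": False},
        )
        return r.json()["response"]

    print(ask("What did participants find confusing?"))
    ```

    The nice part for qualitative analysis is that the model only ever sees the passages you retrieved, so you never have to train anything on your own data.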

      • No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.

        On my RTX 3060, I generally get responses in seconds.

      •  Zworf ( @Zworf@beehaw.org ) · 10 days ago

        Hmmm, weird. I have a 4090 / Ryzen 5800X3D with 64 GB of RAM and it runs really well. Admittedly it's the 8B model, because the intermediate sizes aren't out yet and 70B simply won't fly on a single GPU.

        But it really screams. Much faster than I can read. PS: Ollama is just llama.cpp under the hood.

        Edit: Ah, wait, I know what's going wrong here. The 22B parameter model is probably too big for your VRAM; once the weights spill into system RAM, it gets extremely slow, yes.
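
        For a rough sense of why model size matters, here are back-of-the-envelope figures only; the KV cache and runtime overhead come on top.

        ```python
        # Back-of-the-envelope VRAM estimate: parameters x bytes per parameter.
        # Rough figures only; KV cache and runtime overhead are extra.
        def vram_gb(params_billions, bytes_per_param):
            return params_billions * 1e9 * bytes_per_param / 1024**3

        for name, params in [("8B", 8), ("22B", 22), ("70B", 70)]:
            fp16 = vram_gb(params, 2.0)  # full 16-bit weights
            q4 = vram_gb(params, 0.5)    # ~4-bit quantized
            print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at 4-bit")
        ```

        So even 4-bit quantized, a 22B model wants roughly 10 GB of VRAM before overhead: a tight fit on an 8 GB card, comfortable on a 4090's 24 GB.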

      • Ok, so using my “older” 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)

        I’m fairly certain that you’re using your CPU or having another issue. Would you like to try and debug your configuration together?

    • If you just want to use a local LLM, something like gpt4all is probably the easiest. Oobabooga or llama.cpp are a more advanced route.

      I use ollama with llama3 on my MacBook with open-webui, and it works really nicely. Mistral 7B is another one I like. On my PC I have been using oobabooga with models I get from Hugging Face, and I use it as an API for hobby projects (tiny example below).

      I have never trained models; I don't have the VRAM. My GPU is pretty old, so I just use these for random gamedev and webdev projects and for messing around with RP in SillyTavern.
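
      The API route is only a few lines. A sketch, assuming the ollama Python client (pip install ollama) with the server already running and llama3 pulled; oobabooga's API works along similar lines but isn't shown here.

      ```python
      # Tiny hobby-project example: chat with a local model via the ollama client.
      # Assumes the Ollama server is running and llama3 has been pulled.
      import ollama

      response = ollama.chat(
          model="llama3",
          messages=[{"role": "user", "content": "Give me three quest hooks for a fantasy RPG."}],
      )
      print(response["message"]["content"])
      ```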

      •  TehPers ( @TehPers@beehaw.org ) · 12 days ago

        I managed to get ollama running through Docker easily (rough sketch below). It's by far the least painful of the options I tried, and I just make requests to the API it exposes. You can also give it GPU resources through Docker if you want to, and there's a CLI tool for a quick chat interface if you want to play with that. I can get Llama 3 (8B) running on my 3070 without issues.
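
        For reference, the setup looks roughly like this (flags as I remember them from the Ollama image docs; double-check for your GPU runtime):

        ```python
        # Rough shape of the Docker setup (shell commands as comments):
        #   docker run -d --gpus=all -v ollama:/root/.ollama \
        #       -p 11434:11434 --name ollama ollama/ollama
        #   docker exec -it ollama ollama pull llama3
        # Once it's up, any HTTP client can hit the API it exposes:
        import requests

        r = requests.post(
            "http://localhost:11434/api/chat",
            json={
                "model": "llama3",
                "messages": [{"role": "user", "content": "Why is the sky blue?"}],
                "stream": False,
            },
        )
        print(r.json()["message"]["content"])
        ```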

        Training an LLM is very difficult and expensive. I don't think it's a good place for anyone to start. Many of the popular models (Llama, GPT, etc.) are astronomically expensive to train and require an ungodly amount of resources.

  • I really appreciate all the responses, but I'm overwhelmed by the amount of information and possible starting points. Could I ask you to explain, or point me to learning content that talks to me like I'm a curious five-year-old?

    ELI5?