The important thing is that you did what you felt was right, haha. The economic side isn't always everything.
Maybe someday I'll pay for some training to learn more about AI. I had read that you used DeepSeek. I tried one of their models on a VPS and it really wasn't that good; I think it still has some way to go to beat OpenAI.
Interesting... which model did you use exactly? There are some distilled models too. Was it the larger 671-billion-parameter model or not?
DeepSeek R1, I think.
R1 has multiple deployable models, ranging from a 1.5-billion-parameter one (weak), which even I can run on my system, up to the 671B model (needs a 32 GB VRAM GPU and ~400 GB of storage). That one is the strongest, but it takes a lot more resources to deploy. We just got a gaming GPU with that much VRAM, the 5090. $2K for a GPU is insane though 🤪
https://ollama.com/library/deepseek-r1:671b
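To put those sizes in perspective, here is a quick back-of-the-envelope sketch (my own helper, not anything from Ollama) estimating the weight footprint of each R1 variant. It assumes 4-bit quantized weights and counts only the weights themselves, ignoring KV cache and runtime overhead, so real disk/VRAM needs will be higher:

```python
# Rough weights-only size estimate, assuming 4-bit quantization.
# approx_weight_size_gb is a hypothetical helper for illustration.
def approx_weight_size_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate size of the weights alone, in decimal gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

for name, n in [("1.5b", 1.5e9), ("7b", 7e9), ("671b", 671e9)]:
    print(f"deepseek-r1:{name} at 4-bit ≈ {approx_weight_size_gb(n, 4):.1f} GB")
# → deepseek-r1:1.5b at 4-bit ≈ 0.8 GB
# → deepseek-r1:7b at 4-bit ≈ 3.5 GB
# → deepseek-r1:671b at 4-bit ≈ 335.5 GB
```

That 671B figure is weights alone at 4 bits, which is roughly where the ~400 GB storage number comes from, and it shows why the small distilled variants are the only ones that fit on ordinary hardware.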
We ran it on a VPS with low specs, but it took too long to generate a response and gave false information. Perhaps that is because of the hardware.
I have just checked: it was the 7B model I had used.
That explains it. It's really weak with just 7 billion parameters compared to the full 671-billion-parameter model.