You are viewing a single comment's thread from:

RE: Teaching AI to Play Chrome Dino Game: Reinforcement Learning

in Geek Zone, 10 days ago

The important thing is that you did what you felt like doing, haha. The economic side isn't always everything.

Maybe someday I'll pay for some training to learn more about AI. I had read that you used DeepSeek. I tried one of their models on a VPS and it really wasn't that good; I think it still has some way to go to beat OpenAI.


Interesting... which model did you use exactly? There are some distilled models too. Was it the larger 671-billion-parameter model or not?

 9 days ago  

DeepSeek R1, I think.

R1 has multiple deployable models, ranging from a 1.5-billion-parameter one (weak), which even I can run on my system, up to the 671B-parameter model (which needs a GPU with 32 GB of VRAM and ~400 GB of storage). That one is the strongest, but it takes a lot more resources to deploy. We just got a gaming GPU with that much VRAM, the 5090. $2K for a GPU is insane though 🤪

https://ollama.com/library/deepseek-r1:671b
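For anyone curious, here's a minimal Python sketch of how you might query one of those R1 tags once it's pulled through Ollama. It assumes the Ollama server is running locally on its default port (11434) and that a tag like `deepseek-r1:7b` has already been downloaded; the prompt and model names are just examples, not anything from the original post.

```python
# Minimal sketch: query a locally running Ollama server for a DeepSeek R1 model.
# Assumes Ollama is installed, the server is on its default port (11434),
# and a tag such as "deepseek-r1:7b" has already been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(model: str, prompt: str) -> str:
    """Send a single non-streaming prompt to Ollama's generate endpoint."""
    payload = json.dumps({
        "model": model,       # e.g. "deepseek-r1:7b" or "deepseek-r1:671b"
        "prompt": prompt,
        "stream": False,      # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body.get("response", "")

if __name__ == "__main__":
    # Smaller distilled tags respond faster on modest hardware; the 671b tag
    # needs far more memory and storage, as discussed above.
    print(ask("deepseek-r1:7b", "Explain Q-learning in one sentence."))
```

Same idea works with any of the tags on that library page; only the `model` string changes.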

 9 days ago  

We ran it on a VPS with low specs, but it took too long to generate a response and gave false information. Perhaps that's because of the hardware.

 9 days ago  

I have now checked: it was the 7B model I had used.

That explains it. With just 7 billion parameters, it's really weak compared to the top 671-billion-parameter model.
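If you ever want to double-check which tag is actually installed on a box, here's a small sketch that lists the local models through Ollama's tags endpoint. Again, it assumes the default local server on port 11434; the size formatting is just for readability.

```python
# Minimal sketch: list which model tags are installed on a local Ollama server,
# e.g. to confirm whether you pulled deepseek-r1:7b or a larger tag.
# Assumes the server runs on the default port 11434.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    data = json.load(resp)

for model in data.get("models", []):
    # Each entry includes the tag name and its size on disk (in bytes).
    print(model["name"], f'{model.get("size", 0) / 1e9:.1f} GB')
```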