You are viewing a single comment's thread from:

RE: LeoThread 2024-09-15 16:25

in LeoFinance · 2 months ago

To give a frame of reference.

Llama 3 was trained on a massive dataset of 15 trillion tokens, which allows it to have a broad range of knowledge and answer a wide variety of questions.

This is the amount of data, quantified. I would not be surprised to see the token count for the next generation reach into the hundreds of trillions.
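To make those numbers a bit more concrete, here is a minimal sketch of what a token count means in practice. It assumes a rough rule of thumb of about 1.3 tokens per English word (a common approximation, not an exact figure for Llama 3 or any specific tokenizer), and the sample text and function name are just for illustration:

```python
# Rough illustration of what a "token" count means.
# Assumption: for English text, one word averages about 1.3 tokens
# (a common rule of thumb, not an exact figure for any given model).

def estimate_tokens(text: str) -> int:
    """Estimate the number of tokens in a piece of text."""
    words = text.split()
    return round(len(words) * 1.3)

sample = "Llama 3 was trained on a massive dataset of 15 trillion tokens."
print(estimate_tokens(sample))   # roughly 16 tokens for this one sentence

# Scaling up: 15 trillion tokens works out to roughly 11-12 trillion
# English words under the same assumption.
llama3_tokens = 15e12
print(llama3_tokens / 1.3)       # approximate word-equivalent
```

Real models use learned tokenizers (byte-pair encoding and similar), so the exact counts differ, but the word-to-token ratio above gives a feel for the scale being discussed.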

We need a lot more added to the Hive database.


Now I finally understand what you've been getting at with tokens.

Yep. It is an easy concept to grasp with a bit of research.

If the Hive database grew into the hundreds of trillions of tokens, we could have any model in the world, at least up to the latest standards of models.

Obviously we are much smaller, but we can use that as a frame of reference.