
RE: LeoThread 2024-11-03 06:11

in LeoFinance · 3 months ago

“In the language domain, the data are all just sentences,” says Lirui Wang, the new paper’s lead author. “In robotics, given all the heterogeneity in the data, if you want to pretrain in a similar manner, we need a different architecture.”

The team introduced a new architecture called Heterogeneous Pretrained Transformers (HPT), which unifies information from different sensors and different environments into a shared representation. A transformer then processes that combined data to train models, and the larger the transformer, the better the resulting performance.
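
A minimal sketch of that idea, assuming a PyTorch-style setup: each sensor stream gets its own small encoder ("stem") that projects it into a shared token space, a shared transformer trunk processes the combined tokens, and a task head produces the output. The module names, feature sizes, and mean-pooled action head below are invented for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class HPTSketch(nn.Module):
    """Illustrative sketch of the HPT idea, not the paper's actual code."""

    def __init__(self, dim=256, depth=4, heads=8):
        super().__init__()
        # One stem per input modality; each maps raw features into the
        # shared `dim`-sized token space (sizes here are hypothetical).
        self.vision_stem = nn.Linear(512, dim)   # e.g., image features
        self.proprio_stem = nn.Linear(32, dim)   # e.g., joint states
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )
        # Shared trunk: the part that would be pretrained across
        # heterogeneous datasets and grown larger for better output.
        self.trunk = nn.TransformerEncoder(layer, num_layers=depth)
        # Task-specific head, swapped per robot/task after pretraining.
        self.action_head = nn.Linear(dim, 7)     # e.g., 7-DoF action

    def forward(self, vision_feats, proprio_feats):
        # Tokenize each modality into the shared space, then concatenate
        # so the trunk sees one unified token sequence.
        tokens = torch.cat(
            [self.vision_stem(vision_feats), self.proprio_stem(proprio_feats)],
            dim=1,
        )
        h = self.trunk(tokens)                   # shared transformer pass
        return self.action_head(h.mean(dim=1))  # pool tokens -> action

# Example usage with dummy batches of pre-tokenized inputs.
model = HPTSketch()
vision = torch.randn(2, 16, 512)   # 2 samples, 16 vision tokens each
proprio = torch.randn(2, 4, 32)    # 2 samples, 4 proprioception tokens
actions = model(vision, proprio)   # -> shape (2, 7)
```

The key design point the article describes is that only the stems differ per data source; the trunk is shared, which is what lets heterogeneous robot datasets be pretrained together and lets scaling the trunk improve results.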