One fact that makes this interesting:
Why it matters: Hunyuan-Large generally outperforms Llama 405B, matching the quality of a 405-billion-parameter dense model while activating only about 52 billion parameters per token, since it is a Mixture-of-Experts model. That is a significantly lower compute requirement per token, and the model is free to use for many purposes.
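To make the compute gap concrete, here is a rough back-of-the-envelope sketch. A transformer's forward pass costs roughly 2 FLOPs per *activated* parameter per token (the exact constant does not matter for the ratio), so an MoE activating 52B parameters should cost roughly 52/405 of a dense 405B model per token. The numbers are illustrative, not measured:

```python
# Back-of-the-envelope comparison of per-token forward-pass compute.
# Rule of thumb: ~2 FLOPs per activated parameter per token.

DENSE_PARAMS = 405e9   # Llama 405B: every parameter is active on every token
MOE_ACTIVE = 52e9      # Hunyuan-Large: ~52B parameters activated per token

flops_dense = 2 * DENSE_PARAMS
flops_moe = 2 * MOE_ACTIVE

print(f"dense : {flops_dense:.2e} FLOPs/token")
print(f"moe   : {flops_moe:.2e} FLOPs/token")
print(f"ratio : {flops_moe / flops_dense:.1%} of the dense cost")
# -> roughly 13% of the dense model's per-token compute
```

Note the caveat: this only covers compute. Memory footprint still scales with *total* parameters, since all experts have to be loaded.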
The other fact, based on my experience, is that mixing radically differently trained models inside a ReAct-style agent loop produces better results.
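A minimal sketch of what I mean, assuming two hypothetical model endpoints: `call_reasoner` and `call_critic` are placeholder stubs here, standing in for two separately trained models (say, a Hunyuan endpoint driving the loop and a Llama endpoint answering tool calls). The structure is the usual ReAct pattern of interleaved Thought / Action / Observation lines:

```python
import re

# Matches tool calls of the form: Action: ask_critic("...")
ACTION_RE = re.compile(r'Action:\s*ask_critic\("(.*)"\)')

def call_reasoner(transcript: str) -> str:
    """Stub for model A (the driver): emits Thought/Action or a final answer.
    A real version would send the transcript to one model's chat API."""
    if "Observation:" in transcript:
        return "Final Answer: MoE inference is cheaper per token."
    return ('Thought: I should double-check with the other model.\n'
            'Action: ask_critic("Is 52B active cheaper than 405B dense?")')

def call_critic(question: str) -> str:
    """Stub for model B (differently trained): answers tool-style queries.
    A real version would call a second, independent model's API."""
    return "Yes: per-token compute scales with activated parameters."

def react_loop(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        step = call_reasoner(transcript)
        transcript += "\n" + step
        if step.startswith("Final Answer:"):
            return step
        match = ACTION_RE.search(step)
        if match:  # route the tool call to the second model
            transcript += f"\nObservation: {call_critic(match.group(1))}"
    return "Final Answer: (step budget exhausted)"

print(react_loop("Compare MoE and dense inference cost."))
```

The point of the design is that the Observation comes from a model trained on different data than the driver, so the two are less likely to share the same blind spots.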