Qwen1.5 72B is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:
* Significant performance improvements of chat models in human preference evaluations;
* Multilingual support for both base and chat models;
* Stable support of 32K context length for models of all sizes.
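For reference, the sketch below shows one way to run the chat variant with Hugging Face `transformers`. The model ID `Qwen/Qwen1.5-72B-Chat` and the `transformers>=4.37.0` requirement are assumptions for illustration, not details stated in this section.

```python
# Minimal usage sketch (assumes the checkpoint is published on the Hugging Face Hub
# as "Qwen/Qwen1.5-72B-Chat" and that a recent transformers release is installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-72B-Chat"  # assumed Hub ID for the chat model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the dtype stored in the checkpoint
    device_map="auto",    # spread the 72B weights across available devices
)

# Build a chat-style prompt with the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, dropping the prompt.
response = tokenizer.decode(generated[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```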