Mercury Coder: New scaled-up language diffusion model achieves #2 in Copilot Arena and runs at 1,000 tokens per second on H100s…

This new language diffusion model just got announced, is insanely fast, and scores very well against other coding copilot models. Artificial Analysis has independently confirmed it running at over 700 tokens per second.

The team has some big talent behind this, including people behind previous significant advancements and papers such as Flash Attention, DPO, Alpaca-LoRA, and Decision Transformers.

They claim their new architecture is up to 10X faster and cheaper than traditional autoregressive transformer models, and they also claim that, at the same cost and latency, their diffusion approach supports models roughly double the size of an autoregressive transformer.
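
For intuition on where a speedup like that could come from, here's a toy sketch (just an illustration, not Mercury's actual architecture or code): an autoregressive transformer has to run one sequential forward pass per generated token, while a diffusion-style decoder can start from a fully masked sequence and refine every position in parallel over a small, fixed number of steps. The vocabulary, step count, and "model" below are all made up for the example.

```python
# Toy illustration only -- NOT Mercury's actual method or code.
# Contrasts the number of sequential model calls needed by autoregressive
# decoding versus a diffusion-style parallel-refinement decoder.

import random

VOCAB = ["def", "return", "x", "+", "1", "(", ")", ":"]

def fake_forward_pass(tokens):
    """Stand-in for one transformer forward pass: proposes a token for every
    position. A real model would return logits; here we just pick randomly."""
    return [random.choice(VOCAB) for _ in tokens]

def autoregressive_decode(length):
    """Classic left-to-right decoding: one sequential forward pass per token."""
    out, passes = [], 0
    for _ in range(length):
        proposal = fake_forward_pass(out + ["<mask>"])
        out.append(proposal[-1])   # commit only the newest position
        passes += 1
    return out, passes

def diffusion_style_decode(length, refinement_steps=4):
    """Diffusion-style decoding: start from an all-masked sequence and refine
    every position in parallel for a small, fixed number of steps."""
    seq, passes = ["<mask>"] * length, 0
    for _ in range(refinement_steps):
        seq = fake_forward_pass(seq)   # all positions updated at once
        passes += 1
    return seq, passes

if __name__ == "__main__":
    n = 64
    _, ar_passes = autoregressive_decode(n)
    _, diff_passes = diffusion_style_decode(n)
    print(f"autoregressive: {ar_passes} sequential passes for {n} tokens")
    print(f"diffusion-style: {diff_passes} sequential passes for {n} tokens")
```

Whether this wins in practice depends on the per-step cost and how many refinement steps are actually needed for good output, but it shows why generating positions in parallel can translate into raw throughput.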