ByteDance's Seed team has released a diffusion language model with an inference speed of 2146 tokens per second.
On July 31st, the ByteDance Seed team released Seed Diffusion Preview, an experimental diffusion language model. According to the announcement, its goal is to use structured code generation as a test bed to systematically verify the feasibility of the discrete diffusion approach as a foundational framework for next-generation language models. Experimental results show that Seed Diffusion Preview achieves a code inference speed of 2146 tokens/s, 5.4 times faster than autoregressive models of the same size.