
MLCommons today released AILuminate, a new benchmark test for evaluating the safety of large language models. Launched in 2020, MLCommons is an industry consortium backed by several dozen tech firms. It primarily develops benchmarks for measuring the speed at which various systems, including handsets and server clusters, run artificial intelligence workloads. MLCommons also provides other […]