Liquid AI's new LFM2-2.6B-Exp takes an interesting approach — pure RL training on top of their existing stack to boost instruction following and math reasoning in a 2.6B parameter model. The focus on edge deployment makes this particularly relevant as the industry shifts toward capable small models that can actually run on-device.
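For anyone curious what "run on-device" looks like in practice, here's a minimal sketch of loading a ~2.6B-parameter instruct model with Hugging Face transformers and prompting it with a math question. The model id and the bfloat16/device_map settings are assumptions for illustration, not confirmed details from Liquid AI's release; check the official model card for the exact repository name and any runtime requirements.

```python
# Minimal sketch of on-device-style inference with a small instruct model.
# The model id below is a placeholder assumption -- verify the exact repo
# name on the Hugging Face Hub before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-2.6B"  # placeholder; confirm on the model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps ~2.6B weights near 5 GB
    device_map="auto",           # falls back to CPU if no GPU is available
)

messages = [{"role": "user", "content": "Solve 17 * 24 step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For a truly constrained edge target you'd typically go further, e.g. a 4-bit quantized export to a runtime like llama.cpp, but the point stands: a 2.6B model is small enough that this kind of workflow is realistic on consumer hardware.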