Hmm, why does the author keep using the word "upcoming"?
> the upcoming 300-series mobile processors (“Strix Point” and the presumably higher-end “Strix Halo”)
Both Strix Point and Strix Halo chips have been out for a while, and you can buy laptops (and mini PCs) with them right now, like the new ASUS ROG Flow Z13 with a Ryzen AI Max+ 395 (Strix Halo).
Probably an attempt to stress that this is a "long wave" that has started but is not over, a "coming" technology of which we are only seeing the first instances and implementations. See:
> Caveats: The ecosystem is nascent. Reliance on ONNX, the current Windows limitation for acceleration, and the context size cap are significant hurdles compared to the mature llama.cpp ecosystem. […] The success of this initiative hinges heavily on software maturation, particularly expanding context length limits, and addressing the community's strong preference for llama.cpp integration and robust Linux support.
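For contrast, this is roughly what the llama.cpp route the community prefers looks like today, via the llama-cpp-python bindings (a minimal sketch; the model filename and parameter values are illustrative, not from the article). The point relevant to the quoted caveat: the context window is a user-set parameter rather than a runtime-imposed cap.

```python
# Minimal sketch of local inference through llama.cpp's Python bindings.
# Model path and values below are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.Q4_K_M.gguf",  # any local GGUF file
    n_ctx=8192,        # context window is chosen by the user at load time
    n_gpu_layers=-1,   # offload all layers if a supported GPU backend is built in
)

out = llm("Q: What is Strix Halo? A:", max_tokens=64)
print(out["choices"][0]["text"])
```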