F5 Collaborates with Intel to Simplify the Security and Delivery of AI Services
F5 announced it is bringing robust application security and delivery capabilities to AI deployments powered by Intel. The new joint solution combines industry-leading security and traffic management from F5's NGINX Plus offering with the optimization and performance of the Intel Distribution of OpenVINO toolkit and Intel Infrastructure Processing Units (IPUs) to deliver superior protection, scalability, and performance for advanced AI inference.
As organizations increasingly adopt AI to power intelligent applications and workflows, efficient and secure AI inference becomes critical. The combined solution addresses this need by pairing the OpenVINO toolkit, which optimizes and accelerates AI model inference, with F5 NGINX Plus, which provides robust traffic management and security.
The OpenVINO toolkit simplifies the optimization of models from almost any framework to enable a write-once, deploy-anywhere approach. This toolkit is essential for developers aiming to create scalable and efficient AI solutions with minimal code changes.
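As a rough illustration of that write-once, deploy-anywhere flow, the sketch below loads a model with the OpenVINO Runtime Python API and runs inference; the model path, target device, and input shape are placeholder assumptions rather than details from the announcement.

import numpy as np
import openvino as ov  # OpenVINO Runtime Python API

core = ov.Core()
# "model.xml" is an assumed IR file; ONNX and other supported formats load the same way
model = core.read_model("model.xml")

# Switching "CPU" to "GPU" or "AUTO" requires no other code changes,
# which is the write-once, deploy-anywhere idea described above
compiled = core.compile_model(model, device_name="CPU")

# Dummy NCHW batch; the shape is an assumption for a typical vision model
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([frame])[compiled.output(0)]
print("output shape:", result.shape)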
F5 NGINX Plus enhances the security and reliability of these AI models. Acting as a reverse proxy, NGINX Plus manages traffic, ensures high availability, and provides active health checks. It also facilitates SSL termination and mTLS encryption, safeguarding communications between applications and AI models without compromising performance.
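To make that reverse-proxy role more concrete, here is a minimal NGINX Plus configuration sketch along those lines; the upstream addresses, hostname, ports, and certificate paths are placeholders, not details of the announced solution.

# Terminate client TLS, then proxy to OpenVINO Model Server instances with
# active health checks and mTLS toward the backends (all values illustrative).
upstream ovms_backend {
    zone ovms_backend 64k;           # shared-memory zone, required for active health checks
    server 10.0.0.11:9000;           # assumed model server instances
    server 10.0.0.12:9000;
}

server {
    listen 443 ssl;
    server_name inference.example.com;

    ssl_certificate     /etc/ssl/certs/inference.crt;    # SSL termination for clients
    ssl_certificate_key /etc/ssl/private/inference.key;

    location / {
        proxy_pass https://ovms_backend;
        health_check interval=5 fails=2 passes=2;        # NGINX Plus active health checks

        # mTLS between the proxy and the model servers
        proxy_ssl_certificate         /etc/ssl/certs/proxy-client.crt;
        proxy_ssl_certificate_key     /etc/ssl/private/proxy-client.key;
        proxy_ssl_trusted_certificate /etc/ssl/certs/ovms-ca.crt;
        proxy_ssl_verify              on;
    }
}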
To further boost performance, Intel IPUs offload infrastructure services from the host CPU, freeing up resources for AI model servers. By handling these infrastructure tasks efficiently, the IPUs improve the scalability and performance of both NGINX Plus and OpenVINO Model Server (OVMS).
This integrated solution is particularly beneficial for edge applications, such as video analytics and IoT, where low latency and high performance are crucial. By running NGINX Plus on the Intel IPU, the solution helps ensure rapid and reliable responses, making it well suited for content delivery networks and distributed microservices deployments.
“Teaming up with Intel empowers us to push the boundaries of AI deployment. This collaboration highlights our commitment to driving innovation and delivers a secure, reliable, and scalable AI inference solution that will enable enterprises to securely deliver AI services at speed. Our combined solution ensures that organizations can harness the power of AI with superior performance and security,” said Kunal Anand, Chief Technology Officer at F5.
“Leveraging the cutting-edge infrastructure acceleration of Intel IPUs and the OpenVINO toolkit alongside F5 NGINX Plus can help enable enterprises to realize innovative AI inference solutions with improved simplicity, security, and performance at scale for multiple vertical markets and workloads,” said Pere Monclus, Chief Technology Officer, Network and Edge Group at Intel.