Comment by debarshri

11 hours ago

Lately, I have seen a few gateways around LLMs. Namely, OpenRouter, Portkey.ai, etc.

My key question is: who would be the ideal customer for a proxy or gateway like this? Why couldn't it be an extension or plugin for existing LBs, proxies, etc.?

I'm affiliated with Portkey, so I can answer who would need such a proxy/gateway:

Sidenote: Arch is def interesting!

A typical user we've seen at Portkey is a mid- or large-size eng org where a central "Gen AI team" has now formed. This Gen AI team builds services that the rest of the company uses to build whatever AI features or products they want.

To build such a service, they need traditional API gateway features like rate limiting and access rules, plus AI-specific features: a universal API across multiple LLM providers, universal routing, central guardrails, AI-native observability with a central dashboard for other stakeholders, and more.
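To make the "universal API" idea concrete, here's a minimal Rust sketch of the translation step such a gateway performs: one request shape in, each provider's wire format out. This is not Portkey's actual implementation; the types and field choices are illustrative.

```rust
use serde_json::{json, Value};

// One request shape exposed by the gateway, regardless of provider.
struct ChatRequest {
    model: String,
    prompt: String,
}

enum Provider {
    OpenAI,
    Anthropic,
}

// Translate the gateway's single request shape into each provider's wire format.
fn to_provider_body(req: &ChatRequest, p: &Provider) -> Value {
    match p {
        Provider::OpenAI => json!({
            "model": req.model,
            "messages": [{ "role": "user", "content": req.prompt }],
        }),
        // Anthropic's Messages API requires max_tokens on every request.
        Provider::Anthropic => json!({
            "model": req.model,
            "max_tokens": 1024,
            "messages": [{ "role": "user", "content": req.prompt }],
        }),
    }
}
```

Rate limiting, guardrails, and observability can then hang off this single choke point instead of being re-implemented per provider.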

It can absolutely be a plugin on top of existing gateways. We've explored putting Portkey on Kong, for example, but the need for a dedicated AI gateway remains: one that does all of the things I described in an easier way.

Solutions like LangChain/LlamaIndex etc. probably also fit in somewhere here, but a dedicated service for the "ops" side of LLM APIs is something we're seeing orgs adopt as a good practice.

Two things:

1/ Arch builds on Envoy so that we don't reinvent all the HTTP(S)/TCP-level capabilities needed in a modern gateway for applications. In that sense, we agree with you that it should "extend" something rather than rewrite the whole stack. We inherit several security and robustness guarantees from Envoy as a result. To be more specific, a lot of Arch's core implementation today is an Envoy filter written in Rust.
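For a sense of what that extension point looks like, here's a minimal sketch of an Envoy HTTP filter in Rust using the proxy-wasm SDK, a common way to write Envoy filters in Rust. This is not Arch's actual filter, and whether Arch uses proxy-wasm specifically is an assumption; the sketch only shows where a filter hooks into the request path.

```rust
use proxy_wasm::traits::{Context, HttpContext};
use proxy_wasm::types::{Action, LogLevel};

proxy_wasm::main! {{
    proxy_wasm::set_log_level(LogLevel::Info);
    proxy_wasm::set_http_context(|_, _| -> Box<dyn HttpContext> {
        Box::new(PromptFilter)
    });
}}

struct PromptFilter;

impl Context for PromptFilter {}

impl HttpContext for PromptFilter {
    fn on_http_request_headers(&mut self, _num_headers: usize, _end_of_stream: bool) -> Action {
        // A prompt-aware gateway would inspect and annotate the request
        // here, before Envoy forwards it to the upstream LLM provider.
        Action::Continue
    }
}
```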

2/ Arch's core design point is the handling and processing of prompts, which we believe are nuanced and opaque user requests that require secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization, all outside business logic. This requires models and LLMs that are fast, cheap, and capable enough to help developers stay focused on application features. For example, Arch uses (fast) purpose-built LLMs for jailbreak detection, converts prompts into API semantics for personalization, and will (eventually) automatically route to the best outbound LLM based on the complexity of a prompt to improve an app's cost/speed.
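To make the routing example concrete, a hedged sketch: Arch does this with purpose-built LLMs, so the word-count heuristic below is only a stand-in to show the control flow, and all names are hypothetical.

```rust
enum Target {
    CheapFastModel, // simple prompts: optimize cost and latency
    StrongModel,    // complex prompts: optimize quality
}

// Stand-in for a small classifier scoring prompt complexity in [0, 1].
// In Arch's description, a fast purpose-built LLM plays this role.
fn complexity_score(prompt: &str) -> f32 {
    let words = prompt.split_whitespace().count() as f32;
    (words / 200.0).min(1.0)
}

fn route(prompt: &str) -> Target {
    if complexity_score(prompt) < 0.5 {
        Target::CheapFastModel
    } else {
        Target::StrongModel
    }
}
```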

We believe #2 will continue to diverge from traditional API/HTTP routing, to the point that it will require constant invention and work to make developers' lives easy.

Hope this helps!