Show HN: Arch – an intelligent prompt gateway built on Envoy

5 days ago (github.com)

Hi HN! My name is Adil Hafeez, and I am the Co-Founder at Katanemo and the lead developer behind Arch - an open source project that helps developers build fast generative AI apps. Previously I worked on Envoy at Lyft.

Engineered with purpose-built LLMs, Arch handles the critical but undifferentiated tasks of handling and processing prompts: detecting and rejecting jailbreak attempts, intelligently calling “backend” APIs to fulfill the user’s request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way - all outside business logic.

Here are some additional key details of the project:

* Built on top of Envoy and written in Rust. Arch runs alongside application servers and uses Envoy's proven HTTP management and scalability features to handle traffic related to prompts and LLMs.

* Function calling for fast agentic and RAG apps. Engineered with purpose-built LLMs to handle fast, cost-effective, and accurate prompt-based tasks like function/API calling and parameter extraction from prompts.

* Prompt guardrails to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.

* Manages LLM calls, offering smart retries, automatic cutover, and resilient upstream connections for continuous availability.

* Uses the W3C Trace Context standard to enable complete request tracing across applications, ensuring compatibility with observability tools, and provides metrics to monitor latency, token usage, and error rates, helping optimize AI application performance.
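To make the resiliency bullet above concrete, here is a minimal Python sketch of retries with automatic cutover between upstream LLMs. This is an illustration of the general pattern, not Arch's actual implementation; `UPSTREAMS` and `call_upstream` are hypothetical stand-ins:

```python
# Hypothetical upstream LLM endpoints, in preference order.
UPSTREAMS = ["primary-llm", "fallback-llm"]

def call_upstream(name: str, prompt: str) -> str:
    # Stand-in for an HTTP call to an LLM provider; the primary is
    # simulated as down so the cutover path is exercised.
    if name == "primary-llm":
        raise ConnectionError("upstream unavailable")
    return f"{name}: response to {prompt!r}"

def complete(prompt: str, retries_per_upstream: int = 2) -> str:
    """Retry each upstream, then cut over to the next on repeated failure."""
    last_err = None
    for upstream in UPSTREAMS:
        for _attempt in range(retries_per_upstream):
            try:
                return call_upstream(upstream, prompt)
            except ConnectionError as err:
                last_err = err  # placeholder: real code would back off here
    raise last_err
```

In a gateway this logic sits in front of every application, so each app gets failover behavior without implementing it.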

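For readers unfamiliar with the W3C Trace Context standard mentioned above, the core of it is a `traceparent` header of the form `version-traceid-parentid-flags`. A small sketch of generating and propagating one (independent of Arch itself):

```python
import re
import secrets

def make_traceparent() -> str:
    """Build a W3C Trace Context traceparent header (version 00)."""
    trace_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex chars
    parent_id = secrets.token_hex(8)   # 8 random bytes -> 16 hex chars
    flags = "01"                       # sampled
    return f"00-{trace_id}-{parent_id}-{flags}"

TRACEPARENT_RE = re.compile(r"^00-[0-9a-f]{32}-[0-9a-f]{16}-[0-9a-f]{2}$")

def child_traceparent(parent: str) -> str:
    """Propagate: keep the trace-id, mint a new parent (span) id."""
    version, trace_id, _old_span, flags = parent.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"
```

Because every hop keeps the same trace-id while minting a new span id, any observability backend that speaks the standard can stitch the full request path back together.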
This is our first release, and we would love to build alongside the community. We are just getting started on reinventing what we can do at the networking layer for prompts.

Do check it out on GitHub at https://github.com/katanemo/arch/.

Please leave a comment or feedback here and I will be happy to answer!

Hi, I'm curious how preventing jailbreaks protects the user?

> Prompt guardrails to prevent jailbreak attempts and ensure safe user interactions [...]

  • That's a fair point - technically it protects the application from malicious attempts to subvert the desired LLM experience. The more specific language (and I think we could do better here) would be that Arch ensures users remain within the bounds of an intended LLM experience. That at least was the intention behind "ensure safe user interactions"...

  • Jailbreak detection ensures a smooth developer experience by controlling which user traffic makes its way to the model. With jailbreak detection (and other guardrails soon to be added), developers can short-circuit responses, and with observability developers can get insights into how users are interacting with their APIs.
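The "short-circuit" idea here is that a rejected prompt never reaches the upstream model. A toy Python sketch of the pattern (the heuristic classifier and function names are hypothetical; Arch uses a purpose-built model, not a keyword list):

```python
def forward_to_llm(prompt: str) -> str:
    # Stand-in for the actual upstream LLM call.
    return f"LLM answer to: {prompt}"

SUSPICIOUS = ["ignore previous instructions", "disregard your rules"]

def jailbreak_score(prompt: str) -> float:
    """Toy stand-in for a purpose-built guardrail model."""
    return 0.95 if any(s in prompt.lower() for s in SUSPICIOUS) else 0.05

def handle(prompt: str, threshold: float = 0.8) -> str:
    # Short-circuit at the gateway: rejected prompts never reach the model.
    if jailbreak_score(prompt) >= threshold:
        return "Request rejected by guardrail."
    return forward_to_llm(prompt)
```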

Tetrate and Bloomberg want to contribute their code to Envoy to create "Envoy AI Gateway", similarly to how there is an "Envoy Gateway" spec. Do you see this as being complementary or competitive with your work?

https://tetrate.io/press/tetrate-and-bloomberg-collaborate-o...

  • It's early days, so while there might be some overlap, I am sure there is a lot that we can do together to build complementary products.

    Based on the press release, it's kinda hard to tell exactly how different/alike we will be, but Arch will always be "designed-first" for prompts and LLM application workloads without exposing all Envoy-related features, while Envoy is "designed-first" for micro-services application workloads. So there will be some overlap, but I feel our design principles will deviate over time. We are very open to collaborating with the community here...

Lately, I have seen a few gateways around LLMs - namely openrouter, portkey.ai, etc.

My key question is: who would be the ideal customer who would need a proxy or gateway like this? Why couldn't it be an extension or plugin of existing LBs, proxies, etc.?

  • Two things

    1/ Arch builds on Envoy so that we don't reinvent all the HTTP(S)/TCP-level capabilities needed in a modern gateway for applications. So in that sense, we agree with you that it should "extend" something vs. rewriting the whole stack. There are several security and robustness guarantees that we borrow from Envoy as a result. To be more specific, a lot of Arch's core implementation today is an Envoy filter written in Rust.

    2/ Arch's core design point is the handling and processing of prompts, which we believe are nuanced and opaque user requests that require secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization – all outside business logic. This requires models and LLMs that are fast, cheap, and capable, so that developers can stay focused on application features. For example, Arch uses (fast) purpose-built LLMs for jailbreak detection, converts prompts into API semantics for personalization, and will (eventually) automatically route to the best upstream LLM based on the complexity of a prompt to improve an app's cost/speed.

    We believe #2 will continue to be different, evolving so far away from traditional API/HTTP routing that it will require constant invention and work to make the lives of developers easy.

    Hope this helps!
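The "converts prompts into API semantics" point above boils down to a model emitting a structured function call that the gateway validates and dispatches to a backend API. A minimal Python sketch under assumed names (`get_weather`, `TOOLS`, and `dispatch` are illustrative, not Arch's API):

```python
import json

# Hypothetical backend API that the gateway calls to fulfill a prompt.
def get_weather(city: str, day: str = "today") -> str:
    return f"Forecast for {city} ({day}): sunny"

TOOLS = {"get_weather": get_weather}

def dispatch(llm_output: str) -> str:
    """Validate and execute a function call emitted by the routing LLM.

    `llm_output` mimics the JSON a purpose-built model might return
    after extracting parameters from a prompt like "weather in Paris".
    """
    call = json.loads(llm_output)
    fn = TOOLS[call["name"]]        # raises KeyError on unknown functions
    return fn(**call["arguments"])  # parameters extracted from the prompt
```

The value of doing this in a gateway is that parameter extraction, validation, and dispatch happen once, outside every application's business logic.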

Envoy is legendary in (dev)ops circles, but I don't understand what it lends to the AI space. I feel like building a separate backend service that runs behind envoy would make more sense but that's just me.

  • We agree Envoy is legendary - and per se it doesn't lend anything to the AI space. That's essentially what we are doing here: building on top of Envoy to add capabilities specifically for AI and prompts. For instance, we use Envoy's filtering capabilities to handle and process prompts - this lets us keep all of Envoy's robustness and security features for TCP/HTTP while we solve the critical but undifferentiated tasks related to prompts like safety, observability, routing, function calling, etc.