I am thrilled about our latest release: Arch 0.2.8. Initially, Arch handled calls made to LLMs: unifying key management, tracking spend consistently, improving resiliency, and broadening model choice. With this release we added support for an ingress listener (on the same running process), so Arch now handles both the ingress and egress functionality that is common and repeated in application code today. That traffic is managed by an intelligent local proxy, in a framework- and language-agnostic way, that makes building AI applications faster, safer, and more consistent across teams.
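To make the egress side concrete, here is a minimal sketch of what calling an LLM through the local proxy looks like from application code, assuming Arch exposes an OpenAI-compatible endpoint; the port, base path, and model alias below are illustrative assumptions, not documented defaults:

```python
# Minimal sketch: sending LLM traffic through a local Arch proxy instead of
# calling the provider directly, so keys, retries, and spend tracking are
# centralized. The base_url and model alias are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12000/v1",  # hypothetical local egress listener
    api_key="not-used",                    # provider keys live in the proxy config
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # an alias the proxy maps to a configured provider
    messages=[{"role": "user", "content": "Summarize our Q3 roadmap."}],
)
print(response.choices[0].message.content)
```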
What’s new in 0.2.8:
- Added support for bi-directional traffic as a first step toward supporting Google’s A2A protocol
- Improved the Arch-Function-Chat 3B LLM for fast routing and common tool-calling scenarios
- Support for LLMs hosted on Groq
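Because Arch fronts providers behind one OpenAI-compatible surface, switching to a Groq-hosted model should be a one-line change in application code. A minimal sketch, reusing the client from the example above; the model name is one Groq hosts, but how Arch maps aliases to providers is configuration-dependent, so treat it as illustrative:

```python
# Minimal sketch: selecting a Groq-hosted model through the same local proxy.
# The alias-to-provider mapping is assumed to be set in the Arch config.
response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # a Groq-hosted model; alias is illustrative
    messages=[{"role": "user", "content": "Draft a release note for 0.2.8."}],
)
```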
Core Features:
🚦 Routing: Engineered with purpose-built LLMs for fast (<100ms) agent routing and hand-off
⚡ Tools Use: For common agentic scenarios, Arch clarifies prompts and makes tool calls
✨ Guardrails: Centrally configure guardrails to prevent harmful outcomes and enable safe interactions
🔗 Access to LLMs: Centralize access and traffic to LLMs with smart retries
🕵 Observability: W3C-compatible request tracing and LLM metrics (see the tracing sketch after this list)
🧱 Built on Envoy: Arch runs alongside app servers as a containerized process, and builds on top of Envoy’s proven HTTP management and scalability features to handle ingress and egress traffic related to prompts and LLMs.
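Because tracing is W3C Trace Context compatible, an application that already propagates a traceparent header can pass it along on proxied LLM calls and have those spans stitched into its existing traces. A minimal sketch, reusing the client above; the header value is a syntactically valid example, and the pass-through behavior is assumed from the W3C compatibility claim rather than taken from Arch's docs:

```python
# Minimal sketch: propagating a W3C Trace Context `traceparent` header
# (format: version-traceid-parentid-flags) on a proxied LLM call.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative alias, as above
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01",
    },
)
```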