
hey!

Don't know about you, but I always spent way too much time clicking through endless pages trying to find prices for different LLM models. Sometimes all I wanted to know was who's the cheapest or fastest provider for a specific model, period.

Link: https://llmshowdown.vercel.app/

So I decided to scratch my own itch and built a little web app called “LLM API Showdown”. It’s pretty straightforward:

- Pick a model
- Choose if you want cheapest or fastest
- Adjust input/output ratios or output speed/latency if you care about that
- Hit a button and boom – you've got your winner
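Under the hood, the "cheapest" pick presumably boils down to blending each provider's input and output prices by your chosen token ratio and taking the minimum. Here's a minimal sketch of that idea; the provider names and prices are made up for illustration, not real data from the app:

```python
# Hypothetical sketch: blend per-provider input/output pricing by an
# input/output token ratio, then pick the cheapest provider for a model.

providers = [  # illustrative numbers, not real pricing
    {"name": "ProviderA", "input_per_m": 0.50, "output_per_m": 1.50},
    {"name": "ProviderB", "input_per_m": 0.30, "output_per_m": 2.40},
]

def blended_cost(p, input_ratio=0.75):
    # Weight input vs. output price per 1M tokens by the expected token mix.
    return input_ratio * p["input_per_m"] + (1 - input_ratio) * p["output_per_m"]

def cheapest(providers, input_ratio=0.75):
    # Provider with the lowest blended cost wins.
    return min(providers, key=lambda p: blended_cost(p, input_ratio))

print(cheapest(providers)["name"])  # ProviderA at a 75/25 input/output mix
```

Note how the winner can flip with the ratio: a provider with cheap input but expensive output wins on input-heavy workloads and loses on output-heavy ones, which is exactly why the app lets you adjust it.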

I’ve been using it myself and it’s saved me a ton of time. Thought some of you might find it useful too!

Also built a more complete one here.

Posted in u/locallama and got some great feedback!

All data is from Artificial Analysis.

Screenshot: https://preview.redd.it/62i8xxzopovd1.png?width=1376&format=png&auto=webp&s=4449f583e89c8a2ed88b010004592e9720a57133

submitted by /u/medi6
