Using LLMs other than OpenAI for AI command search #3779
Comments
If this is a feature that will be looked at, please include APIs for kobold, text-generation-webui, text-generation-inference, and aphrodite. They are the major backends that are used in the OSS AI world.
Thanks for this feature request! To anyone else interested in this feature, please add a 👍 to the original post at the top to signal that you want this feature, and subscribe if you'd like to be notified. (Please avoid adding spam comments like 👍 or +1.) Also, there is a related request here to set your own OpenAI API key: #2788
👍
Hey folks, the "bring your own LLM" feature is now part of the Enterprise tier. Please see our pricing page for the most up-to-date info: https://www.warp.dev/pricing
It's disappointing that this is the solution for those of us who use Warp personally but care about privacy. I was hoping Warp was calling OpenAI directly so I could just redirect it to a local model... but it is actually calling Warp's own backend, which then forwards the request on. While it is cool and there are features I like, I am not really keen on having my data go through two companies. Their model prompt is pretty simple: it just tells the LLM what it can answer questions about and what format to answer in, and passes some minor machine information along. Pretty standard stuff. I think the only solution for someone like me who wants a smart terminal but cares about privacy would be to use a regular terminal with something like ask.sh and Open Interpreter. I'm actually surprised they didn't integrate Open Interpreter into Warp. Seems like a missed opportunity.
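For readers wondering what "redirect it to a local model" could look like in practice, here is a minimal, hypothetical sketch (not anything Warp ships): it points an OpenAI-compatible client at a locally running Ollama server, which exposes an OpenAI-style `/v1` endpoint. The port, model name, and prompts are assumptions for illustration only.

```python
# Hypothetical sketch: talk to a local Ollama server through its
# OpenAI-compatible API instead of sending prompts to a hosted service.
# Assumes Ollama is running on its default port (11434) and that a model
# such as "llama3" has already been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama endpoint
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3",
    messages=[
        {"role": "system", "content": "You translate natural language into shell commands."},
        {"role": "user", "content": "find all files larger than 100MB in my home directory"},
    ],
)

print(response.choices[0].message.content)
```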
Why not include support for litellm for a fixed one-time price?
localllama or bust for me. Sorry. Love the product otherwise.
There are some countries in which you are not able to subscribe at all, because Visa and MasterCard have banned those countries. Willing or not, you just can't subscribe or make a one-time payment (if that ever becomes possible). There are also some companies where it is strongly advised not to leak any information to online services, and the only safe way to use any AI-style tooling is to host it locally or inside the company environment.
Why is this not a standard feature anyway? And to lock it behind an enterprise paywall: can you think of another way to say "f*ck you" to the user? Seriously unfortunate that this otherwise amazing product is severely gimped in functionality by this short-sightedness.
It'd be cool for everyone to have the ability to self-host the LLM. Enterprise-only feels like a massive middle finger.
Feels like it because it is; everyone asking for it on this thread is a personal user.
I'm disabled, on a fixed income, and simply can't afford this--like, on top of all the other stuff that I really shouldn't even be spending on? Really? REALLY? I'm trying to #!@$&%$ develop more skills so that I can do some contract work from home and maybe have a life. Ever try having a life without disposable income? It's like trying to bite your teeth or tie your fingers in a knot.

I'd be willing to pay a fee for a non-gimped version. That's fair (it goes without saying that I'm not expecting you to actually answer queries for free, just allow the user to leverage a different backend). You certainly deserve to be able to recoup your development costs and profit.

Your business model is not my prerogative, of course--just as e.g. OpenAI's business model is not yours. Don't you see the paradox you're enmeshed in now? The more lucrative your business, the greater the incentive for a large firm to out-compete you. I don't get off on harboring feelings of illusory superiority, so I suspect you've thought all this through already. Yeah--it's so obvious! If you guys are bright enough to develop this software and run a successful business, you must have realized this long ago!

OK, so you want an income stream instead of a one-time payment? You could offer a subscription license at a substantially reduced cost that doesn't access your AI, instead allowing users to leverage their own backend. It would take some effort and possibly cut into your profit a little bit during price discovery, but I think that ultimately it would be more lucrative for you. Wouldn't you rather make some money off of all the people who are posting here (and the others who haven't bothered) instead of nothing? Seems like a win-win to me ...
Because answering AI queries costs money, and writing the code to integrate such slick functionality into a terminal application is work! That's why. Wait, you realize that your computer isn't running the actual AI, right?

Short-sightedness? Should they just work for free or something? You might as well ask, "Why is wheat so gimped? Doesn't nature realize we're hungry? It's such an awesome plant, why the f do we have to go through all this B.S. of harvesting it and milling it and baking it ..."
I didn't see this before. Oh well, I don't have a need for everything else in that tier. I do have a role in an enterprise, though, so I'll contact you to work out a mutually agreeable price and terms of service.
A lot of us do run our own LLMs, and adding support for that just offloads cost and processing from Warp. Well, it would if they were running their own models, but we know they are just proxying over to OpenAI, so really it'd just reduce cost. Anywho, I've done a lot of looking for an AI replacement so I can just ditch Warp and integrate into zsh directly. So far the best I have found is https://github.com/charmbracelet/mods, which supports paid options as well as local models running on your machine.
In the spirit of this, as a fish shell user, I have just found an AI plugin for it: fish-ai. I'm going to connect it to my local Ollama instance and give it a go 🤞🏼. Hope this helps others as well.
You don't need an AI for this. If you really want one, the source code is freely available. Why don't you try to add this feature yourself?
What would happen if you intercepted traffic to this endpoint within your local environment and proxied it back to your Ollama endpoint instead? Pretty simple with Caddy... Maybe folks will eventually reverse-engineer the GraphQL layer, because there are hundreds of thousands of folks who do not want to hand over the keys to their entire system and all of their clients to unaccountable AI. So if you want to make more money with the application ...
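As a rough illustration of the interception idea above, here is a minimal, hypothetical sketch of a local proxy that accepts OpenAI-style chat-completion requests and forwards them to a local Ollama server. It assumes Ollama is on its default port with a model named `llama3` already pulled; it is not Warp's actual endpoint or protocol, which would additionally require TLS interception and matching its GraphQL schema.

```python
# Hypothetical sketch: a local HTTP proxy that accepts OpenAI-style
# chat-completion requests and relays them to a local Ollama server.
# Assumes Ollama is listening on its default port (11434) and that the
# target model ("llama3") has already been pulled.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"  # OpenAI-compatible endpoint

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the incoming OpenAI-style request body.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))

        # Force a local model and disable streaming to keep the sketch simple.
        payload["model"] = "llama3"
        payload["stream"] = False

        # Forward the request to the local Ollama server and relay its reply.
        upstream = Request(
            OLLAMA_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urlopen(upstream) as resp:
            reply = resp.read()

        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    # Point an OpenAI-compatible client at http://localhost:8080 to use this proxy.
    HTTPServer(("localhost", 8080), ProxyHandler).serve_forever()
```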
Hi Folks, as mentioned before, we've added the option to choose different models when working with Agent Mode. I will close this request, but we will be tracking the following requests for other features related to alternative LLMs:
If you could please give those requests a 👍 and subscribe to them, it would help us gauge interest. We'll post any updates there, so stay tuned!