ContextForce

ContextForce uses a smart semantic router that picks the best AI model to fulfill your request at runtime. We run evaluations in our lab to determine which model is best for your task based on criteria such as user tier, task type, speed, cost, context size, and more.
For free users, your request is likely to be handled by one of the following models: gpt-4o-mini, deepseek-v3, gemini-1.5-flash, Llama 3.3 70B, Llama 3.1+, gpt-3.5-turbo, etc. Each request costs a minimum of 1 credit.
For paid users, more advanced models are included, such as o1, o3-mini, deepseek-r1, and gemini-1.5-pro.
For developers who already have their own access to frontier models, you can use our service to bring web data to your model for free. Simply follow the instructions below to set it up.
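As an illustration only (this is not ContextForce's actual routing logic), a criteria-based router might look like the sketch below. The model pools and rules are assumptions drawn from the lists above:

```python
# Illustrative model pools, based on the model lists above.
FREE_MODELS = ["gpt-4o-mini", "deepseek-v3", "gemini-1.5-flash", "llama-3.3-70b"]
PAID_MODELS = ["o1", "o3-mini", "deepseek-r1", "gemini-1.5-pro"]

def route(user_tier: str, needs_reasoning: bool, context_tokens: int) -> str:
    """Pick a model from user tier, task type, and context size (made-up rules)."""
    pool = PAID_MODELS if user_tier == "paid" else FREE_MODELS
    if user_tier == "paid" and needs_reasoning:
        return "deepseek-r1"            # reasoning-heavy tasks go to a reasoning model
    if context_tokens > 100_000:        # long-context requests need a long-context model
        return "gemini-1.5-flash" if user_tier == "free" else "gemini-1.5-pro"
    return pool[0]                      # otherwise, the cheapest model in the pool

print(route("free", False, 2_000))      # gpt-4o-mini
```

A real router would score many more criteria (latency targets, per-model cost, current load), but the shape is the same: map request features onto a ranked model pool.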
OpenAI

Go to the OpenAI platform (https://platform.openai.com) to get your API key.
Obtain OpenAI API Key
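Once you have a key, you can sanity-check it against OpenAI's chat completions endpoint. This sketch uses only the Python standard library; the model name is just an example:

```python
import json
import os
import urllib.request

def build_openai_request(api_key: str, prompt: str, model: str = "gpt-4o-mini"):
    """Build an authenticated POST request for OpenAI's chat completions endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

# With a real key exported as OPENAI_API_KEY, send the request:
# with urllib.request.urlopen(build_openai_request(os.environ["OPENAI_API_KEY"], "Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```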

Gemini

Go to Google AI Studio (https://aistudio.google.com/) to get your Gemini API key.
Obtain Gemini API Key
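The Gemini REST API works similarly, except the key is passed as a query parameter on the generateContent endpoint rather than in a header. A stdlib-only sketch:

```python
import json
import urllib.request

# Gemini's generateContent REST endpoint; the API key goes in the query string.
GEMINI_URL = ("https://generativelanguage.googleapis.com/v1beta/"
              "models/{model}:generateContent?key={key}")

def build_gemini_request(api_key: str, prompt: str, model: str = "gemini-1.5-flash"):
    """Build a POST request for Gemini's generateContent endpoint."""
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode("utf-8")
    return urllib.request.Request(
        GEMINI_URL.format(model=model, key=api_key),
        data=body,
        headers={"Content-Type": "application/json"},
    )

# With a real key:
# with urllib.request.urlopen(build_gemini_request(my_key, "Hello")) as resp:
#     print(json.load(resp)["candidates"][0]["content"]["parts"][0]["text"])
```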

OpenAI Compatible API
Many model providers offer an OpenAI-compatible API so developers can test their models without writing custom integration code. ContextForce has integrated many of them through this interface.
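Because these providers mirror OpenAI's API surface, a single request builder can target any of them by swapping the base URL. A minimal sketch (the provider endpoint in the comment is illustrative):

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, prompt: str, model: str):
    """Build a chat completions request against any OpenAI-compatible endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",  # same path on every provider
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

# The same builder works for any provider that mirrors OpenAI's API, e.g.:
# build_chat_request("https://api.deepseek.com/v1", key, "Hello", "deepseek-chat")
```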
Local LLM
You can now use LM Studio to download and run the distilled DeepSeek R1 or other open-source models locally. When your model is live, we can communicate with it via the OpenAI-compatible API. Follow the instructions below to obtain the connection info.

Download the distilled DeepSeek R1 model and run it locally
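Once LM Studio's local server is running (it defaults to port 1234; check the server tab for your actual address), you can talk to the model through the same OpenAI-compatible chat endpoint. The model identifier below is a placeholder; use the one LM Studio shows for your loaded model:

```python
import json
import urllib.request

# LM Studio's local server default; confirm the host/port in the app's server tab.
LOCAL_BASE = "http://localhost:1234/v1"

def build_local_request(prompt: str, model: str = "deepseek-r1-distill-qwen-7b"):
    """Build a chat request for a locally served model (model name is a placeholder)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        LOCAL_BASE + "/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},  # no API key needed locally
    )

def ask_local_model(prompt: str, model: str = "deepseek-r1-distill-qwen-7b") -> str:
    """Send the request to the local server and return the model's reply."""
    with urllib.request.urlopen(build_local_request(prompt, model)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```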
