Custom Proxies for Janitor AI

A Simple Guide to Using DeepSeek & Other LLMs via OpenRouter

December 20, 2025

⚠️ Important Update: DeepSeek Models & Chutes

Many of DeepSeek's models are provided by Chutes. Since mid-2025, users have been experiencing frequent 429 (rate-limit) errors when attempting to use most DeepSeek models. This is because Chutes prioritizes its paid-tier customers over OpenRouter users (regardless of whether you have the free or paid tier through OpenRouter).
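
If you call OpenRouter's API directly (Janitor handles the request for you in chat, so this only applies to your own scripts or tests), those 429s arrive as ordinary HTTP status codes, and a simple retry with exponential backoff can ride out short bursts of rate limiting. A minimal Python sketch, assuming the requests library; the key, retry count, and wait times are placeholders:

```python
import time
import requests

API_KEY = "sk-or-..."  # placeholder: your OpenRouter API key
URL = "https://openrouter.ai/api/v1/chat/completions"

def chat_with_backoff(payload, max_retries=5):
    """POST to OpenRouter; on a 429, wait and retry with exponential backoff."""
    for attempt in range(max_retries):
        resp = requests.post(
            URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json=payload,
            timeout=60,
        )
        if resp.status_code != 429:
            resp.raise_for_status()  # surface any non-rate-limit error
            return resp.json()
        time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, 8s, ... between attempts
    raise RuntimeError("Still rate-limited after all retries")
```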

You can visit the Models tab to view free, working DeepSeek models; that list will be updated monthly. My suggestion is to use one of those models (or a model of your choice) unless you would rather subscribe to Chutes or get set up directly with DeepSeek.

If you have charged the $10 in credits to your OpenRouter account, it is NOT wasted money! You get 1,000 requests per day (free or paid models) and access to a huge range of LLMs. You can use the $10 to maintain your 1,000 daily requests, or you can spend it on paid models (paid DeepSeek models still receive 429 errors, but not badly enough to be entirely disruptive).

For those of you who are unfamiliar with OpenRouter: the $10 in credits is NOT required! You can still use the free models.

Please Note: As of this writing in December 2025, this is a current and up-to-date guide. I do not anticipate changes that would alter the required steps or render this guide inaccurate or irrelevant, and I will keep it as up to date as possible.

⚙️ Getting Started

Because of the recent changes regarding DeepSeek, we will be using TNG: DeepSeek R1T2 Chimera (free) as our example model. Currently, as of December 2025, the only other free model is DeepSeek's R1 0528 (free), which is provided by ModelRun and has been throwing 400 errors once long chats exceed the model's token limit. R1T2 Chimera is one of the few free, reliable DeepSeek-related options available. Follow these steps to set up your custom proxy for Janitor AI using OpenRouter:

  1. Go to openrouter.ai in your browser and register for an account.
  2. Click on Settings (upper right corner), THEN scroll down to Default Model and select TNG: DeepSeek R1T2 Chimera (free) or your preferred model.
  3. Click the Privacy tab (left menu) and enable Model Training. If you do not enable this, you will be thrown an error when generating a response.
  4. Open a new tab, click the API Keys tab (left menu), and create a new API key. SAVE the key, because once you copy it you will not be able to see it again.
    • If you lose your key, you can always create a new one.
  5. Return to Janitor.ai and find a proxy-compatible bot.
    • Proxy-compatible bots show "proxy allowed" text above the persona selector/start chat feature. Compatible bots usually have visible character definitions.
  6. After choosing a bot, open the API Settings from the menu (top right corner) and choose Proxy.
  7. Under the Proxy Model settings, choose 'Add Configuration' and enter a Config Name (anything you want it to be).
    • You can create multiple configurations, which is great for experimenting without changing your current configuration.
  8. Under 'Model Name' paste tngtech/deepseek-r1t2-chimera:free in the box (MUST be all lowercase).
    • Once you determine the model you want to use, there is a clipboard icon you can press to copy its model name.
    • For example, mistralai/mistral-medium-3.1 would be the model name for Mistral: Mistral Medium 3.1.
  9. Under Other API/Proxy URL type/paste this EXACT link: https://openrouter.ai/api/v1/chat/completions
    • This WILL NOT change regardless of the model you use; it only changes if you use a different API provider.
  10. Finally, under API Key, paste the API key you generated and saved in step 4; otherwise, create and paste a NEW API key.
    • Note that while API Keys are listed as 'OPTIONAL,' failure to provide an API key will only throw you errors.
  11. If you do not have any custom prompts you would like to add, click Save Settings. If a pop-up asks whether you would like to set generation settings to OpenAI's default, click yes.
  12. Open Generation Settings from the in-chat menu (top right corner) and set your temperature and tokens to your preference.
  13. Close all Janitor.ai tabs and reopen them (or simply refresh). Now you can start chatting! (If something errors out, see the test-request sketch below.)
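
Steps 8-10 are exactly the pieces of a standard chat-completions request, so you can verify the same model name, URL, and API key outside of Janitor. A minimal Python sketch using the requests library; the API key is a placeholder you must replace with your own:

```python
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",   # the URL from step 9
    headers={"Authorization": "Bearer sk-or-..."},      # placeholder: your key from step 4
    json={
        "model": "tngtech/deepseek-r1t2-chimera:free",  # the model name from step 8
        "messages": [{"role": "user", "content": "Say hello."}],
    },
    timeout=60,
)
print(resp.status_code)  # 200 means the model name, URL, and key all work
if resp.ok:
    print(resp.json()["choices"][0]["message"]["content"])
else:
    print(resp.text)  # the error body usually explains what went wrong
```

If this fails, a 401 typically points at the key, a 400/404 typically points at the model name, and a 429 is the rate limiting discussed at the top of this guide.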

⚙️ Understanding Models & Settings

Model Selection

Different models will yield different results. You may prefer certain models based on their ability to produce responses that match your style and deliver an amazing roleplay experience. Janitor.ai is known for its NSFW content, and some models won't support NSFW without jailbreaking and custom prompts; even then, decent responses aren't guaranteed. From personal experience, Qwen and DeepSeek are solid NSFW-friendly models.

Key Settings Explained

Temperature: Controls the randomness and creativity of responses. Where you set this value determines how logical or how creative the output will be (a toy sketch follows the list below).

  • Low temp (0.1-0.5) = More logical, focused, and consistent responses
  • High temp (0.8-1.5+) = More creative, varied, and unpredictable responses
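
To make that concrete, here is a toy illustration (not Janitor's or OpenRouter's actual code) of the mechanism: the model's scores (logits) for each candidate word are divided by the temperature before being turned into probabilities, so low values sharpen the distribution toward the top choice and high values flatten it. The words and scores below are made up:

```python
import math
import random

def sample_word(words, logits, temperature):
    # Divide logits by the temperature, then apply softmax-style weighting:
    # low T exaggerates score differences, high T evens them out.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(words, weights=weights)[0]

words = ["the", "a", "one", "that"]
logits = [3.0, 2.0, 1.0, 0.5]  # made-up model scores
random.seed(0)
for t in (0.2, 1.5):
    picks = [sample_word(words, logits, t) for _ in range(10)]
    print(f"temperature={t}: {picks}")
```

At 0.2 the sampler picks "the" almost every time; at 1.5 the other words start showing up, which is exactly the focused-versus-unpredictable trade-off described above.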

Tokens & Context Window: Tokens are the fundamental units (words, sub-words, characters) LLMs process, while the context window is the limit on how many tokens (input prompt + generated output) the model can handle at once.
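
Those token-limit 400 errors mentioned earlier are what an overflowing context window looks like: once the accumulated chat history plus the reply no longer fit, requests start failing. A rough back-of-envelope check using the common heuristic of roughly 4 characters of English per token (the window size and reply budget below are hypothetical, not any specific model's limits):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: English prose averages ~4 characters per token.
    return max(1, len(text) // 4)

context_window = 131_072  # hypothetical model limit, in tokens
reply_budget = 1_000      # room to leave for the model's reply

history = [
    "You step into the tavern and shake the rain from your cloak...",
    "I scan the room for the innkeeper and wave him over.",
]
used = sum(estimate_tokens(m) for m in history)
status = "fits" if used + reply_budget <= context_window else "over budget"
print(f"~{used} tokens of history; {status}")
```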

Recommended Settings

I normally set tokens to 0 for DeepSeek, and I preferred 1000 (the max) when I used Qwen. Each time you change your model, you'll be reverted to JLLM's default settings (temperature: 1.1, tokens: 260). You can use the defaults, but I recommend starting with a temperature of at least 0.1 and adjusting for each new model. Access these settings in chat under Generation Settings.
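
For what it's worth, these two settings map directly onto fields of the chat-completions request body that Janitor sends on your behalf. A minimal sketch of just the payload, using the standard OpenAI-compatible field names; that setting tokens to 0 simply omits the cap is my assumption about Janitor's behavior, not something I have confirmed:

```python
payload = {
    "model": "tngtech/deepseek-r1t2-chimera:free",
    "messages": [{"role": "user", "content": "Continue the scene."}],
    "temperature": 1.1,  # the JLLM default mentioned above
    "max_tokens": 260,   # the JLLM default; assumption: tokens = 0 omits this cap entirely
}
```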

💡 Pro Tip: Experiment with different models and settings to find what works best for your roleplay style. Remember to adjust settings when switching models!

🌟 Popular Examples

Note:

With the loss of free DeepSeek models and the many changes regarding providers, TNG: DeepSeek R1T2 Chimera (free) is currently the only model I recommend until further notice. I have not had time to test many models in a while. This model currently does not have any errors or strict limitations (memory-wise) and can handle NSFW content.