Janitor AI Writes for Me: 5 Settings and Phrasings to Stop the Bot from Responding for the User
In Brief: When the bot writes lines for you, the cause is usually the character's system prompt or a lack of explicit instructions. It can be fixed with OOC commands, edits to the character card, and prefixes in your messages, without restarting the dialogue.

This article is not about how to choose a platform for role-playing. If you need an overview of alternatives and a comparison of features, read our guide to choosing an AI platform.

The bot starts describing your actions, emotions, and even dialogues—a classic problem in role-playing chats based on large language models. The language model is trained to continue text, and if the context looks like a third-person narrative, it automatically fills in for both participants. The solution lies in three areas: the character's system instructions, the format of your lines, and explicit prohibitions in the prompt.

Why Janitor AI and Other Bots Write for the User

The language model does not understand the boundaries of roles. It sees a sequence of tokens and predicts the next one. If the dialogue history has a pattern of "character acts → user acts → character acts," the model will continue it automatically.

Three technical reasons:

  • The character's system prompt contains examples where the bot describes the actions of both participants. This trains the model to copy the structure.
  • Your initial messages were in a narrative format ("I enter the room and sit down"). The model remembers the style and reproduces it.
  • There is no explicit prohibition in the character card. Without the instruction "Never write actions or dialogue for {{user}}," the model considers any text a permissible continuation.

The problem is exacerbated if you are using models like LLaMA or older versions of GPT, which are worse at maintaining the context of instructions in long dialogues. But even GPT-4 can falter if the prompt is contradictory or too short.

Step 1: Check and Edit the Character Card

Open the character settings (usually the "Edit" button or a gear icon next to the avatar). Find the "System Prompt," "Character Definition," or "Personality" field. Read through the entire text and remove any fragments where the bot describes the user's actions.

A typical example of a problematic prompt:

“{{char}} is a detective. {{user}} enters the office, looking nervous. {{char}} offers a seat and starts asking questions.”

The corrected version:

“{{char}} is a detective. {{char}} observes {{user}}'s body language and responds accordingly. {{char}} never assumes {{user}}'s emotions, thoughts, or actions.”

Add an explicit prohibition at the end of the system prompt:

[IMPORTANT: {{char}} must never write dialogue, actions, thoughts, or feelings for {{user}}. Always stop after {{char}}'s own action or speech and wait for {{user}}'s response.]

Save the changes. If the platform does not allow editing others' characters, create a fork (copy) and make changes there. On platforms with an open catalog, this is usually a "Remix" or "Duplicate" button.

Step 2: Use OOC Commands for Immediate Correction


OOC (Out Of Character) commands are instructions outside the role-play that the model processes as direct directives. Format: text wrapped in double parentheses or double square brackets.

When the bot has written for you, do not continue the dialogue. Immediately send:

((Stop. You just described my character's actions. Rewrite the last message, leaving only your actions and words. Don’t make anything up for me.))

Or shorter:

[[Rewrite your last message. Do not include any actions or dialogue for my character. Stop after your own response.]]

The model regenerates the response correctly 80–90% of the time. If not, use the "Regenerate" or "Edit" button and manually delete the extra text, then send the OOC reminder again.

This method works in any LLM-based chat, including Character AI, Janitor AI, romantic bots, and custom builds on the OpenAI API.

Step 3: Format Your Lines with Clear Role Separation

If you write “I nod and say: hello,” the model sees a mixed narrative and may continue it from the bot's perspective. Visually separate action and speech.

Poor format:

I enter the café, sit at a table by the window, and order a cappuccino. Then I take out my phone.

Good format:

*I enter the café and sit at a table by the window.*
— A cappuccino, please.

Or:

**Action:** I sit at the table, take out my phone.
**Line:** “Hi. Have you been waiting long?”

Asterisks, dashes, and labels like “Action/Line” create a visual boundary. The model is less likely to cross it because it is trained on texts where such markers indicate a change of speaker.

An alternative is a prefix with a name:

{{user}}: *I nod.* Got it, thanks.

This is especially effective on platforms where the prompt uses the template “{{char}}: ... {{user}}: ...”. The model will automatically continue from {{char}}'s perspective, not yours.
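To see why the name prefix works, it helps to look at how such platforms typically flatten the chat into a single prompt before sending it to the model. The sketch below is illustrative (the template and names are hypothetical, not Janitor AI's actual internals): because the assembled prompt ends with the character's name and a colon, the model's most natural continuation is the character's line, not yours.

```python
# Hypothetical sketch of how a role-play platform might flatten chat
# history into a text-completion prompt. Template and names are
# illustrative, not any platform's real internals.

def build_prompt(system: str, history: list[tuple[str, str]], char: str) -> str:
    """Flatten (speaker, text) turns into a single prompt string."""
    lines = [system]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    # Ending with "Char:" cues the model to continue as the character only.
    lines.append(f"{char}:")
    return "\n".join(lines)

prompt = build_prompt(
    "Detective roleplay. Never write for User.",
    [("User", "*I nod.* Got it, thanks.")],
    "Detective",
)
print(prompt)
```

The key detail is the final `Detective:` line: the strongest statistical signal the model gets about whose turn it is comes from that trailing prefix, which is why prefixing your own messages with `{{user}}:` reinforces the boundary.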

Step 4: Add a "Waiting Anchor" to Each Message

An anchor is the last phrase or action that clearly conveys initiative to the bot. The model understands better where to stop if your text ends with a question, an unfinished gesture, or an expectation.

Examples of anchors:

  • *I look at you and wait for a response.*
  • — So? What do you say?
  • *I extend my hand for a handshake.*
  • — ...your turn.

Compare two messages:

Without an anchor: *I open the door, enter, take off my jacket, and hang it on the hook.*

With an anchor: *I open the door and enter.* — Hi. Are you home?

In the second case, the model understands: the user has finished and is waiting for a reaction. In the first, it sees an unfinished scene and might add “Then {{user}} goes to the kitchen and pours water.”

This technique works on any companion-bot platform where the dialogue is built on emotional exchanges and pauses.

Step 5: Adjust Generation Parameters (Temperature, Frequency Penalty)


If you have access to advanced API settings (Janitor AI allows you to connect your own OpenAI or KoboldAI key), change two parameters:

| Parameter | Default Value | Recommended | Effect |
| --- | --- | --- | --- |
| Temperature | 0.8–1.0 | 0.6–0.7 | Reduces "creativity"; the model is less likely to add unnecessary details |
| Frequency Penalty | 0.0 | 0.3–0.5 | Penalizes repetitive constructions, including "{{user}} does X" |
| Presence Penalty | 0.0 | 0.2–0.4 | Encourages new tokens; reduces the likelihood of copying the user's style |

Where to find these settings: in Janitor AI, under "API Settings" → "Advanced"; in SillyTavern (if you use a local frontend), on the "AI Response Config" tab.

Do not drop the Temperature below 0.5: responses become mechanical and repetitive. For role-play, around 0.65 is a good starting point.

If you are not using your own API key, try switching to another model in the platform settings. For example, from Llama-2-13B to OpenAI GPT-3.5-turbo: the latter better maintains role boundaries.
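If you connect your own key, these parameters map directly onto the OpenAI Chat Completions API. A minimal sketch, assuming the `openai` Python SDK; the helper function and the exact values are illustrative, taken from the table above:

```python
# Sketch: the table's generation parameters as keyword arguments for the
# OpenAI Chat Completions API. Values are the article's recommendations,
# not universal constants; tune them for your character.

def roleplay_params(model: str = "gpt-3.5-turbo") -> dict:
    """Keyword arguments for client.chat.completions.create(...)."""
    return {
        "model": model,
        "temperature": 0.65,       # below ~0.5 responses turn mechanical
        "frequency_penalty": 0.4,  # discourages repeating "{{user}} does X"
        "presence_penalty": 0.3,   # nudges the model toward new tokens
        "max_tokens": 400,
    }

# Usage (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     messages=[{"role": "system", "content": "..."},
#               {"role": "user", "content": "*I nod.*"}],
#     **roleplay_params(),
# )
```

Keeping the parameters in one helper makes it easy to A/B-test a "stricter" and a "looser" profile without editing every request by hand.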

Common Mistakes That Worsen the Problem

Mistake 1: You only edit the bot's last message. The model considers the entire chat history (context window). If the bot wrote for you three times in the previous 10 messages, it has learned that pattern. Solution: delete or edit every problematic line in the history, or start a new chat with the corrected prompt.

Mistake 2: You write overly long, unstructured messages. A 150-word paragraph describing the interior, your thoughts, and three actions in a row is an invitation for the model to continue the narrative. Break it into short lines: one action or one phrase at a time.

Mistake 3: You do not use the Regenerate button. If the bot wrote for you, do not try to "replay" the situation with your next message: the faulty response is already part of the context. Regenerate it or delete it manually.

Mistake 4: The character prompt contains contradictions. For example: “{{char}} is proactive and takes initiative” + “{{char}} never assumes {{user}}'s actions.” The model cannot tell where “initiative” ends and usurping the role begins. Rewrite the prompt specifically: “{{char}} takes initiative in conversation and decision-making, but always waits for {{user}} to describe their own actions and feelings.”

Mistake 5: You are using an outdated model. If the platform offers a choice between GPT-3.5-turbo (or newer) and Llama-1 or Pygmalion-6B, choose the former. Older models are worse at distinguishing roles in dialogue.

How to Check if the Problem is Resolved


Start a new chat with the edited character. Send three test messages:

  1. A short action without speech: *I nod.*
  2. A question without action: — Are you serious?
  3. An action + speech with an anchor: *I extend the cup.* — Here you go. What’s next?

If the bot did not write a single action or line for you, the settings are working. If it faltered on the third message, return to step 1 and strengthen the prohibition in the system prompt: add it at both the beginning and the end, and use caps (“NEVER WRITE FOR USER”).

Save the working version of the prompt as a template. On platforms with a character editor, you can create a basic card with the correct instructions and copy it for new bots.

Alternative Approach: Prefixes and Suffixes in Platform Settings

Some platforms (SillyTavern, Agnai) allow you to add a permanent prefix to each model message. This is text that the model sees before generating a response, but the user does not see in the interface.

An example of a prefix:

[{{char}}'s response only. Do not write or assume any actions, thoughts, or dialogue for {{user}}.]

Or a suffix to your messages:

[Wait for {{char}} to respond. Do not continue {{user}}'s actions.]

In Janitor AI, this function is not natively available, but if you are using an API proxy (for example, SillyTavern + Janitor API), set up the prefix there.

This method is especially useful if you frequently switch between characters and do not want to edit each one manually.
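Mechanically, all a prompt-injecting proxy does is append the permanent suffix to each outgoing message before the model sees it. A minimal sketch of that idea; the function name is hypothetical, not a real SillyTavern or Janitor AI API:

```python
# Hypothetical sketch of what a prompt-injecting proxy does: attach a
# permanent suffix to every outgoing user message. The model sees the
# suffix; the chat interface does not display it.

SUFFIX = "[Wait for {{char}} to respond. Do not continue {{user}}'s actions.]"

def with_suffix(user_message: str, suffix: str = SUFFIX) -> str:
    """Return the message text the model actually receives."""
    return f"{user_message}\n{suffix}"

print(with_suffix("*I nod.* Got it, thanks."))
```

Because the suffix rides along with every message, the instruction never drops out of the recent context the way a one-time OOC reminder eventually does.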

Frequently Asked Questions

The bot stopped writing for me, but now its responses are shorter and duller. How can I restore detail?

Add an instruction about length and style to the system prompt: “{{char}} writes detailed responses (150–250 words), describing surroundings, body language, and inner thoughts—but only for {{char}}, never for {{user}}.” Increase Max Tokens in the API settings to 300–400. Ask the bot in OOC: ((Write in more detail, describe your emotions and surroundings, but don’t make anything up for me.)) The problem is usually that the prohibition is formulated too rigidly, and the model is being overly cautious.

I’ve set everything up, but in the middle of a long dialogue, the bot starts writing for me again. Why?

The model's context window is limited (usually 4000–8000 tokens). When the chat history gets long, old messages drop out of memory, including the system prompt. Solution: use the “Pin Message” or “Memory” function if the platform supports it to fix key instructions. Or periodically (every 20–30 messages) send an OOC reminder: ((Remember: you don’t write for my character.))
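You can roughly anticipate when this will happen by tracking token usage. A minimal sketch under a common rough assumption (about 4 characters per token for English text; exact counts require the model's own tokenizer). The function names and the 500-token response reserve are illustrative:

```python
# Rough sketch of why long chats "forget" the system prompt: estimate
# token usage and flag when history approaches the context window.
# The 4-characters-per-token ratio is a coarse approximation, not exact.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token, minimum 1."""
    return max(1, len(text) // 4)

def needs_reminder(history: list[str], window: int = 4000, reserve: int = 500) -> bool:
    """True when the history plus a response reserve nears the window."""
    used = sum(estimate_tokens(m) for m in history)
    return used + reserve > window
```

When `needs_reminder` starts returning True, that is roughly the point at which older messages (including the system prompt on some backends) begin dropping out, and a fresh OOC reminder is worth sending.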

Can I completely prohibit the bot from describing the environment if it concerns my character?

No, that’s too strict a limitation. The bot can write “The room is empty, only {{user}} stands by the window”—that’s a scene description, not an action for you. Allow the bot to describe static observations (“{{user}} looks thoughtful,” “There’s a book on the table in front of {{user}}”), but prohibit dynamic actions (“{{user}} takes the book and opens it”). The phrasing: “{{char}} may describe what {{char}} observes about {{user}}, but never decides {{user}}'s actions or words.”

I’m using a character from someone else's card, and there’s no prohibition on actions for the user. How can I fix it without editing?

Send an OOC instruction in the very first message of the new chat, before starting the role-play: ((Rule: you never write actions, thoughts, or words for my character. Always stop after your line and wait for my response. Confirm that you understand.)) The model will respond with confirmation, and this will be fixed in context. Repeat every 30–40 messages. This is less reliable than editing the prompt, but works in 60–70% of cases.
