(2025-11-06) Torres Stop Repeating Yourself Give Claude Code A Memory

Teresa Torres: Stop Repeating Yourself: Give Claude Code a Memory.

"Can you critique the landing page for my new Story-Based Customer Interviews course?" I used to waste hours trying to get ChatGPT or Claude to adequately critique my work. I'd get frustrated by the generic feedback, the poor writing, and the suggestions that just wouldn't work for my audience or my products.

But not anymore. Not only does Claude critique my work; it also helps me produce the work.

I learned how to give Claude Code a memory. Claude knows who my target customer is, the key value propositions I focus on, the specific opportunities each product addresses, my revenue model, my marketing channels, and so much more.

So now when I ask Claude for help, it has all the right context it needs to be an expert helper. I get high-quality output tailored to my audience that works for my products and services. Every time.

The challenge with large language models (LLMs) is that by default every conversation starts from scratch. The LLM only knows what you tell it—in that specific conversation.

If you, like me, were working on a new landing page, you'd have to upload information about your target customer, the product itself, and the primary and secondary value propositions. You'd have to upload the questions and answers to add to the FAQ. And the testimonials and logos for social proof.

And for fans of web-based ChatGPT Projects or Claude Projects: yes, you can upload all of this information to a Project and use it across multiple chats. But what happens when you work on the next landing page?

Imagine the next one is for the same target customer but for a different product with a different value proposition. Do you start a new Project? Or do you just add to your existing Project? The former is tedious. The latter muddies the context window (which leads to deteriorating output quality).

When I ask Claude to critique my home page, it fetches the home page, but it also reads my business profile and target audience context files.

Files can be mixed and matched. So you can give Claude exactly what it needs for the task at hand—and nothing more. When you are working on your first landing page, you can reference your target customer and the relevant product. When you are working on your second landing page, you can simply reference the same target customer, but reference the new product.

If we structure our memory files properly, we can give Claude exactly what it needs and nothing more.
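In Claude Code, one way to mix and match files like this is to mention them directly in your prompt with `@` file references. The file names below are hypothetical placeholders, but the pattern is the point: reuse the customer file, swap the product file.

```
# Landing page #1
> Critique @landing-pages/course-a.html using @context/target-customer.md
> and @context/products/course-a.md

# Landing page #2 — same customer, different product
> Critique @landing-pages/course-b.html using @context/target-customer.md
> and @context/products/course-b.md
```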

Giving Claude a memory takes a little bit of setup.

If your target customer changes, you simply need to update your text file.

Same with your products—you can easily add and remove information as your products evolve. Odds are much of this information already lives in your file system. It's just a matter of making it easy for Claude to use.

Design a Three-Layer Memory System

Claude Code already encourages you to create two types of context files: global preferences and Project-specific instructions. But there's a third layer that most people miss—and it's where the real power lives.

Layer 1: Global Preferences (Always on)

The first time you launch Claude Code, it encourages you to create a CLAUDE.md file in your root directory (~/.claude/CLAUDE.md). This file captures your global preferences for how you like to work together, no matter what type of project you are working on.

Mine includes things like:

  • Always create a plan for me to review before you start any work
  • Give me direct feedback (no hedging, no gentle suggestions)
  • Use bullet points for summaries
  • Ask clarifying questions one at a time so I can give complete answers
  • No emojis unless I explicitly ask for them
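A minimal global file along these lines might look like the sketch below. This is an illustrative reconstruction from the bullets above, not Torres's actual file; the section headings are just one way to organize it.

```markdown
# ~/.claude/CLAUDE.md — global preferences (illustrative)

## Working style
- Always create a plan for me to review before you start any work
- Give me direct feedback (no hedging, no gentle suggestions)
- Ask clarifying questions one at a time so I can give complete answers

## Formatting
- Use bullet points for summaries
- No emojis unless I explicitly ask for them
```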

Layer 2: Project-Specific Instructions

Different Projects have different rules. My task management system works differently from my writing workspace, which works differently from my code projects.

For my writing workspace, my Project CLAUDE.md tells Claude:

  • I'm the primary writer; Claude is my thought partner and editor
  • Multiple review rounds work well: content → structure → accuracy → typos
  • Always prioritize human readability over SEO
  • Reference the writing style guide when relevant

For my task management system, the project file covers:

  • How my Trello integration works
  • File naming conventions for tasks
  • How to process research papers into summaries

For my coding projects, the project file specifies:

  • Technology stack (Node.js vs. Python)
  • Testing framework (Jest for Node.js, pytest for Python)
  • Code style and conventions
  • Project architecture and directory structure
  • Which dependencies and libraries to use
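A coding-project file covering those points might look something like this. It's a sketch for a hypothetical Node.js project, not a copy of the author's file; the stack and conventions are placeholder assumptions.

```markdown
# CLAUDE.md — example Node.js project (illustrative)

## Stack
- Node.js 20 with TypeScript
- Jest for testing

## Conventions
- Source in src/, tests in tests/ mirroring the src/ layout
- Named exports only; run the linter before committing

## Dependencies
- Prefer the standard library; ask before adding a new dependency
```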

These files live in each project directory as CLAUDE.md. When I'm working in that directory, Claude automatically loads those instructions.
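Put together, the first two layers might look like this on disk (paths are illustrative). Claude loads the global file everywhere, plus whichever project file matches the directory you launch it from.

```
~/.claude/CLAUDE.md          # Layer 1: global preferences, always loaded
~/writing/CLAUDE.md          # Layer 2: writing-workspace rules
~/tasks/CLAUDE.md            # Layer 2: task-management rules
~/code/my-app/CLAUDE.md      # Layer 2: coding-project rules
```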

Layer 3: Reference Context (Pull as Needed)—The Real Power

Before we get to the third layer, let's revisit why managing context matters.

LLMs have a context window—a limit to how much information they can process at once. Even when you stay within that limit, research shows that loading too much context degrades performance.

Your CLAUDE.md files get loaded in every relevant session, so keep them concise. You don't want them filling up the context window.

For more detailed context, create separate context files. You can then reference these as you need them. You can even describe in your CLAUDE.md files what context files exist, and Claude will automatically know to use them.

You don't put everything in your global or Project files. Instead, you create separate reference files that Claude only loads when you need them.
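For example, a project CLAUDE.md can briefly describe which reference files exist so Claude knows when to read them. The file names and descriptions below are hypothetical, but they show the pattern: the CLAUDE.md stays short, and the detail lives in files that are only loaded when a task calls for them.

```markdown
## Reference context (load only when relevant)
- context/target-customer.md — who we sell to, their pains and goals
- context/products/course-a.md — value props, pricing, FAQ for Course A
- context/business-profile.md — revenue model, marketing channels

When critiquing or drafting marketing copy, read the target customer
file plus the relevant product file before responding.
```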

Okay, let's get into how I structure my context files. The rest of this article is for paid subscribers. I'll be sharing:

  • Exactly which context files I created and why.
  • How I got Claude Code to help me create them so that this wasn't a tedious task.
  • How I broke them up into small, reusable files so that Claude only gets exactly what it needs for the task at hand.
  • How I keep it all up to date.
  • Step-by-step instructions for how you can set up a similar memory system.
