From Tool to Teammate - The Real Reason Your Custom GPTs, Artifacts, and Digital Twins Feel So Collaborative
- Justin Parnell
- Jun 19
- 5 min read

How Your Data and Feedback Create Smarter AI and Why It Feels So Collaborative
You've probably noticed it. The more you interact with a Large Language Model (LLM), feed it your documents, or give it feedback, the better it seems to get at giving you what you want. If you upload your company's Google Drive, provide a specific style guide, or use those "thumbs up/thumbs down" buttons (a form of Reinforcement Learning from Human Feedback, or RLHF), the LLM's responses become more relevant, adopt the right tone, and generally feel more… aligned. It might even start to feel like the AI is a new colleague, intuitively understanding your needs.
I’ve seen many examples of folks creating Custom GPTs, AI Teammates, Digital Twins, Gems and Artifacts that are meant for specific Jobs to Be Done. Part of that process involves prompting, data uploads and strategic consideration to align your new AI partner to your desired output, which works incredibly well. But have you stopped to think about how these instructions and data inputs are influencing the outputs you get from your customized AIs?
While the sense of collaboration with your AI is a powerful outcome, understanding the "magic" behind it is key to creating this alignment intentionally. This process is rooted in how your actions refine the LLM's actual core task: Next Token Prediction (NTP). Let’s take a deep dive into what’s really happening and learn how to influence your custom AIs even more effectively.
The LLM's Core Mission of Predicting the Next Word
At its heart, an LLM is a sophisticated prediction engine. Its fundamental training teaches it to predict the next "token" (a word or part of a word) in a sequence, given the preceding tokens. When this is done on a massive scale with vast amounts of text data, the LLM learns grammar, context, and even a degree of world knowledge. This initial training is general, however, resulting in a system that is more "grown" than "built"; its complex cognitive algorithms emerge organically rather than being explicitly programmed. To make it truly useful for your specific needs, it needs further guidance, and understanding its core mission is the first step to learning how to provide it.
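To make that concrete, here is a minimal, illustrative sketch of the prediction loop in Python. The `toy_model` below is a random stand-in, not a real LLM; the point is the shape of the loop: score every possible next token, pick one, append it, repeat.

```python
import numpy as np

def softmax(logits):
    # Turn raw scores into a probability distribution over the vocabulary
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

def generate(model, prompt_tokens, n_new_tokens):
    """Greedy next-token prediction: repeatedly choose the most likely token."""
    tokens = list(prompt_tokens)
    for _ in range(n_new_tokens):
        logits = model(tokens)                # one score per vocabulary entry
        probs = softmax(logits)
        tokens.append(int(np.argmax(probs)))  # greedy choice; sampling also works
    return tokens

rng = np.random.default_rng(0)
toy_model = lambda tokens: rng.normal(size=50)  # stand-in for a real LLM
print(generate(toy_model, [1, 2, 3], 5))
```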
Finding "Concepts" Within the AI
To understand how your data helps, we need to look at how LLMs represent information. It's not just about individual "neurons" in its network. In fact, trying to interpret single neurons is often frustrating due to a phenomenon called polysemanticity, where one neuron might activate for multiple, unrelated ideas. This makes precise control difficult.
The goal of much AI research, like the work done by Chris Olah and researchers at Anthropic, is to find or encourage monosemantic features. Think of a monosemantic feature as a specialized component in the LLM that consistently represents one single, clear concept. Researchers use techniques like Sparse Autoencoders (SAEs) to systematically identify these interpretable "features". For a user, knowing these features exist is the difference between talking to a black box and collaborating with a system you can thoughtfully guide.
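For the curious, here is a toy sketch of the sparse autoencoder idea, assuming PyTorch. Real interpretability SAEs are far larger and trained on actual model activations; this only shows the shape of the technique: expand dense activations into a wider, mostly-zero feature space, then reconstruct the input from it.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Maps dense activations to a wider, sparse feature space and back."""
    def __init__(self, d_model=512, d_features=4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations):
        features = torch.relu(self.encoder(activations))  # mostly-zero features
        return features, self.decoder(features)

sae = SparseAutoencoder()
acts = torch.randn(8, 512)  # stand-in for a batch of model activations
features, recon = sae(acts)
# Train to reconstruct well while keeping features sparse (L1 penalty)
loss = ((recon - acts) ** 2).mean() + 1e-3 * features.abs().mean()
```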
How Your Data and Feedback Sharpen the LLM's Focus
This is where your understanding becomes practical. By knowing that the LLM operates on these internal concepts, you can consciously provide data and feedback to help it learn which "features" are most relevant for the outputs you desire.
Uploading Your Google Drive (or other document troves)
What you're doing: Providing a rich dataset reflecting your specific domain and communication style.
What's happening in the LLM: The LLM processes this data. It's not just memorizing; it is learning which causal circuits (pathways connecting different features) lead to outputs that look like your data, refining its Next Token Prediction to match your context.
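In most products this works through the context window rather than retraining: relevant passages from your documents are retrieved and prepended to the prompt, so the next-token prediction is conditioned on your material. A rough sketch of that pattern (the `embed` function here is a hash-based stand-in for a real embedding model):

```python
import numpy as np

def embed(text):
    # Stand-in for a real embedding model: deterministic pseudo-random vector
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

def build_context(question, documents, k=2):
    """Prepend the k documents most similar to the question, so the model's
    next-token prediction is grounded in your data."""
    q = embed(question)
    sim = lambda d: np.dot(embed(d), q) / (np.linalg.norm(embed(d)) * np.linalg.norm(q))
    top = sorted(documents, key=sim, reverse=True)[:k]
    return "Reference material:\n" + "\n\n".join(top) + f"\n\nQuestion: {question}"
```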
Providing a Style Guide or Alignment Document
What you're doing: Explicitly telling the LLM the rules for its output.
What's happening in the LLM: Your style guide acts as a set of instructions for which conceptual "knobs" to turn up or down. Through a technique known as feature steering, the LLM can be guided to enhance the activation of features associated with "formality" and suppress those linked to "informal tone".
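Mechanically, feature steering amounts to nudging the model's internal activations along a concept direction during generation. A minimal sketch of that arithmetic (the vectors below are random stand-ins for directions an SAE might identify):

```python
import numpy as np

def steer(activations, feature_direction, strength):
    """Nudge a hidden state along a concept direction.
    Positive strength amplifies the concept; negative suppresses it."""
    unit = feature_direction / np.linalg.norm(feature_direction)
    return activations + strength * unit

hidden = np.random.randn(512)     # stand-in for one layer's activations
formality = np.random.randn(512)  # stand-in for a "formality" feature direction
steered = steer(hidden, formality, strength=4.0)      # turn the knob up
suppressed = steer(hidden, formality, strength=-4.0)  # turn it down
```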
Using RLHF
What you're doing: Giving the LLM direct feedback on the quality of its generated text, whether through a reward mechanism or through additional alignment prompts and constitution documents.
What's happening in the LLM: This feedback helps refine the LLM's prediction strategy. A "thumbs up" reinforces the pattern of feature activations that led to the good response, effectively creating new, more refined pathways for its Next Token Prediction.
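In its simplest form, that reinforcement looks like a reward-weighted update on the tokens the model just produced. Below is a REINFORCE-style toy in PyTorch; production RLHF involves reward models and algorithms like PPO, but the signal flows the same direction:

```python
import torch

def feedback_loss(token_logprobs, reward):
    """Thumbs-up (reward=+1) makes the generated tokens more likely;
    thumbs-down (reward=-1) makes them less likely."""
    return -(reward * token_logprobs.sum())

# Log-probabilities the model assigned to the tokens it generated
token_logprobs = torch.tensor([-1.2, -0.4, -2.0], requires_grad=True)
loss = feedback_loss(token_logprobs, reward=1.0)
loss.backward()  # gradients now push these token choices to be more probable
```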
Leveraging These Principles Today: Custom GPTs, Gems, and Artifacts
Understanding the theory is one thing, but you can apply these powerful principles of data-driven guidance and feature steering in your daily interactions with AI. When you grasp the principles of interpretability, you can use today's tools not just as a user, but as a sophisticated operator.
Custom GPTs: When you create a custom GPT, you are directly applying these concepts. The specific instructions you write act like a sophisticated style guide, steering the model to adopt a certain persona. By uploading your own documents as a knowledge base, you are providing the rich, specific data that helps the LLM learn the causal circuits relevant to your world, making its answers more accurate and context-aware.
Gems: Similarly, features like "Gems" are a user-friendly form of feature steering. When you select a Gem—whether it's a "Creative Writing Coach" or a "Meticulous Proofreader"—you are choosing a pre-packaged set of instructions that guide the LLM to enhance the activation of specific features related to that skill, instantly aligning its tone and focus.
Artifacts: Tools like "Artifacts" provide a tangible workspace for the feedback loop to happen. When you ask an AI to generate code or a business plan and it appears in an interactive window, you are creating a space for iterative refinement. Each time you provide feedback and the model updates the artifact, you are providing the same kind of direct signal as a 'thumbs up/thumbs down' rating, helping the LLM refine its output to better match your goal, much as in RLHF.
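Abstractly, that workflow is a simple refinement loop. A sketch with stubbed-in callables (`generate` and `get_feedback` are hypothetical stand-ins for the model and the user):

```python
def refine_artifact(generate, get_feedback, goal, max_rounds=5):
    """Iterative refinement: each round of feedback acts like a
    thumbs-up/thumbs-down signal on the current draft."""
    draft = generate(goal, previous=None, feedback=None)
    for _ in range(max_rounds):
        feedback = get_feedback(draft)  # e.g. "make it tighter"; None = accept
        if feedback is None:
            break
        draft = generate(goal, previous=draft, feedback=feedback)
    return draft

# Toy stand-ins so the loop runs end to end
drafts = iter(["v1", "v2", "v3"])
generate = lambda goal, previous, feedback: next(drafts)
replies = iter(["tighter", None])
get_feedback = lambda draft: next(replies)
print(refine_artifact(generate, get_feedback, "write a business plan"))
```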
Creating New Effective Habits for Next Token Prediction
Crucially, these interventions don't usually mean the LLM is rewriting its fundamental code. Instead, they modify how the LLM uses its learned knowledge. They help assign new "semantic values" or importance to certain features and patterns in the context of your requests. Your awareness of this process allows you to provide clearer signals, making you a more effective "teacher" for the model. Some advanced techniques can even use gradient-based optimization at inference time to guide a single generation towards a desired conceptual target, creating temporary, on-the-fly optimization trajectories within the model's activation space. The LLM learns to prioritize certain internal "concepts" and "circuits" that are more likely to produce the output you've signaled is good.
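A toy version of that last idea: treat a copy of the hidden state as an optimizable variable and take a few gradient steps toward a concept direction (illustrative only; real methods do this inside a full forward pass through the model):

```python
import torch

def steer_toward_concept(hidden, concept, steps=10, lr=0.1):
    """Gradient-based inference-time steering: adjust a copy of the hidden
    state so it aligns more closely with a target concept direction."""
    h = hidden.clone().requires_grad_(True)
    target = concept / concept.norm()
    for _ in range(steps):
        # Loss is low when the hidden state points along the concept direction
        loss = -torch.nn.functional.cosine_similarity(h, target, dim=0)
        loss.backward()
        with torch.no_grad():
            h -= lr * h.grad
            h.grad.zero_()
    return h.detach()

hidden = torch.randn(512)   # stand-in for an activation vector
concept = torch.randn(512)  # stand-in for a desired conceptual target
steered = steer_toward_concept(hidden, concept)
```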
The Feeling of Understanding is Deep Alignment
I’ve often seen other leaders in this space writing about thought partners, digital teammates, and digital twins in ways that speak to a deeper feeling of connection with an LLM, and rightfully so. These folks have put in the work to understand how their techniques generate this kind of tangible depth in the technology. One of my collaborators, Liza Adams, has a great piece out right now about how she advises her clients to create truly special alignment with their “AI teammates,” which I highly recommend reading.
Ultimately, by understanding that your prompts and data are directly influencing the model's internal "debate" for its next token prediction, you become a participant in its reasoning process. The feeling of the LLM "getting you" is no longer just a magical sensation; it becomes the direct result of your informed guidance. This deep alignment is what transforms the tool into a true collaborator, and it’s a partnership that becomes more powerful the better you understand what’s happening under the hood.