At Bot Research we spend a lot of time observing how people actually use AI systems, rather than how those systems are described in product demos or policy papers. Over the past two years, one pattern has become impossible to ignore. People are not struggling with models. They are struggling with repetition.
The quiet problem nobody talks about
Most AI conversations start the same way. You explain who you are, what you are doing, your constraints, preferences, background, and tone. You correct misunderstandings and restate assumptions. Then you do it again, and again, across different tools, different models, different days.
The industry's response has mostly been to talk about memory: vendor memory, chat history, or long context windows. That sounds useful, but in practice it creates new problems. Memory lives inside platforms you do not control. It is opaque and not portable. And when something goes wrong, you cannot easily show what the system knew at the time.
From a research perspective, this is not a capability issue. It is a context ownership issue.
Context is the real asset
When we looked closely at how experienced users work with AI, we noticed something important. The valuable part was not the prompt, and it was not the output. It was the way people learned to explain things to machines: how they framed problems, what they chose to include, what they deliberately left out, and the language they refined over time.
That context was being recreated endlessly and then discarded. There was no good place for it to live.
The tool we kept rebuilding
Internally, we started saving fragments: short explanations, background notes, cleaned summaries, and reusable instructions written in plain language. We moved them between tools by copying and pasting, stored them in documents, and versioned them by hand.
Over time, a simple realisation emerged. If you have explained something once, you should not have to explain it again. That idea kept resurfacing regardless of the project or the model we were using. Eventually, we stopped treating it as a workaround and started treating it as a product.
Why we built Packs
Packs is our attempt to formalise that behaviour. At its core, Packs lets you take anything useful (an AI answer, a file, a link, or a note) and turn it into a reusable block of context written in markdown. Those blocks can be edited, combined, and reused across different AI tools. Nothing is hidden, nothing is locked to a single model, and nothing relies on a platform remembering things on your behalf.
It is deliberately simple. A Pack is just something you do not want to explain twice.
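To make that concrete, here is one way a Pack might look. This is purely illustrative: the content and headings below are hypothetical examples, not a required schema, since a Pack is just a block of markdown.

```markdown
<!-- A hypothetical Pack: context you would otherwise retype every session -->
# About my newsletter

- I write a weekly newsletter for independent game developers.
- Audience: solo developers and small studios, mostly technical.
- Tone: direct and practical; no marketing language.
- Use British spelling throughout.

Never suggest engagement or growth tactics; I have deliberately opted out of them.
```

Because it is plain markdown, the same block can be pasted into any chat window or prepended to a prompt programmatically, with nothing vendor-specific about it.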
Why we decided to release it
We did not originally plan to release Packs as a public product, but the same problem kept appearing in conversations with researchers, builders, consultants, and creatives. Everyone had their own version of the same workaround and no one had a clean solution. The problem is only going to get worse.
As AI systems become more embedded in daily work, the question of what context they operate under becomes more important, not less: who controls it, how it is reused, and how it moves between systems. We believe people should own that layer. Releasing Packs is less about launching a product and more about making that layer explicit.
A small, deliberate release
Packs is not designed to replace AI tools. It sits upstream of them. It does not try to be clever and it does not promise intelligence. It simply gives people a place where useful context can accumulate instead of evaporate.
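As a rough illustration of what sitting upstream means, combining Packs with a task prompt can be as simple as concatenating text. The sketch below is not part of Packs itself; the file names and the commented-out send step are hypothetical.

```python
from pathlib import Path

def build_prompt(pack_paths: list[str], task: str) -> str:
    """Prepend reusable context blocks (Packs) to a task prompt.

    Each Pack is a markdown file, so combining them is plain
    text concatenation with no model- or vendor-specific logic.
    """
    context = "\n\n".join(
        Path(p).read_text(encoding="utf-8") for p in pack_paths
    )
    return f"{context}\n\n---\n\n{task}"

# Hypothetical usage: the same Packs work with any AI tool,
# because the result is ordinary text.
prompt = build_prompt(
    ["packs/about_me.md", "packs/newsletter_voice.md"],
    "Draft this week's issue introduction.",
)
# send_to_model(prompt)  # whichever model or tool you use today
```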
We are releasing it quietly and intentionally. If it helps people stop repeating themselves, it is doing its job. If it helps people realise that their interaction with AI is an asset worth keeping, even better.
That is why Packs exists.