Rethinking data architecture: Replayability, decoupling, continuous improvement
Hey, I work at a new company called Flowcore, and we think we've found a way to make data systems less tightly wound, less scary, and a lot more accessible for AI-powered querying. I'd love to get your opinion. We have documentation too.
What is Flowcore?
Flowcore is a platform that captures all your data events as they happen, and keeps them independent from your main database. Instead of locking your data into fixed tables and rigid schemas, Flowcore gives you an event history that stays flexible as your systems evolve. You still have your database, but Flowcore becomes the place where you shape, improve, and rethink your data without limitations.
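To make that concrete, a captured event is just a small immutable record. Here's roughly what one could look like in TypeScript; the field names are illustrative, not our exact schema:

```ts
// Illustrative shape of a stored event. Field names are assumptions
// for this example, not Flowcore's actual wire format.
interface StoredEvent<T> {
  eventId: string;   // unique id assigned on ingestion
  eventType: string; // e.g. "user.created"
  timestamp: string; // ingestion time, ISO 8601
  payload: T;        // your domain data, kept exactly as it arrived
}

const example: StoredEvent<{ userId: string; email: string }> = {
  eventId: "evt-0001",
  eventType: "user.created",
  timestamp: "2024-05-01T12:34:56Z",
  payload: { userId: "u-123", email: "ada@example.com" },
};
```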
Why use Flowcore?
With Flowcore, your data architecture works like well-designed code. Each flow listens to your events, transforms them, and sends them where they need to go — fully decoupled from your database or downstream systems. Flowcore uses an event-sourcing approach, backed by cold storage, to retain a complete history of your data, so when you change your flow logic, you don't have to rewrite scripts or run risky migrations — you just update your flow and replay your event history at the click of a button. That means you're not only fixing mistakes quickly, you're continuously improving your automations over time. It's like applying SOLID principles to your data: safe to change, easy to extend, and naturally resilient. And you'll find your data feels a lot less scary when it's decoupled from your database as the single source of truth.
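To give a feel for it, a flow is conceptually just a small handler: an event comes in, you reshape it, and you push it wherever it needs to land. Here's a minimal sketch; the onEvent signature and the insertIntoReadModel helper are illustrative stand-ins, not our actual SDK:

```ts
// Minimal sketch of a flow handler. onEvent's signature and the
// insertIntoReadModel helper are illustrative, not the real SDK.
type OrderPlaced = { orderId: string; amountCents: number; currency: string };

// Stand-in for whatever writes to your downstream read model.
declare function insertIntoReadModel(table: string, row: object): Promise<void>;

export async function onEvent(event: {
  eventType: string;
  payload: OrderPlaced;
}): Promise<void> {
  if (event.eventType !== "order.placed") return; // only handle this event type

  // Transform: derive the shape the downstream read model needs.
  await insertIntoReadModel("orders_view", {
    order_id: event.payload.orderId,
    amount: event.payload.amountCents / 100, // store dollars, not cents
    currency: event.payload.currency,
  });
}
```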
But what really unlocks the power is when you bring AI into the mix. With the Flowcore MCP Server, you can explore and understand your data just by asking questions in plain language. And because Flowcore keeps all your data in one place, not scattered across disconnected systems, you can instantly gain insights even from messy or complex data. The best part? You don't have to design the perfect schema from day one. Flowcore's replayability means you can figure out what you care about later, spin up a new table in minutes, and replay your history straight into it. So when you plug in AI, you're not limited to whatever data you had at the time — you can continuously shape your data into exactly what your AI needs to answer even your most abstract questions.
So the feature we like the most is replayability. Because every change to your data (creating, updating, or deleting an entry) has been captured in the event source, you can, at the push of a button, derive entirely new read models, backfill missing data, or apply improved logic to your entire historical dataset. Whether you realize you forgot a field, need to enrich your data for AI use, or simply want to reshape how your data is stored, you don't have to start from scratch. You just update your flow and replay your events through it.
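A replay is conceptually just streaming that history back through your updated logic. A rough sketch (streamEvents and handleEventV2 are hypothetical names, not our real API):

```ts
// Conceptual replay: stream the entire event history, in order, through
// the improved flow logic into a fresh read model. streamEvents and
// handleEventV2 are hypothetical names, not Flowcore's actual API.
declare function streamEvents(opts: { from: "beginning" }): AsyncIterable<unknown>;
declare function handleEventV2(event: unknown): Promise<void>; // your updated flow logic

async function replayIntoNewReadModel(): Promise<void> {
  for await (const event of streamEvents({ from: "beginning" })) {
    await handleEventV2(event); // historical events get the new treatment too
  }
}
```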
Here are the documentation (https://docs.flowcore.io) and the website (https://flowcore.com).
Hey, just want to chime in as a colleague working on Flowcore — I’ve been super excited about what we’re building, especially how the platform flips the usual data modeling process on its head.
In most setups, you're stuck making big up-front decisions about schema, pipelines, and integrations, and those decisions usually get brittle fast. Flowcore is about pushing those decisions downstream, so you can iterate safely — the event history is the source of truth, not the current state in your DB.
What that unlocks is wild: you can reshape your data or change your business logic after the fact, and just replay events into new read models or AI workflows. And honestly, it’s refreshing to stop treating data pipelines like fragile glass sculptures and start treating them more like software — testable, modular, composable.
We’re still early, so feedback is gold — if anything seems confusing, missing, or off, we’d really love to hear it. And if you're working with messy or evolving data, we’d love to know what your biggest pain points are.
Happy to answer any technical questions too!
Somehow I feel this is a data modelling tool, but it looks like it can also modify source data? What's the use case for modifying the source data? I'd prefer to keep everything as is and do transformations afterwards to clean up.
I'd also recommend brushing up the docs. There's not a lot there (but it's already at v2?), and the Flowcathon seems to be the first thing a developer needs, but it requires an account.
It's basically immutable event sourcing with built-in replay and read model flexibility, without needing to build the whole infra stack yourself. You just send an event via a webhook or filehook to the "Data Core", and then, via our CLI or the UI, you can stream the data from the Data Core, in the order it came in, anywhere you want.
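For a rough idea of the ingestion side, pushing one event could look something like the snippet below. The URL shape and auth header are placeholders I'm using for illustration; the docs (https://docs.flowcore.io) have the real contract:

```ts
// Rough sketch of pushing one event into a Data Core over the webhook.
// The URL and auth header below are placeholders for illustration;
// check https://docs.flowcore.io for the actual endpoint and auth scheme.
async function sendEvent(): Promise<void> {
  const res = await fetch(
    "https://<webhook-endpoint>/my-tenant/my-data-core/user-events/user.created",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: "Bearer <api-key>", // placeholder credential
      },
      body: JSON.stringify({ userId: "u-123", email: "ada@example.com" }),
    },
  );
  if (!res.ok) throw new Error(`Ingestion failed: ${res.status}`);
}
```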
But in an actual application you would need events to be acted on instantly. When creating a user, for example, you'd want it stored in the DB immediately. This is where our "Transformers" come in.
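A Transformer is conceptually just a handler that persists the event the moment it arrives. A sketch, assuming a Postgres read model (the handler signature here is a stand-in, not the actual Transformer API):

```ts
// Sketch of a Transformer: react to an event as it lands and persist it
// to the application's DB. The handler signature is a stand-in, not the
// actual Transformer API; assumes a Postgres read model via "pg".
import { Pool } from "pg";

const db = new Pool(); // connection settings come from PG* env vars

export async function onUserCreated(event: {
  payload: { userId: string; email: string };
}): Promise<void> {
  await db.query(
    "INSERT INTO users (id, email) VALUES ($1, $2) ON CONFLICT (id) DO NOTHING",
    [event.payload.userId, event.payload.email],
  );
}
```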
Hey, thank you for taking the time to look at our docs :heart:. The event history is append-only, so we keep the raw data immutable. When we talk about replaying, we mean reprocessing the events through improved flow logic or building new read models, not changing the original events themselves.
You mentioned Flowcathons; those are actually just little workshops we ran for some local tech companies to try out the platform :)