Build A Streaming Data Mesh
DataSQRL empowers your domain teams to build decentralized data products autonomously by eliminating data plumbing, and it provides the tooling for a self-service data platform.
IMPORT datasqrl.tutorials.seedshop.Orders; -- Import orders stream
IMPORT time.endOfWeek; -- Import time function
/* Create new table of unique customers */
Users := SELECT DISTINCT customerid AS id FROM Orders;
/* Create relationship between customers and orders */
Users.purchases := JOIN Orders ON Orders.customerid = @.id;
/* Aggregate the purchase history for each user by week */
Users.spending := SELECT endOfWeek(p.time) AS week,
                         sum(i.quantity * i.unit_price) AS spend
                  FROM @.purchases p JOIN p.items i
                  GROUP BY week ORDER BY week DESC;
No Streaming Data Expertise Required
With DataSQRL, domain teams can quickly build streaming data products with their existing SQL knowledge and without having to learn the intricacies of complex data technologies.
How DataSQRL Works
Implement your data processing in SQL and define your data API in GraphQL.
DataSQRL compiles optimized data pipelines that are robust, scalable, and easy to maintain.
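For the SQRL script above, the data API is defined in a plain GraphQL schema. The sketch below shows what such a schema could look like for the Users and spending tables; the type names, arguments, and scalar mappings are illustrative assumptions rather than the exact schema DataSQRL generates.

# Sketch of a GraphQL API definition for the example script above.
# Type names and scalars are assumed from the SQRL tables, not generated output.
type Query {
  Users(id: Int): [Users!]
}

type Users {
  id: Int!
  spending: [spending!]
}

type spending {
  week: String!
  spend: Float!
}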
Decentralized
DataSQRL can consume data from data streams and external data systems, enabling domain teams to build data products without a central data warehouse or lake.
Self-Serve
DataSQRL builds data pipelines that expose user-friendly GraphQL APIs for easy consumption in a self-serve data platform.
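For example, a consuming team could fetch the weekly spending history through the API sketched above; the query below is hypothetical and the customer id is a placeholder.

# Hypothetical query against the sketched API; the id value is a placeholder.
query {
  Users(id: 42) {
    spending {
      week
      spend
    }
  }
}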
True Autonomy
DataSQRL eliminates the data plumbing that requires dedicated data technology expertise, thereby empowering your domain teams to build data products independently.
Why DataSQRL?
The foundational principle of data mesh architectures is domain ownership. But your domain teams don't have the data technology expertise to implement all the data plumbing that streaming data products require. DataSQRL eliminates data plumbing so domain teams can build successful data products autonomously.
Saves You Time
DataSQRL allows you to focus on your data processing by eliminating the data plumbing that buries your pipeline implementation in busywork: data mapping, schema management, data modeling, error handling, data serving, API generation, and so on.
Easy to Use
Implement your data processing with the SQL you already know. DataSQRL allows you to focus on the "what" and worry less about the "how". When SQL is not enough, import your own functions: DataSQRL makes custom code integration easy.
Fast & Efficient
DataSQRL builds efficient data pipelines that optimize partitioning, index selection, view materialization, and denormalization for fast, scalable data processing. There actually is some neat technology behind this buzzword bingo.