Being able to ingest a lot of data quickly is definitely useful, especially high-cardinality data. But an unbounded stream of data will eventually fill any finite amount of storage. I'm wondering what QuestDB's story for data aggregation and cleanup looks like?
Aggregation is also optimised quite a bit via SIMD and map-reduce; aggregates run about as fast as the “where” predicates. Aggregation keyed on multiple fields is not as well optimised yet. I’d also suggest trying our demo site (free and fully open) to see how the queries you use perform.
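For instance, a time-bucketed aggregation keyed on one field looks like this (the table and column names here are just illustrative):

    -- hourly average price per symbol; assumes a 'trades' table
    -- with a designated timestamp column
    SELECT timestamp, symbol, avg(price)
    FROM trades
    SAMPLE BY 1h;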
Cleanup is semi-manual for now. Time partitions can be removed or detached via SQL. We’re working on automating that.
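For example, with a table partitioned by day (table name is illustrative; exact syntax may vary by version):

    -- remove a daily partition outright
    ALTER TABLE metrics DROP PARTITION LIST '2021-01-01';

    -- or detach it so the data can be archived and re-attached later
    ALTER TABLE metrics DETACH PARTITION LIST '2021-01-01';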
> Time partitions can be removed or detached via SQL. We’re working on automating that.
Cool! Will that take the form of continuous queries that can be used for downsampling?
I'm working on load testing and monitoring tools. Since either can produce enough metrics to overflow available storage, the downsampling story ends up being as important as write speed for me. I imagine that's true of a lot of metric database scenarios: what happens if they go on...forever?
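Until that lands, I'd expect to script something like this by hand (the table names here are made up):

    -- roll raw per-second samples into one-minute averages;
    -- the raw partitions would then be dropped separately
    INSERT INTO metrics_1m
    SELECT timestamp, name, avg(value)
    FROM metrics
    SAMPLE BY 1m;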