You’d have a hard time finding a service at scale that wouldn’t kick you off for the same kinds of things Cloudflare does. All the big players remove certain types of content from their platforms.
It is a PostgreSQL extension that you install on top of a normal PostgreSQL server, so it is not worse in any way.
Timescale works by creating a 'hypertable', which is an aggregate of many smaller 'chunk' tables. These chunks are automatically split by time or by an incrementing ID. This means that queries constrained to a particular ID or time range only have to scan a few chunks, instead of looking through the contents of the entire hypertable. [1]
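As a minimal sketch of what that looks like, assuming a hypothetical `conditions` table (the schema and names are just illustrative; create_hypertable is the documented Timescale call):

    -- enable the extension on a normal PostgreSQL server
    CREATE EXTENSION IF NOT EXISTS timescaledb;

    -- a plain table holding time-series rows (illustrative schema)
    CREATE TABLE conditions (
        time        TIMESTAMPTZ       NOT NULL,
        device_id   TEXT              NOT NULL,
        temperature DOUBLE PRECISION
    );

    -- convert it into a hypertable, automatically chunked on the time column
    SELECT create_hypertable('conditions', 'time');

    -- a query with a time predicate only touches the relevant chunks
    SELECT * FROM conditions
    WHERE time > now() - INTERVAL '6 hours' AND device_id = 'dev42';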
Timescale also offers other things like compression, which can save you up to ~96% of disk space while also improving query performance in some cases. [2][3]
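Enabling it is roughly a couple of statements (same illustrative table as above; the policy function name has varied across Timescale versions, so treat this as a sketch and check the docs for yours):

    -- mark the hypertable as compressible, segmenting compressed data by device
    ALTER TABLE conditions SET (
        timescaledb.compress,
        timescaledb.compress_segmentby = 'device_id'
    );

    -- automatically compress chunks once they are older than 7 days
    SELECT add_compression_policy('conditions', INTERVAL '7 days');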
It also has something they call 'continuous aggregates' [4], which are similar to PostgreSQL's materialized views but do not require manual refreshing - they instead update periodically through an automatic background job. There is also a feature that builds on this, called 'realtime aggregates', which lets you combine the data already materialized in a continuous aggregate with the raw data in the underlying tables that has yet to be materialized.
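A minimal sketch of one, again using the made-up `conditions` table from above:

    -- hourly averages per device, maintained by a background job
    CREATE MATERIALIZED VIEW conditions_hourly
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 hour', time) AS bucket,
           device_id,
           avg(temperature) AS avg_temp
    FROM conditions
    GROUP BY bucket, device_id;

    -- refresh the last day's worth of buckets every 30 minutes
    SELECT add_continuous_aggregate_policy('conditions_hourly',
        start_offset      => INTERVAL '1 day',
        end_offset        => INTERVAL '1 hour',
        schedule_interval => INTERVAL '30 minutes');

With the realtime behaviour enabled, a SELECT against conditions_hourly also folds in raw rows newer than the last materialization, which is the 'realtime aggregates' feature described above.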
There are a lot more things besides that, but I think that's a decent overview of the major features it brings to the table. From a dev perspective these things all make the data and the database easier to work with (especially when you're targeting time-series data). There is an API reference [5] that lists the other commands Timescale adds, if you want to see some of the other things it can help you do.
The two main things most developers will benefit from are, first, how we manage the automatic partitioning of your incoming data (hypertables), which is non-trivial to do yourself even though other tools exist for it. And because we do it with a time-based focus, we can be really efficient and smart about it.
Second, we've improved the query planner in PostgreSQL around the parts that relate to querying time-based, partitioned data, and provided special time-based functions. These improvements help you efficiently query data that time-series applications most often need. A quick example is something like "LAST()", which retrieves the most recent value for a given time-range. There are ways in SQL to do something similar (LATERAL JOINs or CTEs for instance), but they're usually slower and bulkier to maintain. When dealing with time-series data, getting the most recent value for an object is usually what you're doing the most often.
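As a rough sketch of the difference (illustrative table and column names again; last() is the Timescale aggregate, and the second query is one plain-PostgreSQL way to get the same answer):

    -- Timescale: most recent temperature per device over the last day
    SELECT device_id, last(temperature, time) AS latest_temp
    FROM conditions
    WHERE time > now() - INTERVAL '1 day'
    GROUP BY device_id;

    -- roughly equivalent plain PostgreSQL using a LATERAL join
    SELECT d.device_id, c.temperature AS latest_temp
    FROM (SELECT DISTINCT device_id FROM conditions) d,
    LATERAL (
        SELECT temperature
        FROM conditions
        WHERE device_id = d.device_id
          AND time > now() - INTERVAL '1 day'
        ORDER BY time DESC
        LIMIT 1
    ) c;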
When you add those two foundational features, everything else that @drpebcak mentioned becomes an amazing value-add that you just can't get elsewhere.
Back in 2015, I'd architected and deployed a system for a AAA game that handled 24B events/day on launch without breaking a sweat, and supported 200ms round-trip ingestion-to-aggregation SLAs with no windowing (the protocol and ingestion layer did most of the heavy lifting: sequential-ordering _guarantees_ on events, even across load balancing and connection migration, meant there was no need for windowed batch ordering)... but the scenario for which it was designed was cut and we ended up using it for just 15m slices. :eyeroll:
Still, it was used by a dozen-plus games, including a few more AAA titles, and it's still in use today; portions of the tech have been cannibalized into other products. I still get the occasional inquiry about memory fencing or memory boundaries on Console X for the 5-15μs event-generation API (improperly aligned memory could cause interlocked-increment corruption!).
Annnyways:
I had an opportunity to chat with one of the founders at Snowflake in 2017? 2018? for a few hours. I tried to convey how critical I felt true-realtime time-series engines would be moving forward, and the reception was rather lukewarm. If they had been as excited as I was, it'd have been one of the few opportunities to pull me away from my dream job.
I still feel the world will need this architecture as we start moving towards more ML/AI-driven decision making, and that the company which can get traction will be in a pivotal position moving forward.
Sometimes I wonder about the pressure I felt to shift into Data & Applied Science to stay at that org (there just didn't seem to be vertical opportunities on the dev track). I excel in this job too, and I love what I work on... but dang, sometimes I feel the architect career path had even bigger impact potential. It was a fun couple of decades. :P
The distinction they try to make is whether or not Timescale itself is the 'value add.' You can't provide a customer with a hosted TimescaleDB for them to use as the service itself, but you can provide a dashboard that stores its data in Timescale.
They were hoping to discourage him from asking for more. It’s always negotiable if the company wants to hire you, they just typically don’t want you to feel that way.
It's only negotiable because he got an offer from the next FAANG.
Recruiters are not going to assume that you can enter any FAANG at the highest possible level. It was incredibly lucky of the OP to get two offers at the same time.
You’re in charge of scheduling your job search and have influence over when that search wraps up. It probably wasn’t random that they interviewed at two FAANG companies in the same job search.
> It was incredibly lucky of the OP to get two offers at the same time.
Not sure I agree there. I think it could be a deliberate way to engineer the process to create leverage, which most savvy candidates learn to do every time they're on the market. I've been doing this for the past 6 years. It just takes discipline, casting a wide net with a lot of firms in the initial interview process, and being thorough.
Yes and no. I wanted to go to company A, and went to interview at company B as a dress rehearsal or mock process, thinking it would increase my chances at company A and maybe even provide me with some leverage. But to be honest, I didn't think I'd need to negotiate, or that I could squeeze more than a couple of grand out of it. So there you have it, lesson learned.
Honestly, yes. According to semver, a major version change is for when you make breaking API changes. If a release is backwards compatible, it wouldn't need to increment the major version.
I agree with this to some extent, but I think a text editor is a little different than GitLab. Even within the realm of text editors and IDEs, there are still successful paid options (not to mention that the author is conflating ‘open source’ with ‘free’).
With something like GitLab, people may pay for some ‘enterprise’ features, but they are also paying for support. Open-core and fully open source projects can be monetized by offering support.
Other projects are successful just by limiting who can use the open source software for free via different licenses.
This is kind of a weird statement to make. If I have photos, important documents, etc., why would I not pay 5 extra dollars a month to host them on a service that offers more durability than I can guarantee doing it myself? It’s not like it’s so prohibitively expensive that only a business could do it.
The problem with group DMs in Slack is that they are almost always better off as a channel - you can’t really add new people to a group DM. So you’re forced to decide right at the beginning whether or not to make a channel.
Group DMs can also have usability problems, and they tend to clog up Slack’s UI. I currently have 5 or 6 group DMs going that each include at least one of the same people, so now when I use the quick switcher I see all of them. Some of them even share 2 of the same people - it makes it really easy to select the wrong group.
They certainly CAN monitor all of this... but a lot of them don’t. I don’t think it really matters whether the mouse or something is yours; it’s really about the computer itself. The policies and practices vary between companies. Are you suggesting there might be an issue because they are ‘using your resources’ to do this?
Would be interesting to see the liability a company opens itself up to by monitoring an unaware employee with a webcam while they work from home, though!