Hi HN! I created Later to solve a problem I've had every day since joining a remote team: I want to keep our conversations in Slack, but don't want to intrude on my coworkers after hours when they're in different time zones.
Later has a few tricks that I continue to enjoy daily:
- Later does the time zone math for you, automatically determining when 9:00 AM is in your recipient's time zone
- Later watches for when your recipient comes online and delivers your message early if they go green before 9:00 AM (but you can disable early delivery for a message, too)
- Messages are delivered as if you sent them (as opposed to Slack's built-in /remind, which sends messages out of context from Slackbot)
- Later works in DMs and group DMs as well as public and private channels - handy for making morning announcements to a distributed team
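The time zone math in the first bullet boils down to roughly this (a simplified sketch using Python's zoneinfo, not our production code):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def next_9am_utc(recipient_tz: str, now_utc: datetime) -> datetime:
    """Return the next 9:00 AM in the recipient's zone, as a UTC instant."""
    tz = ZoneInfo(recipient_tz)
    local_now = now_utc.astimezone(tz)
    target = local_now.replace(hour=9, minute=0, second=0, microsecond=0)
    if target <= local_now:          # 9:00 AM already passed in their zone
        target += timedelta(days=1)  # so deliver tomorrow morning instead
    return target.astimezone(ZoneInfo("UTC"))
```

The nice part is that zoneinfo handles DST transitions for you, so "9:00 AM their time" stays correct year-round.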
After months of use with my team, I'm excited to launch Later for anyone to try. I'd love to answer your questions!
And so ZEIT, my favorite serverless provider, keeps getting better. Highlights:
- "sub-second cold boot (full round trip) for most workloads"
- HTTP/2.0 and websocket support
- Tune CPU and memory usage, which means even smoother scaling
And all that for any service you can fit in a Docker container - which is also how you get near-perfect dev/prod parity, an often overlooked issue with other serverless deployment techniques.
On top of all that, ZEIT has one of the best developer experiences out there today. Highly recommend trying it out.
And for the perpetual serverless haters out there: this is not a product for you, FANG developer. I think people underestimate how well serverless fits the majority of web applications in the world today. Most small-to-medium businesses would be better off exploring serverless than standing up their own k8s cluster to run what is effectively one or two websites.
I'm not a "serverless hater", but every company I've ever worked with had backend processes that were not tied to HTTP requests. I still keep actual servers around because the HTTP gateway is not the pain point. It's long-running processes, message systems, stream processing, and reporting.
That said, I look forward to the company (or side project) where "serverless" can save me from also assuming the "devops" role.
At my last gig, we were using Firebase, Google's acquired-and-increasingly-integrated serverless solution. It was straightforward to have custom GCP instances that integrated with and extended our regular serverless workflows. In that scenario, it meant the compute instances tended to be extremely simple, as they were essentially just glorified event handlers.
Interestingly, as Firebase evolved during our use, nearly all of our external-instance use cases were obsoleted by more powerful native serverless support, esp. around functions.
All of which is the best of both worlds for serverless: an easy escape hatch to custom instances, and an ever-decreasing need for that escape hatch.
Hello, I'm building a serverless platform. Could you please expand your "It's long-running processes, message systems, stream processing, and reporting" bit?
Not parent, but I have the same question; I worked in adtech and video analytics before, now with social media. It's usually a mix of some REST APIs, which already are very easy to scale and manage without using serverless, with long-running backend processes, such as:
* video encoding;
* ETL processes;
* other analytical workloads;
* long-running websocket connections with Twitter/Facebook/etc APIs.
From my perspective, serverless solves the "boring" part of making REST APIs easier to manage, which were already very easy to manage.
How would serverless be applied to, say, a python script that streams Twitter tweets using websockets?
You would probably use something like a queue[0] that takes in data from the websocket and dishes it out to lambda functions. You might also use something like Kinesis[1] or other alternatives.
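The consuming end of that pattern can be a plain Lambda handler. A minimal sketch, assuming an SQS-triggered function where each record body is a tweet serialized as JSON (the handler name and `process_tweet` step are my own placeholders):

```python
import json

def handler(event, context):
    """AWS Lambda entry point for an SQS trigger: each record is one tweet."""
    processed = 0
    for record in event["Records"]:
        tweet = json.loads(record["body"])  # SQS delivers the message body as a string
        process_tweet(tweet)
        processed += 1
    return {"processed": processed}

def process_tweet(tweet):
    # Hypothetical downstream step: store, enrich, or forward the tweet.
    return tweet.get("text", "")
```

The queue decouples the always-on stream from the on-demand functions, which is the whole trick.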
Yes, of course, or I could send it into Kafka instead (which makes more sense to me). The point is: what would a serverless process look like that doesn't have a REST API and does this long-term polling of websockets?
It depends what you're doing with that stream. Most basically, you would create a nano/micro EC2 instance that just triggers Lambda events on every new tweet. Or you could create a more intricate script that does a lot of pre-processing and then stores the results in RDS or S3, kicking off a Lambda with each new update to either of those sources.
Unless the API can stream directly into one of those sources you'd probably need a long-running process, perhaps running on a CaaS like AWS Fargate.
I guess you could argue where to draw the "serverless" line, at functions or containers, but Zeit is calling this container service "serverless", so I think Fargate would fall into the same category. I think it would make sense for Zeit to eventually support long-running containers too (looks like the current max is 30 minutes; I'm not sure how they chose that number).
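The glue for that long-running piece can be tiny. A sketch of the forwarding loop, kept backend-agnostic (`stream` and `send` are placeholders of mine; with boto3, `send` might wrap `sqs.send_message`):

```python
def forward_stream(stream, send):
    """Forward each message from a long-lived stream to a queue.

    `stream` is any iterable of message strings (e.g. a websocket reader);
    `send` enqueues one message (e.g. a wrapper around SQS send_message).
    """
    count = 0
    for message in stream:
        send(message)   # the queue fans out to Lambda invocations
        count += 1
    return count
```

Everything stateful and scalable lives behind `send`; the one "real" server just runs this loop.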
Serverless is fantastic for ETL and data analysis, especially for workloads that vary in scale (e.g. cron jobs). Feed data in, get data out, with scaling as needed.
But how do you feed data in? Usually, it's some other service on one of the big 3 cloud providers. I'm using Google for my projects these days, so it's a mix of Google Pub/Sub and Dataflow.
I think this is the issue/risk with serverless. You either get locked into one of the big 3, or you end up doing all of the ops work to run your own stateful systems. As some of the people above you said, managing and scaling the stateless HTTP components is not the hard/expensive part of the job.
Can't you use a queue service that's essentially just managed kafka/activemq/other-standard-system? I mean sure if you wanted to move off the cloud vendor you'd have to run your queues yourself, but if you're programming to the API of well-known open-source queue system then you're never going to be very locked into a particular vendor.
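One way to keep that portability honest is to put a thin seam of your own between application code and the broker client, so swapping SQS for Kafka or RabbitMQ touches one module. A sketch of the shape (all names here are mine, not any vendor's API):

```python
from typing import Callable, Protocol

class Queue(Protocol):
    """The few operations the application actually needs from any broker."""
    def publish(self, topic: str, payload: bytes) -> None: ...
    def subscribe(self, topic: str, handler: Callable[[bytes], None]) -> None: ...

class InMemoryQueue:
    """Dev/test implementation; a Kafka- or SQS-backed one has the same shape."""
    def __init__(self):
        self._handlers = {}

    def publish(self, topic, payload):
        for handler in self._handlers.get(topic, []):
            handler(payload)

    def subscribe(self, topic, handler):
        self._handlers.setdefault(topic, []).append(handler)
```

Application code depends only on `Queue`, so the vendor-specific client is quarantined in one adapter.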
The short answer is yes, you can do that, but it starts to get nuanced rather quickly. The context of this is a desire to go “serverless” and that solutions like this only give you serverless for the relatively easy parts of your stack. If your goal is to go “serverless” I take that to mean a few things listed below.
1) you don’t have to manage infrastructure
2) you don’t have to think about infrastructure (what size cluster do i need to buy?)
3) you pay for what you use at a granular level. (GB stored, queries made, function invocations, etc)
4) scale to zero (when not in use, you don’t pay for much of anything)
Most things don’t hit all of these points, but typical managed services hit very few of them. Sure, I can use a managed MySQL, but it only satisfies one of the four.
How does one get locked in when it’s a simple function in X language? Seriously, serverless is just an endpoint they provide. You write the code and they handle everything else.
Because the function is the stateless, easy part. To make any non-trivial system in a serverless way, you have to use their proprietary stateful systems. In my case: Google Pub/Sub, Google Dataflow, Google Datastore, Spanner, etc. That’s where the lock-in happens.
Right, because serverless is actually just a cover for "de-commoditizing" the cloud services that companies like AWS built to commoditize datacenters. You hit the nail on the head. It's not completely useless to help less technical people solve the problems that folks like you and I consider "the easy part" and so people will find a use for it.
But the primary utility of serverless is an attempt at solving Amazon's problem of being commoditized by containers.
I’d say something more nuanced. Serverless is increasing commoditization of one layer of the stack at the cost of de-commoditizing a higher layer of the stack. This is what makes it a hard decision to grapple with. You’re getting very real benefits from it, and potentially paying a very real cost sometime down the road when being locked into the proprietary system bites you.
I think all that is still possible in Serverless. I'm not a serverless architect or anything, but that's typically handled by various serverless queues and related event systems.
Break your operation into a series of discrete tasks. For 99% of use cases, if you have a discrete task that takes 5+ minutes, there's a problem. In most cases, it can be split up.
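Concretely, "split it up" usually just means chunking the input so each invocation stays short. A trivial sketch:

```python
def chunk(items, size):
    """Split one big job's input into fixed-size chunks, one per invocation."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```

Each chunk then becomes its own function invocation (fanned out via a queue or step orchestrator), so no single task runs long.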
I know of 1 large scale web application that is 100% driven by serverless technologies.
The user experience (as an end user using the website) is pretty terrible IMO. It often takes multiple seconds for various areas of the site to load (bound by the network). It's also super Javascript heavy and just doesn't feel good even on a fast desktop workstation.
Authentication is also a nightmare from a UX point of view. Every time I access the site it makes me click through an intermediate auth0 login screen.
I really hope this doesn't become a common trend to run web apps like this. It would make the web nearly unusable.
I'm not a "serverless" hater either. I want technology to move forward to make my life as a developer better. I don't care what tech it is, but right now I just don't get that impression from serverless set ups (both from the developer and end user POV).
Confused by why those issues are tied to serverless.
The only one that seems like it could even be related is the page loads being slower. But I'd be surprised if the issue were the serverless aspect of the architecture and not just that this software is apparently shoddy overall.
Are you by chance referring to A Cloud Guru? I have the same auth0 and loading experience when using their site, and I suspect it's built on Lambda, but I'm not sure.
It is indeed. They can't stop talking about it in their videos. That said, I used their resources, along with some other online resources to get my AWS Associate SA Certificate and I can say I was pretty pleased with the content.
I know of 1 large scale web application that is 100% driven by serverless technologies.
In the mid-90’s it was common to run httpd via inetd. That lasted until HTTP 1.1 came along. I see people running websites out of Lambda now and just get a sense of deja vu. We realised this start-on-demand style didn’t scale for highly interactive use cases 20+ years ago!
Count me in as an over-engineering and selling-things-engineers-do-not-need hater.
ZEIT's very list of benefits mentions four things: the first two are clear over-engineering bloat (plus premature optimization), and the second two only exist because of serverless in the first place. They were (and nothing more was mentioned in the benefits section ;)):
* Clusters or federations of clusters
* Build nodes or build farms
* Container registries and authentication
* Container image storage, garbage collection and distributed caching
I don't know why everybody can't see the fakeness of the argument 'you need clusters, farms and hundreds of servers'. You don't. Actually, you only do (contrary to your statement) if you're FANG.
Why? Because look at real-world HUGE examples. E.g. Stack Overflow (and no, your company/startup/whatever will not reach their level of traffic) can do everything on literally a dozen servers, and they've admitted that in some scenarios one web server was enough. Source: https://nickcraver.com/blog/2016/02/17/stack-overflow-the-ar...
Companies 10x-100x smaller than that would do perfectly fine on 2-4. There is no need for all this over-engineering.
The ultra funny thing is ZEIT selling 'deployment self-heal' as the old, well-known (Windows, anybody?) and ridiculed recipe: it will work after a restart. Right. It's like shutting off the car engine, getting out, getting back in, and starting again. This is 21st-century engineering :)
Facebook, Amazon, Netflix, Google. Jim Cramer created the acronym to name these four companies as high-performing tech stocks years ago [0]. But now, as in the usage above, it is more of a catch-all name for the big tech companies. And in its new role as a name for a category of companies, people confuse whether the A is Amazon or Apple, and also implicitly include other equivalent companies.
Hey there - I lead Twilio's documentation team. There's nothing worse than frustrating docs, I'm sorry ours are coming up short for you.
I'd love to talk more about it and hear how we can serve you better. And it looks like you work with Django a lot? So do I - maybe I can help if you've got a specific project in mind.
My email is abaker@twilio.com - would love to talk more about all of this.