tuckerconnelly's comments

Hey I run a little recruiting startup, and we have a few roles at fintech companies in NYC--wanna send me your LinkedIn + resume?


Yeah based on my in-laws, Chilean seems understandable, though they tend to speak fast.

Argentine Spanish is the strange one, due to the heavy Italian influence.


I saw an interesting video showing Argentines in the 90s vs Argentines today, both in BA with more or less the same age and status, and the former group had way more of that "Italian" sounding accent. I think it's going away or softening over time. A shame, because I like it.


I can offer one data point. This is from purely startup-based experience (seed to Series A).

A while ago I moved from microservices to a monolith because they were too complicated and had a lot of duplicated code. Without microservices there's less need for a message queue.

For async stuff, I used RabbitMQ for one project, but it just felt...old and over-architected? And a lot of the tooling around it (celery) just wasn't as good as the modern stuff built around redis (bullmq).

For multi-step, DAG-style processes, I prefer to KISS and just do that all in a single, large job if I can, or break it into a small number of jobs.

If I REALLY needed a DAG thing, there are tools out there that are specifically built for that (Airflow). But I hear it's difficult to debug issues in, so I'd avoid it unless absolutely necessary.

I have run into scaling issues with redis, because its multi-node architectures are just ridiculously over-complicated, and so I stick with single-node. But sharding by hand is fine for me, and works well.
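
Roughly what I mean by sharding by hand, as a minimal sketch with redis-py (the host names and key scheme are made up, not from a real deployment):

    import zlib
    import redis

    # One client per single-node Redis instance (hosts are illustrative).
    SHARDS = [
        redis.Redis(host="redis-0.internal", port=6379),
        redis.Redis(host="redis-1.internal", port=6379),
        redis.Redis(host="redis-2.internal", port=6379),
    ]

    def shard_for(key: str) -> redis.Redis:
        # crc32 is deterministic across processes, unlike Python's built-in hash().
        return SHARDS[zlib.crc32(key.encode()) % len(SHARDS)]

    shard_for("user:42").set("user:42:last_seen", "2024-01-01")

The obvious trade-off is that adding a shard means rehashing keys, but at my scale that's a rare, acceptable migration.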


To your comment on Airflow, I’ve been around that block a few times. I’ve found Airflow (and really any orchestration) to be the most manageable when it’s nearly devoid of all logic, to the point of DAGs being little more than a series of function or API calls, with each of those responsible for managing state transfer to the next call (as opposed to relying on orchestration to do so).

For example, you need some ETL to happen every day. Instead of having your pipeline logic inside an airflow task, you put your logic in a library, where you can test and establish boundaries for this behavior in isolation, and compose this logic portably into any system that can accept your library code. When you need to orchestrate, you just call this function inside an airflow task.
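
A minimal sketch of what that looks like (module and function names are made up; the point is that the DAG file stays a thin shell around library calls):

    from datetime import datetime
    from airflow.decorators import dag, task

    # All the real logic lives in an importable, unit-testable library.
    from my_etl_lib import extract_orders, transform_orders, load_orders

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def daily_orders_etl():

        @task
        def extract():
            return extract_orders()        # returns e.g. a path or partition id

        @task
        def transform(raw_ref):
            return transform_orders(raw_ref)

        @task
        def load(clean_ref):
            load_orders(clean_ref)

        load(transform(extract()))

    daily_orders_etl()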

This has a few benefits. You now decouple, to a significant extent, your logic and state transfer from your orchestration. That means if you want to debug your DAG, you don’t need to do it in Airflow. You can take the same series of function calls and run them, for example, sequentially in a notebook and you would achieve the same effect. This also can reveal just how little logic you really need in orchestration.

There are some other tricks to making this work really well, such as reducing dependency injection to primitives only where possible, and focusing on decoupling logic from configuration. Some of this is pretty standard, but I’ve seen teams not have a strong philosophy on this and then struggle with maintaining clean orchestration interfaces.


Helpful comment! If I could pick your brain...

I'm looking at a green field implementation of a task system, for human tasks - people need to do a thing, and then mark that they've done it, and that "unlocks" subsequent human tasks, and near as I can tell the overall task flow is a DAG.

I'm currently considering how (if?) to allow for complex logic about things like which tasks are present in the overall DAG - things like skipping a node based on some criteria (which, it occurs to me in typing this up, can benefit from your above advice, as that can just be a configured function call that returns skip/no-skip) - and, well... thoughts? (:


I think there are some questions to ask that can help drive your system design here. Does each node in the DAG represent an event at which some complex automated logic would happen? If so, then I think the above would be recommended, since most of your logic isn’t the DAG itself, and the DAG is just the means of contextually triggering it.

However, if each node is more of a data check/wait (e.g. we’re on this step until you tell me you completed some task in the real world), then it would seem that rather than your DAG orchestrating nodes of logic, the DAG itself is the logic. In this case, I think you have a few options, though Airflow itself is probably not something I would recommend for such a system.

In the case of the latter, there are a lot of specifics to consider in how it’s used. Is this a universal task list, where there is exactly one run of this DAG (e.g. tracking tasks at a company level), or would you have many independent runs of this (e.g. many users use it)? Are runs of it regularly scheduled (e.g. users run it daily, or as needed)?

Without knowing a ton about your specifics, a pattern I might consider could be isolating your logic from your state, such that you have your logical DAG code, baked into a library of reusable components (a la the above), and then allowing those to accept configuration/state inputs that allow them to route logic appropriately. As a task is completed, update your database with the state as it relates to the world, not its place in the DAG. This will keep your state isolated from the logic of the DAG itself, which may or may not be desirable, depending on your objectives and design parameters.
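
As a toy illustration of "state as it relates to the world" (the table layout and function names are invented, just to show the separation):

    # Library code: pure logic over real-world state, no orchestrator imports.
    def is_unlocked(task_id, completed_task_ids, prerequisites):
        # prerequisites maps task_id -> list of task_ids that must be done first.
        return all(dep in completed_task_ids for dep in prerequisites.get(task_id, []))

    def complete_task(db, task_id, note=""):
        # Persist the real-world fact ("contract signed"), not "node 7 fired".
        db.execute(
            "UPDATE tasks SET status = 'done', note = ? WHERE id = ?",
            (note, task_id),
        )

Whatever DAG runner you end up with just calls is_unlocked() to decide what to surface next, and the database stays the single source of truth.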


Do you avoid things like task sensors? Based on what you described it sounds like an anti-pattern if you’re using them.

Great description of good orchestration design. Airflow is fairly open-ended in how you can construct DAGs, leading to some interesting results.


Yes, I think you could make an argument for them, but in general it means putting your state sensing into orchestration (local truth) rather than something external (universal truth). As with anything, it does depend on your application though. If you were running something like an ETL, I think it’s generally more appropriate to sense the output of that ETL (data artifact, table, partition, etc) than it is to sense the task itself. It does present some challenges for e.g. cascading backfills, but I think it’s a fine tradeoff in most applications.


If you’re already in the Kubernetes ecosystem, Argo Workflows either has capabilities designed around what you are describing, or they can be built using the supported templates (container, script, resource). If you’re not on Kubernetes, then Argo Workflows is not worth it on its own, because it does demand Kubernetes expertise to wield effectively.

Someone suggested Temporal below and that’s a good suggestion too if you’re fine with a managed service.


Not the GP, nor specifically an Airflow user, but my approach is to have a fixed job graph where unnecessary jobs immediately succeed. And indeed, jobs are external executables, with all the skip/no-skip logic executed therein.

If nothing else, it makes it easy to understand what actually happened and when - just look at job logs.


I’m working on a similar system. My plan is to have multiple terminal states for the tasks:

Closed - Passed

Closed - Failed

Closed - Waived

When you hit that Waived state, it should include a note explaining why it was waived. This could be “parent transaction dropped below threshold amount, so we don’t need this control” or “Executive X signed off on it”.

I’m not sure about the auto-skip thing you propose, just from a UX perspective. I don’t want my task list cluttered up with unnecessary things. Still, I am struggling with precisely where to store the business logic about which tasks are needed when. I’m leaning towards implementing that in a reporting layer. Validation would happen in the background and raise warnings, rather than hard stopping people.

The theory there is that the people doing the work generally know what’s needed better than the system does. Thus the system just provides gentle reminders about the typical case, which users can make the choice to suppress.


I think of jobs more as prerequisites. If a prerequisite is somehow automatically satisfied (dunno, only back up on Mondays, and today is Tuesday) then the job succeeds immediately. There is no "skipping". Wfm.
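
Something like this, conceptually (a toy sketch; the Monday check and the backup command stand in for whatever the real prerequisite and work are):

    import datetime
    import subprocess

    def backup_job():
        # Prerequisite already satisfied: we only back up on Mondays.
        if datetime.date.today().weekday() != 0:
            return True  # succeed immediately, nothing to do
        result = subprocess.run(["/usr/local/bin/run-backup"])
        return result.returncode == 0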

I find embedding logic into DSLs usually quite painful and less portable than having a static job graph and all the logic firmly in my own code.


Tbh that sounds almost like an already built workflow engine like n8n or even Jira would be preferable to reinventing the wheel.


Have you looked into temporal.io? It supports dynamic workflows.


Ok, so question (because I really like the DAG approach in principle but don't have enough experience to have had my fingers burned yet):

The way you use Airflow, what advantage does it have over crontab? Or to put it another way, once you remove the pipeline logic, what's left?


Airflow provides straightforward parallelism and error handling of dependent subtasks. Cron really doesn’t.

With cron you have to be more thoughtful about failure handling, especially when convincing others to write failure-safe code in whatever the cron job invokes. With Airflow you shouldn’t be running code locally, so you can have a mini framework for failure handling.

Cron doesn’t natively provide singleton locking, so if the system bogs down you can end up running N copies of the same job at the same time, which slows things down further. Airflow isn’t immune to this by default, but it’s easier to set up centralized libraries that everything uses, so more junior people avoid this when writing quick one-off jobs.
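
For example, the kind of thing you get almost for free from a DAG definition but have to hand-roll around cron (a sketch, not a drop-in config):

    from datetime import datetime, timedelta
    from airflow.decorators import dag, task

    @dag(
        schedule="@hourly",
        start_date=datetime(2024, 1, 1),
        catchup=False,
        max_active_runs=1,                # the "singleton lock": no overlapping runs piling up
        default_args={
            "retries": 3,                 # per-task error handling
            "retry_delay": timedelta(minutes=5),
        },
    )
    def hourly_sync():

        @task
        def pull_a():
            return "a"

        @task
        def pull_b():
            return "b"

        @task
        def merge(a, b):
            print(a, b)

        merge(pull_a(), pull_b())         # pull_a and pull_b can run in parallel

    hourly_sync()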


Observability is a huge upside.


Backfilling is also very useful


Thanks to both comments.


This is exactly what we do, but with Spark instead. We develop the functions locally in a package and call the necessary functions from the job notebooks, so the job notebooks are very minimalistic.


Spark-via-Airflow is also the context we use this in; glad to see the pattern also works for you.


Thanks, this was really helpful.


In my experience monoliths don't reduce complexity, they just shift it. The main issue with monoliths is that they don't have clear and explicit separation of concern between domain concerns, therefore it's very easy for your monolith codebase to devolve into a mess of highly interconnected spaghetti code with time. This is especially true if you're building something large with a lot of developers who don't necessarily understand all of the domain complexity of the code they're touching.

Monoliths imo are better for smaller projects with a few devs, but otherwise within a few years most of the time you'll regret building a monolith.

I also disagree with the duplicated code point. I don't understand why that would be a significant problem assuming you're using the same language and sharing packages between projects. This isn't a problem I've ever had while working on microservices anyway. I'd also debate whether they're any more complex than monoliths on average. My favourite thing about microservice architecture is how simple individual microservices are to understand and contribute to. The architecture and provisioning of microservices can be more complicated, but from the perspective of a developer working on a microservice it should be much simpler to work on compared to a monolith.


I think lots of microservices can be replaced with a monolith which in turn can be replaced with a set of composable libraries versioned separately.

If anyone doubts that, the very browser being used to read and write this is built all the way up from dozens of libraries (compression, networking, image encoding/decoding, video encoding/decoding, encryption, graphics, sound, and whatnot), where each library is totally separate and sometimes was never intended by its original authors to be used to build web browsers.

Rest assured, most business systems (the web 2.0 kind: search, curate, recommend, surface, etc.) are a lot simpler than an A-class browser.


If you are using Chrome, it's also a combination of multiple well separated processes talking via RPC with each other, which is pretty similar to microservices, although the separation boundaries are more influenced by attack mitigation requirements than your typical microservice architecture would be.


But that’s due to security, not for any supposed benefit of microservices. Also, both processes are from the same repo and share code, so I wouldn’t really qualify it as microservices.


That’s literally an example of the decision depending on multiple factors. Separation of concerns -> more isolation -> stronger security of the overall system is exactly one of the possible benefits of microservices.

Scale is just one. There are also fault tolerance, security, organizational separation (which can, up to a point, also be realized with libraries as you suggest), a bigger ecosystem to choose from, …


1. microservices also create security boundaries

2. microservices living in monorepos is common


And even that process separation is spinning up more processes from within the same binary or build artefact: the usual fork() and CreateProcessW(), etc., and then waiting on them.

Unlike microservices, where each process is possibly a totally different language, runtime, and framework, spun up individually and possibly in totally different ways.


These blanket statements about monoliths are what made every junior dev think that microservices are the only solution.

If you cannot make a clean monolith, I have never seen any evidence that the same team can make good microservices. It is just the same crap, but distributed.

The last 2 years I see more and more seasoned devs who think the opposite: monoliths are better for most projects.


> It is just the same crap, but distributed.

Yes, but also: more difficult to refactor, more difficult to debug (good luck tracking a business transaction over multiple async services), slower due to network overhead, lack of ACID transactions... Microservices solve problems which few projects have, but add a huge amount of complexity.


monoliths are the postgres of architectures - keep it simple until you really can't, not until you think you can't.


In my experience, the biggest issue with microservices is that they convert nice errors with a stack trace to network errors. Unless you also invest heavily in observability (usually using expensive tools), running and debugging monoliths generally seems easier


Microservices necessarily add more complexity and overhead when compared to a monolith. Just the fact that you have to orchestrate N services instead of just pressing run on a single project demonstrates some of the additional complexity.


Counterpoint: a monolith usually contains a complex init system which allows multiple ways of running the codebase. Microservices can avoid at least that one complexity.


Another advantage of microservices is that you can avoid the overhead of multiple services by having one really big microservice.


You mean like profiles? The monolith can run as a front-end service or a background worker depending on the config?

In a technical sense it is a complexity, but IME pales in comparison with the alternative of having to manage multiple services.

I actually really like this model, it's pretty flexible. You can still have specialized server instances dedicated for certain tasks, but you have one codebase. What's very sweet is that for local development, you just run a single application which (with all enabled profiles) can fulfill all the roles at the same time.
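
A bare-bones version of that pattern (profile names and entry points are illustrative):

    import os
    import threading

    def run_web():
        ...  # start the HTTP server (blocks)

    def run_worker():
        ...  # consume background jobs (blocks)

    ROLES = {"web": run_web, "worker": run_worker}

    if __name__ == "__main__":
        # PROFILES=web on the front-end boxes, PROFILES=worker on the job boxes,
        # PROFILES=web,worker for local development. Same codebase either way.
        enabled = os.environ.get("PROFILES", "web,worker").split(",")
        threads = [threading.Thread(target=ROLES[p.strip()]) for p in enabled]
        for t in threads:
            t.start()
        for t in threads:
            t.join()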


Thanks, no. I’d rather wait 10ms for a rebuild and 1s on gdb init than 25m for a monolith rebuild and 2m on gdb init.

Separate processes, yes.


Who says monoliths don't have clear and explicit separation of concern between domain concerns? I think that just comes down to how the codebase is organized and how disciplined the team is, or possibly breaking out core parts into separate libraries or other similar strategies - technically it's still a monolith.


Libraries are a great way to manage separation of concerns. Any dependency you add has to be explicit. There's nothing stopping you from adding that dependency but you can't just do it accidentally.

The graph of dependencies between components makes for explicit separation of concerns just like you would have a graph of dependencies between different network services.


Or you just use a language with support for clear module API boundaries (vs something like Ruby where without bolt on hacks, every piece of code can call any other in the same process).


The lower cost of a function call versus any microservice network call is a good performance advantage of a monolith. Monoliths also make refactoring code a lot easier. While in theory I agree about the spaghetti issue, in practice I haven't seen much of a difference. In part because microservices seem to encourage proactive overdesigning, and then the designs don't age well.

I also find monoliths a lot easier to debug. You have the whole call stack, and you get rid of a lot of potential sources of latency problems. You don't have RPC calls that might sometimes get forgotten.

Given the choice, I'd choose monolith every time. Unless, of course, microservices are needed for various other reasons. (Scale, the ability to distribute, etc.)


> In my experience monoliths don't reduce complexity, they just shift it

This is both true and false in a way. Sure, the same business logic is distributed across microservices, but a method call in a monolith can only fail in a couple of ways, while network calls are much more finicky; handling every failure mode is pure added complexity in a microservice architecture.

Also, don’t forget the observability part - a mainstream language will likely have a sane debugger, profiler, a single log stream, etc. I can easily find bugs, race conditions, slow code paths in a monolith. It’s much more difficult if you have to do it in a whole environment communicating with potentially multiple instances of a single, or multiple microservices.

Lastly, we have programming languages to help us write correct, maintainable code! A monolith != spaghetti code. We have language tools to enforce boundaries, we have static analysis, etc. A refactor will work correctly across the whole codebase. We have nothing of this sort for microservices. You might understand a given microservice better, but does anyone understand the whole graph of them? Sure, monoliths might become spaghettis, but microservices can become spaghettis that are tangled with other plates of spaghettis.


Microservices also introduce the issue of maintaining schema compatibility for messages between services which usually leads to additional code in order to maintain backward compatibility

From a technical POV they are good for horizontally scaling different workloads which have a different resource footprint

From my experience, when a company decides to go the microservice route, it's more for the sake of solving an organizational problem (e.g. making team responsibilities and oncall escalation more clear cut) than it is to solve a technical problem. Sometimes they will retroactively cite the technical benefit as the reason for using them, but it feels like more of an afterthought

But in all honesty: microservices are very good at solving this organizational problem. If microservice x breaks, ping line manager y who manages it. Very straightforward


You could do that with code owners file in a monolith as well.


This presupposes that there is more than one line manager.

I see people trying to apply microservice architectures to a web app with a single developer.

As in literally taking a working monolith written by one person and having that one person split it up into tiny services.

It’s madness.


If your goal is to learn kubernetes instead of developing a product, then go for it IMHO, no better way. Just make sure everyone is on board with the idea.


I call this customer-funded self-training.


>My favourite thing about microservice architecture is how simple individual microservices are to understand and contribute to.

I generally agree with your stance, and I would add that I find the whole simplistic "microservices suck" talking point as nonsensical as viewing them as a panacea. They do solve a few specific problems (for most companies mostly organizational/human factors, because scale and redundancy don’t matter that much) that are harder to solve with monoliths.

I still think this point is a bit misleading, because yes, the components become simpler, but their interaction becomes more complex and that complexity is now less apparent. See:

>The architecture and provisioning of microservices can be more complicated, but from the perspective of a developer working on a microservice it should be much simpler to work on compared to a monolith.

I think that perspective often doesn’t match reality. Maybe most microservice systems I’ve seen had poorly separated domains, but changes small enough to ignore the interactions with other services have been rare in my experience.


Amazing this was downvoted. The comment starts with "in my experience" and is hardly a controversial perspective. I beg the HN community, stop disincentivizing people from respectfully providing a contrary opinion, lest this become yet another echo chamber of groupthink.


It relates that there was experience, but not what that experience was - we can read and understand that they're reporting their own experience, but that's about it.

One could say "In my experience, the earth is flat", but there's not much a conversation to be had there.

One could say, "In my experience, the earth is flat - I got in my car, went for a drive in one direction, and eventually hit a cliff at the ocean instead of finding myself back where I started". Now there's something to talk about.

(To be clear: this is the internet, and a limited communication medium: I'd assume OP could relate details about their experience, and it's totally reasonable that instead of taking the time to do that, they went outside and touched grass)


That’s not a reason to downvote. Downvoting is a censorship tool.


It's because it didn't loop back to queues at any point. It's just a tangent on a tired topic.


We've swung back and it's trendy to hate on microservices now, so join in! /s


Patiently waiting for common sense to be fashionable. Alas, sensible people are too busy to advocate on the internet.


> My favourite thing about microservice architecture is how simple individual microservices are to understand and contribute to.

Whether this is good depends on the type of changes you need to make. Just as you mentioned maintaining modularity in a monolith can be difficult with entropy tending to push the code to spaghetti, there is an equivalent risk in microservices where developers duplicate code or hack around things locally versus making the effort to change the interfaces between microservices where it would make sense.

Ultimately microservices add structure that may be useful for large enough teams, but is still overhead that has to earn its keep.


> My favourite thing about microservice architecture is how simple individual microservices are to understand and contribute to.

You can achieve exactly the same with simple individual libraries.


"Cleverness, like complexity, is inevitable. The trick is making sure you're getting something worthwhile in exchange."


What are your gripes with GCP? I've been using it for every project for a while, and am super happy with it, especially GKE.


Maybe you weren't burned by the near periodic massive GCP outages from 2019 - 2023. More power to you, but I'm not signing up for more of that.


My counterpoint: if your product is simply a ChatGPT wrapper, you have no moat. Whatever is complicated enough to actually make money, and that you feel the need to test and keep running, that's your moat, and that's what you're going to want to hire human help for once you actually make some money.


> My counterpoint: if your product is simply a ChatGPT wrapper, you have no moat.

A really low barrier to entry isn't always necessarily a good thing, as someone can usurp your business relatively quickly.


Of course. But that's not what I'm talking about in this article. I'm talking about real solopreneurs building non-AI wrappers


You don't see yourself hiring help for:

* Keeping things up and running

* Building the systems that the AI interfaces with

* Supporting any large contracts

?

In my experience, one enterprise contract with specific requirements basically requires you to hire a dev to support them.


I am talking about getting started, now, in the age of ChatGPT.

You can hire if you need to, but the dynamics of work might shift so much that it's just a whole new ballgame. "Enterprise contract" in ten years might be a meaningless term.


Would you recommend the solopreneur path for someone like me?

I’m a soon to be new grad with a year of experience and no luck getting another entry level job so far. It doesn’t help that I don’t have a drivers license or car, and just recovered from years of health issues. My life circumstances have really closed a lot of doors for me. I need flexibility and preferably WFH (this was a requirement for me before COVID, due to other health issues).

It’s risky, you lose benefits, and it’s probably a terrible idea for a mere junior according to all the advice I hear… Even in your article, your advice is to “find what you’re an expert at” but I’m not an expert of anything :( I’m very much a jack of all trades, master of none. But still, I’m willing to change that, and it feels like the ideal (perhaps the only viable) lifestyle for me. Any thoughts or advice would be much appreciated.


I think the point is ChatGPT as a development partner rather than the core product.


Exactly. Start using the "Upload file" feature instead of pasting code in the window and screwing up the UX. Game changer!


Message queues are difficult to maintain.

Webhooks are the best solution IMO.


I built an AI-agents tech demo[1], and am now pivoting. A few thoughts:

* I was able to make a simple AI agent that could control my Spotify account, and make playlists based on its world knowledge (rather than Spotify recommendation algos), which was really cool. I used it pretty frequently to guide Spotify into my music tastes, and would say I got value out of it.

* GPT-4 worked quite well actually; GPT-3.5 worked maybe 80% of the time. Mixtral did not work at all, on top of needing hacks/workarounds to get function calling working in the first place (rough sketch of the setup below).

* It was very slow and VERY expensive. Needing CoT was a limitation. Could easily rack up $30/day just testing it.
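
Rough shape of the function-calling setup, for the curious (heavily trimmed; the tool name, arguments, and the Spotify call itself are illustrative, not the actual code):

    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "create_playlist",   # illustrative; the real agent exposed several tools
            "description": "Create a Spotify playlist from a list of track queries.",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "tracks": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["name", "tracks"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Make me a rainy-day jazz playlist"}],
        tools=tools,
    )

    call = resp.choices[0].message.tool_calls[0]
    # call.function.name -> "create_playlist"; call.function.arguments is a JSON
    # string that gets validated and handed to the Spotify Web API.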

My overall takeaway: it's too early: too expensive, too slow, too unreliable. Unless you somehow have a breakthrough with a custom model.

From the marketing side, people just don't "get it." I've since niched down, and it's very, very promising from a business perspective.

[1] https://konos.ai


I think it's destined to fail because it basically moved AI back into the "rules based" realm. Deep learning is a decent cognitive interface - like making a guess at some structure out of non-structure. That's where the magic happens. But when you take that and start using rules to chain it together, you're basically back to the same idea as parsing semi-structured data with regex and/or if statements. You can get it to work a bit but edge cases keep coming along that kill you, and your rules will never keep up. For simple cognitive tasks, deep learning figures out enough of the edge cases to work pretty well, but that's gone once you start making rules for how to combine predictions.


I totally agree with this. I have been arguing with folks that current Reactflow based agent workflow tools are destined to fail, and more importantly, missing the point. Stop forcing AI into structured work.

I do think AI "agents" (or blocks as I like to think of them) unlock the potential for solving unstructured but well-scoped tasks. But it is a block of unstructured work that is very unique to a problem, and you are very likely to not find another problem where that block fits. So, trying to productize these AI blocks as re-usable agents is not that great of a value prop. And building a node based workflow tool is even less of a value prop.

However, if you can flip it inside out and build an AI agent that takes a question and outputs a node-based workflow, where the blocks in the workflow are structured, pre-defined blocks with deterministic inputs and outputs, or a custom AI block that you yourself built, then that is something I can find value in. This is almost like the function-calling capabilities of GPT.

Building these blocks reminds me of the early days of cloud computing. Back then the patterns for high availability were not well established, and people who were sold on the scalability aspects of cloud computing and got on board without accounting for service failure/availability scenarios and the ephemeral nature of EC2 instances were left burned, complaining about the unfeasibility of cloud computing.


> AI agent that takes a question and outputs a node based workflow

That sounds useful to me. I find it hard to trust an AI black box to output a good result, especially when chained in a sequence of blocks; errors may accumulate.

But AIs are great recommender systems. If it can output a sequence of blocks that are fully deterministic, I can run the sequence once, see it outputs a good result and trust it to output a good result in the future given I more or less understand what each individual box does. There may still be edge cases, and maybe the AI can also suggest when the workflow breaks, but at least I know it outputs the same result given the same input.


What makes it slow? Is it because they throttle your API key?


Chain of thought takes time to generate all the characters. If you do a chain-of-thought for every action and every misstep (and you need to for quality + reliability), it adds up.


Is there no way to share that "memory" across chats?

or are we at the mercy of hosted models?


There’s caching, but only so much can be cached when small changes in the input can lead to an entirely different space of outputs. Furthermore, even with caching, LLM inference can take anywhere from 1-15s using GPT-4 Turbo via the API. As was mentioned, the more characters you prefix in the context, the longer this takes. Similarly, you have a variable-length output from the model (up to a fixed context length), and so the time it takes to calculate the “answer” can also take a while. In particular, with CoT you are basically forcing the model to use more characters than it otherwise would (in its answer) by asking it to explain itself in a verbose step-by-step manner.


Our p99 for gpt4 is 3s. Images take up to 50s.


so how would you go about improving that?


Not using an LLM for it.


we only send 0.5-5% of traffic to gpt4, thanks to smaller faster cheaper models. So not all of our traffic is hit with 50s latencies :-/


so, no?


There's value, but it's too expensive, too slow, and too unreliable right now to be feasible from a business perspective.


While X's main use case works, I just tried to run some ads there.

When trying to upgrade to verified org: "An unexpected error occurred"

When trying to change my display name: "An unexpected error occurred"

When trying to run an ad: "Awaiting verification" for a week

Contacting support through messages: No response

Finally finding a way to submit a ticket: 3 day response time with a standard response that didn't resolve the issue.

Responded to that ticket and haven't gotten a response in days.

I don't think they're operating at the same level as they used to :)


It's sad that you cannot even get through the workflow that gives them money. They must be in absolutely terrible shape.


Yes, like all of Elon Musk’s undertakings: ready for bankruptcy at any moment, if you ask HN.


How does a 0% interest rate actually affect this though? Are that many companies actually funded on debt now? Or are sales down because their customers were purchasing with debt?


Fair question, but there's a pretty direct line.

ZIRP means that huge capital managers (sovereign wealth funds, pension funds, 401k managers, etc.) get very, very little money on the super-safe stuff they like to buy.

They need to make returns somehow, so if a VC is promising them 15% returns, that sounds quite promising compared to T-Bonds that return 1.5%!

But over the last two years, the yield on super-safe investments now looks more like 6, 7, 8, 9%. That makes a high-risk investment like VC much less attractive, by comparison.

If VC is less attractive, less capital flows to their funds; smaller VC funds means much more discerning, stingy startup investment.


You can get cheap debt to fund investment. The other side is that a lot of money is always searching for some kind of return. With rates going up, that money can go back to boring bonds, either from governments or from big reliable companies that are unlikely to go anywhere.

No need to gamble it anymore on tech companies.


It's a knock-on effect. Most software companies sell to other software companies and VC-funded startups.


Companies evaluate ROI for projects against the "risk-free" interest rate. When that interest rate rises, fewer projects are viable.
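
A toy illustration of that hurdle-rate effect (all numbers invented):

    def npv(rate, cashflows):
        # cashflows[0] is the upfront cost (negative), the rest are yearly returns.
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    project = [-100.0, 25.0, 25.0, 25.0, 25.0, 25.0]   # five years of payback

    print(round(npv(0.01, project), 1))   # ~21.3: clearly worth doing at near-zero rates
    print(round(npv(0.08, project), 1))   # ~-0.2: marginal once rates rise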


Cuphead should absolutely add a Mickey Mouse DLC.

