
Here's the part that resonated with me most:

  A particular type of complexity is over-engineering, where developers have made the code more generic than 
  it needs to be, or added functionality that isn’t presently needed by the system. Reviewers should be 
  especially vigilant about over-engineering. Encourage developers to solve the problem they know needs to be 
  solved now, not the problem that the developer speculates might need to be solved in the future. The future 
  problem should be solved once it arrives and you can see its actual shape and requirements in the physical 
  universe.
Note how Google does NOT say "make sure the code is properly architected". Instead they say "make sure the code is not over-engineered"! At top companies like Google, projects rarely fail because there isn't enough architecture. Instead projects end up costing 10x to 30x because of unnecessary complexity. Over-engineering is a trap that very good developers fall into all too often. I am glad to see Google has cautioned against this, because I can now point my fellow developers to this when they are about to fall into the same trap!



I agree so much. Zero-cost abstraction is a misnomer.


isn't "zero-cost abstraction" shorthand for "zero-performance-cost abstraction"? i.e. computer, not human, performance?


It is! Which is why I don’t like the term. It often masks the fact that it has a cost in complexity.


i've never read it that way! could you give an example of that?


Two examples in C++: virtual method dispatch via vtables (in C you'd implement the same mechanism manually with function pointers), and templated generic code. C++'s generic data structures and algorithms can be faster than C's because they don't have to use indirection through void pointers; the compiler creates specialized code for each type instead. That causes some binary size bloat, but execution time stays low.


so what's the "cost in complexity" here? the complexity would be there whether you went with a hand-rolled or "zero-cost" version (at least that's the idea), except the "zero-cost" version should require less code and thus be easier to maintain.


Preface: I was also very glad to see this called out specifically, and think it's a great rule.

That said...

> Note how Google does NOT say "make sure the code is properly architected".

is not accurate. The very first paragraph on the same page is:

> Design

> The most important thing to cover in a review is the overall design of the CL. Do the interactions of various pieces of code in the CL make sense? Does this change belong in your codebase, or in a library? Does it integrate well with the rest of your system? Is now a good time to add this functionality?

Emphasis mine. But that sure sounds like they care about proper architecture. They just also care about avoiding over-engineering. They're not mutually exclusive.


> But that sure sounds like they care about proper architecture.

Of course they care about architecture—I don’t think anyone implied otherwise. But at good companies like Google good architecture is a given. Top developers often fall into the pit of over-engineering, and almost never under-architect. So at top companies code reviewers have to be vigilant about over-engineering and rarely have to worry about under-architecting.


>But at good companies like Google good architecture is a given.

Absolutely not. Google’s interview process and inflow of fresh graduates does not bode well for good architecture. Having spent time at G and FB, I can certainly tell you that employees at both are no better at architecting code in a sane way than SWEs at other companies.

Code architecture requires experience. Google does not.


I think the hiring bar at places like Google and FB is astronomically higher than almost all other places - they definitely have generally higher skilled people there on average.


Checking for memorized algorithms and an MIT degree do not guarantee good code.

I've seen more horrible code at a FAANG company than in many other places. Complete absence of overflow analysis in C++, race conditions, architecture astronauts, exploding code bases that no one understands any more.

Features are being added on a daily basis, basically what matters is a high line count.


You clearly haven't seen code at Google and its "base" library.


The process does not select for experience but mainly for the ability to remember recent academic programming puzzles. The first comment is correct, even though they do have many existing experienced developers who would be in a position to provide architectural advice.


The hiring bar at Google does not test for the ability to organize code coherently. The skill is completely unrelated to algorithms and data structures.

It’s like requiring candidates to deadlift 500 lbs and then assuming it means they can all run marathons.


Most teams at Google have senior devs that ensure this. The average level is way higher than other companies.


So for anyone, including me, the questions to ask when looking for the next job should be:

- Is `good architecture` a given in this team?

- What are they doing to avoid `over engineering`?

What are the specific questions one can ask to find out the answers to the above two?


what does the _CL_ notation indicate?


"Changelist" is the term Perforce uses for a single commit before it gets committed. A bit like a PR in git. Google used Perforce before rolling their own VCS, and they kept the Perforce terminology.


Changelist, perhaps?


Thanks. This would be a nice thing to explain in the document that's open sourced.


The HN submission links directly to https://google.github.io/eng-practices/review/reviewer/ but if you start at the top, namely https://google.github.io/eng-practices/, then there's a Terminology section right there on the first page. Maybe the GitHub Pages theme used by this site should be changed to one where every page links to its parent page (right now there doesn't seem to be any way to navigate upwards except by editing the URL).


It sounds like walking a fine line between over-engineering and falling into technical debt. If you design code that specifically solves the immediate need, it may need to be thrown away or extensively worked on / around when future needs come up. On the other hand, you can write code that solves future needs that never appear, and still fail to solve the actual needs that end up appearing.

For me, I would rather put a bit of additional effort up front gathering enough information about the direction of the project or usage of it to better formulate an attack plan so that it does actually become more future proof. But even that is subject to failure, so what do you do?


I think of it this way: How can I build this so that it only solves today’s problems but doesn’t make it overly difficult to solve tomorrow’s problems?

Loose coupling, dependency injection, composition over inheritance, and similar techniques tend to be good answers to this question in my experience.

In contrast, over-engineering attempts to solve tomorrow’s problems before they arrive and, if they arrive differently than predicted, makes them harder to solve, because when you have to change something it’s less clear which parts of the design were necessary to solve the original problem and which were only necessary for the future problem that was incorrectly predicted. Oftentimes you might end up having to rethink the entire architecture rather than having the relatively simple problem of adjusting things to meet new requirements.
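A minimal Python sketch of those techniques (all names here are invented for illustration): the alert channel is injected rather than hard-coded, and behaviour is composed rather than inherited, so a new channel tomorrow is a new class, not a rewrite.

```python
from dataclasses import dataclass

# Two concrete senders satisfying the same duck-typed interface.
class EmailSender:
    def send(self, to: str, body: str) -> str:
        return f"email to {to}: {body}"

class SmsSender:
    def send(self, to: str, body: str) -> str:
        return f"sms to {to}: {body}"

@dataclass
class Alerter:
    # Dependency injection: the sender is passed in, not constructed inside,
    # so Alerter never needs to change when a new channel appears.
    sender: object

    def alert(self, user: str, message: str) -> str:
        return self.sender.send(user, message)

# Composition: swapping behaviour is constructor-deep, not hierarchy-deep.
print(Alerter(EmailSender()).alert("ana", "disk full"))
print(Alerter(SmsSender()).alert("ana", "disk full"))
```

Nothing here anticipates a specific future feature; it just avoids welding today's choice of channel into the call sites.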


Very well said!

My own experience is that effort invested in removing restrictions and handling corner cases is generally well spent. It may not seem too onerous to have to remember that a particular function works only for nonempty inputs or when called after some other function, but in a large system where many operations have such restrictions, keeping track of them quickly overwhelms the capacity of human memory. Sooner or later someone is bound to forget, and it may even be the author of the code in question. I try to ask "what are people going to expect this code to do?" and then, if within reason, to make it do that (and failing that, to protect it with assertions). Alternatively, someone may be forced to make some other part of the system more complicated to work around the restriction, leading to excessive coupling.
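A small illustration of the "what are people going to expect this code to do?" question, in Python (function names are hypothetical):

```python
def average(xs):
    """What callers expect: defined for any sequence; empty input is a
    documented result, not a hidden precondition that crashes later."""
    if not xs:
        return 0.0
    return sum(xs) / len(xs)

def average_strict(xs):
    """If the restriction must stay, guard it loudly with an assertion
    rather than letting ZeroDivisionError surface far from the cause."""
    assert xs, "average_strict requires a nonempty sequence"
    return sum(xs) / len(xs)
```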

I suppose this practice sometimes risks over-engineering, but I have found the risk to be worth taking. As you say, it makes the system easier to extend further.


Yeah absolutely. I think a big part of it is considering possible edge cases or failure scenarios and not necessarily solving them immediately but considering how your design might need to be changed in order to solve them. Many times I have found there’s a solution that requires little or no more work than a naive implementation but is far more robust to future changes.

To use inheritance as an example - if there’s a conceivable possibility that some future requirement might lead you to add the same functionality to other classes, you probably don’t want to have to deal with a long and convoluted inheritance chain when you could compose some set of functionality onto one class as easily as many classes.

Or in the case of dependencies, wrapping some third party mail sending library in your own generic API is barely more work than using the third party library directly and will pay dividends if you change mail providers in the future.
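A sketch of that wrapper idea in Python; `AcmeMailClient` is a made-up stand-in for whatever third-party SDK you would actually use:

```python
# Hypothetical third-party client we don't control.
class AcmeMailClient:
    def deliver(self, rcpt, subject, html):
        return ("acme", rcpt, subject)

# Our own thin, generic API: the rest of the codebase depends on this,
# never on Acme directly.
class Mailer:
    def __init__(self, client=None):
        self._client = client or AcmeMailClient()

    def send(self, to, subject, body):
        # Changing providers later means changing only this adapter.
        return self._client.deliver(rcpt=to, subject=subject, html=body)
```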

One very specific example that I worked on recently: in most cases there is one of X, but in a handful of cases there are many of X. My initial inclination was to branch off and handle the many case as an exception, but then I realized that you could eliminate that corner case entirely if you handled everything as a collection of X, even if in most cases that collection only has one item in it.


That's a pattern I learned a long time ago. It's much more future-proof to treat items as an array, even if it seems like over-engineering at first, because scope creep dictates it's more likely you are going to handle more items in the future, not fewer. And dealing with one item vs. many can change your architecture pretty drastically.
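That pattern might look like this in Python (a hypothetical attachments example):

```python
def normalize(attachments):
    """Accept nothing, one item, or many; downstream code only ever sees a list."""
    if attachments is None:
        return []
    if isinstance(attachments, (list, tuple)):
        return list(attachments)
    return [attachments]  # the common single-item case becomes a one-element list

def total_size(attachments):
    # No branching on "one vs. many" anywhere in the business logic.
    return sum(a["bytes"] for a in normalize(attachments))
```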


> Loose coupling, dependency injection, composition over inheritance, and similar techniques tend to be good answers to this question in my experience.

So, adding some extra architecture solves it?


In my experience it’s often just “different” rather than “more” but in some cases, yes, it might be a little more. It depends on where you’re at in a project. Very early on, it’s usually more. In a more established project where these patterns are already in use it’s usually not. Definitely an important thing to consider - it’s not a one size fits all solution.


It obviously depends on the language and your tooling too, but I find that it's generally much easier to refactor under-designed code to add functionality than to refactor code that was over-designed but doesn't fit the spec anymore.

For example, in Python, there's no need to go for a class when a simple function does the job. And once you do need a class, it's fairly trivial to swap one for the other, especially in a good IDE.
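For instance, a sketch of that progression (the names are invented):

```python
# Day one: a plain function is enough.
def parse_price(text):
    return round(float(text.strip().lstrip("$")), 2)

# Later, when state is genuinely needed (say, per-currency configuration),
# the swap to a class is mechanical:
class PriceParser:
    def __init__(self, symbol="$"):
        self.symbol = symbol

    def parse(self, text):
        return round(float(text.strip().lstrip(self.symbol)), 2)
```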


Your goal should be to get to the destination as fast as possible.

"Lean" tells you that small batches reduce the waste.

As an analogy, think about looking ahead at the road that winds past cities and attractions until finally it becomes too small to see. You feel that if you stay on the road, it will likely bring you to your destination.

Nevertheless you are not sure. As you move ahead it gets clearer where you are and where you will have to go.

So how far should you go before stopping?

It depends. It is all about how far you can see and how confident you are that it is the right road.

If you do not see too far ahead, it is probably a good idea to just reach the next city at hand and spend some time checking if you are on the right road. Otherwise you may waste time going too far on the wrong path. The further you go, the further you will have to backtrack.

But if you can see further ahead, you want to move faster and skip the pitstop.

This analogy only goes so far. But depending on what you are building and where you are in the process, and more importantly, how sure you are of what you actually need to build, you may spend less time planning and just build what is merely necessary vs. building a more complex architecture.

So you may say: "let's try to find out how far we have to go, and where, before we start." But even the pathfinding activity takes time...


I can tell you what I do: I allow the total cost to increase by no more than 10% for future proofing. It is all about controlling cost. It is not justifiable to spend more than 10% of the time for future proofing because you have no idea what the future is going to be. Of course if you do have some idea about requirements coming in the near future then it may be justifiable to spend more.


It’s more than a flat cost estimate. It’s more about risk mitigation. Trying to add multi-tenancy to a system that wasn’t designed for it can mean a major rewrite. Building the system from the beginning with the idea that there could be multiple tenants may cost 10% more, but the opportunity cost of not being able to address the multi-tenant market could be much bigger.


Building a multi-tenant system without having at least two tenants from the get-go means you're still likely to need that rewrite. The chances that the assumptions you made for your first tenant meet what your next one needs are pretty low.


Does Google really practice what they preach though?

Just recently I was looking at Angular, Google's web frontend framework. Services, modules, directives, angular-specific markup. Coming from React, I find this grossly over-engineered.


Googler here.

In general, yes we do practice it. They are guidelines, not rules tho.

Is Angular grossly over-engineered? Depends on how you look at it.

Any code's complexity reflects the complexity of its use case and Angular, for example, needs to fit A LOT of use cases at Google. So yea, sometimes, I think it can be a big hammer for a small nail. In other cases, all that abstraction helps. I guess that's the nature of how code eventually matures over time.


> I guess that's the nature of how code eventually matures over time.

Not really. It’s a sign of too many responsibilities being given to one project. Feature bloat is a real thing.

As code matures it shouldn’t be getting more and more abstraction and APIs, it should be stabilizing with fewer changes. If the former is happening, it’s a sign it needs to be broken into multiple projects.


> If the former is happening, it’s a sign it needs to be broken into multiple projects.

Breaking something into smaller projects also has its own associated costs.


Yeah, based on my experience with android and tensorflow they fail spectacularly at it.


React does much less than Angular...


Defer all decisions until you have to make them.

Leave the code in a state where it's understandable.

i.e. don't over-specify - not just simple things like interfaces, but also things like premature decomposition that's exposed at the higher level for configurability.

While on the face of it having everything as small interacting functions means any individual function is easy to change, changing overall behaviour might be very hard: it's hard to understand all the moving parts, and even worse, if the code split is wrong for future needs, it's very hard to change.


I've been pushing the idea that if you're getting meaningful feedback on design (and over-design) in your PR reviews then you've failed. That stuff should be shaken out long before you have working, complete code.


Is that a productive idea to push?

Not to say you're wrong, but some hard human problems to solve are:

- getting people to not be defensive during code reviews

- getting people willing to be (constructively) critical of their peers' work

Emphasizing that code reviews with productive design comments indicate a failing seems more likely to stop the comments than to improve the design. Most people won't want to do the work multiple times and will quickly learn to do as you desire in the face of repeated code reviews with comments pointing out design problems.


There are times that I get into the flow while coding and end up solving a few future problems in addition to right now problems. I know that the code is over-engineered, but if it passes all of the tests and there's nothing obviously wrong with it I would probably check it in anyway. Over engineering never comes up in our code reviews but I think it would be a productive conversation if it did - even if the decision was still to accept the code as-is.

The programmers I work with can be defensive of their architecture, but they also love to talk about architecture generally. I think there's a "yes and" [0] way to bring this up that engages the whole team.

[0] https://en.m.wikipedia.org/wiki/Yes,_and...


And then you decide to get another job, and the next guy to maintain the code is utterly confused and misdirected because, let’s be honest, there’s no documentation (or there is, but it’s either out of date or sparse) and there’s no thorough unit testing (lucky if there is any). Not saying that’s your case, but it happened to me personally to be on the other end a few times, and let me tell you, it’s quite an effort to ramp up on such projects.


Yes! Engineers should collaborate on the overall approach to a code change/addition with the reviewers before any PR is submitted. Those early discussions are extremely important since teams will converge on an appropriate solution quicker because the process is much more informal. These frequent early discussions also build team spirit, and if you're so lucky, individuals will start to click and the team will start writing code and designing software in similar ways. These early discussions are generally more jovial. The actual review can focus on details, and, if feedback was gathered and put into practice early, nobody will mind the occasional nitpick about punctuation and naming. These will feel earned, even welcome, the PR being a final touch of polish.


Out of curiosity how do those conversations happen before the PR?

Asking because we might have a different process for PRs or a different definition of what it means to submit a PR - on many teams I’ve been on it’s been encouraged to publish a WIP PR in order to facilitate these conversations. GitHub recently added a feature to formalize that but before this we used labels such as “WIP” and “Ready for review”


In person, or for a remote job, with screensharing and online chats. In my current job, it generally starts with the implementer reaching out and saying: I want to achieve xyz and am thinking about doing abc. That's how the conversation gets started. Granted, sometimes you need code examples already ready-made to gather meaningful feedback, but that can be a personal branch, one-off patch, gist, WIP PR... Feedback can happen over Slack, simple chat, comments on the WIP PR... As long as the final ready-for-review PR doesn't catch the reviewers off guard in terms of overall design/approach.


WIP PRs are fine. Essentially the PR is used as a mechanism to clearly communicate an idea, rather than containing code that is intended to be merged into the codebase.

There shouldn't really be a PR review where the author is seeking input on naming/testing completeness and major design choices. Those two types of reviews are mutually exclusive.

Alternatives to WIPs include whiteboard discussions, pair programming, and written proposals.


For smaller changes it can happen over a coffee in the hallway. For large changes there are design reviews. For stuff in between you can schedule a meeting with a few concerned people.


> I've been pushing the idea that if you're getting meaningful feedback on design (and over design) in your PR reviews than you've failed. That stuff should be shaken out long before you have working, complete code.

Sometimes a prototype can help shake these things out, though. It's often okay to go an extra stage beyond what's been reviewed/approved (e.g., send out a prototype implementation before design review is complete) if you're willing to throw it away. If you'd be upset by someone saying "try this better design", you need to wait for a design review before writing any code.

Another thing to be careful about with a prototype is keeping it as far away from the normal production serving path as possible to avoid compromising the essentials of reliability, privacy, and security. (Ideas along those lines: separate production role, separate server, flag-enable it only in a test environment, dark-launch it, experiment-gate it, etc.)


The problem comes in when you can anticipate a problem, and can also anticipate not being given the resources to solve it later, even though you do have those resources now.

For example, we had a user who wanted a document produced in one specific format. They promised this was what they wanted and it wouldn't change. It changed 5 times as we were nearing release. So I over-engineered it to allow a template to produce the document that can easily be changed (compared to the release process for a code change). It ended up being used numerous times, despite the problem we were given clearly stating an unchanging template.


I would argue that as soon as they asked to change the format the first time, the template solution was no longer over-engineering, as it had been demonstrated that the format can and will change. In general I wouldn’t call it over-engineering if the problem you are trying to solve occurred in the past and you have reason to suspect it will occur again. The problem with over-engineering is solving problems that have never occurred (and may never occur).


If the user did not have that problem wouldn't you have wasted engineering time?


It was a calculated risk, same as many others, and by far not the most wasteful event of that year even had it not been needed.


When you have very smart people, it may be difficult for those people to write simple, stupid code. Smart people like to write delightful, sophisticated, elegant solutions that address problems and use cases the code will never encounter. A library may be an exception.

I used to be one of them. "Growing up," I realized that striving for simplicity and adding complexity only when necessary is the ultimate sophistication.

Good engineering values and a shared vision of what "doing a good job" means help out a lot. If we prioritize "development speed" and "quality from a user perspective", extrinsic complexity clearly becomes a burden.

Instead, if these values are not shared, the main motivation for smart people may easily become "ego": showing off code that looks clever and supports a lot more use cases, but that most likely did not need to be written, wasting time and adding unnecessary complexity with the risk of introducing cognitive overload.


> When you have very smart people, it may be difficult for those people to write simple stupid code.

If they can't write idiomatically, are they "very smart?" Code is a communication medium as much as a functional one, and if someone doesn't understand you when you're talking to them, such that you have to say, "well I use a cutting edge grammar, aren't you familiar with Lakoffian generative semantics? Imre Lakatos has a sick presentation on Vimeo," you're not maintaining communication skills. I imagine this could be what "unshared values" means.

The way I see it, it's like driving: people who don't stop at stop signs, who don't use turn signals, who don't know right-of-way rules are not good drivers. These things are a part of driving just as much as knowing where you're going and trying to run the Nurburgring in under 8 minutes. They also demonstrate a concern and respect for the health and wellbeing of your fellow drivers (not to mention pedestrians, cyclists, etc.).

This is not a matter of education, age, or maturity, it's manners. It's saying "please," and "thank you." It's parenting yourself if your actual parents didn't teach you. I'm not sure any of that can be chalked up to surmounting ego. "Ego" is just an excuse and probably not based on actual psychological concepts anyway.

> the main motivation for smart people may easily become "ego" and showing off code that may look clever and supporting a lot more use cases but most likely it did not need to be written, wasting time and adding unnecessary complexity with the risk of introducing cognitive overload.

"Very smart?" ;)


> where developers have made the code more generic than it needs to be, or added functionality that isn’t presently needed by the system.

I agree with this when it's internal interfaces. When you have public interfaces that you expect to have to support in a backwards-compatible manner for (ideally) years in the future, it's worth taking some time to think about how people might want to extend it in the future.


Within a monorepo like most code at Google, there isn't as much of a bright line between public and internal interfaces. If you don't get a public interface quite right on the first try, but it still has <~100 call sites, it's really easy to just make the change and fix all the callers. If your library has become popular and has substantially more call sites, then it's hard but still doable to fix.

This assumes that the interface is a function or method call within a binary. If it's an RPC, then making changes is much trickier, since you can't assume that both sides of the call were built at the same commit. This requires a lot of thought to make sure any changes to your RPC messages are both backwards- and forwards-compatible.
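Those RPC compatibility rules can be imitated in plain Python to show the principle: a reader that defaults fields old senders omit (backwards compatibility) and ignores fields newer senders add (forwards compatibility). The field names here are invented for illustration:

```python
KNOWN_FIELDS = {"user_id": 0, "query": ""}  # field name -> default value

def decode_request(wire: dict) -> dict:
    # Backwards compatible: senders built before a field existed omit it -> default.
    # Forwards compatible: senders built after us may add fields -> ignore them.
    return {name: wire.get(name, default) for name, default in KNOWN_FIELDS.items()}

old_sender = {"user_id": 7}                            # predates "query"
new_sender = {"user_id": 7, "query": "x", "page": 2}   # "page" is from the future
```

Real RPC systems like protocol buffers bake these rules in; the point is that both sides must tolerate each other's vintage, since they aren't built at the same commit.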


This only works at Google because most of the code is rewritten every few years [1]. The dynamics between tech debt and over-engineering in such a setting are not really representative of the rest of the industry.

1: https://arxiv.org/pdf/1702.01715.pdf


This is for a code review, which comes well after architecture and is entirely the wrong place to systematically question architecture.


My #1, #2, and #3 rules are to get the code in the right place. You could vaguely say that is architecture-related: is it in the main app, a library, etc.? Those decisions can be tough for junior developers and can often be disputed by seniors ("we don't need a library yet").

"Architecture" is very vague and used a lot of different ways.


To be honest, after reading this document, I believe it contains MUCH more information than just how to conduct a code review.


> This is for a code review which would be well after architecture and be the entirely wrong place to systematically question architecture.

I think that is part of the scrum philosophy: management can question whatever it wants, whenever it wants; if the timeframe explodes, that is blamed on the developer.


That's kinda funny, because their interviews are exactly the opposite. Toy problems you'd never see in the real world, crazy abstracted solutions, O(n) demands, almost encyclopedic knowledge of data structures and algos.

Do as I say, not as I do I guess.


I suppose you just wanted to complain, but in case you're serious: interviews and code reviews fulfill entirely different purposes. Choosing the right data structure may or may not be over-engineering, it depends on the application.

Google absolutely wants developers that are capable of over-engineering.


>are capable of over-engineering.

Anyone can over-engineer. It’s not a compliment nor a desirable quality.


> Google absolutely wants developers that are capable of over-engineering.

Why, so, when they get on the job, they need to be told not to over-engineer?


Making code execute efficiently is not over-engineering, it is saving many millions of dollars and often a requirement to even launch.


But we should also mention that Google also has some of the best software designers and architects in the world. You can't get a well-engineered library without someone experienced at design leading it. All of Google's open source projects [1] have significant design. And if you look at the commit logs, you can see that their designs didn't emerge from thousands of little pushes. Someone led those projects and enforced design criteria from the beginning.

[1]: https://opensource.google.com/


I have worked on a Google project as a contractor and I can confirm that over-engineering was our biggest problem. I was also partially to blame, but at the time I did not know better.


Yes, I don't think I have ever seen a premature optimisation actually end up being beneficial when the time came to add new features to the project.


Premature optimization rarely helps, but well-thought-out flexibility/decoupling in core system components has had a significant positive effect on velocity down the line, and lack of the latter has been shown to be disastrous.

I do believe though that it's really hard to discuss effectively, as there seems to be no good, common definition of what over-engineering actually is, except in retrospect.

I've seen teams where they were so "good" at avoiding over-engineering and "architecture astronauts" that thousand-line functions with a Byzantine labyrinth of conditionals were preferred to even the most basic of design.

With that said, what would you consider over engineering of the kind that never works, and in what kind of systems?


For a proper discussion, I'm surprised no one has mentioned this principle from Extreme Programming, which possibly predates Google as a company.

https://ronjeffries.com/xprog/articles/practices/pracnotneed...


Overall architecture should have been discussed and agreed before writing any code, not at code review.


In an ideal world, yes. In practice this almost never happens. Architecture evolves as the code is written.

I don't think this is a good way to do things, but it's mostly what happens IME.


Too bad the Google developer APIs didn't go through that review. 200 lines of code (LOC) for a simple "Hello world", and that's using libraries that are probably a million LOC. It could probably just be an HTTP GET, run with a simple curl command.


Any particular APIs you have in mind? I find App Engine’s ‘getting started’ examples quite sane while they are ‘hello world’.


Try for example updating a column in a spreadsheet. Or uploading a file to Google drive.


If you're making mistakes like over-engineering, are you actually a good developer, given that it's part of what makes code bad? Maybe you've fallen into the expert-beginner trap at that point.


I'll take a noob who doesn't understand basic syntax all day every day vs. the dev who finds a way to make everything complex. At least in the former the damage is limited.


Good devs understand how to simplify complex problems by breaking them into smaller components. Bad devs add complexity to already complex problems.


Breaking down a complex problem does not simplify it; you merely move the complexity to the graph of dependencies between smaller components.


I've seen this a lot, where it is broken down so much that it involves mental gymnastics to follow what's going on. It also results in spaghetti code with fragmented functionality.

In my experience, projects using Java are the worst offenders here.


It can happen when a really good coder gets bored.


Looks like Magento2 could make use of this; every part of it is over-engineered, adding unnecessary complexity to make it look like an enterprise-ready application.


Sometimes you can generalize the problem a bit to get a shorter solution.


Pretty much by definition you will be covering a wider surface of possible inputs and behaviours, and so acquiring complexity.


Here's a fun example from math which requires generalizing to get a good solution.

Suppose you have a 2^n x 2^n courtyard. You have one 1x1 statue, and unlimited L pieces (2x2 with a corner missing).

You would like to have a layout which places the statue in one of the centermost tiles, and fill the rest with L pieces.

----------------

Solution

----------------

Define a layout function which allows the empty tile to be in any corner. This is trivial for 2x2, as it's just an L piece.

By tiling these squares, you can solve the next size up: put three with the hole in the center and fill in with an L piece to get the bigger square.

At the end, take four 2^(n-1) squares and put the holes in the middle. Add one L and you are done.

What was the point of that?

By generalizing your solve function to have more outputs, you can build up a structure from it in which it's easy to solve your actual goal.
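The construction sketched above can be coded up directly; here is one possible Python version, where the generalized solver places the hole anywhere, grid cells hold a tromino id, and 0 marks the statue:

```python
def tile(n, hole):
    """Tile a 2^n x 2^n board, minus one hole, with L-trominoes.
    Returns a grid: the hole is 0, each tromino has a unique id."""
    size = 2 ** n
    hr, hc = hole
    grid = [[None] * size for _ in range(size)]
    grid[hr][hc] = 0
    counter = 0

    def solve(top, left, size, hr, hc):
        nonlocal counter
        if size == 1:
            return
        half = size // 2
        counter += 1
        tid = counter
        # The four quadrants, and each quadrant's cell touching the center.
        quads = [(top, left), (top, left + half),
                 (top + half, left), (top + half, left + half)]
        centers = [(top + half - 1, left + half - 1), (top + half - 1, left + half),
                   (top + half, left + half - 1), (top + half, left + half)]
        for (qt, ql), (cr, cc) in zip(quads, centers):
            if qt <= hr < qt + half and ql <= hc < ql + half:
                # The real hole lives in this quadrant; recurse toward it.
                solve(qt, ql, half, hr, hc)
            else:
                # One arm of the central L piece becomes this quadrant's "hole".
                grid[cr][cc] = tid
                solve(qt, ql, half, cr, cc)

    solve(0, 0, size, hr, hc)
    return grid
```

For the courtyard problem, call it with the statue at a centermost tile, e.g. `tile(2, (1, 2))`; every tromino id then appears exactly three times, forming an L.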


And complex analysis informs number theory. An example from software development might be more useful.


> Over-engineering is a trap that very good developers fall into all too often.

I would not call them 'very good developers' in that case.


Over-engineering is not a trap in BigCo; it is simply a by-product of being KPI/goal/impact-driven.

People tend to do it to get a promo/bonus.


It’s an old person thing. We learned to program way before the internet, when one only had a language reference, some vague requirements, and a lot of time. Things took much longer, and communication tools didn’t exist. Maintainability was once a big deal.

These days, it’s easier to do a search, grab a snippet, and paste. There’s little need for reusability. Extrapolating a bit, I imagine in a few years, we’ll have one big ‘library’ of functions built into our IDE (built, and copyrighted by Google or Microsoft, of course).

Programming used to be skilled labor, but now it’s kind of dumbed down for higher productivity. A natural evolution.

Personally, I’m trying to re-train myself for the more modern rapid-fire programming. I’m not completely convinced that it results in better products, but it does feel good to be constantly committing.



