What annoys me most about these metrics is that some days zero lines get written - sometimes anything up to a month goes by without results to show.
Where, then, does all this time go? Sometimes it's reading existing code. Sometimes it's learning about a new algorithm by reading blogs and papers. Sometimes it's developing test programs to iron out a bug or test out some new code.
There used to be one chap in the office who got all the hard problems - the seriously hard problems. Some of this was figuring out why USB couldn't transition from full-speed mode to high-speed mode reliably (USB is quite hard to probe because of its signalling frequency), or figuring out why the application crashed one in a million boots.
Some of our most valuable developers committed the least amount of code, but saved our arses more times than I can count.
That’s fundamentally a lack of respect for the engineering aspect of software systems and a sort of self-loathing embraced by people in the field.
Many software roles require what I would call Home Depot skill levels. People go to Home Depot, pick up semi-finished materials in a kit, and fix their toilet without understanding how it works.
Likewise, a journeyman-skilled developer can “code” a sign-in page against an API without understanding the engineering process around OAuth.
The problem is many business people don’t understand anything beyond the Home Depot kit... they see stuff on the shelf and don’t understand that at some level the engineering side of the work needs to be done to create something novel. Reinforcing that notion are vendors hawking products.
As someone with Home Depot skills, I 100% agree. I really wish that there was a common distinction. I am not the right person to solve a novel or complex engineering problem. I am the right person to build a product that won't require solving a novel or complex engineering problem. I probably shouldn't be paid like the former, nor should I have to have the qualifications of the former to land a job for the latter.
I think there’s a further subdivision of “hard”, which is the fundamental research stuff that pushes the boundaries of CS. Then there’s the business-problem stuff that’s hard because of scale, surface area and the general messiness of the real world. Although the IC salary peaks might not be as high, there is more money overall in the latter, and it’s not as much about raw intellect as it is about moving up and down the abstraction layers, thinking things through and translating technical trade-offs to laymen.
I think this is an interesting analogy. Taking it further - what's great about Home Depot level skills is that they are often sufficient for me to do routine basic maintenance. This is in part because many things have become simpler and are designed to be easily/cheaply replaced. I think the same could be said for software. That's generally a good thing and was an intentional movement in the industry.
That said, I probably don't want to be building a house from scratch with my level of skill and should hire someone with specialized knowledge. Likewise, it's an important skill to know when you will be in over your head and need to hire someone to get a job done correctly.
I'm another mostly "Home Depot" coder and can glue all kinds of things together without really having to dig deeper. Maybe I could go deeper if I needed to, but that's not what my job demands or requests of me, and what they need is the Home Depot code that bolts all their existing systems together.
I think those of us in roles like this can actually bang out a lot more LOC than somebody working on lower level problems, because we aren't solving hard problems, we're using basic data structures and tossing them between (usually/hopefully) well documented interfaces. If that's the case, LOC is just about the worst metric you could imagine.
Code is both asset and liability; the asset is the feature set, while the liability has an interest payment in the form of maintenance.
The way you put it, you're optimizing for only one side of the books. The fact is that the value in a company is not in minimal clean code; it's in a recurring revenue stream, and ideally profits. Provide the most value with code which has low interest payments. Everything else being equal, smaller code has lower interest payments, but everything else isn't always equal. And depending on cash flow and market opportunity, maximizing value and to hell with minimal clean code - throwing money & devs at the problem - can make sense.
The distinction here is between code that's clear and concise and code that's hacky and confusingly compact. Few people would recommend trying to pack a 4-5 line function into a super complex and confusing one-liner, but it is reasonable to collapse a 10k-line class into a 20-line function. It's on us, as the developers, to make that tradeoff.
I think the spirit of the comment you replied to was closer to the "clear and concise" methodology rather than the "as short as is humanly possible" methodology.
Only the few people who don't know anything about the map function or generator expressions and prefer messy imperative code where off-by-one errors are a given, if you want my opinion.
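The map/generator framing above is Python-flavored; as a rough sketch of the same contrast in Java (class and method names made up for illustration), the declarative version leaves no index bookkeeping to get wrong, while the imperative version is where off-by-one mistakes tend to hide:

```java
import java.util.List;

public class NamesDemo {
    // Concise, declarative version: no index arithmetic at all.
    static List<String> upperCased(List<String> names) {
        return names.stream().map(String::toUpperCase).toList();
    }

    // Imperative version: the off-by-one risk lives in the loop bounds.
    static String[] upperCasedImperative(List<String> names) {
        String[] out = new String[names.size()];
        for (int i = 0; i < names.size(); i++) { // "<=" here would throw at runtime
            out[i] = names.get(i).toUpperCase();
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(upperCased(List.of("ada", "grace")));
    }
}
```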
At one job (porting a colossal legacy UI to Windows), I deleted thousands of LOC every day for months. Coworkers called me "the decoder." 25 years later I'm probably still net negative.
This issue can easily be fixed by switching from delta LOC to the size of the git diff (number of lines changed). The big problem with this strategy is the huge difference between 10 lines of carefully engineered algorithm code and 10,000 lines of blah API calls and boilerplate. I can write API calls and boilerplate as fast as I can type.
Not the GP, but... reduce a cross-cutting concern from a system into an aspect and you get this easily.
I once worked on a product and identified an opportunity to collapse 100K lines of poorly written, inconsistent tracing code into a robust ~250-line file using AspectJ. Management threw a sh-t fit and thought the risk was untenable.
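For concreteness, the kind of consolidation described above might look roughly like the sketch below, written in AspectJ's annotation style; the package pattern and log format are hypothetical, not taken from the commenter's codebase:

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// A single aspect can replace per-method trace calls scattered across the codebase.
@Aspect
public class TracingAspect {

    // Advise every public method in the (hypothetical) application packages.
    @Around("execution(public * com.example.app..*.*(..))")
    public Object trace(ProceedingJoinPoint jp) throws Throwable {
        long start = System.nanoTime();
        System.out.printf("enter %s%n", jp.getSignature().toShortString());
        try {
            return jp.proceed();   // run the original method
        } finally {
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.printf("exit  %s (%d us)%n",
                    jp.getSignature().toShortString(), micros);
        }
    }
}
```

This is also the double-edged leverage the replies describe: nothing in the advised methods' source hints that they are being intercepted.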
AOP tools that effectively rewrite the app have incredible amounts of leverage. That can work to your benefit but it's also an enormous footgun if you get your aim wrong. It leverages up both cleverness and stupidity.
The risk of the new one or the risk of keeping the old 100K lines? Half serious question since I would estimate the risk of the latter to be much larger.
To me it sounds like he was introducing a hard dependency on AspectJ, which is as much a risk as any other dependency. I am guessing here, but it is a scenario where a hissy fit from management has at least some justification.
It is just as much of a liability, a priori neither more nor less. It needs to be evaluated like any other potential new dependency.
Plus, AspectJ is something that you have to be careful with. It injects code at the start or end of methods that can do arbitrary things, and the method's source code doesn't indicate that this is happening. So it has great potential for code obfuscation.
Sort of unrelated rant. Maybe it’s because I’m not as well versed in Java idioms as I am with C# idioms, but using code that implements AOP using AspectJ seems much more obtuse than what I’ve done in C# just looking at the examples.
In C#, with the various frameworks - including on the API level with ASP.Net - you can use attributes to decorate the classes/methods with your aspects and it’s pretty easy to see what it’s doing.
You get the runtime binding basically by just using dependency injection as you always do.
C# dev here as well, but from a Java background. When I first moved to C# from Java, one of the best AOP usages I'd had was transaction management - database transaction management. You could write all of the code, whether it was dependent on the db or not, and then decorate the methods with a transaction attribute. This decoration contained all the logic to get a db connection, begin a transaction, become part of an existing one, or create a new isolated one. Any unhandled exception caused the final unwinding to roll back any work that had been done in that transaction. So many try/catch/finallys avoided, and so much boilerplate code.
I have yet to find any equivalent to this in the .NET world, especially if you're using EF. Either you use ADO and have your try/catch/finally with manual transaction management, or you have the EF context, which is just one big blob you hope succeeds at the end.
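The Java-side pattern being described is declarative transaction management; a minimal sketch, assuming Spring's @Transactional (the comment doesn't name the framework) and a hypothetical AccountRepository, might look like this:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical data-access interface; the real thing would talk to the database.
interface AccountRepository {
    void debit(long accountId, long amountCents);
    void credit(long accountId, long amountCents);
}

// The transaction plumbing lives in the proxy Spring wraps around this bean,
// not in the method body.
@Service
public class TransferService {

    private final AccountRepository accounts;

    public TransferService(AccountRepository accounts) {
        this.accounts = accounts;
    }

    // Joins an existing transaction if one is active, otherwise starts one.
    // An unchecked exception escaping the method rolls everything back;
    // checked exceptions need rollbackFor to behave the same way.
    @Transactional
    public void transfer(long fromId, long toId, long amountCents) {
        accounts.debit(fromId, amountCents);
        accounts.credit(toId, amountCents);
    }
}
```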
Yes, this is exactly the type of boilerplate I am talking about. All those usings and try/catch blocks which add needless code. It is possible to compose all of this into an aspect which then decorates your methods. Maybe I'm not being clear, so here is what I mean. Say you have a method that does some work, but calls some other thing to do some auditing. The auditing is nice, but it failing shouldn't halt the world.
The TransactionScope is handled in the aspect. Commit/rollback is all handled there as well. There are no usings or exception handling within your methods unless you want to handle something specifically.
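And for the "auditing shouldn't halt the world" case, a sketch in the same hypothetical Spring setup: the audit write runs in its own transaction (REQUIRES_NEW) and the caller swallows its failure, so the main work still commits. All names are made up, and the audit call crosses a bean boundary so the transactional proxy actually intercepts it:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// Two beans sketched together; in a real project each lives in its own file.
@Service
class AuditService {
    // Runs in its own transaction, so its rollback can't touch the caller's work.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void record(String event) {
        // hypothetical insert into an audit table
    }
}

@Service
public class OrderService {
    private final AuditService audit;

    public OrderService(AuditService audit) {
        this.audit = audit;
    }

    @Transactional
    public void placeOrder(long orderId) {
        // ... the actual order-placing work runs inside this outer transaction ...
        try {
            audit.record("order placed: " + orderId);
        } catch (RuntimeException e) {
            // Auditing is nice to have; its failure shouldn't roll back the order.
        }
    }
}
```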
There should only be one using/try-catch block, at the highest level. All of the methods being called within that using block should just throw errors.
You could put the logic in attributes, but I don’t consider a transaction a cross-cutting concern. It would create a spooky-action-at-a-distance situation.
It's obviously highly dependent on the domain in which you work, but I would consider what you're saying to be a "business transaction" more than a "database transaction". If there is a 1:1 between the two then your way works. I tend to have situations where one business transaction is multiple database transactions. And the business transaction can succeed even if some of the underlying database transactions were to fail.
It was a C# file for an API that wrapped thousands of reports with a function call for each report; I moved it to a design with a single function for all reports (roughly the shape sketched below).
I think what had happened is somebody had designed the file and everybody else followed suit, patching stuff on - the entire codebase for that app was well below average. They had front-end devs who didn’t know any JavaScript. In 2016. I lasted 6 months before I nope.png’d the fuck out.
It’s still not the worst application I’ve ever worked on, though.
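The report-wrapper consolidation mentioned above was in C#, but the shape of the change is language-agnostic; a hedged Java sketch with entirely hypothetical names:

```java
import java.util.Map;

// Before: thousands of near-identical wrappers, one method per report, e.g.
//   byte[] runSalesByRegionReport(Map<String, String> params) { ... }
//   byte[] runInventoryAgingReport(Map<String, String> params) { ... }

// After: a single entry point keyed by report name.
public class ReportRunner {
    private final ReportBackend backend;   // hypothetical client for the report engine

    public ReportRunner(ReportBackend backend) {
        this.backend = backend;
    }

    public byte[] run(String reportName, Map<String, String> params) {
        // One code path to validate, log, and invoke, instead of one per report.
        if (reportName == null || reportName.isBlank()) {
            throw new IllegalArgumentException("report name is required");
        }
        return backend.execute(reportName, params);
    }
}

interface ReportBackend {
    byte[] execute(String reportName, Map<String, String> params);
}
```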
Not the parent, but I once inherited a bunch of tools that used the same tracing function, akin to dtrace. Six people wrote six tools over different domains, all with their own post-processing, filtering, formatting, etc.
It was a support nightmare, so we built a common library and collapsed the code base by 70%. Each tool was probably in the -8k ESLOC range. Thankfully it wasn't C++.
I once had a senior manager who insisted that developers make at least one commit a day (an internal GitHub-like tool gamified this: number of lines committed since last month, top committers in the team, etc.), and those who didn't had to up their game.
It was frustrating, to say the least, and this was not the only metric - there were a handful. Frankly, many people made a mockery of them by doing just as much or less work than before while achieving or even surpassing the said metrics.
At my last Java job, whenever there was a day I didn't have much or any code to commit (e.g. I was in the middle of going through compsci papers about a nontrivial algorithm I intended to implement), I would open up IntelliJ's "code inspections" tab. It provided me with a never-ending stream of quick, small fixes to make that not only amounted to a commit, but also occasionally fixed an actual (if unlikely) bug.
It's quite normal for IT helpdesk teams doing all the users' password-reset tickets to "outperform" the teams fixing server issues based on "tickets closed" metrics.
I know I'm against popular opinion to the extreme here, but I'm not altogether against that (the commit every day bit, not the LOC competition bit).
A few reasons:
1. I find devs (including me) tend to do too few commits instead of too many. Smaller, tighter commits are better, but it's really tempting to try and do an entire feature in one commit.
2. If someone has spent a couple of days working on something without committing it, I'd be concerned that they're stuck, or spinning wheels. I'd check in on them. Not in a bad "you're not working hard enough" way, but in a "do you need help?" way.
3. If someone often spent more than a day without writing anything they could commit, I'd check in on them. Again, like 2. above, not in a "dammit work harder!" way, but maybe it's an indication that they're getting handed the really hard problems, or that they need some more training, or that they're going through some personal stuff, or something.
But measuring lines in each commit is pointless/futile, as is measuring the number of commits in a day.
I can sort of see the rationale for this, but it's been a long time since I've worked anywhere where this was feasible even 50% of the time; nobody wants fragments of a feature in trunk, and the codebases tend to be inflexible. "Commit to branch", sure; branches are cheap.
> If someone has spent a couple of days working on something without committing it, I'd be concerned that they're stuck, or spinning wheels. I'd check in on them.
This is more reasonable. Working without review leads to worse fiascoes the longer it goes.
If you do work-in-progress (WIP) branches where you're not concerned with breaking the build, and then rewrite the history afterwards so it doesn't break the build (which counts as new commits), this becomes more viable.
It also encourages testing work to be formally coded rather than just done interactively. If a test is good enough to compile, commit it as work in progress - even if it has a bug and/or is incomplete - once you've reached a good point you'd like to build upon. Make a tweak, it compiles, commit it.
If you've spent most of your day writing a test, but it's complicated and will require even more work before you'd even try to compile and run, commit it WIP before going home.
The only place I've ever been at that had any sort of lines-of-code measurement also tracked lines removed. This was on a points system, and in fact removal of code was considered worth more points, as long as it didn't break the build. (There was also some other stuff where points were removed if your change was rolled back or the lines you had written were changed again within a short time.)
There were also some points for removing TODOs, etc.
It worked pretty well because there was only a team of 4-5 people on any product at any time, so someone just removing TODOs without fixing the issues they pointed out would have been caught.
Fundamentally, that’s why SLOC can be useful as an estimating metric but terrible as a control metric. SLOC, function points (FP) and so on all have their limitations, but they demonstrate that most of the effort-time in a project doesn’t go into putting hands on keyboard. Conversely, trying to monitor developer productivity with SLOC simply reintroduces the conceptual error that the estimation effort attempts to prevent.
Goodhart's law - "When a measure becomes a target, it ceases to be a good measure."
I used to work for a company that bills their customers for dev hours spent. The software they put together worked fabulously well - in the production of billable dev hours.
Whenever I spend a week without progress I feel like dying.
In college I was used to being able to churn out immense amounts of code. Even if most of it was useless, I'm not well adjusted to long unproductive periods.
How did your manager react to these times? No remarks? Nagging? Trusting?
College coding is very misleading. It's usually clean-sheet, by yourself, with idealized requirements and few dependencies. It's rarely robust or tested extensively, and its lifespan is short.
It's also a hell of a lot of fun... which is why it's not really what you get paid for. What you get paid for is the long, tedious slog of the real world: maintaining existing business logic, teasing out user requirements in a domain you don't really understand, dealing with other developers who have different preferences and skill levels, doing variations of the same thing instead of exploring new domains and technologies. You spend a lot of days in meetings that should have been emails.
It's not all drudgery, and it's both more fun and better paid than 99% of the jobs in the world, but it's not picking the wondrous low-hanging fruit that you did in college.
This is also why I prefer to use projects inspired by real projects at work to test out new programming languages or technology. It's really easy to make something look good by just ignoring the messy realities of the real world. It's a lot harder if you're doing an experimental rewrite of a system with real world requirements attached to it.
Progress is measured in more things than code written. Define progress using the right metric, i.e. stuff learned, and the feeling of progress and your motivation can be preserved.
For me, it is really a top down approach. I can work on goals that take years to accomplish. But the key is to break them down into smaller and smaller bits until you have work items that show progress on a small enough scale to be easily observable. And part of this is sometimes research, so I can't measure myself in terms of code or features. But each task usually has a way to define progress.
Good point. I largely agree with you. But in my limited work experience it was never discussed nor shown, which comes down to... well, a lack of leadership. And ultimately deep anxiety.
Do most jobs have a team chat to talk about it before going into the actual work?
Yes. In 'Agile' development parlance, you would have an estimation session, where the group looks at the units of work to be assigned and determines how easy or hard they might turn out to be.
At their most objective best, everyone on a given development team can gain some insight into the work of others, and how hard it might be.
These sessions can also be a great way to share knowledge, as developers with different levels of experience and specialisations collectively examine high-level goals.
In them, you all have a fair opportunity to either share opinions on the best way to achieve a task, or simply learn something from someone else about tools or techniques you're unfamiliar with.
And for insightful managers, it's also a great opportunity to communicate high-level aims and objectives, and occasionally to break those objectives down, transparently, and explore them.
At worst, estimation sessions can be used as a tool to bully dissenting or inquisitive coders.
Even in its least positive guise, collective estimation sessions are still valuable. At the very least, you have the opportunity to agree, as a group, on what is and what isn't going to take 10 or 1,000-odd SLOC. You'll also have a better idea (if only slightly better in some cases) of how long those n SLOC will take to write.
It's surprising how few managers value objective estimation. But the problem I suppose, is what it does to the working relationship the rest of the company has with their software development team.
Basically, to allow a team of developers such an 'indulgence', every worker in a given business, including senior managers, has to accept that all interactions with the development team are led by the development team.
That takes a lot of trust, and you'll rarely find that level of trust outside of a startup.
All metrics can be horrible. To take an obvious example, we used to repeatedly see the temperature on one cold day being quoted as proof that global warming wasn't happening. So clearly the temperature must be a horrible metric for global warming, right?
It is of course the main metric for global warming, but it can be used badly or very well. Just like lines of code, it's hard even to get the measurement right. Do you measure it in the sun or in the shade? Do you measure it in a city, which is relevant to where most people feel the effects, or in the countryside, so you get a repeatable environment? Similarly, does LOC include comments and blank lines, and what about patches - how do you count them? In terms of LOC per day, do you measure a single person who is churning out the code, or the entire team including the designers and documenters, and do you include the time spent on support after the project is completed because of bugs?
I don't think you can blame the "temperature metric" for the bad ways it's measured or used. And I don't think you can blame lines of code for all of its bad outcomes either.