My team has benefited significantly from diligently reviewing pull requests. It may seem like a waste of time at first, but I can't count how many bugs we've caught this way. Team productivity is also increased due to knowledge-sharing, as we have to understand more parts of the code in order to properly review it. Reviews typically take between 10 mins and 4 hours, depending on the size of the PR. We've also improved our competence with Git, as we have tried to make the series of commits in a PR more understandable. PRs have actually increased work satisfaction on my team, as we get much more feedback on our work this way. Usually feedback is positive, but even when it isn't, we work at it together until the PR looks good.
If there are multiple releases/branches, then I also recommend very carefully reviewing merges between branches, even files/regions where git (or whatever SCM is used) merged automatically with no conflicts. I've personally seen multiple incidents now where, through review, bugs and security issues were found in merges that ranged from subtle to catastrophic. I even saw one catastrophic security issue introduced into a file git merged with no conflicts.
(However, more issues are typically introduced by manual conflict resolution, since we humans are also easily confused when doing it. Both problems are relatively independent of language, although some specific instances might be caught by a compiler or by tests. If the latter don't catch it, that may be a sign that some tests are missing.)
I have a very similar experience. Another thing we do is to align our style guide (and automated linting) with code review, mostly with the intention of reducing diff noise and therefore mental overhead when reviewing. The ability to align in this way depends on the language, but we've had a fair bit of success doing this in Python.
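To illustrate, alignment like this can live in the repo's lint config so the formatter and linter agree with the style guide. A minimal sketch, assuming flake8 and black; the specific values are examples, not a recommendation:

```ini
# setup.cfg -- lint settings kept in the repo so every contributor
# (and CI) applies the same rules before a PR is even opened
[flake8]
max-line-length = 88      # matches black's default line length
extend-ignore = E203      # black's slice spacing conflicts with E203
exclude = .git,__pycache__,build
```

Keeping the settings in the repository, rather than in each developer's editor, is what makes the diffs quiet: everyone formats the same way before the review starts.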
Coding style (apart from formatting) can be crucial for understandability, so it's important to have a critical look at it when reviewing. For instance, consider the following snippets:
void myMethod(FooBar arg) {
    if (arg != null) {
        arg.foo(something);
        for (int i = 0; i < arg.bars().size(); i++) {
            arg.bars().get(i).baz();
        }
    }
}
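For contrast, here is one plausible restyling that behaves identically: a guard clause replaces the nesting and an enhanced for-loop replaces the repeated `bars().get(i)` index access. The stub classes (`Bar`, `FooBar`, the `something` field) are hypothetical, added only so the example compiles; the original thread showed just the method body.

```java
import java.util.List;

// Hypothetical stubs, named after the snippet above, so this compiles.
class Bar {
    int bazCalls = 0;
    void baz() { bazCalls++; }
}

class FooBar {
    private final List<Bar> bars;
    FooBar(List<Bar> bars) { this.bars = bars; }
    void foo(Object something) {}
    List<Bar> bars() { return bars; }
}

class Example {
    Object something;  // stands in for whatever `something` refers to

    // Same behavior as the snippet above, different style:
    // early return instead of nesting, for-each instead of indexing.
    void myMethod(FooBar arg) {
        if (arg == null) {
            return;
        }
        arg.foo(something);
        for (Bar bar : arg.bars()) {
            bar.baz();
        }
    }
}
```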
Even though they're functionally identical, they look wildly different. These things can hardly be judged by an automated process like linting, so it's important to review them manually.
For me, a lack of adherence to our conventions is a smell that not all i's have been dotted and t's have been crossed. If the style is poor, I start to think the logic may be suspect, as well.
This is doubly true when linting and style fixes are automated via git hooks.
It's nice that there is one linter for one particular language that catches 100% of these issues in an example, but that doesn't change the fact that most linters for most languages can't do this, and that there are quite a few of these decisions where it's doubtful any linter could catch them.
Oh, I certainly agree that coding standards are important. Consistency of coding style makes a codebase substantially more approachable. I'm just interested in making that as automated as possible: https://blog.scottnonnenberg.com/eslint-part-3-analysis/
The frontend complexity one rings very true. I know they are largely talking about CSS changes in the article, but I think sometimes experienced developers can just discount the frontend as a whole.
At my job my coworker and I did a major refactor on part of our frontend codebase. Probably 2,000 lines of code were changed and the only comment on the PR was for additional unit tests on one method we had touched on the backend. We brought it up multiple times in standup that this really needed to be reviewed properly and we got radio silence. The PR was just passed through.
Your entire platform really does matter; it's important to make sure everything works, not just the parts you understand.
I really try to figure out how code review and testing work when interviewing for a job. Whenever I get to the "do you have any questions for us" part, I start asking about their workflow, review process, and test coverage, as well as several more practical questions about how they handle it ("what would obviously get a PR rejected").
The obvious ones I'd discard are those that don't use a VCS at all (I don't think I'd take a non-git one either, and CVS is definitely a no-go).
Sometimes it's telling when no mention is made of tests, or maybe style guides, in answer to these questions. It really gives you a taste of the "code culture" at that employer.
I'd be very interested in a diffing tool that gave some broad overview of how things have moved around. Otherwise, the kind of refactoring where you break things apart just creates a pull request that's harder to get oriented in. Which is a shame, because a code review that's harder to skim can disincentivize the changes that are needed most.
Conversely, it's just so easy to add an if statement here or an early return there, and over time you've got a 400-line method with like 100 branches and an Avogadro's number of possible paths. But you look at a two-line diff and say, "Hey, this looks reasonable."
One of the problems we have faced is when multiple engineers do essentially similar work (e.g. the Create and Update operations of a CRUD module) but with very different class and method structures. Whoever makes it to master first becomes the standard, and the others need to modify their code to conform to that style. Everyone's style is correct in some sense, but the reviewer needs to make sure all the code reads like it was written by one team, not that one file uses a factory pattern while another has a switch case.
Not that there isn't value in having everyone follow the same patterns, but I find it gets emphasized over just getting work done a lot of the time. I've seen projects waste so much effort making the code uniform instead of delivering features. If the code works and is readable, who cares if it varies somewhat from similar code.
> I've seen projects waste so much effort making the code uniform
This one is usually an automated task: everyone agrees on a coding style, configures their development environment, and the first time you work on a nonconforming file you convert it and commit (most of the work is done by your editor/IDE, if configured correctly).
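One low-friction way to handle the "configure your development environment" step is to check the agreed style into the repo itself, e.g. as an EditorConfig file that most editors and IDEs pick up automatically. A hypothetical sketch; the values are examples:

```ini
# .editorconfig -- shared editor settings, applied automatically
root = true

[*]
charset = utf-8
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true
```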
> If the code works and is readable, who cares if it varies somewhat from similar code.
Me. I regularly have to clean up after lone warriors who talk like you have "delivered features".
A coding style does not represent some universal truth about code aesthetics. On one hand it allows programmers to read code with a well-defined set of expectations; on the other hand it helps prevent some of the big annoyances for reviewers.
The point is not to like a specific style but to adhere to it. It takes you 5 minutes to adapt your own code to the project's coding style; it will take others 10 minutes to do so; and, in the unfortunately most common case where nobody fixes your coding style, it will cost hours over a single year as everybody who has to touch your code trips over the same things.
One thing that's improved our results is asking to see the changes actually working. The person has to demo them live to the reviewer. Many times a small bug or hiccup occurs which would never have been seen by comparing diffs.
This sounds like it might work for PRs for huge features, but not for every PR. In most cases, coordinating a live demo takes more time than writing the entire PR did, especially if more than one person needs to see it.
What I dream of implementing at work (once our infrastructure is more fully containerized) is an addition to the CI/CD pipeline where each Pull Request doesn't just kick off tests but also spins up a full staging environment of its own, just waiting for reviewers/designers/PMs a click away. If there were no resource constraints you'd get a separate one for each commit/push so you could compare UX from individual changes.
You don't have to wait until you migrate to containerized deploys. Last year my team started deploying our project to folders named after the branch and set up one Apache rule to redirect all *.dev. subdomains to the correct folder. We call them virtual development environments, and they work really well with our existing QA process.
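For reference, a setup like this can be expressed as a single mod_vhost_alias rule. A hypothetical sketch, assuming the module is enabled; `example.com` and the folder layout are assumptions, not the poster's actual config:

```apache
# Maps <branch>.dev.example.com to /var/www/branches/<branch>
<VirtualHost *:80>
    ServerAlias *.dev.example.com
    UseCanonicalName Off
    # %1 expands to the first dot-separated part of the hostname
    VirtualDocumentRoot /var/www/branches/%1
</VirtualHost>
```

Deploying a branch then just means copying it into a folder named after the branch; no per-branch server configuration is needed.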
We're not using the Apache CGI plugin for Python to resolve dependencies, just packages relative to the application root. I think most languages that can import dependencies can import relative ones.
Although my specific setup may not work for everyone, I suspect there is an equally simple solution for most projects. Containerization is definitely a long-term goal, but it is not a prerequisite for automated deploys. The changes you have to make to get deploys working on an existing server will also be useful for containerization.
That's pretty much what we provide at Runnable (Full-stack environments for every branch). Every time you create a branch, we create a new environment for you that anyone on the team can access. Definitely helps to be able to play around with the real app! Every time you push a new commit, we update your environment.