Where do you test your code? You probably use one of the following topologies.
The release candidate strategy is used by most software projects, and it is a required part of the iterative release pattern and the mainline CI pattern. Contributors build the latest version of their code into a "release candidate," then exercise it with manual and automated testing. They fix the bugs they find and cut another release candidate, repeating until no important bugs remain. At that point they release the candidate.
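The cycle above can be sketched with plain Git tags. This is only an illustration: the repository, version numbers, and commit messages are hypothetical, and the commits stand in for real feature and bug-fix work.

```shell
set -e
RC_REPO=$(mktemp -d)
git -C "$RC_REPO" init -q
git -C "$RC_REPO" config user.email dev@example.com
git -C "$RC_REPO" config user.name "Dev"

# Build the latest code and cut the first release candidate.
git -C "$RC_REPO" commit -q --allow-empty -m "feature work"
git -C "$RC_REPO" tag -a v1.0-rc1 -m "first release candidate"

# ... manual and automated testing finds a bug ...
git -C "$RC_REPO" commit -q --allow-empty -m "fix bug found in rc1"
git -C "$RC_REPO" tag -a v1.0-rc2 -m "second release candidate"

# ... testing finds no more important bugs, so release the candidate.
git -C "$RC_REPO" tag -a v1.0 -m "release"
git -C "$RC_REPO" tag -l "v1.0*"   # lists v1.0, v1.0-rc1, v1.0-rc2
```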
Benefits: This is a very efficient pattern. There is only one version to test, manual testers know where to find it, and a single automated test system is enough.
Problems: As you add code and have more things to test, stabilizing each release candidate takes longer and longer. The release candidate method will not get you to continuous release, because you can safely assume there will be problems whenever you merge multiple changes that have not been tested together.
If you add "centralized" continuous integration to the release candidate, you get this workflow:
Developers can test code and run automated tests on their own workstations. This "pre-commit" testing is often required before committing code to a centralized continuous integration system.
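A minimal version of this local gate can be wired up as a Git pre-commit hook. In the sketch below, `true` is a placeholder for the project's real test command (for example `make test` or `pytest`), and the repository is a throwaway example.

```shell
set -e
HOOK_REPO=$(mktemp -d)
git -C "$HOOK_REPO" init -q
git -C "$HOOK_REPO" config user.email dev@example.com
git -C "$HOOK_REPO" config user.name "Dev"

# The pre-commit hook runs before every commit; a nonzero exit aborts it.
cat > "$HOOK_REPO/.git/hooks/pre-commit" <<'EOF'
#!/bin/sh
echo "running local tests before commit..."
true   # placeholder for the project's real test suite
EOF
chmod +x "$HOOK_REPO/.git/hooks/pre-commit"

# The commit only lands because the hook exited successfully.
git -C "$HOOK_REPO" commit -q --allow-empty -m "change that passed local tests"
HOOK_COMMITS=$(git -C "$HOOK_REPO" rev-list --count HEAD)
```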
After testing locally, team members can send their changes to other developers, who can test them on their own workstations. This is the mainstay of the "maintainer" model of development used by many open source and mobile projects.
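In the maintainer model, contributors commonly mail changes as patch files that the maintainer applies and tests locally. A minimal exchange with `git format-patch` and `git am` might look like this; the names, paths, and commit messages are made up for the example.

```shell
set -e
CONTRIB=$(mktemp -d)
MAINT=$(mktemp -d)

# Shared history: the maintainer clones before the contributor's fix exists.
git -C "$CONTRIB" init -q
git -C "$CONTRIB" config user.email alice@example.com
git -C "$CONTRIB" config user.name "Alice"
git -C "$CONTRIB" commit -q --allow-empty -m "base"
git clone -q "$CONTRIB" "$MAINT/repo"
git -C "$MAINT/repo" config user.email bob@example.com
git -C "$MAINT/repo" config user.name "Bob"

# Contributor: commit a change and export it as an emailable patch file.
echo "fix" > "$CONTRIB/fix.txt"
git -C "$CONTRIB" add fix.txt
git -C "$CONTRIB" commit -q -m "fix: correct an off-by-one"
git -C "$CONTRIB" format-patch -1 -o "$CONTRIB/outbox" > /dev/null

# Maintainer: apply the patch, then test it on their own workstation.
git -C "$MAINT/repo" am -q "$CONTRIB"/outbox/*.patch
MAINT_LOG=$(git -C "$MAINT/repo" log -1 --format=%s)
```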
Benefits: This is always available as a first step. It works without an automated test server, it scales with the number of developers, and it can be used to review and clean up code that is contributed to a mainline continuous process.
Problems: Developers do not always run automated tests. When they do run tests, it takes time. Maintainers can fall behind in testing and code review, and when there are problems they may not respond promptly to contributors. Maintainers have to do manual work to start automated tests.
A process with multiple test systems is more complicated, but also a lot more scalable. Every contributing team or major feature has its own test system. Developers should be able to get these systems on demand from a cloud platform to test major changes. The test systems run automated tests, and also provide a place for QA and Story Owners to go and help with testing.
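One common way to make such test systems available on demand is a container definition that each team can launch for its own branch. The compose file below is a hypothetical sketch: the service names, image names, port, and test command are all invented for illustration.

```yaml
# Hypothetical per-team test system, launched on demand, for example with:
#   docker compose -f team-payments.test.yml up
# Image names, port, and commands are illustrative, not real project values.
services:
  app:
    image: registry.example.com/payments:feature-branch   # build under test
    ports:
      - "8080:8080"        # QA and Story Owners can browse the running system here
  tests:
    image: registry.example.com/payments-tests:latest
    depends_on:
      - app
    command: ["./run-tests.sh", "--target", "http://app:8080"]
```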
Benefits: Multiple test systems can accommodate a large number of contributions without slowing down the final test and release cycle. They can get you to continuous release with an "as late as possible" integration strategy, in which a change is released before it hits problems caused by other changes.
Problems: Multiple test systems are complicated to set up and maintain. Testers need to learn to act as consultants, helping developers who ask them to look at specific test systems.
With Gerrit or Assembla protected branches, you can put each change into a temporary branch with a merge request. The automated test system will see the branch and run tests on it. This pattern automates the process of making a new test environment for each change. It can deliver clean code to a centralized continuous process.
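The mechanics can be sketched with a plain bare repository standing in for the server. Gerrit itself uses the special `refs/for/<branch>` push target to create the temporary branch; the explicit `review/change-123` branch name and the placeholder test command below are assumptions for this sketch.

```shell
set -e
SERVER=$(mktemp -d)
DEV=$(mktemp -d)
CI=$(mktemp -d)
git init -q --bare "$SERVER/central.git"

# Developer: push a change to a temporary review branch, not to main.
git clone -q "$SERVER/central.git" "$DEV/work"
git -C "$DEV/work" config user.email dev@example.com
git -C "$DEV/work" config user.name "Dev"
git -C "$DEV/work" commit -q --allow-empty -m "candidate change"
git -C "$DEV/work" push -q origin HEAD:refs/heads/review/change-123

# Test system: check out the review branch in isolation and run tests on it.
git clone -q --branch review/change-123 "$SERVER/central.git" "$CI/checkout"
(cd "$CI/checkout" && true)   # placeholder for the real automated test suite

# Only after the tests pass does the change reach the protected mainline.
git -C "$CI/checkout" push -q origin HEAD:refs/heads/main
```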
Benefits: Test review branches are very clean. Reviewers do not review code until it passes automated tests. All code that reaches the mainline has been tested.
Problems: Test review branches are complicated to set up and maintain.
I believe that this system of putting changes into branches for test and review obsoletes the older patch-based and "pre-commit" review systems. It is much easier to apply automated testing to real branches. There are fewer steps for developers who want to test, review and comment. And, if reviewers want to fix the contribution, it is easy for them just to put a new change in the review branch.