Software Engineering At Google
Chapter #23 Continuous Integration (2 of 3)
Some examples of feedback loops from fastest to slowest...
Edit, compile, debug loop of local development
Automated tests at pre-submit
Integration errors between two projects that are detected post-submit
Bug reports from internal users who opt-in to be early-testers
Bug reports from external users or the press or production outages
A canary deploy is when only a small number of servers are upgraded with the new production code
This allows the code to run in a real production environment but allows for fast rollbacks and limited impact if there are issues
If the canary deploy is successful all servers are then upgraded to the new code
The downside of canary deploys in large environments is that multiple versions of the code, data, and/or configuration can be running at the same time
Having multiple versions live at once, which is common when a large number of engineers work on the same code, is known as version skew
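The canary flow above can be sketched as a small loop: upgrade a slice of servers, check their health, and either roll back fast or finish the rollout. All names here (the server records, health check, and fraction) are invented for illustration:

```python
# Minimal sketch of a canary deploy loop; server records, the health
# check, and the 5% canary fraction are all hypothetical.

def is_healthy(server):
    # Hypothetical health check: in practice this would query error
    # rates or metrics for the newly upgraded server.
    return server["errors"] == 0

def deploy(servers, new_version, canary_fraction=0.05):
    """Upgrade a small slice first; roll back on failure, else upgrade the rest."""
    n_canary = max(1, int(len(servers) * canary_fraction))
    canaries, rest = servers[:n_canary], servers[n_canary:]

    for s in canaries:
        s["version"] = new_version
    if not all(is_healthy(s) for s in canaries):
        for s in canaries:               # fast rollback, limited blast radius
            s["version"] = s["previous"]
        return "rolled back"

    for s in rest:                       # canary passed: upgrade everyone
        s["version"] = new_version
    return "deployed"
```

A failed health check only ever touches the canary slice, which is why the rollback is cheap.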
Feature flags will be discussed in Chapter 24 Continuous Delivery
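As a taste of what a feature flag looks like in code, here is a hedged sketch of a percentage-rollout check; the flag store and the user-hashing scheme are invented, not Google's actual system:

```python
# Hypothetical feature-flag check: bucket each user deterministically
# so the same user always sees the same variant while a flag ramps up.
import hashlib

FLAGS = {"new_checkout_flow": 0.10}  # flag name -> rollout fraction (invented)

def flag_enabled(flag, user_id):
    fraction = FLAGS.get(flag, 0.0)  # unknown flags default to off
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # stable bucket in [0, 100)
    return bucket < fraction * 100
```

Hashing on both the flag name and the user ID keeps rollouts independent across flags.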
At Google anyone can see the output of any test run
The system also uses these logs to detect when builds started to fail or become flaky
Visibility into test history empowers engineers
Continuous Build (CB) builds the head / master / main branch and reports a green (good) or red (broken) status
In a scenario where an organization uses CB, there are two "heads": the true head (most recent commit, good or not) and the "green head" (last successful build)
At Google most teams release the known good "green head" instead of the constantly moving "true head" that may or may not work
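Finding the green head amounts to walking back from the newest commit to the most recent one whose build passed. A minimal sketch, assuming build results are available as (commit, passed) pairs in commit order:

```python
# Sketch of picking the "green head": given build results ordered
# oldest-to-newest, return the newest commit whose build passed.
# The (commit, passed) tuple shape is assumed for illustration.

def green_head(builds):
    """builds: list of (commit_id, passed) tuples, oldest first."""
    for commit, passed in reversed(builds):
        if passed:
            return commit
    return None  # no successful build yet
```

When the true head is red, the two heads diverge, and releasing the green head is what keeps releases from shipping a known-broken build.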
Version skew can be caught during the release candidate promotion process
Continuous Delivery is "the continual assembling and promotion of release candidates through the various environments"
One can do selective CD by using feature flags and/or experiments
As an RC moves from environment to environment, its binaries are not rebuilt each time
During this process one should not only test and verify the code but the configurations as well
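The promotion process above can be sketched as carrying one built artifact through a fixed sequence of environments, verifying code and configuration at each step. The environment names and the verify callback are hypothetical:

```python
# Sketch of promoting one release candidate through environments without
# rebuilding; environment names and the verify step are invented.
ENVIRONMENTS = ["dev", "staging", "canary", "prod"]

def promote(rc, verify):
    """Carry the same built artifact through each environment.

    verify(rc, env) should test both the binary and that environment's
    configuration; promotion halts at the first failure.
    """
    for env in ENVIRONMENTS:
        if not verify(rc, env):
            return f"halted at {env}"
    return "released"
```

Because the same `rc` object flows through every step, what reaches prod is byte-for-byte what was tested earlier.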
Run only the necessary tests during pre-submit, not all of them, because engineer time is very valuable
Only "fast, reliable" tests should be run during pre-submit. Scope them to the project itself (not interactions with external interfaces or projects)
Post-submit tests can be flakier and less reliable
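One way to express the pre-submit selection rule above is to tag tests and filter on those tags; the tagging scheme here is invented for illustration, not a real test runner's API:

```python
# Sketch of pre-submit test selection: keep only fast tests that do not
# touch external systems ("hermetic"); defer the rest to post-submit.
# The test records and tag names are hypothetical.
TESTS = [
    {"name": "unit_parser",    "fast": True,  "hermetic": True},
    {"name": "integration_db", "fast": False, "hermetic": False},
    {"name": "unit_cache",     "fast": True,  "hermetic": True},
]

def presubmit_tests(tests):
    return [t["name"] for t in tests if t["fast"] and t["hermetic"]]
```

Everything filtered out here still runs, just after submit, where a slow or flaky test blocks no one.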
Thank you for your time and attention.
Apply what you've learned here.
Enjoy it all.
© 2021 Josh Turgasen
All product names, logos, and trademarks are property of their respective owners