Software Engineering at Google Chapter #14 - Larger Testing (1 of 3)

  • Large tests make up a significant part of Google's risk mitigation strategy
  • One must ensure large tests provide value and are not resource sinks
  • Large tests may be slow, non-hermetic (they share resources with other tests and traffic), and non-deterministic (results can vary from run to run)
  • Unit tests create confidence for objects, modules, and functions while large tests create confidence in the overall system
  • Environment fidelity refers to how closely the test environment resembles production; it may be an exact replica of prod or just simple test VMs
  • One must balance test-environment fidelity against cost: the closer it gets to prod, the more expensive it becomes
  • Creating test data for a new service with no existing real or test data is hard
  • There's no perfect solution to that problem: hand-crafted test data inevitably carries the biases of the human who created it
  • Ensure that test doubles stay up to date. The person who writes an API should also write and maintain the test double
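
A minimal sketch of that practice, assuming a hypothetical PaymentsClient API (every name here is illustrative, not from the book). The fake lives alongside the real client so the owning team updates both together:

```python
class PaymentsClient:
    """Real client: talks to the payments backend over RPC."""

    def charge(self, account_id: str, cents: int) -> bool:
        raise NotImplementedError("requires a live backend")


class FakePaymentsClient(PaymentsClient):
    """In-memory test double, maintained by the same owners as
    PaymentsClient so its behavior tracks the real API's contract."""

    def __init__(self):
        self.charges = []

    def charge(self, account_id: str, cents: int) -> bool:
        if cents <= 0:  # mirror the real API's validation rules
            return False
        self.charges.append((account_id, cents))
        return True
```

Because the fake ships next to the client it imitates, an API change that breaks the fake fails the owning team's build rather than some downstream consumer's.
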
  • Ensure your binary's config files are in version control so you know when changes (and breaking changes) happen
  • At Google, configuration changes are the number one cause of major outages
  • Large tests help find problems that arise under load, which unit tests cannot surface
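
As a toy illustration of the class of bug only load can surface, here is an unsynchronized counter standing in for any shared resource (whether the lost-update race actually manifests depends on the interpreter and on timing):

```python
import threading

class Counter:
    """Toy shared state standing in for a real service resource."""

    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1  # non-atomic read-modify-write: a latent race

def probe_under_load(num_threads: int = 8, iterations: int = 100_000) -> None:
    counter = Counter()

    def worker():
        for _ in range(iterations):
            counter.increment()

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Any shortfall versus the expected total is a lost update --
    # invisible to a single-threaded unit test.
    print(f"expected {num_threads * iterations}, got {counter.value}")

probe_under_load()
```
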
  • Large tests also help find unanticipated behaviors, since unit tests are limited by the imagination of the engineers who write them
  • Unit tests are like theoretical physics: in a vacuum, neatly hidden from the mess of the real world
  • Large tests touch the parts that unit tests deliberately seek to avoid
  • Tests must be reliable (not flaky, with a dependable pass/fail signal), fast (so as not to interrupt workflow), and scalable
  • Good unit tests exhibit all of these properties; good large tests often can't, and mostly set them aside
  • Without clear ownership a test rots (think of all the components a large test interacts with)
  • Large tests, unlike unit tests, suffer from a lack of standardization (you can't just pull in a couple of language-level test libraries)
  • Large tests vary from team to team, and the engineer who wrote the code rarely knows all of the surrounding test infrastructure
  • Consider putting timeouts on your tests to force efficient, small tests
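
For example, with pytest and the pytest-timeout plugin (the five-second budget and the lookup function are illustrative):

```python
import pytest

def lookup(key: str) -> str:
    """Stand-in for the code under test."""
    return {"key": "value"}[key]

# Requires the pytest-timeout plugin (pip install pytest-timeout).
# A tight budget keeps the test small and fast; a hang fails in
# seconds instead of stalling the whole suite.
@pytest.mark.timeout(5)
def test_lookup_is_fast():
    assert lookup("key") == "value"
```
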
  • If you don't design your code with tests in mind you're instantly creating "legacy code"
  • Manual or no tests for short-lived scripts, some unit tests for code that lives for days, large tests for code that lives for years
  • The smallest possible test: Smaller is better
  • Break things down, e.g., write multiple integration tests instead of one large one
  • Break down "chained" tests (web -> app -> data source -> app -> data source) into discreet steps where output from one step is input for the next step
  • Larger tests usually follow the same flow (a code skeleton follows the list):
    • Obtain a system under test (the environment in which the code under test runs)
    • Seed necessary test data (load data sources and caches)
    • Perform actions using the system under test (run the tests)
    • Verify behaviors (examine test results)
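
A minimal, hermetic skeleton of those four phases (every name is a hypothetical stand-in, not an API from the book):

```python
class FakeSut:
    """Single-process system under test: an in-memory 'service'."""

    def __init__(self):
        self.users = {}
        self.orders = []

    def checkout(self, user_id: int) -> int:
        if user_id not in self.users:
            return 404
        self.orders.append(user_id)
        return 200

def test_checkout_end_to_end():
    sut = FakeSut()                    # 1. obtain the system under test
    sut.users[1] = {"name": "alice"}   # 2. seed necessary test data
    status = sut.checkout(user_id=1)   # 3. perform actions using the SUT
    assert status == 200               # 4. verify behaviors
    assert sut.orders == [1]
```
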
  • The SUT (System Under Test) is the software being tested together with the environment it runs in
  • The scope of the SUT can vary based on the test: unit tests only need libraries, whereas large tests may talk to multiple third-party systems
  • A SUT with high hermeticity (a sealed, isolated, known environment) will have the fewest issues caused by infrastructure flakiness and by outside concurrency/load
  • A SUT can have varying levels of fidelity (closeness to production)
  • Some types of SUTs (a single-process sketch follows the list):
    • Single-process (single thread, everything in a single package)
    • Single-machine (multiple processes on the same machine)
    • Multimachine (multiple processes on separate machines)
    • Hybrids (a mix of many)
    • Shared Environment (shared environment for all tests, costs less)
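
As a sketch of the most hermetic end of that spectrum, a single-process SUT can be as small as a stdlib HTTP server started inside the test process (illustrative only):

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def test_single_process_sut():
    # Port 0 asks the OS for a free port, so parallel runs don't collide.
    server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
            assert resp.status == 200
            assert resp.read() == b"ok"
    finally:
        server.shutdown()
```

Everything runs in one process and one known environment, which is exactly what buys the flakiness resistance described above.
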
Thank you for your time and attention.
Apply what you've learned here.
Enjoy it all.