Software Engineering at Google Chapter #14 - Larger Testing (2 of 3)

  • Create hermetic SUTs for your developers (cloud or local)
  • Be prepared for end users to discover test data (or hidden features) that engineers have made reachable via the public API
  • You can reduce the size of your SUT at its problem boundaries by...
    • Avoiding UI tests (async behavior is hard to test, and the tests are brittle because they break on UI changes, not implementation changes)
    • Avoiding 3rd party dependencies
    • Replacing external DBs with small in-memory ones (an in-memory fake is sketched after this list)
    • Consider the cost tradeoffs for fidelity and reliability
      • Record and replay proxies are HTTP / HTTPS proxies that record traffic and can "play it back" for testing purposes (a sketch of the idea appears after this list)
      • Consumer-driven contracts are tests that define the contract for both the API server and its client: the client publishes the inputs it will send and the outputs it expects, and the test runs those inputs through the server and compares the results to the known outputs (a hand-rolled sketch appears after this list)
      • For more info see Pact Contract Testing and Spring Cloud Contract
      • Google layers its record and replay tests: a single large test runs and records the traffic, and smaller tests then replay that captured traffic, which makes re-runs in replay mode fast
      • With record and replay as part of your testing strategy, you can avoid repeatedly hitting real backends by reusing the captured data
      • Seeded data is data that comes packaged with and is part of the test
      • Test traffic is what's generated as a result of the tests
      • Be sure to seed domain data, such as environment configuration (DB connection strings)
      • Make the seed data realistic (large tests at a social network require a realistically large social graph)
      • Seed data via APIs if possible; don't bypass them and seed the data sources directly
      • Seed data for tests can be handcrafted, copied (usually from prod, then scrubbed / anonymized), or sampled (a subset of all the data, reduced because of its volume / size)
      • Assertions are checks about the behavior of the system, e.g. assertThat(response).contains("foodata")
      • A/B testing in a pre-production context means sending the same data to two instances (old and new) and comparing the output, which should be the same (a diff sketch appears after this list)
      • When creating tests, balance how much risk a test mitigates, how much it costs to run, and the toil required to maintain it
      • Test Engineers at Google often outline the testing strategy for new software projects
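
To make the in-memory DB bullet concrete, here is a minimal sketch in Java; UserStore and InMemoryUserStore are hypothetical names, not from the book. The SUT depends on the interface, so large tests can swap in the fake while production wires up the real database client.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // Hypothetical storage interface the production code depends on.
    interface UserStore {
      void save(String id, String name);
      Optional<String> findName(String id);
    }

    // In-memory fake for large tests: same behavior contract, no external DB process.
    final class InMemoryUserStore implements UserStore {
      private final Map<String, String> rows = new HashMap<>();

      @Override
      public void save(String id, String name) {
        rows.put(id, name);
      }

      @Override
      public Optional<String> findName(String id) {
        return Optional.ofNullable(rows.get(id));
      }
    }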
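
The record and replay idea, sketched under stated assumptions (the request key, the backend function, and in-memory storage of recordings are all simplifications; a real setup would persist recordings between runs):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    // Toy record/replay layer keyed by a request string such as "GET /users/42".
    final class RecordReplay {
      enum Mode { RECORD, REPLAY }

      private final Mode mode;
      private final Function<String, String> backend;          // real call, used only when recording
      private final Map<String, String> recordings = new HashMap<>();

      RecordReplay(Mode mode, Function<String, String> backend) {
        this.mode = mode;
        this.backend = backend;
      }

      String handle(String request) {
        if (mode == Mode.RECORD) {
          String response = backend.apply(request);   // hit the real dependency once
          recordings.put(request, response);          // save the traffic for later replays
          return response;
        }
        String canned = recordings.get(request);      // REPLAY: never touches the backend
        if (canned == null) {
          throw new IllegalStateException("No recording for: " + request);
        }
        return canned;
      }
    }

This mirrors the layering described above: one large test runs in record mode, and the smaller tests re-run quickly in replay mode against the captured traffic.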
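
A hand-rolled sketch of a consumer-driven contract check; in practice tools like Pact or Spring Cloud Contract generate and verify these, so the Contract record, the contract list, and the provider function here are illustrative assumptions:

    import java.util.List;
    import java.util.function.Function;

    // The consumer (client team) publishes the requests it will send and the responses it expects.
    record Contract(String request, String expectedResponse) {}

    final class ContractVerifier {
      static final List<Contract> CONTRACTS = List.of(
          new Contract("GET /users/42", "{\"name\":\"Ada\"}"),
          new Contract("GET /users/missing", "404"));

      // Provider side: run each contract input through the server and compare to the known output.
      static void verify(Function<String, String> provider) {
        for (Contract c : CONTRACTS) {
          String actual = provider.apply(c.request());
          if (!actual.equals(c.expectedResponse())) {
            throw new AssertionError("Contract broken for " + c.request()
                + ": expected " + c.expectedResponse() + " but got " + actual);
          }
        }
      }

      public static void main(String[] args) {
        // Stand-in provider; a real test would call the deployed service over HTTP.
        verify(request -> request.equals("GET /users/42") ? "{\"name\":\"Ada\"}" : "404");
      }
    }

The same contract definitions can drive a stub server for the client's own tests, so both sides are verified against one shared source of truth.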
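
And a minimal sketch of pre-production A/B diffing: replay the same requests against the released and candidate instances and report any response that differs (the two Function parameters stand in for hypothetical HTTP clients):

    import java.util.List;
    import java.util.function.Function;

    final class AbDiff {
      // Sends each request to both instances; identical output means no observable regression.
      static void diff(List<String> requests,
                       Function<String, String> oldInstance,
                       Function<String, String> newInstance) {
        for (String request : requests) {
          String before = oldInstance.apply(request);
          String after = newInstance.apply(request);
          if (!before.equals(after)) {
            System.out.println("DIFF for " + request + ": " + before + " vs " + after);
          }
        }
      }

      public static void main(String[] args) {
        diff(List.of("GET /users/42"),
            request -> "{\"name\":\"Ada\"}",    // stand-in for the released instance
            request -> "{\"name\":\"Ada\"}");   // stand-in for the candidate instance
      }
    }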
  • Functional testing of binaries:
    • Essentially unit tests in a self-contained package (tests + code) that runs on a single machine
    • Can be multiple processes
    • Can be used for microservices testing
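
A sketch of a functional test of a binary, assuming a hypothetical ./my_server binary with a /healthz endpoint: the test starts the packaged server as a sibling process on the same machine, exercises it over HTTP, and tears it down.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    final class ServerBinaryTest {
      public static void main(String[] args) throws Exception {
        // Launch the packaged server binary as a separate process (placeholder path and port).
        Process server = new ProcessBuilder("./my_server", "--port=8080")
            .inheritIO()
            .start();
        try {
          Thread.sleep(2000);   // crude readiness wait; a real test would poll the health endpoint
          HttpResponse<String> response = HttpClient.newHttpClient().send(
              HttpRequest.newBuilder(URI.create("http://localhost:8080/healthz")).build(),
              HttpResponse.BodyHandlers.ofString());
          if (response.statusCode() != 200) {
            throw new AssertionError("Server not healthy: " + response.statusCode());
          }
        } finally {
          server.destroy();   // always tear the process down, even on failure
        }
      }
    }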
  • For browser and device testing, the UI is the user's interaction point (not an API), so it must be tested too
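
For browser testing, a minimal Selenium WebDriver sketch; the staging URL, element ids, and page title are hypothetical, and the test drives the page the way a user would rather than calling an API:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    final class LoginPageTest {
      public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();   // assumes a local chromedriver is installed
        try {
          driver.get("https://staging.example.com/login");            // placeholder URL
          driver.findElement(By.id("username")).sendKeys("test-user");
          driver.findElement(By.id("password")).sendKeys("test-password");
          driver.findElement(By.id("submit")).click();
          // Assert on what the user actually sees, not on internal state.
          if (!driver.getTitle().contains("Dashboard")) {
            throw new AssertionError("Login did not land on the dashboard");
          }
        } finally {
          driver.quit();
        }
      }
    }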
  • Performance, load, and stress testing:
    • Diff before and after metrics
    • Use handcrafted data or data multiplexed from production
    • Important for verifying that a new release can handle spikes in traffic
    • Make tests as close to prod as possible as certain bugs only show themselves at scale or under load
    • Be aware of which system(s) the binaries run on as CPU speeds, affinity, and noisy neighbors may impact results
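
A rough sketch of diffing before-and-after metrics: run the same workload against the released and candidate code paths, collect latency samples, and fail if a high percentile regresses. The workloads, iteration count, and 10% threshold are placeholder assumptions, and the numbers only mean something when both runs happen on the same prod-like machines.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    final class LatencyDiff {
      // Runs the workload repeatedly and returns the p99 latency in nanoseconds.
      static long p99Nanos(Runnable workload, int iterations) {
        List<Long> samples = new ArrayList<>();
        for (int i = 0; i < iterations; i++) {
          long start = System.nanoTime();
          workload.run();
          samples.add(System.nanoTime() - start);
        }
        Collections.sort(samples);
        return samples.get((int) (samples.size() * 0.99));
      }

      public static void main(String[] args) {
        // Placeholders for the released and candidate code paths under test.
        Runnable before = () -> { /* call the released build */ };
        Runnable after = () -> { /* call the candidate build */ };

        long baseline = p99Nanos(before, 10_000);
        long candidate = p99Nanos(after, 10_000);
        if (candidate > baseline * 1.10) {   // fail on a >10% p99 regression
          throw new AssertionError("p99 regressed: " + baseline + "ns -> " + candidate + "ns");
        }
      }
    }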
  • Deployment configuration testing is smoke testing for config files (httpd.conf, yaml / json, etc)
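
A small smoke-test sketch for deployment configuration: load the exact config file that will ship and assert that required keys are present before deploying. The file name and key names are made up for illustration.

    import java.io.FileInputStream;
    import java.util.List;
    import java.util.Properties;

    final class ConfigSmokeTest {
      public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        try (FileInputStream in = new FileInputStream("app.properties")) {   // placeholder file
          config.load(in);
        }
        // Keys the service cannot start without (hypothetical list).
        for (String key : List.of("db.connection_string", "listen.port", "log.dir")) {
          String value = config.getProperty(key);
          if (value == null || value.isBlank()) {
            throw new AssertionError("Missing required config key: " + key);
          }
        }
        System.out.println("Config smoke test passed");
      }
    }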
  • Exploratory testing:
    • Manual testing that finds weirdness by repeating old user scenarios as well as creating new ones
    • Looks for new paths through the system in search of security vulnerabilities and unexpected behavior
    • "manual fuzzing", hopefully people take different paths in order to exercise more code
    • Like other manual testing it does not scale
    • Create a "bug bash" meeting where people who use the product do nothing but try to break it for an hour