Testing

Testing is the practice of systematically checking if code functions as intended.

Katie Hockman blog.golang.org

Go's fuzzing effort now in beta

We first talked fuzzing with Katie Hockman back in August of 2020. Fast-forward 10 months and native fuzzing in Go is ready for beta testing! Here’s Katie explaining fuzzing, for the uninitiated:

Fuzzing is a type of automated testing which continuously manipulates inputs to a program to find issues such as panics or bugs. These semi-random data mutations can discover new code coverage that existing unit tests may miss, and uncover edge case bugs which would otherwise go unnoticed. Since fuzzing can reach these edge cases, fuzz testing is particularly valuable for finding security exploits and vulnerabilities.
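
To make the mechanics concrete, here's a toy mutation-based fuzz loop in Python. It's purely illustrative (Go's native fuzzer is coverage-guided and wired into go test; none of these names come from the Go tooling):

import random

# Toy fuzzer: repeatedly mutate a seed input and feed it to the
# target, reporting the first input that makes the target crash.
def fuzz(target, seeds, rounds=10_000):
    corpus = [bytes(s) for s in seeds]
    for _ in range(rounds):
        data = bytearray(random.choice(corpus))
        if data:
            i = random.randrange(len(data))
            data[i] ^= random.randrange(1, 256)  # flip bits in one byte
        try:
            target(bytes(data))
        except Exception as exc:  # a crash, akin to a panic in Go
            return bytes(data), exc
    return None  # nothing found within the budget

# e.g. fuzz(lambda b: b.decode("utf-8"), [b"hello"]) quickly turns up
# byte sequences that are not valid UTF-8.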

It looks like the feature won’t be landing in Go 1.17, but they’re planning on it sometime after that. Either way, you can use fuzzing today on its development branch.

Jonas Lundberg iamjonas.me

The test-plan

“The tests are timing out again!” someone yells. “Alright, I’ll bump them,” you instinctively respond. Then you pause and feel uneasy. Is there another way?

In this blog post, I share my growing disillusionment with code coverage and unit testing. I then detail the method I’ve been using for the better part of seven years, and how it still allows me to preach at length that being correct is the single most important thing for a developer.

Nikita Sobolev sobolevn.me

Make tests a part of your app

Here’s a pretty useful idea for library authors and their users: there are better ways to test your code!

I give three examples of how user projects can be self-tested without the end user writing any real test cases. One is a hypothetical example built around Django; the other two are real and working, featuring deal and dry-python/returns. A brief example with deal:

import deal

@deal.pre(lambda a, b: a >= 0 and b >= 0)
@deal.raises(ZeroDivisionError)  # this function can raise if `b=0`, it is ok
def div(a: int, b: int) -> float:
    if a > 50:  # planted bug; in real life this would be a flaw in our logic
        raise Exception('Oh no! Bug happened!')
    return a / b

This bug can be automatically found by writing a single line of test code: test_div = deal.cases(div). As easy as it gets! (There's a sketch of what that one line does below the list.) From this article you will learn:

  • How to take property-based testing to the next level
  • How a simple decorator @deal.pre(lambda a, b: a >= 0 and b >= 0) can help you generate hundreds of test cases with almost no effort
  • What “Monad laws as values” is all about and how dry-python/returns helps its users to build their own monads
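
The trick behind that one line of test code is property-based testing: generate inputs, discard those that violate the declared precondition, and fail on any exception the contract doesn't allow. Here's a hand-rolled approximation using the hypothesis library (my sketch of the idea, not deal's actual implementation), assuming the div function from the snippet above:

from hypothesis import given, strategies as st

# Rough approximation of what test_div = deal.cases(div) checks.
@given(a=st.integers(), b=st.integers())
def test_div(a, b):
    if not (a >= 0 and b >= 0):  # precondition from @deal.pre
        return  # the contract rejects this input; nothing to verify
    try:
        div(a, b)
    except ZeroDivisionError:
        pass  # explicitly allowed by @deal.raises
    # any other exception (like the planted bug) fails the test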

I really like this idea! And I would appreciate your feedback on it.

Devon C. Estes devonestes.com

Three classes of problems found by mutation testing

Devon C. Estes:

It’s fairly common for folks who haven’t used mutation testing before to not immediately see the value in the practice. Mutation testing is, after all, still a fairly niche and under-used tool in the average software development team’s toolbox. So today I’m going to show a few specific types of very common problems that mutation testing is great at finding for us, and that are hard or impossible to find with other methods.

He goes on to detail the “multiple execution paths on a single line” problem, the “untested side effect” problem, and the “missing pin” problem.
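
As a quick illustration of that first class (my own Python example, not one from Devon's post): a line can show as fully covered while one of its execution paths goes unasserted, and a mutant exposes the gap.

# 100% line coverage, yet one path on the return line is untested.
def shipping_cost(weight: float, express: bool) -> float:
    return weight * (3.0 if express else 1.5)

def test_shipping_cost():
    assert shipping_cost(2.0, express=True) == 6.0
    # A mutation tool that swaps 1.5 for another constant produces a
    # surviving mutant: the express=False path is never asserted.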

Chrome github.com

Headless Recorder

Headless Recorder is a Chrome extension that records your browser interactions and generates a Puppeteer or Playwright script. Install it from the Chrome Web Store. Don’t forget to check out our sister project theheadless.dev, the open source knowledge base for Puppeteer and Playwright.

You may have heard of this when it was called Puppeteer Recorder, but its recent addition of Playwright support warranted a rename.

Node.js github.com

An extremely fast and lightweight test runner for Node and the browser

uvu has minimal dependencies and supports both async/await style tests and ES modules, but it’s not immediately clear to me why it benchmarks so well against the likes of Jest and Mocha.

~> "jest"  took  1,630ms  (861  ms)
~> "mocha" took    215ms  (  3  ms)
~> "tape"  took    132ms  (  ???  )
~> "uvu"   took     74ms  (  1.4ms)

The benchmark suites are pretty basic, so it’d be cool to see a “production” grade library or application port their test suite to uvu for comparison.

Lawrence Hecht The New Stack

Few testers have programming skills

Some interesting analysis by Lawrence Hecht for The New Stack:

The 2020 version of JetBrains’ State of the Developer Ecosystem does quantify the extent to which this specialty has disappeared. One finding is that 43% of teams or projects have less than one tester or QA engineer per 10 developers. This is not necessarily a problem if most testing is automated, but that is only true among 38% of those surveyed.

38% is far too low a percentage of folks doing automated testing.

Clojure github.com

A testing library that turns your README into executable tests

This is a very cool idea coming out of the Clojure community. I dig it because the examples in your README are guaranteed to never become stale as your project evolves.

It works by parsing your README and looking for executable code samples with expected outputs. For each one it finds, it generates a test ensuring that executing the code produces the output.
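
If you want a feel for the mechanism without reading Clojure, Python's standard-library doctest does the same thing for examples embedded in docstrings:

def add(a: int, b: int) -> int:
    """Add two integers.

    The example below is parsed and run as a test; if the actual
    output doesn't match 5, the test fails.

    >>> add(2, 3)
    5
    """
    return a + b

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # collects and runs the examples above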

There are, as you might expect, caveats.

Bash github.com

Dead simple testing framework for bash with coverage reporting

critic.sh exposes high-level testing functions consistent with other frameworks, plus a set of built-in assertions. One of my most important goals was to be able to pass any shell expression to the _test and _assert methods, so that one is not limited to the built-ins.

The coverage reporting is currently rudimentary, but it does indicate which lines haven’t been covered. It works by running the tests with extended debugging, redirecting the trace output to a log file, and then parsing it to determine which functions/lines have been executed. It can definitely be improved!
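
The parsing step is easy to picture. Here's a hedged Python sketch, assuming the trace was produced with something like PS4='+ ${BASH_SOURCE}:${LINENO}: ' (critic.sh's actual trace format may differ):

import re
from collections import defaultdict

# Matches trace lines like "+ demo.sh:12: echo hi" (additional '+'
# signs appear at deeper nesting levels).
TRACE = re.compile(r"^\++ (?P<file>[^:]+):(?P<line>\d+):")

def covered_lines(trace_path):
    hits = defaultdict(set)
    with open(trace_path) as fh:
        for raw in fh:
            m = TRACE.match(raw)
            if m:
                hits[m.group("file")].add(int(m.group("line")))
    return hits  # file -> set of executed line numbers

# Source lines missing from these sets are reported as uncovered.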

See a demo of critic.sh in action on asciinema 📽️

Practical AI Practical AI #74

Testing ML systems

Production ML systems include more than just the model. In these complicated systems, how do you ensure quality over time, especially when you are constantly updating your infrastructure, data and models? Tania Allard joins us to discuss the ins and outs of testing ML systems. Among other things, she presents a simple formula that helps you score your progress towards a robust system and identify problem areas.
