
# mdb_v8 Developer's Notes

## Contribution guidelines

Contributions are welcome, but please help us review your changes (and keep code quality high) by following these guidelines. If you have any questions, feel free to ask. Don't let these guidelines be a barrier to contributing!

* **Code review**: Changes should be submitted as GitHub pull requests.

* **Use of GitHub issues**: There should be at least one GitHub issue filed for your change (even trivial changes). This issue should explain the change, the impact on users, how you've tested it, and anything else reviewers might need to know about the change. (Do not put this text in the commit message. See below.) If you're not sure exactly what change you want to make, feel free to create an issue to discuss it.

* **Commit messages**: The commit message should consist of one line per associated issue, each line consisting of the issue number and issue synopsis. See previous commit messages for examples. There should be no other text in the commit message. Other information related to the change should generally go in the GitHub issue comments.

* **Completeness**: Changes should include relevant updates to the documentation, including `CHANGES.md`. New commands, walkers, and non-private options must have associated documentation, both in the dmod (so that `::help DCMD` works) and in the usage guide.

* **Testing**: All changes should be `make prepush`-clean, but additional testing is probably necessary for most changes. See below.

## Testing

The appropriate level of testing depends on the scope of the change. Any non-trivial change should be tested on:

* core files from each of the latest several major releases of Node (e.g., Node v4, Node v0.12, and Node v0.10)
* core files from both 32-bit and 64-bit programs
* core files generated by abort(3c) and core files generated by gcore(1M)

Depending on the change, you may want to test it manually on an existing collection of representative core files. If you're not sure about test requirements, please ask.

### Automated testing

The automated tests are not a comprehensive regression test suite. Think of them more as a smoke test. There are several ways to run them:

* `make test` (which is run by `make prepush`)
* the `catest` tool, which lets you run individual tests separately
* any of the individual tests by hand (they're standalone programs), which is likely easier when debugging the tests themselves

All of these approaches use whichever Node is on your PATH to run a Node program and generate core files from it. Your local build of mdb_v8 is then used to inspect those core files.
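For example (the test names and paths below are hypothetical; use the actual files under the repo's test directory):

```
$ make test            # whole smoke-test suite (also run by "make prepush")
$ catest test/...      # run individual tests via catest (hypothetical path)
$ ./test/...           # or run a single standalone test program by hand
```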

### Automated testing across different Node versions and architectures

For coverage across 32-bit and 64-bit builds and multiple Node versions, you can use the `runtests_node` tool:

1. First, run `runtests_node setup SOME_DIRECTORY`, which downloads the supported Node versions into "SOME_DIRECTORY".
2. Then, run `runtests_node run SOME_DIRECTORY`, which executes the full test suite on each of the versions in that directory.
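For example (the directory name is arbitrary):

```
$ runtests_node setup /var/tmp/mdbv8-nodes   # one-time: download the supported Node versions
$ runtests_node run /var/tmp/mdbv8-nodes     # run the full suite against each of them
```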

Because we've made it pretty straightforward, we expect all changes to pass the test suite on all of the versions and architectures used by runtests_node.

### Memory leak testing

mdb runs with libumem, which means it's easy to check for memory leaks in C code. The process basically looks like this:

1. Open up MDB on a core file and load mdb_v8.
2. Exercise as much mdb_v8 functionality as you can (or whatever functionality you're trying to test for leaks).
3. On platforms before illumos issue 6338 was fixed (which is all platforms before 2015-10-26), run `::unload libumem.so`. This is the easiest way to work around that issue.
4. Use gcore to save a core file of your MDB process.
5. Open up that core file in MDB. (Yes, this gets confusing: you're using MDB to look at a core file of MDB looking at a core file of a Node program.)
6. Run `::findleaks -v` to look for leaks.
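A session might look roughly like this (the core file names, dmod path, pid, and dcmds exercised are all illustrative):

```
$ mdb core.node.4321                 # (1) open the Node core file
> ::load ./build/amd64/mdb_v8.so     #     load your local mdb_v8 build
> ::findjsobjects                    # (2) exercise mdb_v8 functionality
> ::jsstack
> ::unload libumem.so                # (3) only needed on pre-fix platforms
```

Then, from another shell, while that MDB process is still running:

```
$ gcore 5678                         # (4) 5678 is the MDB process's pid
$ mdb core.5678                      # (5) MDB on a core file of MDB
> ::findleaks -v                     # (6) report leaks
```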

There's a lot more information about using libumem to debug memory leaks elsewhere on the web.

### Stress testing

There's a `dumpjsobjects` tool that takes a core file and an mdb_v8 binary and uses `::findjsobjects` and `::jsprint` to enumerate all of the objects in a deterministic order and print them out. With a representative core file, this is a decent coverage test, since it will likely exercise the paths for interpreting most types of supported objects. At the very least, this should not crash.
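An invocation might look like this (the argument order is a guess based on the description above; check the tool's usage message, and the file names are illustrative):

```
$ dumpjsobjects core.node.4321 ./build/amd64/mdb_v8.so > objects.out
```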

### Comparing output across mdb_v8 changes

For some kinds of changes, it's worthwhile to spend time comparing output from before the change with output after the change to make sure you haven't introduced any new issues. The `mdbv8diff` tool can be used to compare the `dumpjsobjects` output from two different versions of mdb_v8.

Why not just use diff(1)? The challenge is that many changes to mdb_v8 alter the output format slightly or fix small bugs that affect a large number of objects. diff can easily produce hundreds of thousands of lines of output, but there may be only a handful of distinct kinds of changes, and they may all be easy to identify programmatically (e.g., the wording of a particular error message changed). For this reason, `mdbv8diff` provides a small "ignore" module interface, where you define a function that takes two lines of output that look different and decides whether the difference corresponds exactly to an expected change. Sometimes a diff represents something that's hard to recognize programmatically but that you've manually confirmed is correct; you can also add specific values as special cases to ignore. (These obviously depend on the specific core file you're debugging.)

The intended workflow is that you run `mdbv8diff` on output from before and after your change. It likely finds tons of diffs. You create a small "ignore" module, and as you go through the diffs one by one: if a diff represents a regression, you fix your code; if it represents an expected change, you teach your "ignore" module how to recognize it. You keep iterating until `mdbv8diff` produces no more output.
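At its core, an "ignore" check is just a predicate over the two differing lines. The sketch below is hypothetical: the real modules live in this repo and their exact interface may differ, and the particular message rewording shown here is invented purely for illustration.

```javascript
/*
 * Hypothetical "ignore" predicate: given corresponding old and new output
 * lines that differ, return true iff the difference corresponds exactly
 * to an expected change.  The (invented) expected change here is that the
 * placeholder "<unknown type>" was reworded to "<unknown V8 type>".
 */
function ignoreDiff(oldline, newline) {
	/* Apply the expected rewording to the old line ... */
	var rewritten = oldline.replace(/<unknown type>/g,
	    '<unknown V8 type>');
	/* ... and accept the diff only if that fully explains it. */
	return (rewritten === newline);
}
```

Anything the predicate doesn't recognize stays in the output for manual review, which is what drives the iteration described above.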

This is obviously pretty crude. Think of this as a tool to assist the otherwise manual effort of exhaustively comparing lots of output, not an efficient regression tester.

The "ignore" modules for past changes are committed to the repo for reference, though they likely wouldn't be reused except to reproduce the original testing work for the change.