"Fossies" - the Fresh Open Source Software Archive

Member "pytorch-1.8.2/benchmarks/functional_autograd_benchmark/README.md" (23 Jul 2021, 2105 Bytes) of package /linux/misc/pytorch-1.8.2.tar.gz:



# Benchmarking tool for the autograd API

This folder contains a set of self-contained scripts that benchmark autograd with different common models. It is designed to run the benchmark before and after your change and to generate a table you can share on the PR.

To do so, run `functional_autograd_benchmark.py` before your change (writing the output to `before.txt`) and again after your change (writing to `after.txt`). You can then use `compare.py` to get a markdown table comparing the two runs.

In general, the default arguments of `functional_autograd_benchmark.py` should be used. You can override them to force a given device or to run even the (very) slow settings.

## Sample usage

```bash
# Make sure you compile pytorch in release mode and with the same flags before/after
export DEBUG=0
# When running on CPU, it might be required to limit the number of cores to avoid oversubscription
export OMP_NUM_THREADS=10

# Compile pytorch with the base revision
git checkout master
python setup.py develop

# Run the benchmark for the base
# This will use the GPU if available.
pushd benchmarks/functional_autograd_benchmark
python functional_autograd_benchmark.py --output before.txt

# Compile pytorch with your change
popd
git checkout your_feature_branch
python setup.py develop

# Run the benchmark for the new version
pushd benchmarks/functional_autograd_benchmark
python functional_autograd_benchmark.py --output after.txt

# Get the markdown table that you can paste in your github PR
python compare.py

popd
```
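To illustrate the comparison step, here is a minimal sketch of how a before/after comparison can be turned into a markdown table. It assumes a simplified results format of one `name time_in_seconds` pair per line; the actual format written by `functional_autograd_benchmark.py` and parsed by `compare.py` may differ.

```python
# Hypothetical sketch of a before/after comparison (NOT the actual
# compare.py implementation). Assumes each results file contains
# "task_name mean_time_seconds" on every line.

def parse_results(text: str) -> dict:
    """Parse 'name time' lines into a dict mapping name -> seconds."""
    results = {}
    for line in text.strip().splitlines():
        name, value = line.split()
        results[name] = float(value)
    return results

def markdown_table(before: dict, after: dict) -> str:
    """Build a markdown comparison table from two timing dicts."""
    rows = [
        "| model | before (s) | after (s) | speedup |",
        "| --- | --- | --- | --- |",
    ]
    for name in sorted(before):
        b, a = before[name], after[name]
        rows.append(f"| {name} | {b:.4f} | {a:.4f} | {b / a:.2f}x |")
    return "\n".join(rows)

# Example with made-up timings:
before = parse_results("resnet18 1.20\ndetr 3.40")
after = parse_results("resnet18 0.60\ndetr 3.40")
print(markdown_table(before, after))
```

A table in this shape renders directly in a GitHub PR description, which is the point of the `compare.py` step above.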

## Files in this folder: