pytorch  1.8.2
About: PyTorch provides Tensor computation (like NumPy) with strong GPU acceleration and Deep Neural Networks (in Python) built on a tape-based autograd system. LTS (Long Term Support) release.

setup.py
# Welcome to the PyTorch setup.py.
#
# Environment variables you are probably interested in:
#
#   DEBUG
#     build with -O0 and -g (debug symbols)
#
#   REL_WITH_DEB_INFO
#     build with optimizations and -g (debug symbols)
#
#   MAX_JOBS
#     maximum number of compile jobs we should use to compile your code
#
#   USE_CUDA=0
#     disables CUDA build
#
#   CFLAGS
#     flags to apply to both C and C++ files to be compiled (a quirk of setup.py
#     which we have faithfully adhered to in our build system is that CFLAGS
#     also applies to C++ files (unless CXXFLAGS is set), in contrast to the
#     default behavior of autotools and cmake build systems.)
#
#   CC
#     the C/C++ compiler to use (NB: the CXX flag has no effect for distutils
#     compiles, because distutils always uses CC to compile, even for C++
#     files.)
#
# Environment variables for feature toggles:
#
#   USE_CUDNN=0
#     disables the cuDNN build
#
#   USE_FBGEMM=0
#     disables the FBGEMM build
#
#   USE_KINETO=0
#     disables usage of the libkineto library for profiling
#
#   USE_NUMPY=0
#     disables the NumPy build
#
#   BUILD_TEST=0
#     disables the test build
#
#   USE_MKLDNN=0
#     disables use of MKLDNN
#
#   MKLDNN_CPU_RUNTIME
#     MKL-DNN threading mode: TBB or OMP (default)
#
#   USE_NNPACK=0
#     disables NNPACK build
#
#   USE_QNNPACK=0
#     disables QNNPACK build (quantized 8-bit operators)
#
#   USE_DISTRIBUTED=0
#     disables distributed (c10d, gloo, mpi, etc.) build
#
#   USE_SYSTEM_NCCL=0
#     disables use of system-wide NCCL (we will use our submoduled
#     copy in third_party/nccl)
#
#   BUILD_CAFFE2_OPS=0
#     disables the Caffe2 operators build
#
#   BUILD_CAFFE2=0
#     disables the Caffe2 build
#
#   USE_IBVERBS
#     toggles features related to distributed support
#
#   USE_OPENCV
#     enables use of OpenCV for additional operators
#
#   USE_OPENMP=0
#     disables use of OpenMP for parallelization
#
#   USE_FFMPEG
#     enables use of FFmpeg for additional operators
#
#   USE_LEVELDB
#     enables use of LevelDB for storage
#
#   USE_LMDB
#     enables use of LMDB for storage
#
#   BUILD_BINARY
#     enables the additional binaries/ build
#
#   PYTORCH_BUILD_VERSION
#   PYTORCH_BUILD_NUMBER
#     specify the version of PyTorch, rather than the hard-coded version
#     in this file; used when we're building binaries for distribution
#
#   TORCH_CUDA_ARCH_LIST
#     specify which CUDA architectures to build for,
#     e.g. `TORCH_CUDA_ARCH_LIST="6.0;7.0"`.
#     These are not CUDA versions; instead, they specify what
#     classes of NVIDIA hardware we should generate PTX for.
#
#   ONNX_NAMESPACE
#     specify a namespace for ONNX built here rather than the hard-coded
#     one in this file; needed to build with other frameworks that share ONNX.
#
#   BLAS
#     BLAS to be used by Caffe2. Can be MKL, Eigen, ATLAS, or OpenBLAS. If set,
#     then the build will fail if the requested BLAS is not found; otherwise
#     the BLAS will be chosen based on what is found on your system.
#
#   MKL_THREADING
#     MKL threading mode: SEQ, TBB or OMP (default)
#
#   USE_REDIS
#     whether to use Redis for distributed workflows (Linux only)
#
#   USE_ZSTD
#     enables use of ZSTD, if the libraries are found
#
# Environment variables we respect (these environment variables are
# conventional and are often understood/set by other software):
#
#   CUDA_HOME (Linux/OS X)
#   CUDA_PATH (Windows)
#     specify where CUDA is installed; usually /usr/local/cuda or
#     /usr/local/cuda-x.y
#
#   CUDAHOSTCXX
#     specify a different compiler than the system one to use as the CUDA
#     host compiler for nvcc
#
#   CUDA_NVCC_EXECUTABLE
#     specify an nvcc to use. This is used in our CI to point to a cached nvcc.
#
#   CUDNN_LIB_DIR
#   CUDNN_INCLUDE_DIR
#   CUDNN_LIBRARY
#     specify where cuDNN is installed
#
#   MIOPEN_LIB_DIR
#   MIOPEN_INCLUDE_DIR
#   MIOPEN_LIBRARY
#     specify where MIOpen is installed
#
#   NCCL_ROOT
#   NCCL_LIB_DIR
#   NCCL_INCLUDE_DIR
#     specify where NCCL is installed
#
#   NVTOOLSEXT_PATH (Windows only)
#     specify where nvtoolsext is installed
#
#   LIBRARY_PATH
#   LD_LIBRARY_PATH
#     we will search for libraries in these paths
#
#   ATEN_THREADING
#     ATen parallel backend to use for intra- and inter-op parallelism
#     possible values:
#       OMP - use OpenMP for intra-op and native backend for inter-op tasks
#       NATIVE - use native thread pool for both intra- and inter-op tasks
#       TBB - use TBB for intra-op and native thread pool for inter-op parallelism
#
#   USE_TBB
#     enable TBB support
#
#   USE_SYSTEM_LIBS (work in progress)
#     use system-provided libraries to satisfy the build dependencies.
#     When turned on, the following cmake variables will be toggled as well:
#       USE_SYSTEM_CPUINFO=ON USE_SYSTEM_SLEEF=ON BUILD_CUSTOM_PROTOBUF=OFF
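Boolean toggles such as `USE_CUDA=0` above are read from the environment before CMake runs. A minimal sketch of how such a "negative flag" check can work (a hypothetical helper written for illustration; the actual logic lives in `tools/setup_helpers/env.py` and may differ in detail):

```python
import os

def check_negative_env_flag(name, default=''):
    # A toggle such as USE_CUDA=0 disables a feature; any value in
    # {'off', '0', 'no', 'false'} (case-insensitive) counts as "disabled".
    return os.getenv(name, default).upper() in ['OFF', '0', 'NO', 'FALSE']

os.environ['USE_CUDA'] = '0'
print(check_negative_env_flag('USE_CUDA'))   # disabled -> True
os.environ['USE_CUDA'] = '1'
print(check_negative_env_flag('USE_CUDA'))   # enabled -> False
```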

# This future is needed to print Python2 EOL message
from __future__ import print_function
import sys
if sys.version_info < (3,):
    print("Python 2 has reached end-of-life and is no longer supported by PyTorch.")
    sys.exit(-1)
if sys.platform == 'win32' and sys.maxsize.bit_length() == 31:
    print("32-bit Windows Python runtime is not supported. Please switch to 64-bit Python.")
    sys.exit(-1)

import platform
python_min_version = (3, 6, 2)
python_min_version_str = '.'.join(map(str, python_min_version))
if sys.version_info < python_min_version:
    print("You are using Python {}. Python >={} is required.".format(platform.python_version(),
                                                                     python_min_version_str))
    sys.exit(-1)

from setuptools import setup, Extension, find_packages
from collections import defaultdict
from distutils import core
from distutils.core import Distribution
from distutils.errors import DistutilsArgError
import setuptools.command.build_ext
import setuptools.command.install
import distutils.command.clean
import distutils.sysconfig
import filecmp
import shutil
import subprocess
import os
import json
import glob
import importlib

from tools.build_pytorch_libs import build_caffe2
from tools.setup_helpers.env import (IS_WINDOWS, IS_DARWIN, IS_LINUX,
                                     check_env_flag, build_type)
from tools.setup_helpers.cmake import CMake
from tools.generate_torch_version import get_torch_version

################################################################################
# Parameters parsed from environment
################################################################################

VERBOSE_SCRIPT = True
RUN_BUILD_DEPS = True
# see if the user passed a quiet flag to setup.py arguments and respect
# that in our parts of the build
EMIT_BUILD_WARNING = False
RERUN_CMAKE = False
CMAKE_ONLY = False
filtered_args = []
for i, arg in enumerate(sys.argv):
    if arg == '--cmake':
        RERUN_CMAKE = True
        continue
    if arg == '--cmake-only':
        # Stop once cmake terminates. Leave users a chance to adjust build
        # options.
        CMAKE_ONLY = True
        continue
    if arg == 'rebuild' or arg == 'build':
        arg = 'build'  # rebuild is gone, make it build
        EMIT_BUILD_WARNING = True
    if arg == "--":
        filtered_args += sys.argv[i:]
        break
    if arg == '-q' or arg == '--quiet':
        VERBOSE_SCRIPT = False
    if arg == 'clean' or arg == 'egg_info':
        RUN_BUILD_DEPS = False
    filtered_args.append(arg)
sys.argv = filtered_args
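This loop strips the script's custom `--cmake`/`--cmake-only` flags out of `sys.argv` before setuptools parses it, while leaving everything after a literal `--` untouched. A self-contained sketch of the same filtering pattern (function name and sample arguments are made up for the demo):

```python
def filter_args(argv):
    # Remove our custom flags; keep everything after a literal "--" untouched.
    rerun_cmake = False
    filtered = []
    for i, arg in enumerate(argv):
        if arg == '--cmake':
            rerun_cmake = True
            continue
        if arg == '--':
            filtered += argv[i:]
            break
        filtered.append(arg)
    return filtered, rerun_cmake

args, rerun = filter_args(['setup.py', '--cmake', 'install', '--', '--cmake'])
print(args)    # ['setup.py', 'install', '--', '--cmake']
print(rerun)   # True
```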

if VERBOSE_SCRIPT:
    def report(*args):
        print(*args)
else:
    def report(*args):
        pass

# Constant known variables used throughout this file
cwd = os.path.dirname(os.path.abspath(__file__))
lib_path = os.path.join(cwd, "torch", "lib")
third_party_path = os.path.join(cwd, "third_party")
caffe2_build_dir = os.path.join(cwd, "build")
# lib/pythonx.x/site-packages
rel_site_packages = distutils.sysconfig.get_python_lib(prefix='')
# full absolute path to the dir above
full_site_packages = distutils.sysconfig.get_python_lib()
# CMAKE: full path to python library
if IS_WINDOWS:
    cmake_python_library = "{}/libs/python{}.lib".format(
        distutils.sysconfig.get_config_var("prefix"),
        distutils.sysconfig.get_config_var("VERSION"))
    # Fix virtualenv builds
    # TODO: Fix for python < 3.3
    if not os.path.exists(cmake_python_library):
        cmake_python_library = "{}/libs/python{}.lib".format(
            sys.base_prefix,
            distutils.sysconfig.get_config_var("VERSION"))
else:
    cmake_python_library = "{}/{}".format(
        distutils.sysconfig.get_config_var("LIBDIR"),
        distutils.sysconfig.get_config_var("INSTSONAME"))
cmake_python_include_dir = distutils.sysconfig.get_python_inc()


################################################################################
# Version, create_version_file, and package_name
################################################################################
package_name = os.getenv('TORCH_PACKAGE_NAME', 'torch')
version = get_torch_version()
report("Building wheel {}-{}".format(package_name, version))

cmake = CMake()

# all the work we need to do _before_ setup runs
def build_deps():
    report('-- Building version ' + version)

    def check_file(f):
        if bool(os.getenv("USE_SYSTEM_LIBS", False)):
            return
        if not os.path.exists(f):
            report("Could not find {}".format(f))
            report("Did you run 'git submodule update --init --recursive'?")
            sys.exit(1)

    check_file(os.path.join(third_party_path, "gloo", "CMakeLists.txt"))
    check_file(os.path.join(third_party_path, 'cpuinfo', 'CMakeLists.txt'))
    check_file(os.path.join(third_party_path, 'tbb', 'Makefile'))
    check_file(os.path.join(third_party_path, 'onnx', 'CMakeLists.txt'))
    check_file(os.path.join(third_party_path, 'foxi', 'CMakeLists.txt'))
    check_file(os.path.join(third_party_path, 'QNNPACK', 'CMakeLists.txt'))
    check_file(os.path.join(third_party_path, 'fbgemm', 'CMakeLists.txt'))
    check_file(os.path.join(third_party_path, 'fbgemm', 'third_party',
                            'asmjit', 'CMakeLists.txt'))
    check_file(os.path.join(third_party_path, 'onnx', 'third_party',
                            'benchmark', 'CMakeLists.txt'))

    check_pydep('yaml', 'pyyaml')

    build_caffe2(version=version,
                 cmake_python_library=cmake_python_library,
                 build_python=True,
                 rerun_cmake=RERUN_CMAKE,
                 cmake_only=CMAKE_ONLY,
                 cmake=cmake)

    if CMAKE_ONLY:
        report('Finished running cmake. Run "ccmake build" or '
               '"cmake-gui build" to adjust build options and '
               '"python setup.py install" to build.')
        sys.exit()

    # Use copies instead of symbolic files.
    # Windows has very poor support for them.
    sym_files = [
        'tools/shared/_utils_internal.py',
        'torch/utils/benchmark/utils/valgrind_wrapper/callgrind.h',
        'torch/utils/benchmark/utils/valgrind_wrapper/valgrind.h',
    ]
    orig_files = [
        'torch/_utils_internal.py',
        'third_party/valgrind-headers/callgrind.h',
        'third_party/valgrind-headers/valgrind.h',
    ]
    for sym_file, orig_file in zip(sym_files, orig_files):
        same = False
        if os.path.exists(sym_file):
            if filecmp.cmp(sym_file, orig_file):
                same = True
            else:
                os.remove(sym_file)
        if not same:
            shutil.copyfile(orig_file, sym_file)

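The loop that replaces the symlinks is a copy-if-changed pattern: keep the destination when it is already byte-identical to the source, otherwise replace it, so that build tools only see a changed timestamp when content actually changed. A standalone sketch of the same idea (function and file names are made up for the demo):

```python
import filecmp
import os
import shutil
import tempfile

def copy_if_different(src, dst):
    # Skip the copy when dst exists and is byte-identical to src.
    if os.path.exists(dst) and filecmp.cmp(src, dst):
        return False
    shutil.copyfile(src, dst)
    return True

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, 'orig.h')
    dst = os.path.join(d, 'copy.h')
    with open(src, 'w') as f:
        f.write('#pragma once\n')
    print(copy_if_different(src, dst))  # first copy happens -> True
    print(copy_if_different(src, dst))  # already identical -> False
```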
################################################################################
# Building dependent libraries
################################################################################

# the list of runtime dependencies required by this built package
install_requires = [
    'typing_extensions',
    'dataclasses; python_version < "3.7"'
]

missing_pydep = '''
Missing build dependency: Unable to `import {importname}`.
Please install it via `conda install {module}` or `pip install {module}`
'''.strip()


def check_pydep(importname, module):
    try:
        importlib.import_module(importname)
    except ImportError:
        raise RuntimeError(missing_pydep.format(importname=importname, module=module))

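`check_pydep` turns a missing import into a readable build error instead of a bare traceback. A self-contained sketch of the same check (the function here is a simplified stand-in, and the module name `surely_not_a_real_module` is invented for the demo):

```python
import importlib

def check_build_dep(importname, pip_name):
    # Fail early, with an actionable message, if a build dependency is absent.
    try:
        importlib.import_module(importname)
    except ImportError:
        raise RuntimeError(
            "Missing build dependency: unable to `import {}`. "
            "Install it via `pip install {}`.".format(importname, pip_name))

check_build_dep('json', 'json')  # stdlib module: passes silently
try:
    check_build_dep('surely_not_a_real_module', 'some-package')
except RuntimeError as e:
    print('caught:', e)
```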

class build_ext(setuptools.command.build_ext.build_ext):

    # Copy libiomp5.dylib inside the wheel package on OS X
    def _embed_libiomp(self):
        if not IS_DARWIN:
            return
        lib_dir = os.path.join(self.build_lib, 'torch', 'lib')
        libtorch_cpu_path = os.path.join(lib_dir, 'libtorch_cpu.dylib')
        if not os.path.exists(libtorch_cpu_path):
            return
        # Parse libtorch_cpu load commands
        otool_cmds = subprocess.check_output(['otool', '-l', libtorch_cpu_path]).decode('utf-8').split('\n')
        rpaths, libs = [], []
        for idx, line in enumerate(otool_cmds):
            if line.strip() == 'cmd LC_LOAD_DYLIB':
                lib_name = otool_cmds[idx + 2].strip()
                assert lib_name.startswith('name ')
                libs.append(lib_name.split(' ', 1)[1].rsplit('(', 1)[0][:-1])

            if line.strip() == 'cmd LC_RPATH':
                rpath = otool_cmds[idx + 2].strip()
                assert rpath.startswith('path ')
                rpaths.append(rpath.split(' ', 1)[1].rsplit('(', 1)[0][:-1])

        omp_lib_name = 'libiomp5.dylib'
        if os.path.join('@rpath', omp_lib_name) not in libs:
            return

        # Copy libiomp5 from rpath locations
        for rpath in rpaths:
            source_lib = os.path.join(rpath, omp_lib_name)
            if not os.path.exists(source_lib):
                continue
            target_lib = os.path.join(self.build_lib, 'torch', 'lib', omp_lib_name)
            self.copy_file(source_lib, target_lib)
            break

    def run(self):
        # Report build options. This is run after the build completes so
        # `CMakeCache.txt` exists and we can get an accurate report on what
        # is used and what is not.
        cmake_cache_vars = defaultdict(lambda: False, cmake.get_cmake_cache_variables())
        if cmake_cache_vars['USE_NUMPY']:
            report('-- Building with NumPy bindings')
        else:
            report('-- NumPy not found')
        if cmake_cache_vars['USE_CUDNN']:
            report('-- Detected cuDNN at ' +
                   cmake_cache_vars['CUDNN_LIBRARY'] + ', ' + cmake_cache_vars['CUDNN_INCLUDE_DIR'])
        else:
            report('-- Not using cuDNN')
        if cmake_cache_vars['USE_CUDA']:
            report('-- Detected CUDA at ' + cmake_cache_vars['CUDA_TOOLKIT_ROOT_DIR'])
        else:
            report('-- Not using CUDA')
        if cmake_cache_vars['USE_MKLDNN']:
            report('-- Using MKLDNN')
            if cmake_cache_vars['USE_MKLDNN_CBLAS']:
                report('-- Using CBLAS in MKLDNN')
            else:
                report('-- Not using CBLAS in MKLDNN')
        else:
            report('-- Not using MKLDNN')
        if cmake_cache_vars['USE_NCCL'] and cmake_cache_vars['USE_SYSTEM_NCCL']:
            report('-- Using system provided NCCL library at {}, {}'.format(cmake_cache_vars['NCCL_LIBRARIES'],
                                                                            cmake_cache_vars['NCCL_INCLUDE_DIRS']))
        elif cmake_cache_vars['USE_NCCL']:
            report('-- Building NCCL library')
        else:
            report('-- Not using NCCL')
        if cmake_cache_vars['USE_DISTRIBUTED']:
            if IS_WINDOWS:
                report('-- Building without distributed package')
            else:
                report('-- Building with distributed package')
        else:
            report('-- Building without distributed package')

        # Do not use clang to compile extensions if `-fstack-clash-protection` is defined
        # in system CFLAGS
        system_c_flags = distutils.sysconfig.get_config_var('CFLAGS')
        if IS_LINUX and '-fstack-clash-protection' in system_c_flags and 'clang' in os.environ.get('CC', ''):
            os.environ['CC'] = distutils.sysconfig.get_config_var('CC')

        # It's an old-style class in Python 2.7...
        setuptools.command.build_ext.build_ext.run(self)

        self._embed_libiomp()

        # Copy the essential export library to compile C++ extensions.
        if IS_WINDOWS:
            build_temp = self.build_temp

            ext_filename = self.get_ext_filename('_C')
            lib_filename = '.'.join(ext_filename.split('.')[:-1]) + '.lib'

            export_lib = os.path.join(
                build_temp, 'torch', 'csrc', lib_filename).replace('\\', '/')

            build_lib = self.build_lib

            target_lib = os.path.join(
                build_lib, 'torch', 'lib', '_C.lib').replace('\\', '/')

            # Create "torch/lib" directory if not exists.
            # (It is not created yet in "develop" mode.)
            target_dir = os.path.dirname(target_lib)
            if not os.path.exists(target_dir):
                os.makedirs(target_dir)

            self.copy_file(export_lib, target_lib)

    def build_extensions(self):
        self.create_compile_commands()
        # The caffe2 extensions are created in
        # tmp_install/lib/pythonM.m/site-packages/caffe2/python/
        # and need to be copied to build/lib.linux.... , which will be a
        # platform dependent build folder created by the "build" command of
        # setuptools. Only the contents of this folder are installed in the
        # "install" command by default.
        # We only make this copy for Caffe2's pybind extensions
        caffe2_pybind_exts = [
            'caffe2.python.caffe2_pybind11_state',
            'caffe2.python.caffe2_pybind11_state_gpu',
            'caffe2.python.caffe2_pybind11_state_hip',
        ]
        i = 0
        while i < len(self.extensions):
            ext = self.extensions[i]
            if ext.name not in caffe2_pybind_exts:
                i += 1
                continue
            fullname = self.get_ext_fullname(ext.name)
            filename = self.get_ext_filename(fullname)
            report("\nCopying extension {}".format(ext.name))

            src = os.path.join("torch", rel_site_packages, filename)
            if not os.path.exists(src):
                report("{} does not exist".format(src))
                del self.extensions[i]
            else:
                dst = os.path.join(os.path.realpath(self.build_lib), filename)
                report("Copying {} from {} to {}".format(ext.name, src, dst))
                dst_dir = os.path.dirname(dst)
                if not os.path.exists(dst_dir):
                    os.makedirs(dst_dir)
                self.copy_file(src, dst)
            i += 1
        distutils.command.build_ext.build_ext.build_extensions(self)

    def get_outputs(self):
        outputs = distutils.command.build_ext.build_ext.get_outputs(self)
        outputs.append(os.path.join(self.build_lib, "caffe2"))
        report("setup.py::get_outputs returning {}".format(outputs))
        return outputs

    def create_compile_commands(self):
        def load(filename):
            with open(filename) as f:
                return json.load(f)
        ninja_files = glob.glob('build/*compile_commands.json')
        cmake_files = glob.glob('torch/lib/build/*/compile_commands.json')
        all_commands = [entry
                        for f in ninja_files + cmake_files
                        for entry in load(f)]

        # cquery does not like c++ compiles that start with gcc.
        # It forgets to include the c++ header directories.
        # We can work around this by replacing the gcc calls that python
        # setup.py generates with g++ calls instead
        for command in all_commands:
            if command['command'].startswith("gcc "):
                command['command'] = "g++ " + command['command'][4:]

        new_contents = json.dumps(all_commands, indent=2)
        contents = ''
        if os.path.exists('compile_commands.json'):
            with open('compile_commands.json', 'r') as f:
                contents = f.read()
        if contents != new_contents:
            with open('compile_commands.json', 'w') as f:
                f.write(new_contents)
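The gcc-to-g++ rewrite above touches only the compiler name at the start of each command string in the merged compilation database. A standalone sketch with sample entries (file names and commands are made up for the demo):

```python
def rewrite_gcc(commands):
    # Swap a leading "gcc " for "g++ " so tools like cquery pick up C++ headers.
    for command in commands:
        if command['command'].startswith('gcc '):
            command['command'] = 'g++ ' + command['command'][4:]
    return commands

cmds = [
    {'file': 'a.cpp', 'command': 'gcc -std=c++14 -c a.cpp'},
    {'file': 'b.c', 'command': 'cc -c b.c'},
]
print(rewrite_gcc(cmds)[0]['command'])  # g++ -std=c++14 -c a.cpp
print(rewrite_gcc(cmds)[1]['command'])  # cc -c b.c
```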

class concat_license_files():
    """Merge LICENSE and LICENSES_BUNDLED.txt as a context manager

    LICENSE is the main PyTorch license, LICENSES_BUNDLED.txt is auto-generated
    from all the licenses found in ./third_party/. We concatenate them so there
    is a single license file in the sdist and wheels with all of the necessary
    licensing info.
    """
    def __init__(self):
        self.f1 = 'LICENSE'
        self.f2 = 'third_party/LICENSES_BUNDLED.txt'

    def __enter__(self):
        """Concatenate files"""
        with open(self.f1, 'r') as f1:
            self.bsd_text = f1.read()

        with open(self.f1, 'a') as f1:
            with open(self.f2, 'r') as f2:
                self.bundled_text = f2.read()
                f1.write('\n\n')
                f1.write(self.bundled_text)

    def __exit__(self, exception_type, exception_value, traceback):
        """Restore content of f1"""
        with open(self.f1, 'w') as f:
            f.write(self.bsd_text)


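This class temporarily appends one file to another and restores the original on exit, so the concatenated LICENSE only exists while the wheel is being built. A minimal, self-contained version of the same append-and-restore pattern (class and file names invented for the demo, operating on throwaway temp files):

```python
import os
import tempfile

class append_temporarily:
    # Append `extra` to `path` on __enter__, restore the original on __exit__.
    def __init__(self, path, extra):
        self.path, self.extra = path, extra

    def __enter__(self):
        with open(self.path) as f:
            self.original = f.read()
        with open(self.path, 'a') as f:
            f.write(self.extra)

    def __exit__(self, exc_type, exc_value, traceback):
        with open(self.path, 'w') as f:
            f.write(self.original)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'LICENSE')
    with open(path, 'w') as f:
        f.write('main license')
    with append_temporarily(path, '\nbundled licenses'):
        with open(path) as f:
            print(f.read())  # main license + appended text
    with open(path) as f:
        print(f.read())      # original content restored
```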
try:
    from wheel.bdist_wheel import bdist_wheel
except ImportError:
    # This is useful when wheel is not installed and bdist_wheel is not
    # specified on the command line. If it _is_ specified, parsing the command
    # line will fail before wheel_concatenate is needed
    wheel_concatenate = None
else:
    # Need to create the proper LICENSE.txt for the wheel
    class wheel_concatenate(bdist_wheel):
        """ check submodules on sdist to prevent incomplete tarballs """
        def run(self):
            with concat_license_files():
                super().run()

class install(setuptools.command.install.install):
    def run(self):
        setuptools.command.install.install.run(self)


class clean(distutils.command.clean.clean):
    def run(self):
        import glob
        import re
        with open('.gitignore', 'r') as f:
            ignores = f.read()
            pat = re.compile(r'^#( BEGIN NOT-CLEAN-FILES )?')
            for wildcard in filter(None, ignores.split('\n')):
                match = pat.match(wildcard)
                if match:
                    if match.group(1):
                        # Marker is found and stop reading .gitignore.
                        break
                    # Ignore lines which begin with '#'.
                else:
                    for filename in glob.glob(wildcard):
                        try:
                            os.remove(filename)
                        except OSError:
                            shutil.rmtree(filename, ignore_errors=True)

        # It's an old-style class in Python 2.7...
        distutils.command.clean.clean.run(self)

def configure_extension_build():
    r"""Configures extension build options according to system environment and user's choice.

    Returns:
        The input to parameters ext_modules, cmdclass, packages, and entry_points as required in setuptools.setup.
    """

    try:
        cmake_cache_vars = defaultdict(lambda: False, cmake.get_cmake_cache_variables())
    except FileNotFoundError:
        # CMakeCache.txt does not exist. Probably running "python setup.py clean" over a clean directory.
        cmake_cache_vars = defaultdict(lambda: False)

    ################################################################################
    # Configure compile flags
    ################################################################################

    library_dirs = []
    extra_install_requires = []

    if IS_WINDOWS:
        # /NODEFAULTLIB makes sure we only link to DLL runtime
        # and matches the flags set for protobuf and ONNX
        extra_link_args = ['/NODEFAULTLIB:LIBCMT.LIB']
        # /MD links against DLL runtime
        # and matches the flags set for protobuf and ONNX
        # /EHsc is about standard C++ exception handling
        # /DNOMINMAX removes builtin min/max functions
        # /wdXXXX disables warning no. XXXX
        extra_compile_args = ['/MD', '/EHsc', '/DNOMINMAX',
                              '/wd4267', '/wd4251', '/wd4522', '/wd4838',
                              '/wd4305', '/wd4244', '/wd4190', '/wd4101', '/wd4996',
                              '/wd4275']
    else:
        extra_link_args = []
        extra_compile_args = [
            '-Wall',
            '-Wextra',
            '-Wno-strict-overflow',
            '-Wno-unused-parameter',
            '-Wno-missing-field-initializers',
            '-Wno-write-strings',
            '-Wno-unknown-pragmas',
            # This is required for Python 2 declarations that are deprecated in 3.
            '-Wno-deprecated-declarations',
            # Python 2.6 requires -fno-strict-aliasing, see
            # http://legacy.python.org/dev/peps/pep-3123/
            # We also depend on it in our code (even Python 3).
            '-fno-strict-aliasing',
            # Clang has an unfixed bug leading to spurious missing
            # braces warnings, see
            # https://bugs.llvm.org/show_bug.cgi?id=21629
            '-Wno-missing-braces',
        ]
    if check_env_flag('WERROR'):
        extra_compile_args.append('-Werror')

    library_dirs.append(lib_path)

    main_compile_args = []
    main_libraries = ['torch_python']
    main_link_args = []
    main_sources = ["torch/csrc/stub.c"]

    if cmake_cache_vars['USE_CUDA']:
        library_dirs.append(
            os.path.dirname(cmake_cache_vars['CUDA_CUDA_LIB']))

    if cmake_cache_vars['USE_NUMPY']:
        extra_install_requires += ['numpy']

    if build_type.is_debug():
        if IS_WINDOWS:
            extra_compile_args.append('/Z7')
            extra_link_args.append('/DEBUG:FULL')
        else:
            extra_compile_args += ['-O0', '-g']
            extra_link_args += ['-O0', '-g']

    if build_type.is_rel_with_deb_info():
        if IS_WINDOWS:
            extra_compile_args.append('/Z7')
            extra_link_args.append('/DEBUG:FULL')
        else:
            extra_compile_args += ['-g']
            extra_link_args += ['-g']


    def make_relative_rpath_args(path):
        if IS_DARWIN:
            return ['-Wl,-rpath,@loader_path/' + path]
        elif IS_WINDOWS:
            return []
        else:
            return ['-Wl,-rpath,$ORIGIN/' + path]

    ################################################################################
    # Declare extensions and package
    ################################################################################

    extensions = []
    packages = find_packages(exclude=('tools', 'tools.*'))
    C = Extension("torch._C",
                  libraries=main_libraries,
                  sources=main_sources,
                  language='c',
                  extra_compile_args=main_compile_args + extra_compile_args,
                  include_dirs=[],
                  library_dirs=library_dirs,
                  extra_link_args=extra_link_args + main_link_args + make_relative_rpath_args('lib'))
    extensions.append(C)

    if not IS_WINDOWS:
        DL = Extension("torch._dl",
                       sources=["torch/csrc/dl.c"],
                       language='c')
        extensions.append(DL)

    # These extensions are built by cmake and copied manually in build_extensions()
    # inside the build_ext implementation
    extensions.append(
        Extension(
            name=str('caffe2.python.caffe2_pybind11_state'),
            sources=[]),
    )
    if cmake_cache_vars['USE_CUDA']:
        extensions.append(
            Extension(
                name=str('caffe2.python.caffe2_pybind11_state_gpu'),
                sources=[]),
        )
    if cmake_cache_vars['USE_ROCM']:
        extensions.append(
            Extension(
                name=str('caffe2.python.caffe2_pybind11_state_hip'),
                sources=[]),
        )

    cmdclass = {
        'build_ext': build_ext,
        'clean': clean,
        'install': install,
        'bdist_wheel': wheel_concatenate,
    }

    entry_points = {
        'console_scripts': [
            'convert-caffe2-to-onnx = caffe2.python.onnx.bin.conversion:caffe2_to_onnx',
            'convert-onnx-to-caffe2 = caffe2.python.onnx.bin.conversion:onnx_to_caffe2',
        ]
    }

    return extensions, cmdclass, packages, entry_points, extra_install_requires

# post run, warnings, printed at the end to make them more visible
build_update_message = """
    It is no longer necessary to use the 'build' or 'rebuild' targets

    To install:
      $ python setup.py install
    To develop locally:
      $ python setup.py develop
    To force cmake to re-generate native build files (off by default):
      $ python setup.py develop --cmake
"""


def print_box(msg):
    lines = msg.split('\n')
    size = max(len(l) + 1 for l in lines)
    print('-' * (size + 2))
    for l in lines:
        print('|{}{}|'.format(l, ' ' * (size - len(l))))
    print('-' * (size + 2))

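`print_box` draws a simple ASCII frame around a message, padding each line to one column past the widest line. Run standalone, the same function produces:

```python
def print_box(msg):
    # Frame each line of msg with '|' and pad to the widest line.
    lines = msg.split('\n')
    size = max(len(l) + 1 for l in lines)
    print('-' * (size + 2))
    for l in lines:
        print('|{}{}|'.format(l, ' ' * (size - len(l))))
    print('-' * (size + 2))

print_box('hi\nthere')
# --------
# |hi    |
# |there |
# --------
```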

if __name__ == '__main__':
    # Parse the command line and check the arguments
    # before we proceed with building deps and setup
    dist = Distribution()
    dist.script_name = sys.argv[0]
    dist.script_args = sys.argv[1:]
    try:
        ok = dist.parse_command_line()
    except DistutilsArgError as msg:
        raise SystemExit(core.gen_usage(dist.script_name) + "\nerror: %s" % msg)
    if not ok:
        sys.exit()

    if RUN_BUILD_DEPS:
        build_deps()

    extensions, cmdclass, packages, entry_points, extra_install_requires = configure_extension_build()

    install_requires += extra_install_requires

    # Read in README.md for our long_description
    with open(os.path.join(cwd, "README.md"), encoding="utf-8") as f:
        long_description = f.read()

    version_range_max = max(sys.version_info[1], 8) + 1
    setup(
        name=package_name,
        version=version,
        description=("Tensors and Dynamic neural networks in "
                     "Python with strong GPU acceleration"),
        long_description=long_description,
        long_description_content_type="text/markdown",
        ext_modules=extensions,
        cmdclass=cmdclass,
        packages=packages,
        entry_points=entry_points,
        install_requires=install_requires,
        package_data={
            'torch': [
                'py.typed',
                'bin/*',
                'test/*',
                '_C/*.pyi',
                'cuda/*.pyi',
                'optim/*.pyi',
                'autograd/*.pyi',
                'utils/data/*.pyi',
                'nn/*.pyi',
                'nn/modules/*.pyi',
                'nn/parallel/*.pyi',
                'lib/*.so*',
                'lib/*.dylib*',
                'lib/*.dll',
                'lib/*.lib',
                'lib/*.pdb',
                'lib/torch_shm_manager',
                'lib/*.h',
                'include/ATen/*.h',
                'include/ATen/cpu/*.h',
                'include/ATen/cpu/vec256/*.h',
                'include/ATen/core/*.h',
                'include/ATen/cuda/*.cuh',
                'include/ATen/cuda/*.h',
                'include/ATen/cuda/detail/*.cuh',
                'include/ATen/cuda/detail/*.h',
                'include/ATen/cudnn/*.h',
                'include/ATen/hip/*.cuh',
                'include/ATen/hip/*.h',
                'include/ATen/hip/detail/*.cuh',
                'include/ATen/hip/detail/*.h',
                'include/ATen/hip/impl/*.h',
                'include/ATen/detail/*.h',
                'include/ATen/native/*.h',
                'include/ATen/native/cpu/*.h',
                'include/ATen/native/cuda/*.h',
                'include/ATen/native/cuda/*.cuh',
                'include/ATen/native/hip/*.h',
                'include/ATen/native/hip/*.cuh',
                'include/ATen/native/quantized/*.h',
                'include/ATen/native/quantized/cpu/*.h',
                'include/ATen/quantized/*.h',
                'include/caffe2/utils/*.h',
                'include/caffe2/utils/**/*.h',
                'include/c10/*.h',
                'include/c10/macros/*.h',
                'include/c10/core/*.h',
                'include/ATen/core/boxing/*.h',
                'include/ATen/core/boxing/impl/*.h',
                'include/ATen/core/dispatch/*.h',
                'include/ATen/core/op_registration/*.h',
                'include/c10/core/impl/*.h',
                'include/c10/util/*.h',
                'include/c10/cuda/*.h',
                'include/c10/cuda/impl/*.h',
                'include/c10/hip/*.h',
                'include/c10/hip/impl/*.h',
                'include/c10d/*.hpp',
                'include/caffe2/**/*.h',
                'include/torch/*.h',
                'include/torch/csrc/*.h',
                'include/torch/csrc/api/include/torch/*.h',
                'include/torch/csrc/api/include/torch/data/*.h',
                'include/torch/csrc/api/include/torch/data/dataloader/*.h',
                'include/torch/csrc/api/include/torch/data/datasets/*.h',
                'include/torch/csrc/api/include/torch/data/detail/*.h',
                'include/torch/csrc/api/include/torch/data/samplers/*.h',
                'include/torch/csrc/api/include/torch/data/transforms/*.h',
                'include/torch/csrc/api/include/torch/detail/*.h',
                'include/torch/csrc/api/include/torch/detail/ordered_dict.h',
                'include/torch/csrc/api/include/torch/nn/*.h',
                'include/torch/csrc/api/include/torch/nn/functional/*.h',
                'include/torch/csrc/api/include/torch/nn/options/*.h',
                'include/torch/csrc/api/include/torch/nn/modules/*.h',
                'include/torch/csrc/api/include/torch/nn/modules/container/*.h',
                'include/torch/csrc/api/include/torch/nn/parallel/*.h',
                'include/torch/csrc/api/include/torch/nn/utils/*.h',
                'include/torch/csrc/api/include/torch/optim/*.h',
                'include/torch/csrc/api/include/torch/serialize/*.h',
                'include/torch/csrc/autograd/*.h',
                'include/torch/csrc/autograd/functions/*.h',
                'include/torch/csrc/autograd/generated/*.h',
                'include/torch/csrc/autograd/utils/*.h',
                'include/torch/csrc/cuda/*.h',
                'include/torch/csrc/jit/*.h',
                'include/torch/csrc/jit/backends/*.h',
                'include/torch/csrc/jit/generated/*.h',
                'include/torch/csrc/jit/passes/*.h',
                'include/torch/csrc/jit/passes/quantization/*.h',
                'include/torch/csrc/jit/passes/utils/*.h',
                'include/torch/csrc/jit/runtime/*.h',
                'include/torch/csrc/jit/ir/*.h',
                'include/torch/csrc/jit/frontend/*.h',
                'include/torch/csrc/jit/api/*.h',
                'include/torch/csrc/jit/serialization/*.h',
                'include/torch/csrc/jit/python/*.h',
                'include/torch/csrc/jit/testing/*.h',
                'include/torch/csrc/jit/tensorexpr/*.h',
                'include/torch/csrc/onnx/*.h',
                'include/torch/csrc/utils/*.h',
                'include/pybind11/*.h',
                'include/pybind11/detail/*.h',
                'include/TH/*.h*',
                'include/TH/generic/*.h*',
                'include/THC/*.cuh',
                'include/THC/*.h*',
                'include/THC/generic/*.h',
                'include/THCUNN/*.cuh',
                'include/THCUNN/generic/*.h',
                'include/THH/*.cuh',
                'include/THH/*.h*',
                'include/THH/generic/*.h',
                'share/cmake/ATen/*.cmake',
                'share/cmake/Caffe2/*.cmake',
                'share/cmake/Caffe2/public/*.cmake',
                'share/cmake/Caffe2/Modules_CUDA_fix/*.cmake',
                'share/cmake/Caffe2/Modules_CUDA_fix/upstream/*.cmake',
                'share/cmake/Caffe2/Modules_CUDA_fix/upstream/FindCUDA/*.cmake',
                'share/cmake/Gloo/*.cmake',
                'share/cmake/Tensorpipe/*.cmake',
                'share/cmake/Torch/*.cmake',
                'utils/benchmark/utils/*.cpp',
                'utils/benchmark/utils/valgrind_wrapper/*.cpp',
                'utils/benchmark/utils/valgrind_wrapper/*.h',
            ],
            'caffe2': [
                'python/serialized_test/data/operator_test/*.zip',
            ],
        },
        url='https://pytorch.org/',
        download_url='https://github.com/pytorch/pytorch/tags',
        author='PyTorch Team',
        author_email='packages@pytorch.org',
        python_requires='>={}'.format(python_min_version_str),
        # PyPI package information.
        classifiers=[
            'Development Status :: 5 - Production/Stable',
            'Intended Audience :: Developers',
            'Intended Audience :: Education',
            'Intended Audience :: Science/Research',
            'License :: OSI Approved :: BSD License',
            'Topic :: Scientific/Engineering',
            'Topic :: Scientific/Engineering :: Mathematics',
            'Topic :: Scientific/Engineering :: Artificial Intelligence',
            'Topic :: Software Development',
            'Topic :: Software Development :: Libraries',
            'Topic :: Software Development :: Libraries :: Python Modules',
            'Programming Language :: C++',
            'Programming Language :: Python :: 3',
        ] + ['Programming Language :: Python :: 3.{}'.format(i) for i in range(python_min_version[1], version_range_max)],
        license='BSD-3',
994 keywords='pytorch machine learning',
995 )
996 if EMIT_BUILD_WARNING:
997 print_box(build_update_message)
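The per-minor-version classifiers above are not hand-written: a list comprehension appends one `Programming Language :: Python :: 3.x` entry for each supported minor version, from the minimum version's minor number up to (but not including) `version_range_max`. A minimal sketch of that mechanism, using assumed values for `python_min_version` and `version_range_max` (they are defined earlier in `setup.py` and may differ):

```python
# Assumed values for illustration only; the real ones live near the
# top of setup.py.
python_min_version = (3, 6, 2)
version_range_max = 10

# python_requires is derived from the minimum supported version.
python_min_version_str = '.'.join(map(str, python_min_version))
python_requires = '>={}'.format(python_min_version_str)

# Same comprehension as in the classifiers= argument: one classifier
# per supported 3.x minor version.
classifiers = [
    'Programming Language :: Python :: 3',
] + ['Programming Language :: Python :: 3.{}'.format(i)
     for i in range(python_min_version[1], version_range_max)]

print(python_requires)
print(classifiers)
```

With these assumed values the comprehension yields classifiers for 3.6 through 3.9; bumping `version_range_max` is all it takes to advertise support for a newer minor release on PyPI.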