diff --git a/doc/faq.rst b/doc/faq.rst
index 0d7688d3cc..d5d58e0d44 100644
--- a/doc/faq.rst
+++ b/doc/faq.rst
@@ -213,7 +213,7 @@ Auto-sklearn wraps scikit-learn and therefore inherits its parallelism implement
 scikit-learn uses two modes of parallelizing computations:
 
 1. By using joblib to distribute independent function calls on multiple cores.
-2. By using lower level libraries such as OpenML and numpy to distribute more fine-grained
+2. By using lower level libraries such as OpenMP and numpy to distribute more fine-grained
    computation.
 
 This means that Auto-sklearn can use more resources than expected by the user. For technical
@@ -225,7 +225,7 @@ with the number of requested CPUs). This can be done by setting the following en
 variables: ``MKL_NUM_THREADS``, ``OPENBLAS_NUM_THREADS``, ``BLIS_NUM_THREADS`` and
 ``OMP_NUM_THREADS``.
 
-More details can be found in the `scikit-learn docs `
+More details can be found in the `scikit-learn docs `_.
 
 Meta-Learning
 =============
@@ -236,7 +236,7 @@ Which datasets are used for meta-learning?
 We updated the list of datasets used for meta-learning several times and this list now differs
 significantly from the original 140 datasets we used in 2015 when the paper and the package were
 released. An up-to-date list of `OpenML task IDs `_ can be found
-on `github `_
+on `github `_.
 
 How can datasets from the meta-data be excluded?
 ------------------------------------------------
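
As a minimal sketch of the workaround described in the second hunk (illustrative only, not part of the diff): the four environment variables are read when the numeric libraries initialize their thread pools, so they must be set before numpy or scikit-learn is first imported, e.g.::

    import os

    # Pin the BLAS/OpenMP thread pools to one thread per process. This must
    # happen before the first import of numpy/scikit-learn, because the
    # variables are only consulted when the libraries initialize.
    for var in ("MKL_NUM_THREADS", "OPENBLAS_NUM_THREADS",
                "BLIS_NUM_THREADS", "OMP_NUM_THREADS"):
        os.environ[var] = "1"

    import numpy  # noqa: E402  (imported only after the pools are pinned)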