## CUDA Accelerated Tree Construction Algorithms

Tree construction (training) and prediction can be accelerated with CUDA-capable GPUs. The GPU algorithms in XGBoost require a graphics card with compute capability 3.5 or higher and CUDA 10.0 or later. (See this list to look up the compute capability of your GPU card.) The GPU algorithms currently work with the CLI, Python, R, and JVM packages.

### Algorithms

Specify the `tree_method` parameter as one of the following algorithms.

- `gpu_hist`: Equivalent to the XGBoost fast histogram algorithm, but much faster and using considerably less memory. NOTE: may run very slowly on GPUs older than the Pascal architecture.

The device ordinal (which GPU to use if you have many of them) can be selected using the `gpu_id` parameter, which defaults to 0 (the first device reported by the CUDA runtime).

The experimental parameter `single_precision_histogram` can be set to `True` to enable building histograms using single precision. This may improve speed, in particular on older architectures.

GPU-accelerated prediction is enabled by default for the above-mentioned `tree_method` parameters, but it can be switched to CPU prediction by setting `predictor` to `cpu_predictor`. This could be useful if you want to conserve GPU memory. Likewise, when using CPU algorithms, GPU-accelerated prediction can be enabled by setting `predictor` to `gpu_predictor`.

SHAP contribution and interaction values can also be computed:

```python
shap_values = model.predict(dtrain, pred_contribs=True)
shap_interaction_values = model.predict(dtrain, pred_interactions=True)
```