KMeans#

class sklearn.cluster.KMeans(n_clusters=8, *, init='k-means++', n_init='auto', max_iter=300, tol=0.0001, verbose=0, random_state=None, copy_x=True, algorithm='lloyd')[source]#

K-Means clustering.

Read more in the User Guide.

Parameters:
n_clustersint, default=8

The number of clusters to form as well as the number of centroids to generate.

For an example of how to choose an optimal value for n_clusters refer to Selecting the number of clusters with silhouette analysis on KMeans clustering.
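As a minimal sketch of that idea (the candidate range and toy data below are illustrative assumptions, not part of this API), one can score a few values of n_clusters with silhouette_score and keep the best:

>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> from sklearn.metrics import silhouette_score
>>> X = np.array([[1, 2], [1, 4], [1, 0],
...               [10, 2], [10, 4], [10, 0]])
>>> scores = {}
>>> for k in range(2, 5):
...     labels = KMeans(n_clusters=k, random_state=0, n_init="auto").fit_predict(X)
...     scores[k] = silhouette_score(X, labels)
>>> # Two well-separated blobs, so k=2 is expected to score highest.
>>> best_k = max(scores, key=scores.get)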

init{‘k-means++’, ‘random’}, callable or array-like of shape (n_clusters, n_features), default=’k-means++’

Method for initialization:

  • ‘k-means++’ : selects initial cluster centroids using sampling based on an empirical probability distribution of the points’ contribution to the overall inertia. This technique speeds up convergence. The algorithm implemented is “greedy k-means++”. It differs from the vanilla k-means++ by making several trials at each sampling step and choosing the best centroid among them.

  • ‘random’: choose n_clusters observations (rows) at random from data for the initial centroids.

  • If an array is passed, it should be of shape (n_clusters, n_features) and gives the initial centers.

  • If a callable is passed, it should take arguments X, n_clusters and a random state and return an initialization (see the sketch after this parameter description).

For an example of how to use the different init strategies, see A demo of K-Means clustering on the handwritten digits data.

For an evaluation of the impact of initialization, see the example Empirical evaluation of the impact of k-means initialization.
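A short, hedged sketch of the array and callable options (first_rows_init below is a hypothetical helper, not part of scikit-learn):

>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1, 2], [1, 4], [10, 2], [10, 4]], dtype=float)
>>> # Array init: explicit starting centers of shape (n_clusters, n_features).
>>> centers = np.array([[1.0, 3.0], [10.0, 3.0]])
>>> KMeans(n_clusters=2, init=centers, n_init=1).fit(X).cluster_centers_
array([[ 1.,  3.],
       [10.,  3.]])
>>> # Callable init: takes X, n_clusters and a random state, returns centers.
>>> def first_rows_init(X, n_clusters, random_state):
...     return X[:n_clusters]
>>> km = KMeans(n_clusters=2, init=first_rows_init, n_init=1).fit(X)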

n_init‘auto’ or int, default=’auto’

Number of times the k-means algorithm is run with different centroid seeds. The final result is the best output of n_init consecutive runs in terms of inertia. Several runs are recommended for sparse high-dimensional problems (see Clustering sparse data with k-means).

When n_init='auto', the number of runs depends on the value of init: 10 if using init='random' or init is a callable; 1 if using init='k-means++' or init is an array-like.

Added in version 1.2: Added ‘auto’ option for n_init.

Changed in version 1.4: Default value for n_init changed to 'auto'.

max_iterint, default=300

Maximum number of iterations of the k-means algorithm for a single run.

tolfloat, default=1e-4

Relative tolerance with regards to Frobenius norm of the difference in the cluster centers of two consecutive iterations to declare convergence.

verboseint, default=0

Verbosity mode.

random_stateint, RandomState instance or None, default=None

Determines random number generation for centroid initialization. Use an int to make the randomness deterministic. See Glossary.

copy_xbool, default=True

When pre-computing distances it is more numerically accurate to center the data first. If copy_x is True (default), then the original data is not modified. If False, the original data is modified, and put back before the function returns, but small numerical differences may be introduced by subtracting and then adding the data mean. Note that if the original data is not C-contiguous, a copy will be made even if copy_x is False. If the original data is sparse, but not in CSR format, a copy will be made even if copy_x is False.

algorithm{“lloyd”, “elkan”}, default=”lloyd”

K-means algorithm to use. The classical EM-style algorithm is "lloyd". The "elkan" variation can be more efficient on some datasets with well-defined clusters, by using the triangle inequality. However it’s more memory intensive due to the allocation of an extra array of shape (n_samples, n_clusters).

Changed in version 0.18: Added Elkan algorithm.

Changed in version 1.1: Renamed “full” to “lloyd”, and deprecated “auto” and “full”. Changed “auto” to use “lloyd” instead of “elkan”.

Attributes:
cluster_centers_ndarray of shape (n_clusters, n_features)

Coordinates of cluster centers. If the algorithm stops before fully converging (see tol and max_iter), these will not be consistent with labels_.

labels_ndarray of shape (n_samples,)

Labels of each point.

inertia_float

Sum of squared distances of samples to their closest cluster center, weighted by the sample weights if provided.
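In the unweighted case this definition can be checked by hand; a small sketch recomputing inertia_ from labels_ and cluster_centers_:

>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1, 2], [1, 4], [1, 0],
...               [10, 2], [10, 4], [10, 0]], dtype=float)
>>> km = KMeans(n_clusters=2, random_state=0, n_init="auto").fit(X)
>>> manual = ((X - km.cluster_centers_[km.labels_]) ** 2).sum()
>>> bool(np.isclose(manual, km.inertia_))
True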

n_iter_int

Number of iterations run.

n_features_in_int

Number of features seen during fit.

Added in version 0.24.

feature_names_in_ndarray of shape (n_features_in_,)

Names of features seen during fit. Defined only when X has feature names that are all strings.

Added in version 1.0.

See also

MiniBatchKMeans

Alternative online implementation that does incremental updates of the center positions using mini-batches. For large scale learning (say n_samples > 10k) MiniBatchKMeans is probably much faster than the default batch implementation.

Notes

The k-means problem is solved using either Lloyd’s or Elkan’s algorithm.

The average complexity is given by O(k n T), where k is the number of clusters, n is the number of samples, and T is the number of iterations.

The worst case complexity is given by O(n^(k+2/p)) with n = n_samples, p = n_features. Refer to "How slow is the k-means method?" by D. Arthur and S. Vassilvitskii, SoCG 2006, for more details.

In practice, the k-means algorithm is very fast (one of the fastest clustering algorithms available), but it falls into local minima. That's why it can be useful to restart it several times.

If the algorithm stops before fully converging (because of tol or max_iter), labels_ and cluster_centers_ will not be consistent, i.e. the cluster_centers_ will not be the means of the points in each cluster. Also, the estimator will reassign labels_ after the last iteration to make labels_ consistent with predict on the training set.

Examples

>>> from sklearn.cluster import KMeans
>>> import numpy as np
>>> X = np.array([[1, 2], [1, 4], [1, 0],
...               [10, 2], [10, 4], [10, 0]])
>>> kmeans = KMeans(n_clusters=2, random_state=0, n_init="auto").fit(X)
>>> kmeans.labels_
array([1, 1, 1, 0, 0, 0], dtype=int32)
>>> kmeans.predict([[0, 0], [12, 3]])
array([1, 0], dtype=int32)
>>> kmeans.cluster_centers_
array([[10.,  2.],
       [ 1.,  2.]])

For examples of common problems with K-Means and how to address them see Demonstration of k-means assumptions.

For a demonstration of how K-Means can be used to cluster text documents see Clustering text documents using k-means.

For a comparison between K-Means and MiniBatchKMeans refer to example Comparison of the K-Means and MiniBatchKMeans clustering algorithms.

For a comparison between K-Means and BisectingKMeans refer to example Bisecting K-Means and Regular K-Means Performance Comparison.

fit(X, y=None, sample_weight=None)[source]#

Compute k-means clustering.

Parameters:
X{array-like, sparse matrix} of shape (n_samples, n_features)

Training instances to cluster. It must be noted that the data will be converted to C ordering, which will cause a memory copy if the given data is not C-contiguous. If a sparse matrix is passed, a copy will be made if it’s not in CSR format.

yIgnored

Not used, present here for API consistency by convention.

sample_weightarray-like of shape (n_samples,), default=None

The weights for each observation in X. If None, all observations are assigned equal weight. sample_weight is not used during initialization if init is a callable or a user provided array.

Added in version 0.20.

Returns:
selfobject

Fitted estimator.
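A minimal sketch of fitting with per-sample weights (the weights are arbitrary illustrative values); heavier samples pull their cluster center toward them:

>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1, 0], [1, 2], [10, 0], [10, 2]], dtype=float)
>>> w = np.array([3.0, 1.0, 1.0, 3.0])
>>> km = KMeans(n_clusters=2, random_state=0, n_init="auto").fit(X, sample_weight=w)
>>> # The y-coordinate of the left center is the weighted mean
>>> # (3*0 + 1*2) / 4 = 0.5 rather than the unweighted mean 1.0.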

fit_predict(X, y=None, sample_weight=None)[source]#

Compute cluster centers and predict cluster index for each sample.

Convenience method; equivalent to calling fit(X) followed by predict(X).

Parameters:
X{array-like, sparse matrix} of shape (n_samples, n_features)

New data to cluster.

yIgnored

Not used, present here for API consistency by convention.

sample_weightarray-like of shape (n_samples,), default=None

The weights for each observation in X. If None, all observations are assigned equal weight.

Returns:
labelsndarray of shape (n_samples,)

Index of the cluster each sample belongs to.
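With a fixed random_state, fit_predict(X) returns the same labels as fit(X).labels_; a quick sketch:

>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1, 2], [1, 4], [10, 2], [10, 4]], dtype=float)
>>> a = KMeans(n_clusters=2, random_state=0, n_init="auto").fit_predict(X)
>>> b = KMeans(n_clusters=2, random_state=0, n_init="auto").fit(X).labels_
>>> bool((a == b).all())
True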

fit_transform(X, y=None, sample_weight=None)[source]#

Compute clustering and transform X to cluster-distance space.

Equivalent to fit(X).transform(X), but more efficiently implemented.

Parameters:
X{array-like, sparse matrix} of shape (n_samples, n_features)

New data to transform.

yIgnored

Not used, present here for API consistency by convention.

sample_weightarray-like of shape (n_samples,), default=None

The weights for each observation in X. If None, all observations are assigned equal weight.

Returns:
X_newndarray of shape (n_samples, n_clusters)

X transformed in the new space.

get_feature_names_out(input_features=None)[source]#

Get output feature names for transformation.

The feature names out will be prefixed by the lowercased class name. For example, if the transformer outputs 3 features, then the feature names out are: ["class_name0", "class_name1", "class_name2"].

Parameters:
input_featuresarray-like of str or None, default=None

Only used to validate feature names with the names seen in fit.

Returns:
feature_names_outndarray of str objects

Transformed feature names.
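For KMeans the lowercased class name is "kmeans" and the number of output features equals n_clusters, so a 2-cluster model yields:

>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1, 2], [1, 4], [10, 2], [10, 4]], dtype=float)
>>> KMeans(n_clusters=2, n_init="auto").fit(X).get_feature_names_out()
array(['kmeans0', 'kmeans1'], dtype=object)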

get_metadata_routing()[source]#

Get metadata routing of this object.

Please check the User Guide on how the routing mechanism works.

Returns:
routingMetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)[source]#

Get parameters for this estimator.

Parameters:
deepbool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
paramsdict

Parameter names mapped to their values.

predict(X)[source]#

Predict the closest cluster each sample in X belongs to.

In the vector quantization literature, cluster_centers_ is called the code book and each value returned by predict is the index of the closest code in the code book.

Parameters:
X{array-like, sparse matrix} of shape (n_samples, n_features)

New data to predict.

Returns:
labelsndarray of shape (n_samples,)

Index of the cluster each sample belongs to.

score(X, y=None, sample_weight=None)[source]#

Opposite of the value of X on the K-means objective.

Parameters:
X{array-like, sparse matrix} of shape (n_samples, n_features)

New data.

yIgnored

Not used, present here for API consistency by convention.

sample_weightarray-like of shape (n_samples,), default=None

The weights for each observation in X. If None, all observations are assigned equal weight.

Returns:
scorefloat

Opposite of the value of X on the K-means objective, i.e. the negative of the sum of squared distances of the samples to their nearest cluster centers.
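For a fitted model scored on its own training data, this is simply the negative of inertia_; a sketch:

>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1, 2], [1, 4], [10, 2], [10, 4]], dtype=float)
>>> km = KMeans(n_clusters=2, random_state=0, n_init="auto").fit(X)
>>> bool(np.isclose(km.score(X), -km.inertia_))
True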

set_fit_request(*, sample_weight: bool | None | str = '$UNCHANGED$') KMeans[source]#

Configure whether metadata should be requested to be passed to the fit method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters:
sample_weightstr, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in fit.

Returns:
selfobject

The updated object.
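A minimal sketch of when this matters: routing sample_weight through a Pipeline to KMeans.fit. This assumes a scikit-learn version with Pipeline metadata-routing support (1.4 or later); the data and weights are illustrative:

>>> import numpy as np
>>> import sklearn
>>> from sklearn.cluster import KMeans
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> sklearn.set_config(enable_metadata_routing=True)
>>> X = np.array([[1, 2], [1, 4], [10, 2], [10, 4]], dtype=float)
>>> w = np.array([1.0, 1.0, 2.0, 2.0])
>>> km = KMeans(n_clusters=2, n_init="auto", random_state=0).set_fit_request(
...     sample_weight=True)
>>> # The scaler also accepts sample_weight, so declare explicitly that it
>>> # should not receive it; otherwise routing raises an error.
>>> scaler = StandardScaler().set_fit_request(sample_weight=False)
>>> pipe = make_pipeline(scaler, km).fit(X, sample_weight=w)
>>> sklearn.set_config(enable_metadata_routing=False)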

set_output(*, transform=None)[source]#

Set output container.

See Introducing the set_output API for an example on how to use the API.

Parameters:
transform{"default", "pandas", "polars"}, default=None

Configure output of transform and fit_transform.

  • "default": Default output format of a transformer

  • "pandas": DataFrame output

  • "polars": Polars output

  • None: Transform configuration is unchanged

Added in version 1.4: "polars" option was added.

Returns:
selfestimator instance

Estimator instance.
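For example, requesting pandas output makes transform and fit_transform return a DataFrame whose columns come from get_feature_names_out (this sketch assumes pandas is installed):

>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1, 2], [1, 4], [10, 2], [10, 4]], dtype=float)
>>> km = KMeans(n_clusters=2, random_state=0, n_init="auto").set_output(transform="pandas")
>>> df = km.fit_transform(X)
>>> list(df.columns)
['kmeans0', 'kmeans1']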

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters:
**paramsdict

Estimator parameters.

Returns:
selfestimator instance

Estimator instance.
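A short sketch of both forms, including the nested <component>__<parameter> syntax inside a Pipeline (make_pipeline names the step "kmeans" after the lowercased class):

>>> from sklearn.cluster import KMeans
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> km = KMeans(n_init="auto").set_params(n_clusters=3)
>>> km.n_clusters
3
>>> pipe = make_pipeline(StandardScaler(), KMeans(n_init="auto"))
>>> pipe = pipe.set_params(kmeans__n_clusters=4)
>>> pipe.named_steps["kmeans"].n_clusters
4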

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') KMeans[source]#

Configure whether metadata should be requested to be passed to the score method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters:
sample_weightstr, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in score.

Returns:
selfobject

The updated object.

transform(X)[source]#

Transform X to a cluster-distance space.

In the new space, each dimension is the distance to the cluster centers. Note that even if X is sparse, the array returned by transform will typically be dense.

Parameters:
X{array-like, sparse matrix} of shape (n_samples, n_features)

New data to transform.

Returns:
X_newndarray of shape (n_samples, n_clusters)

X transformed in the new space.
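Each column of the returned array is the Euclidean distance to one cluster center; a small sketch checking this against a direct computation:

>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1, 2], [1, 4], [10, 2], [10, 4]], dtype=float)
>>> km = KMeans(n_clusters=2, random_state=0, n_init="auto").fit(X)
>>> D = km.transform(X)
>>> # Pairwise distances computed by broadcasting over samples and centers.
>>> manual = np.sqrt(((X[:, None, :] - km.cluster_centers_[None, :, :]) ** 2).sum(-1))
>>> bool(np.allclose(D, manual))
True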