Dimensionality reduction overview
Dimensionality reduction is the common term for a set of mathematical techniques
used to capture the shape and relationships of data in a high-dimensional space
and translate this information into a low-dimensional space.
Reducing dimensionality is important when you are working with large datasets
that can contain thousands of features. In such a large data space, distances
between data points become less meaningful, which can make model output harder
to interpret. For example, it becomes difficult to tell which data points are
close to each other and therefore represent similar data.
Dimensionality reduction helps you reduce the number of features while retaining
the most important characteristics of the dataset. Reducing the number of
features also helps reduce the training time of any models that use the data as
input.
BigQuery ML offers the following models for dimensionality reduction:
- Principal component analysis (PCA)
- Autoencoder
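As a minimal sketch, you create either type of model with a CREATE MODEL statement. The dataset, table, and model names below (`mydataset`, `sample_data`, and so on) are placeholders, and the option values shown are illustrative rather than recommended settings:

```sql
-- Sketch: a PCA model that reduces the input to 3 principal components.
CREATE OR REPLACE MODEL `mydataset.my_pca_model`
OPTIONS (
  model_type = 'PCA',
  num_principal_components = 3
) AS
SELECT * FROM `mydataset.sample_data`;

-- Sketch: an autoencoder whose bottleneck layer (the middle value in
-- hidden_units) compresses the input to 2 dimensions.
CREATE OR REPLACE MODEL `mydataset.my_autoencoder_model`
OPTIONS (
  model_type = 'AUTOENCODER',
  hidden_units = [8, 4, 2, 4, 8],
  activation_fn = 'RELU'
) AS
SELECT * FROM `mydataset.sample_data`;
```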
You can use PCA and autoencoder models with the ML.PREDICT or
ML.GENERATE_EMBEDDING functions to embed data into a lower-dimensional space,
and with the ML.DETECT_ANOMALIES function to perform anomaly detection.
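For example, the inference calls can be sketched as follows. Table and model names are placeholders, and the `contamination` value is illustrative:

```sql
-- Sketch: project new rows into the lower-dimensional space.
SELECT *
FROM ML.PREDICT(
  MODEL `mydataset.my_pca_model`,
  (SELECT * FROM `mydataset.new_data`)
);

-- Sketch: flag anomalous rows. `contamination` is the expected fraction
-- of anomalies in the training data.
SELECT *
FROM ML.DETECT_ANOMALIES(
  MODEL `mydataset.my_pca_model`,
  STRUCT(0.02 AS contamination),
  (SELECT * FROM `mydataset.new_data`)
);
```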
You can use the output from dimensionality reduction models for tasks such as
the following:
- Similarity search: Find data points that are similar to each other
based on their embeddings. This is great for finding related products,
recommending similar content, or identifying duplicate or anomalous items.
- Clustering: Use embeddings as input features for k-means models to
group data points based on their similarities.
This can help you discover hidden patterns and insights in your data.
- Machine learning: Use embeddings as input features for classification
or regression models.
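As a sketch of the clustering case, the output of a PCA model can feed a k-means model directly. The table and model names are placeholders; the `principal_component_N` column names assume the output format of ML.PREDICT for a PCA model, and `num_clusters = 5` is illustrative:

```sql
-- Sketch: cluster data by its principal components rather than by its
-- original high-dimensional features.
CREATE OR REPLACE MODEL `mydataset.my_kmeans_model`
OPTIONS (
  model_type = 'KMEANS',
  num_clusters = 5
) AS
SELECT
  principal_component_1,
  principal_component_2,
  principal_component_3
FROM ML.PREDICT(
  MODEL `mydataset.my_pca_model`,
  TABLE `mydataset.sample_data`
);
```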
Recommended knowledge
By using the default settings in the CREATE MODEL
statements and the
inference functions, you can create and use a dimensionality reduction model
even without much ML knowledge. However, having basic knowledge about
ML development helps you optimize both your data and your model to
deliver better results. We recommend using the following resources to develop
familiarity with ML techniques and processes:
- Machine Learning Crash Course
- Intro to Machine Learning
- Intermediate Machine Learning
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-29 UTC.