



Google Cloud Blog

  • Auditing GKE Clusters across the entire organization Fri, 02 Dec 2022 17:47:00 -0000


    Companies moving to the cloud and running containers are often looking for elasticity: the ability to scale up or down as needed means paying only for the resources used. Automation lets engineers focus on applications rather than on the infrastructure. These are key features of cloud-native, managed container orchestration platforms like Google Kubernetes Engine (GKE).

    GKE clusters leverage Google Cloud to achieve best-in-class security and scalability. They come with two modes of operation and many advanced features. In Autopilot mode, clusters use more automation to reduce operational cost, though with fewer configuration options. For use cases where you need more flexibility, the Standard mode offers greater control and configuration options. Irrespective of the selected mode, there are always recommended, GKE-specific features and best practices to adopt. The official product documentation provides comprehensive descriptions and lists these best practices.

    But how do you ensure that your clusters are following them? Did you consider configuring the Google Groups for RBAC feature to make Kubernetes user management easier? Did you remember to enable NodeLocal DNSCache on Standard GKE clusters to improve DNS lookup times?

    Lapses in GKE cluster configuration may lead to reduced scalability or security. Over time, this may erode the benefits of using the cloud and a managed Kubernetes platform, so keeping an eye on cluster configuration is an important task! There are many solutions for enforcing policies on resources inside a cluster, but only a few address the clusters themselves. Organizations that have implemented an infrastructure-as-code approach may apply controls there. Yet this requires change validation processes and code coverage for the entire infrastructure, and creating GKE-specific policies takes time and product expertise. Even then, there is often a need to check the configurations of running clusters (e.g., for auditing purposes).

    Automating cluster checks

    GKE Policy Automation is a tool that checks all clusters in your Google Cloud organization. It comes with a comprehensive library of codified cluster configuration policies that follow the best practices and recommendations from the Google product and Professional Services teams. Both the tool and the policy library are free and released as an open-source project on GitHub. The solution does not need any modifications on the clusters to operate; it is simple and secure to use, leveraging read-only access to cluster data via Google Cloud APIs.

    You can use GKE Policy Automation to run a manual one-time check, or in an automated, serverless way for continuous verification. The second approach discovers your clusters and checks on a regular basis whether they comply with the defined policies.
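For a manual one-time check, the tool can be run straight from its published container image. This is only a sketch: the `check` subcommand and the `-p`/`-l`/`-n` flags are assumptions to be verified against the GKE Policy Automation README, and the command is printed rather than executed so the example stays credential-free.

```shell
# Hypothetical one-off check using the published container image.
# The "check" subcommand and -p/-l/-n flags are assumptions — verify
# them against the GKE Policy Automation README before running.
GKE_PA_IMAGE="ghcr.io/google/gke-policy-automation:latest"
CMD="docker run --rm ${GKE_PA_IMAGE} check -p my-project -l europe-west2 -n my-cluster"
# Printed rather than executed so the sketch needs no cloud credentials:
echo "${CMD}"
```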

    After successful cluster identification, the tool pulls information using the Kubernetes Engine API. Future releases will support more data inputs to cover additional cluster validation use cases, such as checking scalability limits.

    The GKE Policy Automation engine evaluates the gathered data against the set of codified policies, which originate from the Google GitHub repository by default; users can also specify their own repositories. This is useful for adding custom policies, or when public repository access is not allowed.

    The tool supports a variety of ways for storing the policy check results. Besides the console output, it can save the results in JSON format on Cloud Storage or to Pub/Sub. Although those are good cloud integration patterns, they need further JSON data processing. We recommend leveraging the GKE Policy Automation integration with the Security Command Center.

    The Security Command Center is Google Cloud's centralized vulnerability and threat reporting service. GKE Policy Automation registers itself there as an additional source of findings. Then, for each cluster evaluation, the tool creates new findings or updates existing ones. This brings SCC features such as finding visualization and management to cluster checks, and the findings are also subject to any configured SCC notifications.

    In the following sections we show how to run GKE Policy Automation in a serverless way, leveraging the cluster discovery mechanism and the Security Command Center integration.

    Continuous cluster evaluation

    GKE Policy Automation comes with sample Terraform code that creates the infrastructure for serverless operation. The diagram below shows the overall architecture of this solution.

    Serverless deployment of GKE Policy Automation tool

    The serverless GKE Policy Automation solution uses a containerized version of the tool. 

    • The Cloud Run jobs service executes the container as a job to perform cluster checks, at configurable intervals triggered by Cloud Scheduler.

    • The solution discovers GKE clusters running in your organization using Cloud Asset Inventory.

    • On each run, the solution gathers cluster data and evaluates configuration policies against them. The policies originate from the Google Github repository by default or from user specified repositories.

    • At the end, the tool sends evaluation results to the Security Command Center as findings.

    GKE Policy Automation container images are available in the GitHub Container Registry. To run containers with Cloud Run, the images have to be either built in the cloud or copied there. The provided Terraform code provisions an Artifact Registry repository for this purpose, and a later section of this post describes how to copy the GKE Policy Automation image to Google Cloud.


    The operator will also need sufficient IAM roles to create the necessary resources:

    • roles/editor role or equivalent on the GKE Policy Automation project

    • roles/iam.securityAdmin role or equivalent on the Google Cloud organization, to set IAM policies for Asset Inventory and Security Command Center

    Adjusting variables

    The Terraform code needs inputs to provision the desired infrastructure. In our scenario, GKE Policy Automation will check clusters in the entire organization. A dedicated IAM service account will be created for running the tool, with the following IAM roles at the Google Cloud organization level:

    • roles/cloudasset.viewer to detect running GKE clusters

    • roles/container.clusterViewer to get GKE clusters configuration

    • roles/securitycenter.sourcesAdmin to register the tool as SCC source

    • roles/securitycenter.findingsEditor to create findings in SCC

    The .tfvars file below provides all of the necessary inputs to create the above role bindings. Remember to adjust the project_id, region and organization values accordingly. 

    project_id = "gke-policy-123"
    region     = "europe-west2"

    discovery = {
      organization = "123456789012"
    }

    output_scc = {
      enabled      = true
      organization = "123456789012"
    }

    The tool can also be used to check clusters only in a given folder or project. This can be useful when granting organization-wide permissions is not a viable option; in such a case, adjust the discovery parameters in the above example. Please refer to the input variables README documentation for more details.

    Besides the infrastructure, we need to configure the tool itself. The repository provides an example config.yaml file, with the following content:

    silent: true
    clusterDiscovery:
      enabled: true
      organization: ${SCC_ORGANIZATION}
    outputs:
      - securityCommandCenter:
          provisionSource: true
          organization: ${SCC_ORGANIZATION}

    Terraform will populate the variable values in the config.yaml file above and copy it to Secret Manager. The secret will then be mounted as a volume in the Cloud Run job container. The full documentation of the tool's configuration file is available in the GKE Policy Automation user guide.

    Both configuration files should be saved in a terraform subdirectory in the GKE Policy Automation folder.

    Running Terraform

    1. Initialize Terraform by running  terraform init

    2. Create and inspect plan by running terraform plan -out tfplan

    3. Apply plan by running terraform apply tfplan

    Copying container image

    The steps below describe how to copy the GKE Policy Automation container image from GitHub to Google Artifact Registry.

    1. Set the environment variables. Remember to adjust the values accordingly.

    export GKE_PA_PROJECT_ID=gke-policy-123
    export GKE_PA_REGION=europe-west2

    2. Pull the latest image

    docker pull ghcr.io/google/gke-policy-automation:latest

    3. Log in to the Artifact Registry Docker repository

    gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://${GKE_PA_REGION}-docker.pkg.dev

    4. Tag the container image

    docker tag ghcr.io/google/gke-policy-automation:latest ${GKE_PA_REGION}-docker.pkg.dev/${GKE_PA_PROJECT_ID}/gke-policy-automation/gke-policy-automation:latest

    5. Push the container image

    docker push ${GKE_PA_REGION}-docker.pkg.dev/${GKE_PA_PROJECT_ID}/gke-policy-automation/gke-policy-automation:latest

    Creating Cloud Run job

    At the time of writing, Google's Terraform provider does not yet support Cloud Run jobs, so the Cloud Run job has to be created manually. The command below uses gcloud and the environment variables defined earlier.

    gcloud beta run jobs create gke-policy-automation \
      --image=${GKE_PA_REGION}-docker.pkg.dev/${GKE_PA_PROJECT_ID}/gke-policy-automation/gke-policy-automation:latest \
      --command=/gke-policy,check \
      --args=-c,/etc/secrets/config.yaml \
      --set-secrets=/etc/secrets/config.yaml=gke-policy-automation:latest \
      --service-account=gke-policy-automation@${GKE_PA_PROJECT_ID}.iam.gserviceaccount.com \
      --set-env-vars=GKE_POLICY_LOG=INFO \
      --region=${GKE_PA_REGION} \
      --project=${GKE_PA_PROJECT_ID}
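The sample Terraform code also provisions the Cloud Scheduler trigger for this job. As a rough equivalent in gcloud form (the trigger name, daily schedule, and Cloud Run Admin API URI shape are assumptions, and the command is composed and printed rather than executed):

```shell
# Sketch of the Cloud Scheduler trigger the sample Terraform provisions.
# The trigger name, the daily schedule, and the jobs:run URI are assumptions.
GKE_PA_PROJECT_ID=gke-policy-123
GKE_PA_REGION=europe-west2
JOB_URI="https://${GKE_PA_REGION}-run.googleapis.com/apis/run.googleapis.com/v1/namespaces/${GKE_PA_PROJECT_ID}/jobs/gke-policy-automation:run"
CMD="gcloud scheduler jobs create http gke-policy-automation-trigger"
CMD="${CMD} --schedule='0 1 * * *' --uri=${JOB_URI} --http-method=POST"
CMD="${CMD} --oauth-service-account-email=gke-policy-automation@${GKE_PA_PROJECT_ID}.iam.gserviceaccount.com"
CMD="${CMD} --location=${GKE_PA_REGION} --project=${GKE_PA_PROJECT_ID}"
echo "${CMD}"
```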

    Observing the results

    The configured Cloud Scheduler will run the GKE Policy Automation job once per day. To observe results immediately, we advise running the job manually. Successful Cloud Run job executions can be viewed in the Cloud Console, as in the example below.
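A manual run can be kicked off with `gcloud beta run jobs execute`. The variables match the earlier steps; the command is printed here so the sketch stays runnable without cloud credentials.

```shell
# Execute the job once by hand instead of waiting for Cloud Scheduler.
GKE_PA_PROJECT_ID=gke-policy-123
GKE_PA_REGION=europe-west2
CMD="gcloud beta run jobs execute gke-policy-automation --wait --region=${GKE_PA_REGION} --project=${GKE_PA_PROJECT_ID}"
# Printed rather than executed so the sketch needs no credentials:
echo "${CMD}"
```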

    GKE Policy Automation Cloud Run jobs

    Security Command Center Integration

    Once the GKE Policy Automation job runs successfully, it will produce findings in the Security Command Center for the discovered GKE clusters. You can view them in the SCC Findings view, as shown below. Additionally, findings are available via the SCC API and subject to any configured SCC Pub/Sub notifications. This makes it possible to leverage existing SCC integrations, e.g., with the systems used by your Security Operations Center teams.

    GKE Policy Automation Findings in Security Command Center

    Selecting a specific finding navigates to the detailed finding view, which shows all of the finding's attributes. Additionally, the Source Properties tab contains the GKE Policy Automation-specific properties.

    The example below shows the source properties of a GKE Policy Automation finding in SCC.

    GKE Policy Automation Finding Source Properties

    GKE Policy Automation updates existing SCC findings during each run. Findings are set to inactive when the corresponding policy becomes valid for a given cluster; in the same way, existing inactive findings are set back to active when a policy is violated.

    For cases when some policies are not relevant for given clusters, we recommend using the Security Command Center mute feature. For example, if the Binary Authorization policy is not relevant for development clusters, a muting rule matching a development project identifier can be created.


    In this article we have shown how to establish GKE cluster governance for a Google Cloud organization using GKE Policy Automation, an open-source tool created by the Google Professional Services team. Combined with Google Cloud serverless offerings such as Cloud Run jobs, the tool lets you build a fully automated solution, and the Security Command Center integration lets you process GKE policy evaluation results in a unified, cloud-native way.

  • Improving model quality at scale with Vertex AI Model Evaluation Fri, 02 Dec 2022 17:03:00 -0000

    Typically, data scientists retrain models at regular intervals to keep them fresh and relevant. This practice may turn out to be costly if the model is retrained too often, or ineffective if retraining isn't frequent enough to serve the business. Ideally, data scientists prefer to continuously evaluate models and retrain them only when performance starts to degrade. At scale, continuous model evaluation requires a standard, efficient evaluation process and system.

    After training a model, data scientists and ML engineers use an offline dataset of historical examples from the production environment to evaluate model performance across several model experiments. If the evaluation metrics meet predefined thresholds, they can proceed to deploy the model, either manually or from an ML pipeline. This process serves to find the best model (approach and configuration) to put into production. Figure 1 illustrates a basic workflow in which data scientists and ML engineers gather new data and retrain the model at regular intervals.

    Figure 1: Standard Model Evaluation workflow (click to enlarge)

    Depending on the use case, continuous model evaluation might be required. Once deployed, the model is monitored in production by detecting skew and drift in both feature and target distributions. When a change in those distributions is detected, the model could start underperforming, so you need to evaluate it using production data. Figure 2 illustrates an advanced workflow where teams gather new data and continuously evaluate the model; based on the outcome of continuous evaluation, the model is retrained with new data.

    Figure 2: Continuous Model Evaluation workflow (click to enlarge)

    At scale, building this continuous evaluation system can be a challenging task. There are several factors that contribute to making it difficult including getting access to production data, provisioning computational resources, standardizing the model evaluation process, and guaranteeing its reproducibility. With the intent to simplify and accelerate the entire process of defining and running ML evaluations, Vertex AI Model Evaluation enables you to iteratively assess and compare model performance at scale. 

    With Vertex AI Model Evaluation, you define a test dataset, a model, and an evaluation configuration as inputs and it will return model performance metrics whether you are training your model using your notebook, running a training job, or an ML pipeline on Vertex AI.

    Vertex AI Model Evaluation is integrated with the following products:

    • Vertex AI Model Registry, which provides a new view for accessing different evaluation jobs and the resulting metrics they produce after the model training job completes.
    • Model Builder SDK, which introduces a new evaluate method to get classification, regression, and forecasting metrics for a model trained locally.
    • Managed Pipelines, with a new evaluation component to generate and visualize metric results within the Vertex AI Pipelines console.

    Now that you know the new features of Vertex AI Model Evaluation, let’s see how you can leverage them to improve your model quality at scale. 

    Evaluate performances of different models in Vertex AI Model Registry

    As the decision maker who has to promote the model to production, you need to govern the model launching process. 

    To release the model, you need to easily retrieve, visualize, and compare the offline and online performance and explainability metrics of the trained models. 

    Thanks to the integration between Vertex AI Model Registry and Vertex AI Model evaluation, you can now view all historical evaluations of each model (BQML, AutoML and custom models). For each model version, the Vertex AI Model Registry console shows classification, regression, and forecasting metrics depending on the type of model. 

    Figure 3: Compare Model evaluation across model versions (click to enlarge)

    You can also compare those metrics across different model versions to identify and explain the best model version to deploy as an online experiment or directly to production.

    Figure 4: Compare Model evaluation view (click to enlarge)

    Train and evaluate your model in your notebook using Model Builder SDK 

    During the model development phase, as a data scientist, you experiment with different models and parameters in your notebook. Then, you calculate measurements such as accuracy, precision, and recall, and build performance plots like confusion matrix and ROC on a validation/test dataset. Those indicators allow you and your team to review the candidate model’s performance and compare it with other model(s) to ultimately decide whether the model is ready to be formalized in a component of the production pipeline.

    The new Vertex AI Model Builder SDK allows you to calculate those metrics and plots by leveraging Vertex AI. By providing the testing dataset, the model and the evaluation configuration, you can submit an evaluation job. After the evaluation task is completed, you are able to retrieve and visualize the results of the evaluation locally across different models and compare them side-by-side to decide whether or not to deploy it as an online experiment or directly into production. 

    Below is an example of how to run an evaluation job for a classification model. 

    # Upload the model as a Vertex AI model resource
    vertex_model = aiplatform.Model.upload(
        display_name="your-model",
        artifact_uri="gs://your-bucket/your-model",
        serving_container_image_uri="your-serving-image",
    )
    # Run an evaluation job
    eval_job = vertex_model.evaluate(
        gcs_source_uris=["gs://your-bucket/your-evaluate-data.csv"],
        prediction_type="classification",
        class_labels=["class_1", "class_2"],
        target_column_name="target_column",
        prediction_label_column="prediction",
        prediction_score_column="prediction",
        experiment="your-experiment-name",
    )

    Notice that all parameters and metrics of the evaluation job are tracked as part of the experiment to guarantee its reproducibility.

    Model Evaluation in the Vertex AI SDK is in Experimental release. To get access, please fill out this form.

    Operationalize model evaluation with Vertex AI Pipelines

    Once the model has been validated, ML engineers can proceed to deploy the model, either manually or from a pipeline. When a production pipeline is required, it has to include a formalized evaluation component that produces the model quality metrics. In this way, the model evaluation process can be replicated at scale, and the evaluation metrics can be logged to downstream systems such as experiment tracking and model registry services. Finally, decision makers can use those metrics to validate models and determine which model will be deployed.

    Currently, building and maintaining an evaluation component requires time and resources that you would rather spend solving new challenges and building new ML products. To simplify and accelerate the process of defining and running evaluations within ML pipelines, we are excited to announce new operators for Vertex AI Pipelines that make it easier to operationalize model evaluation in a Vertex AI pipeline. These components automatically generate and track evaluation results to facilitate easy retrieval and model comparison. The main evaluation operators are:

    1. GetVertexModelOp to initialize the Vertex model artifact to evaluate.
    2. EvaluationDataSamplerOp to create a random input dataset of a specified size for computing Vertex XAI feature attributions.
    3. TargetFieldDataRemoverOp to remove the target field from the input dataset, supporting custom models for Vertex batch prediction.
    4. ModelBatchPredictOp to run a Google Cloud Vertex BatchPredictionJob and generate predictions for model evaluation.
    5. ModelEvaluationClassificationOp to compute evaluation metrics on a trained model's batch prediction results.
    6. ModelEvaluationFeatureAttributionOp to generate feature attributions on a trained model's batch explanation results.
    7. ModelImportEvaluationOp to store a model evaluation artifact as a resource of an existing Vertex model with ModelService.

    With these components, you can define a training pipeline that starts from a model resource and generates the evaluation metrics and the feature attributions from a given dataset. Below is an example of a Vertex AI pipeline using those components in combination with a Vertex AI AutoML model in a classification scenario.

    Figure 5: Vertex AI Model Evaluation Pipeline (click to enlarge)


    Vertex AI Model Evaluation enables customers to accelerate and operationalize the model performance analysis and validation steps required in an end-to-end MLOps workflow. Thanks to its native integration with other Vertex AI services, it allows you to run model evaluation jobs (measuring model performance on a test dataset) regardless of which Vertex AI service you used to train the model (AutoML, Managed Pipelines, custom training, etc.), and to store and visualize the evaluation results across multiple models in Vertex AI Model Registry. With these capabilities, Vertex AI Model Evaluation helps users decide which model(s) can progress to online testing or be put into production and, once in production, when models need to be retrained.

    Now it's your turn. Check out the notebooks in the official GitHub repo and the resources below to get started with Vertex AI Model Evaluation. And remember... always have fun!

    Want to learn more?



    Thanks to Jing Qi, Kevin Naughton, Marton Balint, Sara Robinson, Soheila Zangeneh, Karen Lin and all Vertex AI Model Evaluation team for their support and feedback. 

  • Google Cloud Biotech Acceleration Tooling Fri, 02 Dec 2022 17:00:00 -0000

    Bio-pharma organizations can now leverage quick start tools and setup scripts to begin running scalable workloads in the cloud today. 

    This capability is a boon for research scientists and organizations in the bio-pharma space, from those developing treatments for diseases to those creating new synthetic biomaterials. Google Cloud’s solutions teams continue to shape products with customer feedback and contribute to platforms on which Google Cloud customers can build. 

    This guide provides a way to get started with simplified cloud architectures for specific workloads. Cutting-edge research and biotechnology development organizations are often science-first, and can therefore save valuable resources by leveraging existing technology infrastructure starting points embedded with Google's best practices. Biotech Acceleration Tooling frees up scientist and researcher bandwidth while still enabling flexibility. The majority of the tools outlined in this guide come with quick-start Terraform scripts to automate the stand-up of environments for biopharma workloads.

    Solution overview

    This deployment creates the underlying infrastructure in accordance with Google’s best practices, configuring appropriate networking including VPC networking, security, data access, and analytics notebooks. All environments are created with Terraform scripts, which define cloud and on-prem resources in configuration files. A consistent workflow can be used to provision infrastructure.

    If beginning from scratch, you will first need to consider security, networking, and identity and access management setup to keep your organization's computing environment safe. To do this, follow the steps below:

    1. Log in to Google Cloud Platform

    2. Use the Terraform automation repository within the Security Foundations Blueprint to deploy your new environment

    Workloads needed can vary, and so should solutions tooling. We offer easy to deploy code and workflows for various biotech use cases including AlphaFold, genomics sequencing, cancer data analysis, clinical trials, and more.


    AlphaFold

    AlphaFold is an AI system developed by DeepMind that predicts a protein's 3D structure from its amino acid sequence, regularly achieving accuracy competitive with experiments. It is useful for researchers doing drug discovery and protein design, often computational biologists and chemists. To get started running AlphaFold batch inference on your own protein sequences, leverage these setup scripts. To better understand the batch inference solution, see this explanation of the optimized inference pipeline and the video explanation. If your team does not need to run AlphaFold at scale and is comfortable running structures one at a time on less optimized hardware, see the simplified AlphaFold run guide.

    Genomics Tooling

    Researchers today have the ability to generate an incredible amount of biological data. Once you have this data, the next step is to refine and analyze it for meaning. Whether you are developing your own algorithms or running common tools and workflows, you now have a large number of software packages to help you out.

    Here we make a few recommendations for what technologies to consider. Your technology choice should be based on your own needs and experience. There is no “one size fits all” solution.

    Genomics tools that may be of assistance for your organization include generalized genomics sequencing pipelines, Cromwell genomics, Databiosphere dsub genomics, and DeepVariant.


    Cromwell

    The Broad Institute has developed the Workflow Definition Language (WDL) and an associated runner called Cromwell. Together these have allowed the Broad to build, run at scale, and publish its recommended-practices pipelines. If you want to run the Broad's published GATK workflows or are interested in using the same technology stack, take a look at this deployment of Cromwell.


    Databiosphere dsub

    This module is packaged to use Databiosphere dsub as a workflow engine, containerized tools (FastQC), and the Google Cloud Life Sciences API to automate execution of pipeline jobs. The function can easily be modified to adapt to other bioinformatics tools.

    dsub is a command-line tool that makes it easy to submit and run batch scripts in the cloud. The Cloud Function has embedded dsub libraries to execute pipeline jobs in Google Cloud.
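To give a feel for the dsub interface, here is a minimal invocation sketch (installed with `pip install dsub`). The bucket paths, project, and FastQC image tag are placeholders; the flags shown are standard dsub options, and the command is composed and printed rather than executed so the example needs no cloud access.

```shell
# Minimal dsub invocation sketch — paths, project, and image are placeholders.
CMD="dsub --provider google-cls-v2 --project my-project --regions us-central1"
CMD="${CMD} --logging gs://my-bucket/logs"
CMD="${CMD} --image biocontainers/fastqc:v0.11.9_cv8"
CMD="${CMD} --input FASTQ=gs://my-bucket/sample.fastq.gz"
CMD="${CMD} --output REPORT=gs://my-bucket/out/sample_fastqc.html"
# \$ keeps the FASTQ/REPORT references for dsub to resolve inside the task:
CMD="${CMD} --command 'fastqc \${FASTQ} --outdir \$(dirname \${REPORT})' --wait"
echo "${CMD}"
```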


    DeepVariant

    DeepVariant is an analysis pipeline that uses a deep neural network to call genetic variants from next-generation DNA sequencing data.

    Cancer Data Analysis

    ISB-CGC (ISB Cancer Gateway in the Cloud) enables researchers to analyze cloud-based cancer data through a collection of powerful web-based tools and Google Cloud technologies. It is one of three National Cancer Institute (NCI) Cloud Resources tasked with bringing cancer data and computation power together through cloud platforms.

    Interactive web-based Cancer Data Analysis & Exploration

    Explore and analyze ISB-CGC cancer data through a suite of graphical user interfaces (GUIs) that let you select and filter data from one or more public datasets (such as TCGA, CCLE, and TARGET), combine it with your own uploaded data, and analyze it using a variety of built-in visualization tools.

    Cancer data analysis using Google BigQuery

    Processed data is consolidated by data type (e.g., Clinical, DNA Methylation, RNAseq, Somatic Mutation, Protein Expression) from sources including the Genomic Data Commons (GDC) and Proteomic Data Commons (PDC) and transformed into ISB-CGC Google BigQuery tables. This allows users to quickly analyze information from thousands of patients in curated BigQuery tables using Structured Query Language (SQL). SQL can be used from the Google BigQuery console, but can also be embedded within Python, R, and complex workflows, giving users flexibility. The easy yet cost-effective "burstability" of BigQuery allows you to calculate statistical correlations across millions of combinations of data points within minutes, as compared to days or weeks on a non-cloud system.
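As a sketch of what such an analysis looks like from the bq CLI: the query below counts cases per TCGA project. The dataset and table name are illustrative placeholders — browse the ISB-CGC BigQuery tables for the exact names — and the command is printed rather than run so the example needs no GCP project.

```shell
# Example bq CLI query over ISB-CGC data; the table name is illustrative.
SQL="SELECT project_short_name, COUNT(*) AS n FROM \`isb-cgc-bq.TCGA.clinical_gdc_current\` GROUP BY project_short_name ORDER BY n DESC LIMIT 5"
CMD="bq query --use_legacy_sql=false \"${SQL}\""
# Printed rather than executed so the sketch needs no GCP credentials:
echo "${CMD}"
```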

    Available Cancer Data Sources

    Clinical Trials Studies

    The FDA’s MyStudies platform enables organizations to quickly build and deploy studies that interact with participants through purpose-built apps on iOS and Android. MyStudies apps can be distributed to participants privately or made available through the App Store and Google Play.

    This open-source repository contains the code necessary to run a complete FDA MyStudies instance, inclusive of all web and mobile applications.

    Open-source deployment tools are included for semi-automated deployment to Google Cloud Platform (GCP). These tools can be used to deploy the FDA MyStudies platform in just a few hours. These tools follow compliance guidelines to simplify the end-to-end compliance journey. Deployment to other platforms and on-premise systems can be performed manually.

    Data Science

    For generalized data science pipelines to build custom predictive models or do interactive analysis within notebooks, check out our data science workflow setup scripts to get to work immediately. These include database connections and setup, virtual private cloud enablement, and notebooks.

    Reference material

    RAD Lab - a secure sandbox for innovation

    During research, scientists are often asked to spin up research modules in the cloud to create more flexibility and collaboration opportunities for their projects. However, many teams lack the necessary cloud skills, so projects never get off the ground.

    To accelerate innovation, RAD Lab is a Google Cloud-based sandbox environment which can help technology and research teams advance quickly from research and development to production. RAD Lab is a cloud-native research, development, and prototyping solution designed to accelerate the stand-up of cloud environments by encouraging experimentation, without risk to existing infrastructure. It’s also designed to meet public sector and academic organizations’ specific technology and scalability requirements with a predictable subscription model to simplify budgeting and procurement. You can find the repository here.

    RAD Lab delivers a flexible environment to collect data for analysis, giving teams the liberty to experiment and innovate at their own pace, without the risk of cost overruns. Key features include:

    • Open-source environment that runs on the cloud for faster deployment—with no hardware investment or vendor lock-in.

    • Built on Google Cloud tools that are compliant with regulatory requirements like FedRAMP, HIPAA, and GDPR security policies.

    • Common IT governance, logging, and access controls across all projects.

    • Integration with analytics tools like BigQuery, Vertex AI, and pre-built notebook templates.

    • Best-practice operations guidance, including documentation and code examples, that accelerate training, testing, and building cloud-based environments.

    • Optional onboarding workshops for users, conducted by Google Cloud specialists. 

    The next generation of RAD Lab includes RAD Lab UI, which provides a modern interface for less technical users to deploy Google Cloud resources – in just three steps.

    This guide would not have been possible without the contributions of Alex Burdenko, Emily Du, Joan Kallogjeri, Marshall Worster, Shweta Maniar, and the RAD Lab team.

  • Monitoring GPU workloads on GKE with NVIDIA Data Center GPU Manager (DCGM) Fri, 02 Dec 2022 17:00:00 -0000

    Artificial intelligence (AI) and machine learning (ML) have become an increasingly important enterprise capability, including use cases such as product recommendations, autonomous vehicles, application personalization, and automated conversational platforms. Building and deploying ML models demand high-performance infrastructure. Using NVIDIA GPUs can greatly accelerate the training and inference system. Consequently, monitoring GPU performance metrics to understand workload behavior is critical for optimizing the ML development process.

    Many organizations use Google Kubernetes Engine (GKE) to manage NVIDIA GPUs to run production AI inference and training at scale. NVIDIA Data Center GPU Manager (DCGM) is a set of tools from NVIDIA to manage and monitor NVIDIA GPUs in cluster and datacenter environments. DCGM includes APIs for collecting a detailed view of GPU utilization, memory metrics, and interconnect traffic. It provides the system profiling metrics needed for ML engineers to identify bottlenecks and optimize performance, or for administrators to identify underutilized resources and optimize for cost.

    In this blog post we demonstrate:

    • How to set up NVIDIA DCGM in your GKE cluster, and 

    • How to observe the GPU utilization using either a Cloud Monitoring Dashboard or Grafana with Prometheus.

    NVIDIA Data Center GPU Manager

    NVIDIA DCGM simplifies GPU administration, including setting configuration, performing health checks, and observing detailed GPU utilization metrics. Check out NVIDIA’s DCGM user guide to learn more.

    Here we focus on the gathering and observing of GPU utilization metrics in a GKE cluster. To do so, we also make use of NVIDIA DCGM exporter. This component collects GPU metrics using NVIDIA DCGM and exports them as Prometheus style metrics.

    GPU Monitoring Architecture

    The following diagram describes the high-level architecture of the GPU monitoring setup using NVIDIA DCGM, NVIDIA DCGM Exporter, and Google Managed Prometheus, Google Cloud’s managed offering for Prometheus.

    [Figure: GPU monitoring architecture (1 DCGM.jpg)]

    In the above diagram, the boxes labeled “NVIDIA A100 GPU” represent an example NVIDIA GPU attached to a GCE VM Instance. Dependencies amongst components are traced out by the wire connections.

    The “AI/ML workload” represents a pod that has been assigned one or more GPUs. The “NVIDIA DCGM” and “NVIDIA DCGM exporter” boxes are pods running as privileged daemonsets across the GKE cluster. A ConfigMap contains the list of DCGM fields (in particular GPU metrics) to collect. 

    The “Managed Prometheus” box represents managed prometheus components deployed in the GKE cluster. This component is configured to scrape Prometheus style metrics from the “DCGM exporter” endpoint. “Managed Prometheus” exports each metric to Cloud Monitoring as “prometheus.googleapis.com/DCGM_NAME/gauge.” The metrics are accessible through various Cloud Monitoring APIs, including the Metric Explorer page.
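    To sketch how these exported metrics can be queried, the request below uses Cloud Monitoring’s Prometheus-compatible HTTP API. MY_PROJECT is a placeholder project ID; the metric name comes from the DCGM configuration used later in this guide:

```shell
# Query the exported GPU utilization metric through Cloud Monitoring's
# PromQL-compatible endpoint. MY_PROJECT is a placeholder project ID.
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  --data-urlencode 'query=avg by (modelName) (DCGM_FI_DEV_GPU_UTIL)' \
  'https://monitoring.googleapis.com/v1/projects/MY_PROJECT/location/global/prometheus/api/v1/query'
```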

    To provide greater flexibility, we also include components that can set up an in-cluster Grafana dashboard. This consists of a “Grafana” pod that accesses the available GPU metrics through a “Prometheus UI” front end as a data source. The Grafana page is then made accessible at a Google hosted endpoint through an “Inverse Proxy” agent.

    All the GPU monitoring components are deployed to a namespace “gpu-monitoring-system.”


    Prerequisites

    • Google Cloud Project

    • Quota for NVIDIA GPUs (more information at GPU quota)

    • GKE version 1.21.4-gke.300 or above, with the “beta” component, to install Managed Prometheus.

    • GKE version 1.18.6-gke.3504 or above to support all available cloud GPU types.

    • NVIDIA Datacenter GPU Manager requires NVIDIA Driver R450+.

    Deploy a Cluster with NVIDIA GPUs

    1. Follow the instructions at Run GPUs in GKE Standard node pools to create a GKE cluster with NVIDIA GPUs.

    Here is an example to deploy a cluster with two A2 VMs with 2 x NVIDIA A100 GPUs each. For a list of available GPU platforms by region, see GPU regions and zones.

    gcloud beta container clusters create CLUSTER_NAME \
        --zone us-central1-f \
        --machine-type=a2-highgpu-2g \
        --num-nodes=2 \
        --enable-managed-prometheus

    Note the presence of the “--enable-managed-prometheus” flag. This allows us to skip the next step. By default a cluster will deploy the Container-Optimized OS on each VM.

    2. Enable Managed Prometheus on this cluster. It allows us to collect and export our GPU metrics to Cloud Monitoring. It will also be used as a data source for Grafana.

    gcloud beta container clusters update CLUSTER_NAME \
        --zone ZONE \
        --enable-managed-prometheus

    3. Before you can use kubectl to interact with your GKE cluster, you need to fetch the cluster credentials.

    gcloud container clusters get-credentials CLUSTER_NAME \
        --zone us-central1-f

    4. Before we can interact with the GPUs, we need to install the NVIDIA drivers. The following installs NVIDIA drivers for VMs running the Container-Optimized OS.

    kubectl apply -f \
        https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml

    Wait for “nvidia-gpu-device-plugin” to reach the Running state on all nodes. This can take a couple of minutes.

    kubectl get pods -n kube-system | grep nvidia-gpu-device-plugin

    Download GPU Monitoring System Manifests

    Download the Kubernetes manifest files and dashboards used later in this guide.

    git clone https://github.com/suffiank/dcgm-on-gke && cd dcgm-on-gke

    Configure GPU Monitoring System

    Before we deploy the NVIDIA Data Center GPU manager and related assets, we need to select which GPU metrics we want to emit from the cluster. We also want to set the period at which we sample those GPU metrics. Note that all these steps are optional. You can choose to keep the defaults that we provide.

    1. View and edit the ConfigMap section of quickstart/dcgm_quickstart.yml to select which GPU metrics to emit:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nvidia-dcgm-exporter-metrics
    …
    data:
      counters.csv: |
        # Utilization (the sample period varies depending on the product),,
        DCGM_FI_DEV_GPU_UTIL, gauge, GPU utilization (in %).
        DCGM_FI_DEV_MEM_COPY_UTIL, gauge, Memory utilization (in %).

        # Utilization of IP blocks,,
        DCGM_FI_PROF_SM_ACTIVE, gauge,
        DCGM_FI_PROF_SM_OCCUPANCY, gauge,
        DCGM_FI_PROF_PIPE_TENSOR_ACTIVE, gauge,
        DCGM_FI_PROF_PIPE_FP64_ACTIVE, gauge,
        DCGM_FI_PROF_PIPE_FP32_ACTIVE, gauge,
        DCGM_FI_PROF_PIPE_FP16_ACTIVE, gauge,

        # Memory usage,,
        DCGM_FI_DEV_FB_FREE, gauge,
        DCGM_FI_DEV_FB_USED, gauge,
        DCGM_FI_DEV_FB_TOTAL, gauge,

        # PCIE,,
        DCGM_FI_PROF_PCIE_TX_BYTES, gauge,
        DCGM_FI_PROF_PCIE_RX_BYTES, gauge,

        # NVLink,,
        DCGM_FI_PROF_NVLINK_TX_BYTES, gauge,
        DCGM_FI_PROF_NVLINK_RX_BYTES, gauge,

    A complete list of available NVIDIA DCGM fields is at NVIDIA DCGM list of Field IDs. Here we briefly outline the GPU metrics set in this default configuration.

    The most important of these is the GPU utilization (“DCGM_FI_DEV_GPU_UTIL”). This metric indicates what fraction of time the GPU is not idle. Next is the GPU used memory (“DCGM_FI_DEV_FB_USED”), which indicates how many GPU memory bytes have been allocated by the workload. This tells you how much headroom remains on the GPU memory. For an AI workload, you can use this to gauge whether you can run a larger model or increase the batch size. 
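    For instance, with assumed readings from an A100 40GB (the values below are illustrative, not from this guide), the remaining memory headroom can be computed as:

```shell
# Illustrative headroom calculation from assumed DCGM readings (MiB).
fb_total=40536   # e.g. DCGM_FI_DEV_FB_TOTAL on an A100 40GB
fb_used=31200    # e.g. DCGM_FI_DEV_FB_USED
echo "headroom: $(( 100 * (fb_total - fb_used) / fb_total ))%"   # prints "headroom: 23%"
```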

    The GPU SM utilization (“DCGM_FI_PROF_SM_ACTIVE”) lets you know what fraction of the GPU SM processors are in use during the workload. If this is low, it indicates there is headroom to submit parallel workloads to the GPU. On an AI workload you might send multiple inference requests. Taken together with the SM occupancy (“DCGM_FI_PROF_SM_OCCUPANCY”) it can let you know if the GPUs are being efficiently and fully utilized.

    The GPU Tensor activity (“DCGM_FI_PROF_PIPE_TENSOR_ACTIVE”) indicates whether your workload is taking advantage of the Tensor Cores on the GPU. The Tensor Cores are specialized IP blocks within an SM processor that enable accelerated matrix multiplication. It can indicate to what extent your workload is bound on dense matrix math.

    The FP64, FP32, and FP16 activity (e.g. “DCGM_FI_PROF_PIPE_FP64_ACTIVE”) indicates to what extent your workload is exercising the GPU engines targeting a specific precision. A scientific application might skew to FP64 calculations and an ML/AI workload might skew to FP16 calculations.

    The GPU NVLink activity (e.g. “DCGM_FI_PROF_NVLINK_TX_BYTES”) indicates the bandwidth (in bytes/sec) of traffic transmitted directly from one GPU to another over high-bandwidth NVLink connections. This can indicate whether the workload requires GPU-to-GPU communication and, if so, what fraction of its time it spends on collective communication.

    The GPU PCIe activity (e.g. “DCGM_FI_PROF_PCIE_TX_BYTES“) indicates the bandwidth (in bytes/sec) of traffic transmitted to or from the host system.

    All the fields with “_PROF_” in the DCGM field identifier are “profiling metrics.” For a detailed technical description of their meaning take a look at NVIDIA DCGM Profiling Metrics. Note that these do have some limitations for NVIDIA hardware before H100. In particular they cannot be used concurrently with profiling tools like NVIDIA Nsight. You can read more about these limitations at DCGM Features, Profiling Sampling Rate.

    2. (Optional) By default, we have configured the scrape interval at 20 seconds. You can adjust the period at which the NVIDIA DCGM exporter scrapes NVIDIA DCGM, and likewise the interval at which GKE Managed Prometheus scrapes the NVIDIA DCGM exporter:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: nvidia-dcgm-exporter
    …
    spec:
    …
        args:
        - hostname $NODE_NAME; dcgm-exporter -k --remote-hostengine-info $(NODE_IP) --collectors /etc/dcgm-exporter/counters.csv --collect-interval 20000
    …
    apiVersion: monitoring.googleapis.com/v1alpha1
    kind: PodMonitoring
    metadata:
      name: nvidia-dcgm-exporter-gmp-monitor
    …
    spec:
    …
      endpoints:
      - port: metrics
        interval: 20s

    Selecting a shorter sample period (say, 1 second) gives a higher-resolution view of GPU activity and the workload pattern. However, a higher sample rate also results in more data being emitted to Cloud Monitoring, which may increase your Cloud Monitoring bill. See “Metrics from Google Cloud Managed Service for Prometheus” on the Cloud Monitoring Pricing page to estimate charges.
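    As a rough back-of-the-envelope sketch (the metric count, GPU count, and interval below are assumptions, not pricing guidance), the sample volume scales like this:

```shell
# Samples emitted per month = metrics per GPU x GPUs x seconds per month / interval.
# All values below are illustrative assumptions.
metrics_per_gpu=15
gpus=4
interval=20                   # scrape interval in seconds
seconds_per_month=2592000     # 30 days
samples=$(( metrics_per_gpu * gpus * seconds_per_month / interval ))
echo "$samples samples/month"     # prints "7776000 samples/month"
```

    Halving the interval doubles the sample volume, so tune it against your monitoring budget.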

    3. (Optional) In this example we use NVIDIA DCGM 2.3.5. You can adjust the NVIDIA DCGM version by selecting a different image from the NVIDIA container registry. Note that the NVIDIA DCGM exporter version must be compatible with the NVIDIA DCGM version, so be sure to change both when selecting a different version.

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: nvidia-dcgm
    …
    spec:
    …
        containers:
        - image: "nvcr.io/nvidia/cloud-native/dcgm:2.3.5-1-ubuntu20.04"
    …
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: nvidia-dcgm-exporter
    …
    spec:
    …
        containers:
        - name: nvidia-dcgm-exporter
          image: nvcr.io/nvidia/k8s/dcgm-exporter:2.3.5-2.6.5-ubuntu20.04

    Here we have deployed NVIDIA DCGM and the NVIDIA DCGM Exporter as separate containers. It is possible for the NVIDIA DCGM exporter to launch and run the NVIDIA DCGM process within its own container. For a description of the options available on the DCGM exporter, see the DCGM Exporter page.

    Deploying GPU Monitoring System

    1. Deploy NVIDIA DCGM + NVIDIA DCGM exporter + Managed Prometheus configuration.

    kubectl create namespace gpu-monitoring-system
    kubectl apply -f quickstart/dcgm_quickstart.yml

    If successful, you should see a privileged NVIDIA DCGM and NVIDIA DCGM exporter pod running on every GPU node.

    Set up a Cloud Monitoring Dashboard

    1. Import a custom dashboard to view DCGM metrics emitted to Managed Prometheus.

    gcloud monitoring dashboards create \
        --config-from-file quickstart/gke-dcgm-dashboard.yml

    2. Navigate to Monitoring Dashboards page of the Cloud Console to view the newly added “Example GKE GPU” dashboard.

    [Figure: “Example GKE GPU” dashboard in Cloud Monitoring (2 DCGM.jpg)]

    3. For a given panel, you can expand the legend to include the following fields:

    • “cluster” (GKE cluster name)

    • “instance” (GKE node name)

    • “gpu” (GPU index on the GKE node)

    • “modelName” (GPU model, whether NVIDIA T4, V100, A100, etc.)

    • “exported container” (container that has mapped this GPU)

    • “exported namespace” (namespace of the container that has mapped this GPU)

    Because Managed Prometheus monitors the GPU workload through the NVIDIA DCGM exporter, keep in mind that the container name and namespace appear in the “exported container” and “exported namespace” labels.

    Stress Test your GPUs for Monitoring

    We have provided an artificial load so you can observe your GPU metrics in action. Alternatively, feel free to deploy your own GPU workloads.

    1. Apply an artificial load tester for the NVIDIA GPU metrics.

    kubectl apply -f quickstart/dcgm_loadtest.yml

    This load test creates a container on a single GPU. It then gradually cycles through all the displayed metrics. Note that the NVLink bandwidth will only be utilized if the VM has two NVIDIA GPUs connected by an NVLink connection.

    Set up a Grafana Dashboard

    1. Deploy the Prometheus UI frontend, Grafana, and inverse proxy configuration.

    cd grafana
    sed 's/\$PROJECT_ID/<YOUR PROJECT ID>/' grafana.yml | kubectl apply -f -

    Replace <YOUR PROJECT ID> with the project ID for your cluster.

    Wait until the inverse proxy config map is populated with an endpoint for Grafana:

    kubectl get configmap inverse-proxy-config -o jsonpath="{.data}" -n gpu-monitoring-system

    {
      …
      "Hostname": "7b530ae5746e0134-dot-us-central1.pipelines.googleusercontent.com",
      …
    }

    Copy and paste this URL into your browser to access the Grafana page. Only users with access to the same GCP project will be authorized to visit the Grafana page.

    The inverse proxy agent deployed to the GKE cluster uses a Docker Hub image hosted at sukha/inverse-proxy-for-grafana. See Building the Inverse Proxy Agent for more info.

    2. On the Grafana page click “Add your first data source,” then select “Prometheus.” Then fill in the following Prometheus configuration:

    [Figure: Grafana Prometheus data source configuration (3 DCGM.jpg)]

    Note that the full URL should be

    Select “Save and test” at the bottom. You should see “Data source is working.”

    3. Import the Grafana dashboard by selecting “Import” from the “+ Create” widget panel on the left-hand side of the Grafana page.

    Then select the local JSON file “grafana/gke-dcgm-grafana-dashboard.json.”

    You should see the GPU utilization and all other metrics for the artificial workload you deployed earlier. Note that the dashboard is configured to only display metrics whose container label is not the empty string. Therefore it does not display metrics for idle GPUs with no attached containers.

    [Figure: Grafana GPU utilization dashboard (4 DCGM.jpg)]

    4. You can also explore the available metrics directly from the “Explore” page. Select the “Explore” widget along the left-hand panel. Then click “Metrics Browser” to display the list of available metrics and their labels.

    [Figure: Grafana metrics browser (5 DCGM.jpg)]

    You can use the metrics browser to explore the available metrics and, from there, build a custom dashboard with queries that suit your use case.


    In this blog post, we deployed a GKE cluster with NVIDIA GPUs and emitted per-workload GPU utilization metrics to Cloud Monitoring. We also set up a Cloud Monitoring dashboard to view GPU utilization by workload. 

    This GPU monitoring system leveraged the NVIDIA Data Center GPU Manager. All of the available NVIDIA DCGM metrics are accessible for monitoring. We also discussed the available GPU metrics and their meaning in the context of application workloads.

    Finally we provided a means to deploy an in-cluster Grafana GPU utilization dashboard accessible from a Google hosted endpoint for users with access to the corresponding Google Cloud project.

  • Overcoming objections and unblocking the road to Zero Trust Fri, 02 Dec 2022 17:00:00 -0000

    Overcoming blockades and potholes that threaten to derail organizational change is key to any IT or security transformation initiative. Many security and risk leaders have made it a priority to adopt Zero Trust access models so they can deliver better user experiences and strengthen security. Yet before they can even think about change management, they often face pushback from within their organization.

    Earlier this year I had the privilege of chatting twice with Jess Burn, senior analyst at Forrester, about some common challenges CISOs face when planning their Zero Trust journeys. I found our talks enlightening and useful, and wanted to share the key insights with as many organizations as possible that are considering or actively going down this path. Some highlights from my interviews with Jess Burn follow.

    Q: When organizations embark on a Zero Trust implementation, what is the biggest difference observed between the benefits they expect to get versus what they actually experience after implementing Zero Trust?

    I think a lot of organizations look at the benefits of Zero Trust from the perspective of improving overall security posture, which is a great goal but one where the goalpost moves constantly. But what we’ve heard from enterprises that embark on Zero Trust journeys is that there are a lot of small victories and surprise benefits to celebrate along the way. For example, Zero Trust can empower employees, enabling them to work from anywhere with any device as long as they authenticate properly on a compliant device. 

    Zero Trust can also empower employees by shifting responsibility for security away from users and instead letting them rely on technical controls to do their work. For example, employees can use a digital certificate and biometrics to establish identity instead of having to remember passwords. 

    Additionally, Zero Trust can help consolidate tech tools by acting as a catalyst for much-needed process changes. For example, a client of ours, as part of their Zero Trust model adoption journey, classified their critical business assets and identified the tools that aligned to the Zero Trust approach. From there, they were able to reduce the number of point solutions, many of which overlapped in functionality, from 58 to 11 in an 18-month timeframe. There are real cost savings there. 

    Q: How are enterprises measuring success and justifying Zero Trust transformation?

    We advise our clients that measuring the success of Zero Trust efforts and the impact of the transformation should be focused on the ability of their organization to move from network access to granular application-specific access, increase data security through obfuscation, limit the risks associated with excessive user privileges, and dramatically improve security detection and response with analytics and automation. We guide our clients to create outcome-focused metrics that are a good fit for the audiences with whom they are sharing them, whether strategic (board/executives), operational (counterparts in IT/the business), or tactical (security team). Additionally, we think about Zero Trust metrics in the context of three overarching goals:

    1. Protecting customers’ data while preserving their trust. Customers who suffer identity theft or fraud will stop doing business with you if they believe you were negligent in protecting their data. They might also leave you if your post-breach communication is late, vague, or lacks empathy and specific advice. For strategic metrics, exposing changes in customer acquisition, retention, and enrichment rates before and after specific breaches will help you alert business leaders to customer trust issues that could hinder growth. When thinking about tactical metrics, looking at changes in customer adoption of two-factor authentication and the percentage of customer data that is encrypted will help you determine where your security team needs to focus its future efforts.

    2. Recruiting and retaining happy, productive employees who appreciate security. Strategic-level goals should track changes in your organization’s ability to recruit new talent and changes in employee satisfaction, as retention rates indicate morale issues that will affect productivity and customer service. Angry, resentful, or disillusioned employees are more likely to steal data for financial profit or as retaliation for a perceived slight. At a tactical level, employee use of two-factor authentication, implementation of a privileged identity management solution, and strong processes for identity management and governance will help you identify priorities for your security team.

    3. Guarding the organization’s IP and reducing the costs of security incidents. IP may include trade secrets, formulas, designs, and code that differentiate your organization’s products and services from those of competitors. An IP breach threatens your organization’s future revenue and potentially its viability. At a strategic level, executives need to understand if the organization is the target of corporate espionage or nation-state actors and how much IP these actors have already compromised. On the tactical end, the level to which the security team encrypted sensitive data across locations and hosting models tells security staff where they need to concentrate their efforts to discover, classify, and encrypt sensitive data and IP. 

    Q: What is the biggest myth holding back companies from moving to a Zero Trust strategy?

    I think there are several myths about moving to Zero Trust, but one of the most pervasive ones is that it costs too much and will require enterprises to rip and replace their systems and tools. 

    The first thing we say to Forrester clients who come to us with this objection from their peers in IT leadership or from senior executives is that you’re likely not starting from scratch. Look at Forrester’s pillars of Zero Trust — data, workloads, networks, devices, people, visibility and analytics, and automation and orchestration — and then line that up with what your organization already has in place or is in the process of implementing, such as two-factor and privileged access management under the people pillar, cloud security gateways under workload, endpoint security suites under devices, vulnerability management under networks, and data loss prevention (DLP) under data. 

    You probably have endpoint detection and response (EDR) or managed detection and response (MDR) for security analytics, and maybe you’ve started to automate some tasks in your security operations center (SOC). This should be very encouraging to you, your peers in IT operations, and executives from a cost perspective. Zero Trust doesn’t need to be a specific budget line item. 

    You may need to invest in new technology at some point, but you’re likely already doing that as tools become outdated. Where you’ll need some investment, we’ve found, is in process. There may be a fair amount of change management tied to the adoption of the Zero Trust model, and you should budget for that in people hours.

    Q: What is a common theme you observe across organizations that are able to do this well?

    Executive buy-in, for sure, but also peer buy-in from stakeholders in IT and the business. A lot of the conversations and change management needed to move some Zero Trust initiatives forward — like moving to least privilege — are big ones. Anything that requires business buy-in and then subsequent effort is going to be time consuming and probably frustrating at times. But it’s a necessary effort, and it will increase understanding and collaboration between these groups with frequently competing priorities. 

    Our advice is to first identify who your Zero Trust stakeholders are and bust any Zero Trust myths to lay the groundwork for their participation.

    Once you've identified your stakeholders and addressed their concerns, you need to persuade and influence. Ask questions and actively listen to your stakeholders without judgment. Articulate your strategy well, tell stakeholders what their role is, and let them know what you need from them to be successful. They may feel daunted by the shifts in strategy and architecture that Zero Trust demands. Build a pragmatic, realistic roadmap that clearly articulates how you will use existing security controls and will realize benefits. 

    Q: What is a common theme you observe across organizations that struggle with a Zero Trust implementation?

    Change is uncomfortable for most people. This discomfort produces detractors who continuously try to impede progress. Security leaders with too many detractors will see their Zero Trust adoption plans and roadmaps fizzle. Security leaders we speak to are often surprised by criticism from stakeholders in IT, and sometimes even on the security team, that portrays change as impossible. 

    If you’re in this situation, you’ll need to step back and spend more time influencing stakeholders and address their concerns. Not everyone is familiar with Zero Trust terminology. You can use Forrester’s The Definition of Modern Zero Trust or NIST’s Zero Trust architecture to create a common lexicon that everyone can understand. 

    This approach allows you to use the network effect as stakeholders become familiar with the model. Additionally, your stakeholders may feel daunted by the fundamental shifts in strategy and architecture that Zero Trust demands. Build a pragmatic, realistic roadmap that clearly articulates how you will use existing security controls and tools and realize benefits.

    From there, develop a hearts-and-minds campaign focusing on the value of Zero Trust. Highlight good news using examples that your stakeholders will relate to, such as how Zero Trust can improve the employee experience — something that most people are interested in both personally and organizationally.

    Lastly, don’t go it alone. Extend your reach by finding Zero Trust champions who act as extra members of the security team and as influencers across the organization. Create a Zero Trust champions program by identifying people who have interest in or enthusiasm for Zero Trust, creating a mandate for them, and motivating and developing your champions by giving them professional development and other opportunities. 

    Next steps

    If you missed the webinar, be sure to view it on-demand here. You can also download a copy of Forrester’s “A Practical Guide To A Zero Trust Implementation” here. This report guides security leaders through a roadmap for implementing Zero Trust using practical building blocks that take advantage of existing technology investments and are aligned to current maturity levels. 

    We also have more Zero Trust content available for you, including multiple sessions from our Google Cloud Security Talks on December 7, available on-demand.

  • Movie Score Prediction with BigQuery, Vertex AI and MongoDB Atlas Thu, 01 Dec 2022 18:13:00 -0000

    Hey there! It’s been a minute since we last wrote about Google Cloud and MongoDB Atlas together. We had an idea for a new kind of experiment involving BigQuery, BQML, Vertex AI, Cloud Functions, MongoDB Atlas, and Cloud Run, and we thought we’d put it together in this blog. You will learn how we brought these services together to deliver a full-stack application, along with the other independent functions and services the application uses. Have you read our last blog about serverless MEAN stack applications with Cloud Run and MongoDB Atlas? If not, this would be a good time to take a look, because some topics in this discussion reference steps from that blog. In this experiment, we are going to bring together BigQuery, Vertex AI, and MongoDB Atlas to predict a categorical variable using a supervised machine learning model created with AutoML.

    The experiment

    We all love movies, right? Well, most of us do. Irrespective of language, geography, or culture, we enjoy not only watching movies but also talking about the nuances and qualities that go into making a movie successful. I have often wondered, “If only I could alter a few aspects and create an impactful difference in the outcome in terms of the movie’s rating or success factor.” That would involve predicting the success score of the movie so I can play around with the variables, dialing values up and down to impact the result. That is exactly what we have done in this experiment.

    Summary of architecture

    Today we'll predict a movie score using Vertex AI AutoML and store the result transactionally in MongoDB Atlas. The model is trained with data stored in BigQuery and registered in Vertex AI. The list of services can be composed into three sections:

    1. ML Model Creation
    2. User Interface / Client Application
    3. Trigger to predict using the ML API

    ML Model Creation

    1. Data sourced from CSV to BigQuery
    2. BigQuery data integrated into Vertex AI for AutoML model creation
    3. Model deployed in Vertex AI Model Registry for generating endpoint API

    User Interface Application

    4. MongoDB Atlas for storing transactional data and powering the client application
    5. Angular client application interacting with MongoDB Atlas
    6. Client container deployed in Cloud Run

    Trigger to predict using the ML API

    7. Java Cloud Functions to trigger invocation of the deployed AutoML model’s endpoint that takes in movie details as request from the UI, returns the predicted movie SCORE, and writes the response back to MongoDB

    High-level overview of the architecture

    Preparing training data

    You can use any publicly available dataset, create your own, or use the dataset from the CSV in git. I have applied basic processing steps to the linked dataset for this experiment; feel free to do more elaborate cleansing and preprocessing for your implementation. Below are the independent variables in the dataset:

    Name (String)

    Rating (String)

    Genre (String, Categorical)

    Year (Number)

    Released (Date)

    Director (String)

    Writer (String)

    Star (String)

    Country (String, Categorical)

    Budget (Number)

    Company (String)

    Runtime (Number)

    BigQuery dataset using Cloud Shell

    BigQuery is a serverless, multi-cloud data warehouse that can scale from bytes to petabytes with zero operational overhead. This makes it a great choice for storing ML training data. But there’s more — the built-in machine learning (ML) and analytics capabilities allow you to create no-code predictions using just SQL queries. And you can access data from external sources with federated queries, eliminating the need for complicated ETL pipelines. You can read more about everything BigQuery has to offer in the BigQuery product page.

    BigQuery allows you to focus on analyzing data to find meaningful insights. In this blog, you'll use the bq command-line tool to load a local CSV file into a new BigQuery table. Follow the below steps to enable BigQuery:

    Activate Cloud Shell and create your project

    You will use Cloud Shell, a command-line environment running in Google Cloud. Cloud Shell comes pre-loaded with bq.

    1. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.
    2. Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
    3. Navigate to BigQuery to enable the API. You can also open the BigQuery web UI directly by entering the following URL in your browser:
    4. https://console.cloud.google.com/bigquery.
    5. From the Cloud Console, click Activate Cloud Shell. Make sure you navigate to the project and that it’s authenticated. Refer to gcloud config commands.

    Creating and loading the dataset

    A BigQuery dataset is a collection of tables. All tables in a dataset are stored in the same data location. You can also attach custom access controls to limit access to a dataset and its tables.

    1. In Cloud Shell, use the bq mk command to create a dataset called "movies."


        bq mk --location=<<LOCATION>> movies

    2. Use --location=<<LOCATION>> to set the location to a region you can remember, and use the same region in the Vertex AI step as well (both instances should be in the same region).

    3. Make sure you have the data file (.csv) ready. The file can be downloaded from GitHub. Execute the following commands in Cloud Shell to clone the repository and navigate to the project:
        git clone <<repository link>>
        cd movie-score

    You may also use a public dataset of your choice. To open and query the public dataset, follow the documentation.

    4. Use the bq load command to load your CSV file into a BigQuery table (please note that you can also directly upload from the BigQuery UI):
        bq load --source_format=CSV --skip_leading_rows=1 movies.movies_score \
          ./movies_bq_src.csv \
          Id:numeric,name:string,rating:string,genre:string,year:numeric,released:string,score:string,director:string,writer:string,star:string,country:string,budget:numeric,company:string,runtime:numeric,data_cat:string

    --source_format=CSV - uses the CSV data format when parsing the data file.

    --skip_leading_rows=1 - skips the first line in the CSV file because it is a header row.

    movies.movies_score - defines the table the data should be loaded into.

    ./movies_bq_src.csv - defines the file to load. The bq load command can also load files from Cloud Storage with gs://my_bucket/path/to/file URIs.

    The schema, which can be defined in a JSON schema file or as a comma-separated list. (I’ve used a comma-separated list.)

    Hurray! Our CSV data is now loaded in the table movies.movies_score. Remember, you can create a view to keep only the essential columns that contribute to the model training and ignore the rest.
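    Such a view might look like the sketch below. (The view name and column choice here are just an illustration, not part of the repository; keep whichever columns matter for your model.)

```sql
-- Hypothetical view keeping only columns likely to matter for training
CREATE OR REPLACE VIEW movies.movies_score_v AS
SELECT rating, genre, year, country, budget, runtime, score, data_cat
FROM movies.movies_score;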

    5. Let’s query it, quick!

    We can interact with BigQuery in three ways:

    1. BigQuery web UI

    2. The bq command

    3. API

     Your queries can also join your data against any dataset (or datasets, so long as they're in the same location) that you have permission to read. Find a snippet of the sample data below:

        SELECT name, rating, genre, runtime FROM movies.movies_score LIMIT 3;

    I have used the BigQuery Web SQL Workspace to run queries. The SQL Workspace looks like this:

    BigQuery SQL Workspace
    Query Result

    Predicting movie success score (user score on a scale of 1-10)

    In this experiment, I am predicting the success score (user score/rating) for the movie as a multi-class classification model on the movie dataset.

    A quick note about the choice of model

    The model used here is an experimental choice. Based on an evaluation of the results across a few models I tried initially, I went ahead with LOGISTIC_REG to keep it simple and to get results closer to the actual movie ratings from several databases. Please note that this should be considered just a sample implementation of the model and is definitely not the recommended model for this use case. Another way of implementing this is to predict the outcome of the movie as GOOD/BAD using the logistic regression model, instead of predicting the score.
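    As a rough sketch of what the BigQuery ML alternative looks like (the model name and column choice below are assumptions based on the dataset above, not the exact SQL from the other blog):

```sql
-- Train a multi-class logistic regression on the movies table,
-- using the user score as the label
CREATE OR REPLACE MODEL movies.movies_score_model
OPTIONS (model_type = 'LOGISTIC_REG', input_label_cols = ['score']) AS
SELECT rating, genre, year, country, budget, runtime, score
FROM movies.movies_score
WHERE data_cat = 'TRAIN';

-- Predict scores for the held-out rows
SELECT predicted_score
FROM ML.PREDICT(MODEL movies.movies_score_model,
  (SELECT rating, genre, year, country, budget, runtime
   FROM movies.movies_score
   WHERE data_cat = 'TEST'));
```

    For classification models, ML.PREDICT returns a predicted_<label> column along with per-class probabilities, which makes it easy to compare against the actual scores.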

    Using BigQuery data in Vertex AI AutoML integration

    Use your data from BigQuery to directly create an AutoML model with Vertex AI. Remember, we can also perform AutoML from BigQuery itself, register the model with Vertex AI, and expose the endpoint. Refer to the documentation for BigQuery AutoML. In this example, however, we will use Vertex AI AutoML to create our model. 

    Creating a Vertex AI data set

    Go to Vertex AI from the Google Cloud Console and enable the Vertex AI API if you haven’t already. Then expand Data and select Datasets, click Create data set, select the TABULAR data type and the “Regression / classification” option, and click Create:

    Vertex AI “Create data set” configuration

    Select data source

    On the next page, select a data source:

    Choose the “Select a table or view from BigQuery” option and select the table from BigQuery in the BigQuery path BROWSE field. Click Continue.

    A Note to remember 

    The BigQuery instance and Vertex AI data sets should have the same region in order for the BigQuery table to show up in Vertex AI.

    BigQuery object selection on the Vertex AI Source configuration

    When you are selecting your source table/view from the browse list, remember to click the radio button to continue with the steps below. If you accidentally click the name of the table/view, you will be taken to Dataplex; just browse back to Vertex AI if this happens to you.

    Train your model 

    Once the dataset is created, you should see the Analyze page with the option to train a new model. Click that:

    Vertex AI Training step - Variable analysis

    Configure training steps 

    Go through the steps in the Training Process.

    Leave Objective as Classification.

    Select the AutoML option on the first page and click Continue:

    Vertex AI Training step configuration

    Give your model a name.

    Select Target Column name as “Score” from the dropdown that shows and click Continue.

    Also note that you can check the “Export test dataset to BigQuery” option, which makes it easy to see the test set with results in the database efficiently without an extra integration layer or having to move data between services.

    Vertex AI Training configuration

    On the next pages, you have the option to select any advanced training options you need and the hours you want to set the model to train. Please note that you might want to be mindful of the pricing before you increase the number of node hours you want to use for training.

    Click Start Training to begin training your new model.

    Vertex AI Training configuration

    Evaluate, deploy, and test your model 

    Once the training is completed, you should be able to click Training (under the Model Development heading in the left-side menu) and see your training listed in the Training Pipelines section. Click that to land on the Model Registry page. You should be able to: 

    1. View and evaluate the training results.
    Vertex AI Model Evaluation

    2. Deploy and test the model with your API endpoint.

    Once you deploy your model, an API endpoint gets created which can be used in your application to send requests and get model prediction results in the response.

    Vertex AI Model Testing
    3. Batch predict movie scores.

    You can integrate batch predictions with BigQuery database objects as well: read from a BigQuery object (in this case, I created a view to batch predict movie scores) and write into a new BigQuery table. Provide the respective BigQuery paths as shown in the image and click CREATE:
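    The input view for batch prediction can be as simple as the sketch below. (The view name and columns are assumptions for illustration; use whatever subset your model expects.)

```sql
-- Hypothetical input view for Vertex AI batch prediction
CREATE OR REPLACE VIEW movies.movies_batch_input AS
SELECT rating, genre, year, country, budget, runtime
FROM movies.movies_score
WHERE data_cat = 'TEST';
```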

    Vertex AI Batch Prediction configuration

    Once it is complete, you should be able to query your database for the batch prediction results. But before you move on from this section, make sure you take a note of the deployed model’s Endpoint id, location, and other details on your Vertex AI endpoint section.

    We have created a custom ML model for the same use case using BigQuery ML with no code but only SQL, and it’s already detailed in another blog.

    Serverless web application with MongoDB Atlas and Angular

    The user interface for this experiment is using Angular and MongoDB Atlas and is deployed on Cloud Run. Check out the blog post describing how to set up a MongoDB serverless instance to use in a web app and deploy that on Cloud Run.

    In the application, we’re also utilizing Atlas Search, a full-text search capability, integrated into MongoDB Atlas. Atlas Search enables autocomplete when entering information about our movies. For the data, we imported the same dataset we used earlier into Atlas.

    Client User Interface Application

    You can find the source code of the application in the dedicated Github repository

    MongoDB Atlas for transactional data

    In this experiment, MongoDB Atlas is used to record transactions in the form of: 

    1. Real time user requests. 

    2. Prediction result response.

    3. Historical data to facilitate UI fields autocompletion. 

    If instead, you want to configure a pipeline for streaming data from MongoDB to BigQuery and vice-versa, check out the dedicated Dataflow templates.

    Once you provision your cluster and set up your database, make sure to note the below in preparation of our next step, creating the trigger:

    1. Connection String

    2. Database Name

    3. Collection Name

    Please note that this client application uses the Cloud Functions endpoint (explained in the section below), which takes user input, predicts the movie score, and inserts the result into MongoDB.

    Java Cloud Function to trigger ML invocation from the UI

    Cloud Functions is a lightweight, serverless compute solution for developers to create single-purpose, stand-alone functions that respond to Cloud events without needing to manage a server or runtime environment. In this section, we will prepare the Java Cloud Functions code and dependencies and authorize for it to be executed on triggers.

    Remember how we have the endpoint and other details from the ML deployment step? We are going to use that here, and since we are using Java Cloud Functions, we will use pom.xml for handling dependencies. We use google-cloud-aiplatform library to consume the Vertex AI AutoML endpoint API:

        <dependency>
          <groupId>com.google.cloud</groupId>
          <artifactId>google-cloud-aiplatform</artifactId>
          <version>3.1.0</version>
        </dependency>

    1. Search for Cloud Functions in Google Cloud console and click “Create Function.” 

    2. Enter the configuration details, like Environment, Function name, Region, Trigger (in this case, HTTPS), Authentication of your choice, enable “Require HTTPS,” and click next/save.

    Cloud Functions creation

    3. On the next page, select Runtime (Java 11), Source Code (Inline or upload), and start editing

    Cloud Functions configuration
    4. You can clone the .java source code and pom.xml from the git repository links.

    If you are using Gen2 (recommended), you can use the class name and package as-is. If you use Gen1 Cloud Functions, please change the package name and class name to “Example.”

    5. In the .java file, you will notice the part where we connect to the MongoDB instance to write data (use your credentials):

        MongoClient client = MongoClients.create(YOUR_CONNECTION_STRING);
        MongoDatabase database = client.getDatabase("movies");
        MongoCollection<Document> collection = database.getCollection("movies");

    6. You should also notice the ML model invocation part in the Java code (use your endpoint):

        PredictionServiceSettings predictionServiceSettings =
            PredictionServiceSettings.newBuilder()
                .setEndpoint("<<location>>-aiplatform.googleapis.com:443")
                .build();
        int cls = 0;
        …
        EndpointName endpointName = EndpointName.of(project, location, endpointId);
    7. Go ahead and deploy the function once all changes are completed. You should see the endpoint URL that will be used in the client application to send requests to this Cloud Function.

    That’s it! Nothing else to do in this section. The endpoint is used in the client application’s user interface to send user parameters to the Cloud Function as a request and receive the movie score as a response. The endpoint also writes the request and response to the MongoDB collection.

    What’s next?

    Thank you for following us on this journey! As a reward for your patience, you can check out the predicted score for your favorite movie. 

    1. Analyze and compare the accuracy and other evaluation parameters between the BigQuery ML model created manually with SQL and the Vertex AI AutoML model.

    2. Play around with the independent variables and try to increase the accuracy of the prediction result.

    3. Take it one step further and try the same problem as a Linear Regression model by predicting the score as a float/decimal point value instead of rounded integers.

    To learn more about some of the key concepts in this post you can dive in here:

    Linear Regression Tutorial

    AutoML Model Types


    Related Article

    Databases on Google Cloud Part 6: BigQuery and No-code SQL-only ML

    You will read about 1. Creating a BigQuery dataset using Cloud Shell and load data from file 2. BigQuery ML for supervised learnin...

    Read Article

    Related Article

    Serverless MEAN Stack Applications with Cloud Run and MongoDB Atlas

    In this blog, we’re going to see how Cloud Run and MongoDB come together to enable a completely serverless MEAN stack application develop...

    Read Article
  • Built with BigQuery: How Datalaksa provides a unified marketing and customer data warehouse for brands in South East Asia Thu, 01 Dec 2022 17:00:00 -0000

    Editor’s note: The post is part of a series highlighting our partners, and their solutions, that are Built with BigQuery.

    Datalaksa is a unified marketing and customer data warehouse created by Persuasion Technologies, a Big Data Analytics & Digital Marketing consultancy serving clients throughout South East Asia. It enables marketing teams to optimize campaigns by combining data from across their marketing channels and enabling insight driven actions across marketing automation and delivery systems.

    In this post, we explore how they have leveraged Google BigQuery and Google’s other data cloud products to build a solution that is rapid to set-up, highly flexible and able to scale with the needs of their customers. 

    Through close collaboration with their customers, Persuasion Technologies gained first hand experience of the challenges they face trying to optimize campaigns across multiple channels.  “Marketing and CRM teams find it difficult to gain the insights that drive decisions across their marketing channels.” said Tzu Ming Chu, Director, Persuasion Technologies. “An ever-increasing variety of valuable data resides in siloed systems, while the teams that can integrate and analyze that data have never been more in demand. All too frequently this means that campaign planning is incomplete or too slow and campaign execution is less effective, ultimately resulting in lower sales and missed opportunities.”

    Marketing teams of all sizes face similar challenges:

    • Access to technical skills and resources. Integrating data from the various sources requires skilled, and scarce, technical resources to scope out requirements, design solutions, build the pipelines that connect data sources, develop data models and ensure data quality. Machine learning (ML) requires data scientists to develop models to generate advanced insights, and ML Ops engineers to make sure those models are always updated and can be used for scoring at the needed scale.

    • Access to technology. While smaller companies may not have a data warehouse at all, even in large companies that do, gaining access to it and having resources allocated can be a long and difficult process, often with a lack of flexibility to accommodate local needs and with limitations to what can be provided. 

    • Ease of use. Even a well architected data warehouse may see little usage if data or marketing teams can’t figure out how to deep dive into the data. Without an intuitive data model, an easy-to-use interface that enables business users to query, transform and visualize data, and leverage AI models that automate insights and predict outcomes, the full benefits will not be realized. 

    • Flexibility. Each marketing team is different - they each have their own set of requirements, data sources and use cases, and they continue to evolve and scale over time. Many off-the-shelf solutions lack the flexibility to accommodate the unique needs of each business.

    In these challenges, the Persuasion Technologies team saw an opportunity — an opportunity to help their customers in a repeatable way, ensuring they all had easy access to rich data warehouse capabilities, and to enable them to create a new product-centric business and revenue stream. 

    Datalaksa, a unified marketing and customer data warehouse

    Datalaksa is a solution that enables marketing teams to easily, securely and scalably bring together marketing and customer data from multiple channels into a cloud data warehouse and enables them with advanced capabilities to derive actionable insights and take actions that increase campaign efficiency and effectiveness. 

    Out of the box, Datalaksa includes data connectors that enable data to be imported from a wide range of platforms such as Google Marketing Platform, Facebook Ads and eCommerce systems, which means that marketing teams can unify data from across channels quickly and easily without reliance on scarce and costly technical resources to build and maintain integrations.

    To accelerate time-to-insight, Datalaksa provides pre-built data models, machine learning models and analytical templates for key marketing use cases such as cohort analyses, customer clustering, campaign recommendation and lifetime value models, all wrapped within a simple and intuitive user interface that enables marketing teams to easily query, transform, enrich and analyze their data - decreasing the time from data to value. 

    It’s often said that “insight without action is worthless” — to ensure this is not the case for Datalaksa users, the solution prompts action through notifications and enables audience segmentation tools and integrations back to marketing automation systems such as Salesforce Marketing Cloud, Google Ads and eCommerce systems. 

    For example, teams can set thresholds and conditions using SQL queries to send notification emails for ‘out of stock’ or ‘low stock’ items to relevant teams and automatically update product recommendation algorithms to offer in-stock items. Through built-in connectors, customer audience segments can be activated by automatically updating ad-buying audiences in platforms including TikTok, Google Ads, LinkedIn and Facebook or Instagram. These can be scheduled and updated regularly. 
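    A low-stock condition of this kind could be expressed as a simple scheduled query. The sketch below is illustrative only - the table, columns, and threshold are assumptions, not Datalaksa's actual schema:

```sql
-- Hypothetical scheduled query flagging low-stock products
SELECT product_id, product_name, stock_level
FROM ecommerce.inventory
WHERE stock_level < 10
ORDER BY stock_level;
```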

    All of this is built using Google’s BigQuery and data cloud suite of products.

    Why Datalaksa chose Google Cloud and BigQuery

    The decision to use Google Cloud and BigQuery for Datalaksa was an easy one according to Tzu, “Not only did it accelerate our ability to provide our customers with industry leading data warehousing and analytical capabilities, it’s incredibly easy to integrate with many key marketing systems, including those from Google. This equates directly to saved time and cost, not just during the initial design and build, but in the ongoing support and maintenance.”

    Persuasion Technologies’ story is one of deep expertise, customer empathy and innovative thinking, but BigQuery and Google Cloud’s end-to-end platform for building data-driven applications is also a key part of their success:

    Datalaksa architecture
    • World class analytics. By leveraging BigQuery as the core of Datalaksa, they were immediately able to provide their customers with a fully-managed, petabyte-scale, world class analytics solution with a 99.99% SLA. Additionally, integrated, fully managed services like Cloud Data Loss Prevention help their users discover, classify, and protect their most sensitive data. This is a huge advantage for a startup, and enables them to focus their time on creating value for their customers by building their expertise into their product.

    • Built-in industry leading ML/AI. To deliver advanced machine learning capabilities to its customers, Datalaksa uses BigQuery ML. As the name suggests, BigQuery ML is built right into BigQuery, so not only does it enable them to easily leverage a wide range of advanced ML models, it further decreases development time and cost by eliminating the need to move data between the data warehouse and a separate ML system, while enabling people with no coding skills to gain extra insights by developing machine learning models using SQL constructs.

    • Serverless scalability and efficiency. As all of the services that Datalaksa uses are serverless or fully managed services, they offer high levels of resiliency and effortlessly scale up and down with their customers’ needs while keeping the total cost of ownership low by minimizing the operational overheads.    

    • Simplified data integration. Datalaksa is rapidly adding connections to Google data sources such as Google Ads and YouTube, and hundreds of other SaaS services, through BigQuery Data Transfer Service (DTS), and through access to a wide range of 3rd party connectors in the Google Cloud Marketplace including Facebook Ads and eCommerce cart connectors.

    The Built with BigQuery advantage for ISVs

    Through Built with BigQuery, Google is helping tech companies like Persuasion Technologies build innovative applications on Google’s data cloud with simplified access to technology, helpful and dedicated engineering support, and joint go-to-market programs. Participating companies can: 

    • Get started fast with a Google-funded, pre-configured sandbox. 

    • Accelerate product design and architecture through access to designated experts from the ISV Center of Excellence who can provide insight into key use cases, architectural patterns, and best practices. 

    • Amplify success with joint marketing programs to drive awareness, generate demand, and increase adoption.

    BigQuery gives ISVs the advantage of a powerful, highly scalable data warehouse that’s integrated with Google Cloud’s open, secure, sustainable platform. And with a huge partner ecosystem and support for multi cloud, open source tools and APIs, Google provides technology companies the portability and extensibility they need to avoid data lock-in. 

    Click these links to learn more about Datalaksa and Built with BigQuery.

    Related Article

    Built with BigQuery: Retailers drive profitable growth with SoundCommerce

    SoundCommerce uses Analytics Hub to increase the pace of innovation by sharing datasets with its customers in real-time by using the stre...

    Read Article
  • Break down data silos with the new cross-cloud transfer feature of BigQuery Omni Thu, 01 Dec 2022 17:00:00 -0000

    To help customers break down data silos, we launched BigQuery Omni in 2021. Organizations globally are using BigQuery Omni to analyze data across cloud environments. Now, we are excited to launch the next big evolution for multi cloud analytics: cross-cloud analytics. Cross-cloud analytics tools help analysts and data scientists easily, securely, and cost effectively distribute data between clouds to leverage the analytics tools they need. In April 2022, we previewed a SQL supported LOAD statement that allowed AWS/Azure blob data to be brought into BigQuery as a managed table for advanced analysis. We’ve learned a lot in this preview period. A few learnings stand out:

    1. Cross-cloud operations need to meet analysts where they are. In order for analysts to work with distributed data, workspaces should not be siloed. As soon as analysts are asked to leave their SQL workspaces to copy data or to set up and grant permissions, workflows break down and insights are lost. The same SQL can be used to periodically copy data using BigQuery scheduled queries. The more of the workflow that can be managed by SQL, the better. 

    2. Networking is an implementation detail; latency should be too. The longer an analyst needs to wait for an operation to complete, the less likely a workflow is to be completed end-to-end. BigQuery users expect high performance for a single operation, even if that operation is managed across multiple data centers.

    3. Democratizing data shouldn’t come at the cost of security. In order for data admins to empower data analysts and engineers, they need to be assured there isn’t additional risk in doing so. Data admins and security teams are increasingly looking for solutions that don’t persist user credentials across cloud boundaries. 

    4. Cost control comes with cost transparency. Data transfer can get costly, and we frequently hear that this is the number one concern for multi-cloud data organizations. Providing transparency into single operations and invoices in a consolidated way is critical to driving success for cross-cloud operations. Allowing administrators to cap costs for budgeting is a must.

    This feedback is why we’ve spent much of this year improving our cross-cloud transfer product to optimize releases around these core tenets: 

    • Usability: The LOAD SQL experience allows for data filtering and loading within the same editor across clouds. LOAD SQL supports data formats like JSON, CSV, AVRO, ORC and PARQUET. With semantics for both appending to and truncating tables, LOAD supports both periodic syncs and full table refreshes. We’ve also added SQL support for data lake standards like Hive partitioning and the JSON data type.  

    • Security: With a federated identity model, users don’t have to share or store credentials between cloud providers to access and copy their data. We now also support CMEK for the destination table to help secure data as it’s written in BigQuery, and VPC-SC boundaries to mitigate data exfiltration risks. 

    • Latency: With data movement managed by the BigQuery Write API, users can move just the relevant data without waiting on complex pipelines. We’ve significantly improved job latency for the most common load jobs and continue to see performance improvements. 

    • Cost auditability: From a single invoice, you can see all your compute and transfer costs for LOAD jobs across clouds. Each job comes with statistics to help admins manage budgets.
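    As an illustration of the LOAD experience described in the usability bullet above, a cross-cloud load might look like the sketch below. All dataset, table, bucket, and connection names here are hypothetical; the exact syntax and options are covered in the BigQuery LOAD DATA documentation.

```sql
-- Hypothetical names throughout; this shows append semantics.
-- The truncate-and-reload variant uses LOAD DATA OVERWRITE instead.
LOAD DATA INTO mydataset.billing_events
FROM FILES (
  format = 'PARQUET',
  uris = ['s3://my-aws-bucket/billing/*.parquet']
)
WITH CONNECTION `aws-us-east-1.my_aws_connection`;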

    During our preview period, we saw strong proof points for how cross-cloud transfer can accelerate time to insight and deliver value to data teams. 

    Getting started with a cross-cloud architecture can be daunting, but cross-cloud transfer has helped customers jumpstart proofs of concept because it enables migrating a subset of data without committing to a full migration. Kargo used cross-cloud transfer to accelerate a performance test of BigQuery. “We tested Cross-Cloud Transfer to assist with a proof of concept on BigQuery earlier this year. We found the usability and performance useful during the POC,” said Dinesh Anchan, Manager of Engineering at Kargo. 

    We also saw this product being used to combine key datasets across clouds. A common challenge for customers is managing cross-cloud billing data. CCT is being used to tie together billing files that arrive in blob storage with evolving schemas. "We liked the experience of using Cross-Cloud transfer to help consolidate our billing files across GCP, AWS, and Azure. CCT was a nice solution because we could use SQL statements to load our billing files into BigQuery," said the engineering lead of a large research institution. 

    We’re excited to release the first of many cross-cloud features through BigQuery Omni. Check out the Google Cloud Next session to learn about more upcoming launches in the multicloud analytics space including support for Omni tables and local transformations to help supercharge these experiences for analysts and data scientists. We’re investing in cross-cloud because cloud boundaries shouldn’t slow innovation. Watch this space.

    Availability and pricing

    Cross-Cloud Transfer is now available in all BigQuery Omni regions. Check the BigQuery Omni pricing page for data transfer costs.

    Getting Started

    It has never been easier for analysts to move data between clouds. Check out our getting started (AWS/Azure) page to try out this SQL experience. For a limited trial, BigQuery customers can explore BigQuery Omni at no charge using on-demand byte scans from September 15, 2022 to March 31, 2023 (the "trial period") for data scans on AWS/Azure. Note: data transfer fees for Cross-Cloud Transfer will still apply.

  • Deploy a Google Cloud Marketplace VM with Terraform Thu, 01 Dec 2022 17:00:00 -0000

    Google Cloud Marketplace helps scale and simplify procurement for your organization with online discovery, flexible purchasing, and easy fulfillment of top cloud solutions. Today we are proud to announce an additional Marketplace deployment option for select VM products: Terraform. Now with just a few clicks, you can auto-generate a configuration file to use with Terraform, the popular open-source infrastructure-as-code tool from HashiCorp. Save time and reduce errors by copying the code and running it directly in the terminal, or by using it within your product build pipeline.

    We are starting by enabling Terraform deployment on a limited number of frequently deployed images, e.g. WordPress and Deep Learning VM, and we will be expanding support for this deployment option over time. To help you get started, let’s walk through deploying a Marketplace VM product using a Terraform configuration file.

    Deploying a VM via Terraform

    Let’s say you’re interested in deploying a WordPress VM to your Google Cloud project. You navigate to the Google Cloud Marketplace product page and notice the new Deploy with CLI option in the header.


    This is helpful for you because your organization uses Terraform to manage cloud infrastructure. Having the code generated for you can both save you time and reduce errors.

    After clicking Deploy with CLI and agreeing to the product’s terms of service (if you haven’t already), you’ll need to configure a service account. If you don’t have an existing service account for this product, you will need to create one. You can find all of the steps for this process here.

    Next click Generate code at the bottom of the window.


    You’ll now be presented with the configuration file, ready for Terraform. Double-check the service account, then copy the configuration to use directly on the machine where you have Terraform installed, or with your favorite CI/CD pipeline.
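    For illustration, a generated configuration for a VM product might resemble the sketch below. The project, machine type, and image values are hypothetical stand-ins; the actual file Marketplace generates for you (including the service account wiring) will differ.

```hcl
# Hypothetical sketch only - the real generated file will differ.
provider "google" {
  project = "my-project-id"
  zone    = "us-central1-a"
}

resource "google_compute_instance" "wordpress_vm" {
  name         = "wordpress-1"
  machine_type = "e2-medium"

  boot_disk {
    initialize_params {
      # Marketplace supplies the exact image reference for you.
      image = "projects/click-to-deploy-images/global/images/family/wordpress"
    }
  }

  network_interface {
    network = "default"
    access_config {} # ephemeral external IP
  }
}
```

    After copying the generated file, the usual terraform init / terraform plan / terraform apply workflow deploys the VM.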


    The Command Line Deployment view of the product page also makes it easy to check on the VM after deployment. Click the link to Compute Engine at the bottom of the window to view running instances of this product in your project.

    It’s really that simple. And given Terraform’s popularity, we know this is a helpful new deployment option for many of our customers. Learn more about this new VM deployment option in the CLI deployment documentation. Partners interested in learning more about this option for their VM product(s) can reach out to their partner advisor or marketplace contact.

  • Cloud SQL and PowerShell working together on Linux Thu, 01 Dec 2022 17:00:00 -0000

    PowerShell is a powerful scripting tool often used by database administrators for managing Microsoft SQL Server. This blog will focus on the aspects of using PowerShell for common database tasks and management on a Cloud SQL for SQL Server instance. We will also look at dbatools.io and how this can be used on instances with cross-region replicas, external replication, and other key features enabled. 

    Google Cloud Tools for PowerShell also lets you run various cmdlets from the gcloud CLI - you can learn more in our documentation - but the focus of this post is on running PowerShell from a standalone virtual machine. PowerShell now supports both Windows and Linux, which means you can install it on a Compute Engine Linux Virtual Machine (VM). 

    Initial setup and getting started

    You can install PowerShell on a Compute Engine VM, just as you can install SQL Server Management Studio on a VM for managing a Cloud SQL instance. PowerShell is installed by default on any Windows Compute Engine VM that you create, and you can also install it on a Compute Engine Linux VM. The seven steps below set up the PowerShell environment on a Compute Engine Linux VM with dbatools.io.

    1. Create a VM

    2. Connect to the VM

    3. Install PowerShell

    4. Launch PowerShell

    5. Verify PowerShell setup

    6. Install dbatools.io

    7. Verify dbatools.io setup

    Step 1. Create a VM

    Step 2. Connect to the VM

    Connect to your Linux VM following these instructions

    Step 3. Install PowerShell

    Follow the steps from here to install PowerShell

    Step 4. Launch PowerShell

    Now start PowerShell using the command below. 
    # Start PowerShell
    pwsh

    You should get a command prompt similar to the one below.


    Step 5. Verify PowerShell setup

    You can verify PowerShell is working by running the command below.
    # Verify the installation
    $PSVersionTable
    Step 6. Install dbatools.io

    Next, install dbatools.io using the command below; this is also documented here. 
    # Run this command
    Install-Module dbatools

    Step 7. Verify dbatools.io setup

    In these examples I will be using SQL Server authentication to connect to each database. To do this, we need to create a PowerShell credential so that we can authenticate to the database server.

    $sqlserver = Get-Credential -UserName "sqlserver"

    Now, let's run a test query to verify that our setup is working as expected. We can use the Get-DbaDatabase cmdlet to connect to our SQL Server instance and list all the user databases as below. This helps verify connectivity between source and destination.

    Get-DbaDatabase -SqlInstance $primary -SqlCredential $sqlserver -ExcludeSystem

    dbatools.io provides a lot of cmdlets out of the box that can be used to manage your Cloud SQL instance. You may even use it to complete a few of the DBA tasks recommended in our best practices. The next section covers the scenarios listed below for TempDB.

    • Viewing the number of TempDB files.

    • Adding/removing more files to TempDB after instance resize.

    Updating TempDB 

    There are certain best practices for TempDB to achieve optimal performance. One of the main recommendations is having an equal number of TempDB files (up to 8) matching the number of cores available. You can easily review and manage TempDB configurations using PowerShell.
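    The sizing rule above (one data file per core, capped at eight) is simple enough to express directly. A minimal sketch in Python, assuming the rule exactly as stated:

```python
def recommended_tempdb_files(vcpus: int) -> int:
    """Equal number of TempDB data files, matching vCPU count, capped at 8."""
    return min(vcpus, 8)

# For a 6-vCPU instance like the one resized later in this post:
print(recommended_tempdb_files(6))   # 6
# Larger instances cap out at 8 files:
print(recommended_tempdb_files(16))  # 8
```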

    Viewing the number of TempDB files

    To review your TempDB files for your Cloud SQL instance, use the Get-DbaDbFile cmdlet like the example below.

    Get-DbaDbFile -SqlInstance $primary -SqlCredential $sqlserver -Database tempdb | Format-Table -Property FileGroupName, LogicalName, Size, Growth, GrowthType

    Adding/removing more files to TempDB after instance resize

    If additional files are needed (for example, after resizing your Cloud SQL instance), you can add more files using the Set-DbaTempDbConfig command shown below. You may also need to add TempDB files based on contention observed in TempDB.

    In this example, we have resized the instance to 6 vCPUs, so we need to add four more TempDB data files for 6 data files in total. This step can also be done outside of PowerShell, as documented here.

    Set-DbaTempDbConfig -SqlInstance $primary -SqlCredential $sqlserver -DataFileSize 48 -DataFileCount 6

    You may get a warning that a logical filename is already in use. This happens because the PowerShell script tries to use a filename that already exists. To address this warning, you can remove all the TempDB files except the primary files (tempdev and templog). 

    In our case we will use the script below to complete this action.

    USE [tempdb]
    GO
    exec msdb.dbo.gcloudsql_tempdb_shrinkfile @filename = 'tempdev2', @empty_file = 1
    exec msdb.dbo.gcloudsql_tempdb_shrinkfile @filename = 'tempdev3', @empty_file = 1
    exec msdb.dbo.gcloudsql_tempdb_shrinkfile @filename = 'tempdev4', @empty_file = 1
    GO
    ALTER DATABASE [tempdb] REMOVE FILE [tempdev2]
    ALTER DATABASE [tempdb] REMOVE FILE [tempdev3]
    ALTER DATABASE [tempdb] REMOVE FILE [tempdev4]
    GO

    After the files have been cleared you will have two files remaining.


    Then you can try adding the appropriate number of TempDB files again. Once that is done, you will need to restart your Cloud SQL instance for the changes to take effect.


    Review DB wait statistics

    If you are experiencing performance issues or want to see what your Cloud SQL instance is waiting on, you can use the Get-DbaWaitStatistic cmdlet and check wait stats with a single command.

    Get-DbaWaitStatistic -SqlInstance $primary -SqlCredential $sqlserver

    Sync objects between replicas

    If you are using a Cloud SQL for SQL Server read replica or Cloud SQL as a publisher for transactional replication, there are a few tasks that you should continue to perform, like keeping the SQL agent jobs in sync between instances. In this example, use the steps in Cloud SQL documentation to create a read replica. At the initial creation, objects are in sync on both the primary and secondary. We need to make sure to sync objects created after the replica is set up.

    SQL Agent Jobs

    Let's create a sample job on the primary instance that we will later sync to the replica instance.
    You can use the New-DbaAgentJob cmdlet as below

    New-DbaAgentJob -SqlInstance $primary -Job 'test-job' -Description 'sample job' -SqlCredential $sqlserver

    Now create a job step called get-date using New-DbaAgentJobStep

    New-DbaAgentJobStep -SqlInstance $primary -Job test-job -StepName get-date -Command 'select getdate()' -SqlCredential $sqlserver

    Now let's sync the replica with this new job created in the previous step using Copy-DbaAgentJob

    Copy-DbaAgentJob -Source $primary -SourceSqlCredential $sqlserver -Destination $secondary -DestinationSqlCredential $sqlserver

    We should see the job that was created on the primary get copied. You can also use Get-DbaAgentJob to list the jobs on the replica.

    Get-DbaAgentJob -SqlInstance $secondary

    If you made any changes on the primary and want to sync the secondary, you can use the -Force option to sync the changes. To demonstrate this, we will make the two changes listed below on the primary instance. 

    1. Create a second sql agent job called second-job

    2. Add a second job step called second-step to the job named test-job

    We will review and then sync these changes to the secondary server in the next steps.

    Create a new job

    New-DbaAgentJob -SqlInstance $primary -Job 'second-job' -Description 'second job' -SqlCredential $sqlserver
    New-DbaAgentJobStep -SqlInstance $primary -Job second-job -StepName get-date -Command 'select @@servername' -SqlCredential $sqlserver

    On the primary, add another job step to the first job

    New-DbaAgentJobStep -SqlInstance $primary -Job test-job -StepName second-step -Command 'select current_time' -SqlCredential $sqlserver

    Now let's review the job steps on the primary

    Get-DbaAgentJobStep -SqlInstance $primary -SqlCredential $sqlserver | Format-Table

    Now let’s sync the secondary server with the updates we made, using the -Force option. You should see second-job added and test-job successfully updated as below.

    Copy-DbaAgentJob -Source $primary -SourceSqlCredential $sqlserver -Destination $secondary -DestinationSqlCredential $sqlserver -Force

    Importing data

    You can also use PowerShell to import data (for example, a CSV file). You can choose your own CSV file or create a sample one using docs.google.com/spreadsheets/. Here is a small sample that I created.


    Using cat we can see the contents below as well.
    cat ./import/States.csv

    Use Import-DbaCsv to import this file to your Cloud SQL instance as shown below. This can also be used as an alternative to BULK INSERT.

    Import-DbaCsv -Path ./import/States.csv -SqlInstance $primary -Database test -Table States -SqlCredential $sqlserver

    Now we can also list the table that was imported using Get-DbaDbTable

    Get-DbaDbTable -SqlInstance $primary -Database test -Table States -SqlCredential $sqlserver

    You can see a table with 5 rows was created.


    This can also be used to transfer tables between instances. For example, if you have two databases replicating data, you can transfer objects between the primary and the replica of a publisher setup. This can serve as an initial sync of objects that do not support replication, such as tables without a primary key.

    This can be done using Copy-DbaDbTableData

    We will copy the states table that we imported above from the source to a destination database called newtest.

    Copy-DbaDbTableData -SqlInstance $primary -SqlCredential $sqlserver -Destination $replica -DestinationSqlCredential $sqlserver -Database test -DestinationDatabase newtest -Table dbo.States -AutoCreateTable

    You can see the table was copied to the destination and 5 rows were copied in 141.02 ms


    Performing common DBA tasks

    There are certain tasks that a DBA/DBE may need to perform to keep their Cloud SQL for SQL Server instance healthy, many of which can be done using PowerShell.

    Unused indexes and Duplicate indexes

    In many cases, having indexes improves the performance of selects, but they also add overhead to inserts and updates. It is normally recommended to review unused and duplicate indexes. The Find-DbaDbUnusedIndex and Find-DbaDbDuplicateIndex cmdlets can be used to do this.
    Diagnostic queries on Cloud SQL

    There is a common set of diagnostic queries provided by SQL Server MVP Glenn Berry here.

    We can use Invoke-DbaDiagnosticQuery to automatically execute and return the results for a specific set of queries or for all of them. These queries return a lot of information, so a full run could take a while; it may be a good idea to limit the run to specific queries or to target certain databases.

    Here is an example of what a partial output looks like.

    Invoke-DbaDiagnosticQuery -SqlInstance $primary -SqlCredential $sqlserver

    Here are some examples of queries you can execute on Cloud SQL to get Cross Region Replica Availability Group status and DB backup status. The output can also be formatted to a table as below for better readability.

    Example: Executing Query 'AG Status'

    Invoke-DbaDiagnosticQuery -SqlInstance $primary -SqlCredential $sqlserver -QueryName 'AG Status' | Select -ExpandProperty result | Format-Table -AutoSize

    Example: Executing Query 'Last Backup By Database'

    Invoke-DbaDiagnosticQuery -SqlInstance $primary -SqlCredential $sqlserver -QueryName 'Last Backup By Database' | Select -ExpandProperty result | Format-Table -AutoSize

    In this blog you learned how to use PowerShell on a Compute Engine Linux VM to manage your Cloud SQL instances. We covered only some of the more common scenarios, but there is much more that can be done using PowerShell and dbatools.io. To learn more and see the full list of commands available, you can visit https://dbatools.io/commands/.

  • Bridging The Gap: Melina López on Expanding Access to Technology in Latin America Thu, 01 Dec 2022 17:00:00 -0000

    Editor's note: Melina López describes her job as "training the workforce of the future" across much of Latin America. She is a native of Argentina, and now works in Brazil seeking ways to empower underrepresented groups through podcasts, training sessions, skills-building websites, and more. Her quest for greater social justice began years ago, and has expanded in unexpected ways since she joined Google Cloud. 

    What does it mean to be working in Products & Inclusion in Latin America?

    I have two main roles: one is helping folks become trained and certified in Google Cloud, and the other involves broadening access to Google Cloud among underrepresented communities. A great deal of my effort is focused on working with individuals you don't normally see in tech, including those with different races, genders, and economic backgrounds. I want to help pave pathways for people to build careers as cloud engineers, and learn about tools they can use (check out our Capacita+ website). Besides that, I have a podcast called "Listen Louder," which is about giving more space to marginalized voices. I want to move away from only doing equity work around specific moments in time, such as Black Consciousness Month in Brazil, or International Women's Month, and instead build programmatic initiatives where we can discuss these topics throughout the year.

    Melina López leading an #IamRemarkable workshop for women entrepreneurs in Salvador, Bahia (Brazil), in partnership with Vale do Dendê local startup accelerator.

    How did you come to this?

    I've always worked hard to fuse my passion with my work. I would say that my interest in social justice began while I was at university in Buenos Aires, when my classmates and I started an adult literacy group, teaching marginalized people how to read and write. I then joined a nongovernmental organization in Brazil, where I learned Portuguese. 

    At Google, I started in ad sales and held a few different positions before landing at Google Apps for Work, which eventually became Google Cloud. Brazil is a huge landmark in my story as a human being—the way people show affection, the number of issues they surface around race, gender and justice—it's very personal to me, and has directly influenced my desire to do more with Google Cloud and social justice.

    Do you feel like you are bridging a gap?

    I hope so. People from underrepresented groups don’t often see themselves as creators, but as consumers of technology. We want to help them see the ways they can build their own businesses with technology. However, barriers are different in Brazil than in the U.S., because we don't have things like women's universities and Historically Black Colleges and Universities. So we work with a lot of nongovernmental organizations, spearhead campaigns such as, "Women in Cloud," and offer practical workshops where people can meet with facilitators.

    We also create programs to educate businesses. For example, we did a podcast in Portuguese about "How to combat racism at work," and another one in Spanish about bringing more women into tech. We've also facilitated programs focused on how products like Workspace can be more inclusive—from light enhancement in Meet so people can be seen better, to using the "raised hand" function for those who may be more introverted.

    Has doing this work changed the way you see things in general?

    This work has certainly broadened the way I see Latin America as a whole. Not only because of the new voices and communities we're meeting and introducing innovative technologies to, but because it has allowed me to become "more Brazilian" on a personal level.

  • Cloud Pub/Sub announces General Availability of exactly-once delivery Thu, 01 Dec 2022 15:00:00 -0000

    Today the Google Cloud Pub/Sub team is excited to announce the general availability of the exactly-once delivery feature. With this launch, Pub/Sub customers can receive exactly-once delivery within a cloud region, and the feature provides the following guarantees:

    • No redelivery occurs once the message has been successfully acknowledged.

    • No redelivery occurs while a message is outstanding. A message is considered outstanding until the acknowledgment deadline expires or the message is acknowledged.

    • In case of multiple valid deliveries, due to acknowledgment deadline expiration or client-initiated negative acknowledgment, only the latest acknowledgment ID can be used to acknowledge the message. Any requests with a previous acknowledgment ID will fail.

    This blog discusses the exactly-once delivery basics, how it works, best practices and feature limitations.


    Without exactly-once delivery, customers have to build their own complex, stateful processing logic to remove duplicate deliveries. With the exactly-once delivery feature, there are now stronger guarantees that a message will not be redelivered while its acknowledgment deadline has not passed. It also makes the acknowledgment status more observable to the subscriber. The result is the ability to process messages exactly once much more easily. Let’s first understand why and where duplicates can be introduced. 

    Pub/Sub has the following typical flow of events:

    1. Publishers publish messages to a topic.

    2. A topic can have one or more subscriptions, and each subscription receives all the messages published to the topic.

    3. A subscriber application will connect to Pub/Sub for the subscription to start receiving messages (either through a pull or push delivery mechanism).

    In this basic messaging flow, there are multiple places where duplicates could be introduced. 


    • Publisher might have a network failure resulting in not receiving the ack from Cloud Pub/Sub. This would cause the publisher to republish the message.

    • Publisher application might crash before receiving acknowledgement on an already published message.


    • Subscriber might also experience a network failure after processing the message, resulting in not acknowledging the message. This would result in redelivery of a message that has already been processed.

    • Subscriber application might crash after processing the message, but before acknowledging the message. This would again cause redelivery of an already processed message.


    • Pub/Sub service’s internal operations (e.g. server restarts, crashes, network related issues) resulting in subscribers receiving duplicates.

    It should be noted that there are clear differences between a valid redelivery and a duplicate:

    • A valid redelivery can happen either because of client-initiated negative acknowledgment of a message or when the client doesn't extend the acknowledgment deadline of the message before the acknowledgment deadline expires. Redeliveries are considered valid and the system is working as intended.

    • A duplicate is when a message is resent after a successful acknowledgment or before acknowledgment deadline expiration.
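    The distinction between a valid redelivery and a duplicate can be stated as a small predicate. This is a sketch for illustration only (the function name and flags are invented here, not part of any Pub/Sub API):

```python
def classify_resend(acked: bool, deadline_expired: bool, nacked: bool) -> str:
    """Classify a resent message per the definitions above."""
    if acked or not (deadline_expired or nacked):
        # Resent after a successful ack, or while still outstanding.
        return "duplicate"
    # Resent after a negative ack or an expired ack deadline:
    # the system is working as intended.
    return "valid redelivery"

print(classify_resend(acked=False, deadline_expired=True, nacked=False))  # valid redelivery
print(classify_resend(acked=True, deadline_expired=False, nacked=False))  # duplicate
```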

    Exactly-once side effects

    “Side effect” is a term used when the system modifies the state outside of its local environment. In the context of messaging systems, this is equivalent to a service being run by the client that pulls messages from the messaging system and updates an external system (e.g., transactional database, email notification system). It is important to understand that the feature does not provide any guarantees around exactly-once side effects and side effects are strictly outside the scope of this feature.

    For instance, let’s say a retailer wants to send push notifications to its customers only once. This feature ensures that the message is delivered to the subscriber only once, with no redelivery once the message has been successfully acknowledged or while it is outstanding. It is the subscriber’s responsibility to leverage the notification system’s exactly-once capabilities to ensure that the notification is pushed to the customer exactly once. Pub/Sub has neither connectivity to nor control over the system responsible for delivering the side effect, and hence Pub/Sub’s exactly-once delivery guarantee should not be confused with exactly-once side effects.

    How it works

    Pub/Sub delivers this capability by taking the delivery state that was previously only maintained in transient memory and moving it to a massively scalable persistence layer. This allows Pub/Sub to provide strong guarantees that no duplicates will be delivered while a delivery is outstanding and that no redelivery will occur once the delivery has been acknowledged. Acknowledgement IDs used to acknowledge deliveries have versioning associated with them, and only the latest version will be allowed to acknowledge the delivery or change its acknowledgment deadline. RPCs with any older version of the acknowledgement ID will fail. Due to the introduction of this internal delivery persistence layer, exactly-once delivery subscriptions have higher publish-to-subscribe latency compared to regular subscriptions.


    Let’s understand this through an example. Here we have a single publisher, publishing messages to a topic. The topic has one subscription, for which we have three subscribers.


    Now let’s say a message (in blue) is sent to subscriber#1. At this point, the message is outstanding, which means that Pub/Sub has sent the message, but subscriber#1 has not acknowledged it yet. This is very common as the best practice is to process the message first before acknowledging it. Since the message is outstanding, this new feature will ensure that no duplicates are sent to any of the subscribers. 

    The persistent layer for exactly-once delivery stores a version number with every delivery of a message, which is also encoded in the delivery's acknowledgement ID. The existence of an unexpired entry indicates there is already an outstanding delivery and that we should not deliver a message (providing the stronger guarantee around the acknowledgement deadline). An attempt to acknowledge a message or modify its acknowledgement deadline with an acknowledgement ID that does not contain the most recent version can be rejected and a useful error message can be returned to the acknowledgement request.
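    To make the delivery-versioning rule concrete, here is a toy model in Python. It illustrates the rule just described, not Pub/Sub's actual implementation; the class and method names are invented for the sketch.

```python
# Toy model of versioned acknowledgement IDs (illustrative only):
# each redelivery of a message bumps its stored version, the ack ID
# encodes that version, and only the latest ack ID is honored.

class Subscription:
    def __init__(self):
        self._versions = {}  # message_id -> latest delivery version
        self._acked = set()

    def deliver(self, message_id):
        """Record a delivery and return an ack ID encoding its version."""
        version = self._versions.get(message_id, 0) + 1
        self._versions[message_id] = version
        return (message_id, version)

    def acknowledge(self, ack_id):
        """Honor the ack only if it carries the latest delivery version."""
        message_id, version = ack_id
        if version != self._versions.get(message_id):
            return False  # stale ack ID: the request fails
        self._acked.add(message_id)
        return True

sub = Subscription()
old_ack = sub.deliver("m1")  # first delivery
new_ack = sub.deliver("m1")  # redelivery (e.g., after the ack deadline expired)
print(sub.acknowledge(old_ack))  # False: older acknowledgement ID is rejected
print(sub.acknowledge(new_ack))  # True: latest acknowledgement ID succeeds
```

    In the real service the version state lives in the persistence layer described above, which is also what prevents redelivery of a successfully acknowledged message; the sketch only models the "latest ack ID wins" behavior.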

    Coming back to the example, a delivery version for the delivery of message M (in blue) to subscriber#1 will be stored internally within Pub/Sub (let’s call it delivery#1). This tracks that a delivery of message M is outstanding. Subscriber#1 successfully processes the message and sends back an acknowledgement (ACK#1). The message is then eventually removed from Pub/Sub (subject to the topic’s retention policy). 

    Now let’s consider a scenario that could potentially generate duplicates and how Pub/Sub’s exactly-once delivery feature guards against such failures.

    An example

    In this scenario, subscriber#1 gets the message and processes it by locking a row on the database. The message is outstanding at this point and an acknowledgement has not been sent to Pub/Sub. Pub/Sub knows through its delivery versioning mechanism that a delivery (delivery#1) is outstanding with subscriber#1.

    [Figure 3: exactly-once delivery]

    Without the stronger guarantees provided by this feature, a message could be redelivered to the same or a different subscriber (subscriber#2) while it is still outstanding. Subscriber#2 would then also try to acquire a lock on the same database row for the update, leaving multiple subscribers contending for the same lock and delaying processing.

    Exactly-once delivery eliminates this situation. Thanks to the delivery persistence layer, Pub/Sub knows that there is an unexpired outstanding delivery#1, and it will not deliver the same message to this subscriber (or any other subscriber).

    Using exactly-once delivery

    Simplicity is a key pillar of Pub/Sub, and we have ensured that this feature is easy to use. You can create a subscription with exactly-once delivery using the Google Cloud console, the Google Cloud CLI, a client library, or the Pub/Sub API. Note that only the pull subscription type supports exactly-once delivery, including subscribers that use the StreamingPull API. This documentation section provides more details on creating a pull subscription with exactly-once delivery.
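    As an illustrative sketch (not official sample code), enabling the feature at the API level comes down to one extra boolean field on the subscription. The helper below is hypothetical and all names are placeholders; with the real Python client you would pass a dict like this as the `request` to `SubscriberClient.create_subscription`:

```python
# Hypothetical helper: builds a create-subscription request body with
# exactly-once delivery enabled. The enable_exactly_once_delivery field
# follows the Pub/Sub API's Subscription resource; names are placeholders.
def exactly_once_subscription_request(project, topic, subscription):
    return {
        "name": f"projects/{project}/subscriptions/{subscription}",
        "topic": f"projects/{project}/topics/{topic}",
        "enable_exactly_once_delivery": True,
    }
```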

    Using the feature effectively

    1. Consider using our latest client libraries to get the best feature experience.

    2. Use the new interfaces in the client libraries that allow you to check the response to an acknowledgement; a successful response guarantees no redelivery. Client library samples can be found here - C++, C#, Go, Java, Node.js, PHP, Python, Ruby

    3. To reduce network-related acknowledgement expirations, leverage the minimum lease extension setting: Python, Node.js, Go (MinExtensionPeriod)


    A few caveats to keep in mind:

    1. Exactly-once delivery is a regional feature. That is, the guarantees provided only apply to subscribers running in the same region. If a subscription with exactly-once delivery enabled has subscribers in multiple regions, they might see duplicates.

    2. For other subscription types (push and BigQuery), Pub/Sub initiates the delivery of messages and uses the response from the delivery as an acknowledgement; the message receiver has no way to know whether the acknowledgement was actually processed. In contrast, pull subscriber clients initiate acknowledgement requests to Pub/Sub, which responds with whether or not the acknowledgement was successful. This difference in delivery behavior means that exactly-once semantics do not align well with non-pull subscriptions.

    To get started, you can read more about the exactly-once delivery feature or simply create a new pull subscription for a topic using the Cloud Console or the gcloud CLI.

    Additional resources

    Please check out the additional resources available to explore this feature further:

  • What’s new with Google Cloud (Wed, 30 Nov 2022)

    Want to know the latest from Google Cloud? Find it here in one handy location. Check back regularly for our newest updates, announcements, resources, events, learning opportunities, and more. 

    Tip: Not sure where to find what you’re looking for on the Google Cloud blog? Start here: Google Cloud blog 101: Full list of topics, links, and resources.

    Week of Nov 28 - Dec 2, 2022

    • Zeotap partnered with Google Cloud to build a next-generation customer data platform with a focus on privacy, security, and compliance. This blog post describes their journey using Google Data Cloud, including BigQuery, BI Engine, and Vertex AI, to build customized audience segments at scale. Read more here.

    Week of Nov 14 - Nov 18, 2022

    • Apigee has been named a Leader in the 2022 Gartner Magic Quadrant for API Management, marking the seventh time in a row we’ve earned this recognition. We remain the top API Management vendor in our Ability to Execute, with a strong product offering, customer experience, and sales execution. Please help us share the good news via Twitter, Facebook, and LinkedIn.
    • Connected-Stories has built an end-to-end creative management platform on Google Cloud, including BigQuery and Vertex AI, to develop, serve, and optimize interactive video and display ads that scale across any channel. Read more here.

    Week of Nov 7 - Nov 11, 2022

    • Private Marketplace functionality is now available in preview for Google Cloud Marketplace to help organizations scale compliant product discovery. Learn more here.
    • No-cost access to some of our popular training is available on Coursera until December 31, 2022. Get hands-on experience to enhance your technical skills in the cloud environment for the most in-demand job roles. Training is available for both technical and non-technical professionals and spans foundational to advanced content. You’ll also earn a shareable certificate. Learn more about this training offer today.

    Week of Oct 31 - Nov 4, 2022

    • IAM Deny, a security guardrail that helps Google Cloud customers harden their security posture at scale, is now Generally Available (GA). IAM Deny policies manage access to Google Cloud resources based on the principal, the resource type, and the permissions they’re trying to use, enabling administrators to harden their cloud security posture easily and at scale.
    • True Fit, a data-driven personalization platform built on Google Data Cloud, describes its data journey to unlock partner growth. True Fit publishes a number of BigQuery datasets for its retail partners using Analytics Hub. Data sharing using Google Cloud has elevated True Fit’s business with real-world data in real time. They achieved this in conjunction with the Built with BigQuery program from Cloud Partner Engineering. Read more.
    • Google Cloud Workstations is now in public preview.

    Week of Oct 24 - Oct 28, 2022

    • Google Cloud and Sibros Technology, with their award-winning Deep Connected Platform, are enabling vehicle manufacturers and suppliers to reach the next level in their use of data, gaining valuable insights that help mitigate risks, reduce costs, add innovative products, drive sustainability, and introduce value-added services in the automotive industry. Read more.
    • Data Exploration Workbench in Dataplex is now Generally Available. It offers a Spark-powered serverless data exploration experience with one-click access to Spark SQL scripts and Jupyter notebooks. With the workbench, data consumers can spend more time generating insights rather than integrating different tools and platforms. Learn more

    Week of Oct 17 - Oct 21, 2022

    • Google Cloud Spanner launches lock insights and transaction insights - easily troubleshoot lock contention using pre-built dashboards. This is the second milestone launch for Spanner insights. Learn more.
    • Google Cloud Migration Center is now in public preview. Check out our blog for more information.
    • Using Envoy to create cross-region replicas for Cloud Memorystore: Learn how you can create multi-regional deployments with Cloud Memorystore by using the Envoy proxy. This blog provides a step by step walkthrough which demonstrates how you can adapt your existing application to serve multiple regions or failover to a secondary region in case of a regional outage.
    • Google Cloud Logging’s Log Analytics team is hosting an external webinar about Log Analytics powered by BigQuery and how our top customers have adopted it to save time and cost. Register here.

    Week of Oct 3 - Oct 7, 2022

    • Rapid Vulnerability Detection, a zero configuration service in Security Command Center Premium that detects vulnerabilities like exposed admin interfaces, weak credentials, and incomplete software installations, is now available in Public Preview.
    • When it comes to advanced log analysis using BigQuery, Log Analytics offers a simple, cost-effective and easy-to-operate alternative to managing your own log export to BigQuery. Use this migration guide to help you write or convert your SQL queries and make switching to Log Analytics easy.

    Week of Sept 26 - Sept 30, 2022

    • Google Cloud Logging launches Log Analytics, powered by BigQuery. The feature allows logging users to apply the power of BigQuery within Cloud Logging to perform analytics on logs. You can update your existing log buckets to start using Log Analytics; no complex data pipeline configuration is required to ingest data. Learn more.
    • BigQuery ML enables Faraday to make predictions for any US consumer brand. Faraday.ai is a Google Cloud partner enabling companies to unlock the patterns hidden in their data using BigQuery ML - increasing conversion of leads to subscribed customers via personalization, scoring leads, forecasting spend, and identifying lifetime-value (LTV) ratios for customers. They achieved this in conjunction with the Built with BigQuery program from Cloud Partner Engineering.

    Week of Sept 19 - Sept 23, 2022

    • Cloud Dataflow - PerfKit Benchmarker (PKB) has expanded support for benchmarking your own Dataflow pipelines. You can now more easily test your Dataflow pipelines for performance optimization, capacity planning, regression testing and TCO estimation. Watch the Beam Summit talk and demo, or read the detailed walkthrough.
    • Cloud Deploy now supports the ability to verify your deployment. Learn More.
    • Google Cloud Learning launches a new dedicated cloud training program to support 10,000 Ukrainian businesses and IT professionals, starting October 4, 2022. Learn more.
    • Cloud Load Balancing now supports Cross-Project Service Referencing with Internal HTTP(S) Load Balancing and Regional External HTTP(S) Load Balancing. This new capability allows organizations to configure one central load balancer and route traffic to hundreds of services distributed across multiple projects. Organizations can thus optimize the number of load balancers needed to deploy their applications, and lower management overhead, operational costs, and quota requirements. Learn more

    • Datastream now supports direct replication into BigQuery in public preview. Datastream for BigQuery leverages the new BigQuery CDC (UPSERT) Write API, making replication from operational database sources such as AlloyDB, PostgreSQL, MySQL, and Oracle directly into BigQuery seamless. Learn more.

    • Datastream now supports PostgreSQL as a source in public preview.

    • Datastream introduces volume-based tiered pricing that makes it more affordable for customers moving larger volumes of data. Volume-based tiered pricing will be applied automatically based on actual usage. And for the next 6 months customers will also receive 1TB/month free backfill. Learn more.
    • Google Cloud launches the Fly Cup Challenge, created in partnership with The Drone Racing League (DRL) and taking place at Next ‘22 to usher in the new era of tech-driven sports. Learn more.
    • Accelerate migration from self-managed object storage to Google Cloud Storage by using Storage Transfer Service. It’s designed to move hundreds of terabytes of data and offers security, simplicity, and scale-out performance out of the box. Read the full blog.
    • In addition to syncing files from Git repositories, Config Sync has just brought GitOps to the next level with support for two new formats: OCI artifacts and Helm charts. Learn more.
    • Cloud CDN now supports dynamic compression using Brotli and gzip algorithms, which can reduce data sent over the network by 60-80% for compressible content. Enabling dynamic compression can help you achieve faster page load times, speed up playback speed for video content, and optimize egress costs.

    Week of Sept 12 - Sept 18, 2022

    • Pub/Sub monitoring dashboards are now part of the Pub/Sub UI in the Google Cloud Console. Pub/Sub users can easily monitor the health of their real-time streaming applications by reading charts of insightful metrics. Customization of the provided charts and dashboards is also supported. Learn more.

    • Google Cloud Deploy now supports application delivery to Cloud Run. Learn More 

    • Artifact Registry now supports storing Kubeflow pipeline templates in a Kubeflow Pipelines repository. Learn more

    • Google Cloud Deploy has enabled 14 additional regions, bringing Cloud Deploy to regional support parity with Cloud Build. Learn More

    Week of Sept 5 - Sept 9, 2022

    • We held our biggest storage event of the year on Sept 8 where we announced a number of new product innovations including: enhanced optimization and intelligence for Cloud Storage, new Filestore capabilities, the next generation of Persistent Disk called Hyperdisk, and the unveiling of our new Google Cloud Backup and DR service. Watch all the sessions on demand or read the full recap.
    • Storage Transfer Service now offers Preview support for moving data from S3-compatible storage to Cloud Storage. This feature builds on recent Cloud Storage launches, namely support for Multipart upload and List Object V2, which makes Cloud Storage suitable for running applications written for the S3 API. With this new feature, customers can seamlessly copy data from self-managed object storage to Google Cloud Storage. For customers moving data from AWS S3 to Cloud Storage, this feature provides an option to control network routes to Google Cloud, resulting in considerably lower egress charges. See Transfer from S3-compatible sources for details.
    • Kubernetes control plane metrics are now Generally Available for Google Kubernetes Engine. You can now configure GKE clusters with control plane version 1.23.6-gke.1500 or later to export to Cloud Monitoring certain metrics emitted by the Kubernetes API server, scheduler, and controller manager. These metrics are stored in Cloud Monitoring in a Prometheus-compatible format. They can be queried by sending either a PromQL or MQL query to the Cloud Monitoring API. They can also be used anywhere within Cloud Monitoring, including in custom dashboards or alerting rules.

    Week of Aug 29 - Sept 2, 2022

    • Apigee is introducing a new pay-as-you-go pricing model that enables customers to unlock Apigee’s API management capabilities with no upfront commitment, control their own costs, and pay only for what they use. This new pricing model complements the existing subscription plans and the ability to evaluate Apigee for free. Learn more
    • The network that powers Google Cloud grows with our customers, and we are committed to providing them with the resilience and performance they need and expect. Google is investing alongside regional partners in two additional submarine cables, IAX and MIST, which will support growing demand in the APAC region. We expect these two cables to be ready for service by the end of 2023.

    Week of Aug 22 - Aug 26, 2022

    • Join us August 30th for the “Power your business with modern cloud apps” webinar to learn strategies to leverage scalable cloud apps on Google Cloud using products like Google Kubernetes Engine, Cloud Run, Apigee, and Anthos. Don’t miss this opportunity to discover best practices for:
        • Accelerating developer productivity

        • Improving business innovation

        • Boosting resource efficiency while ensuring security and regulatory compliance

    • Why all retailers should consider leveraging Google Cloud Retail Search - Cloud Retail Search, part of the Discovery Solutions for Retail portfolio, helps retailers significantly improve the shopping experience on their digital platforms with ‘Google-quality’ search. Users now expect the same robust and intuitive search features offered by Google.com and other popular web platforms, which seem to have an uncanny ability to intelligently interpret complex search queries and yield relevant results. Cloud Retail Search offers advanced capabilities, such as better understanding of user intent and self-learning ranking models, that help retailers unlock the full potential of their online experience. Learn more.

    Week of Aug 15 - Aug 19, 2022

    • Cloud SQL now supports deletion protection for MySQL, Postgres, and SQL Server instances. With the deletion protection flag, you can protect your instances from unintended deletion. The flag is enabled by default in the Cloud Console; when enabled, deletion is blocked and the flag has to be disabled before an instance can be deleted. To disable the deletion protection flag, the user must have at least the Cloud SQL Editor role. This added protection prevents accidental or malicious deletion of databases that could otherwise create expensive outages for applications. To learn more about deletion protection, refer to the Cloud SQL documentation
    • Google Cloud Deploy: the default Skaffold LTS version has been upgraded to 1.39.1. See the Skaffold release notes and the Google Cloud Deploy Skaffold docs

    Week of Aug 8 - Aug 12, 2022

    • Artifact Registry now supports the use of organization policies that can require Customer-Managed Encryption Key (CMEK) protection and can limit which Cloud Key Management Service CryptoKeys can be used for CMEK protection. Learn More
    • Google Cloud Deploy documentation has been reformatted to make it easier to find information. Docs
    • A new Google Cloud Deploy blog post describes the many new features and benefits added over the first half of the year. Blog
    • A Google Cloud Deploy GUI update surfaces information related to a target’s execution environment. Developers can now easily find and confirm where Google Cloud Deploy render and deploy operations take place, in addition to the worker pool type, execution environment, service account, and artifact storage location. Learn More

    Week of Aug 1 - Aug 5, 2022

    • Bigtable-BigQuery federation is now Generally Available. Query Bigtable directly from BigQuery and combine with other data sources for real-time analytical insights. No ETL required.  Learn more
    • Join us August 30th for the “Power your business with modern cloud apps” webinar. We will be sharing best practices and strategies for how to simplify, streamline, and secure your application development using Google Cloud services like GKE, Apigee API, Anthos, and Cloud Run. Register today.

    Week of July 25 - July 29, 2022

    • Cloud Pub/Sub is introducing a new type of subscription called a “BigQuery subscription” that writes directly from Cloud Pub/Sub to BigQuery. You no longer have to write or run your own pipelines for data ingestion from Pub/Sub into BigQuery. This new extract, load, and transform (ELT) path simplifies your event-driven architecture. Learn more.
    • BigLake enables you to maximize the true potential of your data spread across clouds, storage formats, data lakes, and warehouses. It is now Generally available, and you can use it to build multi-cloud data lakes that work across GCP and OSS query engines, in a secure and governed manner. Learn more.
    • Cloud Healthcare API is now available in 4 additional regions allowing customers to serve their own users faster, more reliably, and securely. The Cloud Healthcare API provides a managed solution for storing and accessing healthcare data in Google Cloud, providing a critical bridge between existing care systems and applications hosted on Google Cloud. Learn More.
        • asia-southeast2 (Jakarta)

        • us-east1 (South Carolina) 

        • us-west1 (Oregon)

        • us-west3 (Salt Lake City)

    • Cloud Deploy - You can now view and compare Kubernetes and Skaffold configuration files for releases, using Google Cloud Console. Learn More.
    • Cloud Deploy now offers an Easy Mode option that creates a skaffold.yaml file automatically from a Kubernetes manifest. The feature is accessed from the command line by adding --from-k8s-manifest=FROM_K8S_MANIFEST to the gcloud deploy releases create command. The generated skaffold.yaml is suitable for onboarding, learning, and demonstrating Google Cloud Deploy. Learn More
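    The BigQuery subscription announced above is, at the API level, an ordinary subscription with a BigQuery configuration attached. As a hedged sketch (the helper is hypothetical and the table-reference format shown is an assumption, not official sample code):

```python
# Hypothetical helper: request body for a BigQuery subscription, which
# makes Pub/Sub write messages straight to the given table. The
# bigquery_config field follows the Pub/Sub API's Subscription resource;
# the table reference format here is illustrative.
def bigquery_subscription_request(project, topic, subscription, table):
    return {
        "name": f"projects/{project}/subscriptions/{subscription}",
        "topic": f"projects/{project}/topics/{topic}",
        "bigquery_config": {"table": table},
    }
```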

    Week of July 18 - July 22, 2022

    • Launched three major new Dataflow features to General Availability: Dataflow Go GA, Dataflow Prime GA, and Dataflow ML GA. 
    • The Data Engineer Spotlight is THIS WEEK! Register today to experience four technical sessions, expert speakers, a Q&A session, and tons of on-demand content.
    • Speed up your workflow executions by running steps concurrently! Workflows now supports parallel steps, which can reduce the overall execution time for workflows that include long-running operations like HTTP requests and callbacks. Our latest codelab shows you how to more quickly process a dataset by parallelizing multiple BigQuery jobs within a workflow. Read more in our blog post.
    • Google Cloud introduces Batch, a fully managed service that helps you run batch jobs easily, reliably, and at scale. Without additional software, Batch dynamically and efficiently manages resource provisioning, scheduling, queuing, and execution, freeing up time for you to focus on analyzing results. The service itself is free; you pay only for the resources used, and you can further reduce cost with Spot VMs and custom machine types. Read more in the launch blog.
    • Run your Arm workloads on Google Kubernetes Engine (GKE) with Tau T2A VMs in preview. Arm nodes come packed with key GKE features, including the ability to run using GKE Autopilot. We’ve also updated many popular Google Cloud developer tools and partnered with leading CI/CD, observability, and security ISVs to simplify running Arm workloads on GKE.

    Week of July 11 - July 15, 2022

    • Cloud Deploy users can now suspend a delivery pipeline. Suspending a pipeline is useful when there’s a problem with a release and you want to make sure no further actions occur. Suspended pipelines also allow teams to pause releases for a defined time period, such as holidays or busy seasons.
    • Cloud Deploy users can now permanently abandon a release. An abandoned release has the following restrictions - it cannot be promoted, it cannot be rolled back, and it cannot be unabandoned. Some reasons to abandon a release include a serious bug or bugs in the release, a major security issue in the release, or the release includes a deprecated feature.

    Week of July 4 - July 8, 2022

    • Blue-green upgrade mechanism for upgrading GKE node pools is now generally available. With blue-green upgrades, you now have more control over the upgrade process for highly available production workloads. GKE creates a new set of nodes, moves your workloads and gives you “soak” time before committing the upgrade. You can also quickly rollback in the event your workloads cannot tolerate the upgrade.
    • Get a deep dive into managing traffic fluctuations with Google Cloud. European travel group REWE explores the value of Cloud Spanner In mitigating and supporting traffic surges and optimizing the consumer experience during peak travel seasons. 
    • Differentiation brings great customer experiences. Differentiation achievements help customers select a partner with confidence, knowing that Google Cloud has verified their skills and customer success across our products, horizontal solutions and key industries.

    Week of June 27 - July 1, 2022

    • Time-sharing GPUs on GKE are generally available. Time-sharing allows multiple containers to share a single physical GPU attached to a node. This helps achieve greater cost effectiveness by improving GPU Utilization and workload throughput.
    • Dual-stack networking is now available (preview) for GKE.  With this feature, you can now allocate dual-stack IPv4 and IPv6 addresses for Pods and nodes.  For Services, you can allocate single-stack (IPv4 only or IPv6 only) or dual-stack addresses.
    • View your GKE costs directly in Cloud Billing. Now in preview, you can view a detailed breakdown of cluster costs directly in the Google Cloud console or the Cloud Billing export to BigQuery.  With this detailed information, you can more easily allocate the costs of your GKE clusters and workloads across different teams.
    • Cloud Deploy is now available in 5 additional regions improving performance and flexibility. Learn More.
        • asia-east2 (Hong Kong)

        • europe-west2 (London)

        • europe-west3 (Frankfurt)

        • us-east4 (N. Virginia)

        • us-west2 (Los Angeles)

    • Cloud Deploy deployment of containers to Anthos user clusters using Connect gateway is now generally available. Learn more.
    • Launched Query Insights for Cloud Spanner - a new tool for visualizing query performance metrics and debugging query performance issues in the Cloud console!
    • Now in preview, BigQuery BI Engine Preferred Tables. Preferred tables enable BigQuery customers to prioritize specific tables for acceleration by BI Engine to ensure predictable performance and optimized use of resources. Read our blog to learn more.
    • MITRE ATT&CK® mappings for Google Cloud security capabilities through our research partnership with the MITRE Engenuity Center for Threat-Informed Defense. Learn more.
    • Launched a new way of accessing billing information — from the Cloud Console mobile app. Now, with your Android or iOS mobile device, you can access not only your resources (App Engine, Compute, Databases, Storage or IAM), logs, incidents, errors, but also your billing information. With these enhanced billing features, we are making it easier for you to understand your cloud spend. 
    • Eventarc adds support for Firebase Realtime Database. Now you can create Eventarc triggers to send Firebase Realtime Database events to your favorite destinations that Eventarc supports. 
    • PostgreSQL interface for Cloud Spanner is generally available. The PostgreSQL interface for Spanner combines the scalability and reliability of Spanner that enterprises trust with the familiarity and portability of PostgreSQL that development teams love. Devops teams that have scaled their databases with brittle sharding or complex replication can now simplify their architecture with Spanner, using the tools and skills they already have. Get started today, for as low as $65 USD/month. Learn more.

    Week of June 20 - June 24, 2022

    • Read the latest Cloud Data Hero Story. This edition focuses on Francisco, the founder of Direcly, a Google Cloud partner. Francisco immigrated from Quito, Ecuador and founded his company from the ground up, without any external funding. Now, he’s finding innovative ways to leverage Google Cloud’s products for companies like Royal Caribbean International.

    Week of June 13 - June 17, 2022

    • Launched higher reservation limits for BigQuery BI Engine. BigQuery BI Engine now supports a default maximum reservation of 250 GB per project for all customers, up from the previous 100 GB. You can still request additional BI Engine reservations for your projects here. This is being rolled out in the Google Cloud Console over the next few days to all customers. Alternatively, all customers can already use a DDL statement as follows:

      • ALTER BI_CAPACITY `<PROJECT_ID>.region-<REGION>.default` SET OPTIONS(size_gb = 250);

    • Don’t miss our first ever Google Cloud Sustainability Summit on June 28, 2022. Learn how business and technology leaders are building for the future, and get insights to help you enact sustainable change within your organization. At this digital event, you’ll have a chance to explore the latest tools and best practices that can help you solve your most complex challenges. And you’ll be among the first to find out about product updates across Google Cloud, Earth Engine, and Google Workspace. Register today for this no-cost, solution-packed event.
    • On June 14, 2022, we are unveiling the winners of this year’s Google Cloud Customer Awards. We received an unprecedented number of entries, and every participant can be proud of what their organization is achieving in the cloud today. The second annual Google Cloud Customer Awards celebrate organizations around the world that have continued to flex and adapt to new demands while turning new ideas into interesting realities. Read our blog to check out the results.
    • The Cloud Digital Leader track is now part of the Google Cloud career readiness program, available for eligible faculty preparing their students for a cloud-first workforce. Students will build cloud literacy and learn the value of Google Cloud in driving digital transformation while also preparing for the Cloud Digital Leader certification exam. Learn more.

    Week of June 6 - June 10, 2022

    • Artifact Registry - Audit logs for Maven, npm, and Python repositories are now available in Cloud Logging. Documentation
    • Cloud Deploy New Region - Cloud Deploy is now available in the australia-southeast1 (Sydney) region. Release Notes
    • Cloud Deploy Terraform provider support. Cloud Deploy declarative resources, Delivery Pipeline and Target, are now available via the Google Cloud Deploy Terraform Provider. Documentation
    • Anthos on VMware user cluster lifecycle management from the Google Cloud Console is now GA. You can now create, delete, update, and view Anthos on VMware user clusters from the Google Cloud Console. To learn more about the feature, check out the Anthos documentation.
    • Granular instance sizing for Cloud Spanner is now generally available. Get started for as low as $40 per month and take advantage of 99.999% availability and scale as needed without downtime. With granular instance sizing, at a much lower cost you can still get all of the Spanner benefits like transparent replication across zones and regions, high-availability, resilience to different types of failures, and the ability to scale up and down as needed without any downtime.  Learn more.

    Week of May 30 - June 3, 2022

    • Google Cloud Deploy support for Skaffold has been updated from version 1.37.1 to version 1.37.2, which is now the default Skaffold version. (Skaffold Docs)
    • Google Cloud just made it easier to compare the cost of modernization options. Want to look at Lift & Shift vs. Containerization options? The latest version of our fit assessment now includes cost guidance. See the release notes for more details.
    • Did you notice the new “Protect” tab in Google Kubernetes Engine? Protect for GKE automatically scans, identifies and suggests fixes for workload configuration risks by comparing your running workload config against industry best practices like the Kubernetes Pod Security Standards. Check out the documentation to learn more.
    • Google Cloud makes data warehouse migrations even easier with automated SQL translation as part of the BigQuery Migration Service. Learn more.
    • Google Cloud simplifies customer verification and benefits processing with Document AI for Identity cards now generally available. Automate identity verification and fraud detection workflows by extracting information from identity cards with a high degree of accuracy. Learn more.

    Week of May 23 - May 27, 2022

    • Artifact Registry is now available in more regions: europe-west9 (Paris, France), europe-southwest1 (Madrid, Spain), and us-east5 (Columbus, United States). Release Notes
    • Change streams for Cloud Spanner is now generally available. With change streams, Spanner users are now able to track and stream out changes (inserts, updates, and deletes) from their Cloud Spanner database in near real time. Learn more.
    • Artifact Registry now supports new repository types. Apt and Yum repositories are now generally available. Release Notes
    • Business Messages announces expansion of its partner ecosystem to include Twilio, Genesys, and Avaya, each a widely recognized global platform for customer care and communications. Read how they help businesses implement both AI Bot and Live Agent chat solutions to stay open for conversations and advance customers through the purchase funnel. And be sure to check out the new Business Messages partner directory!
    • Learn how to set up metrics and alerts to monitor errors in the Cloud SQL for SQL Server error log using Google Cloud’s Operations Suite with this blog post.
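      The Spanner change streams feature announced above is enabled with DDL; a minimal sketch (database, instance, and stream names are placeholders):

```shell
# Add a change stream that watches the entire database; downstream
# readers can then consume inserts, updates, and deletes in near real-time.
gcloud spanner databases ddl update example-db \
  --instance=example-instance \
  --ddl='CREATE CHANGE STREAM EverythingStream FOR ALL'
```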

    Week of May 16 - May 20, 2022

    • Machine learning is among the most exciting, fastest-moving technology disciplines. Join us June 9th for Google Cloud Applied ML Summit, a digital event that brings together some of the world’s leading ML and data science professionals to explore the latest cutting-edge AI tools for developing, deploying, and managing ML models at scale.
    • Join us virtually on June 2nd at the Google Cloud Startup Summit where you’ll hear the latest announcements about how we’re investing in and supporting the startup ecosystem. You'll also learn from technology experts about streamlining your app development and creating better user experiences, and get insights from innovative venture capitalists and founders to help your startup grow. This event is headlined by our keynote with Google Cloud CEO Thomas Kurian and Dapper Labs Co-Founder and CEO Roham Gharegozlou as they discuss the paradigm changes being brought by web3 and how startups can prepare for this shift.
    • Google Cloud Managed Service for Prometheus introduced a new high-usage pricing tier to bring more value for Kubernetes users who want to move all of their metrics operations to the service, and dropped the pricing for existing tiers by 25 percent.
    • Hear the SRE team at Maisons du Monde detail their journey from building on open source Prometheus to deciding that Managed Service for Prometheus was the best fit for their organization.
    • Google Cloud has launched Autonomic Security Operations (ASO) for the U.S. public sector, a solution to modernize threat management, in line with the objectives of White House Executive Order 14028 and Office of Management and Budget M-21-31. ASO is a transformational approach to security operations, powered by Chronicle and Siemplify, to comprehensively detect and respond to cyber telemetry across an agency while meeting the Event Logging Tier requirements of the EO.

    Week of May 9 - May 13, 2022

    • We just published a blog post announcing Google Cloud’s latest STAC-M3™ benchmark results. Following up on our 2018 STAC-M3 benchmark audit, a redesigned Google Cloud architecture achieved significant improvements: up to 18x faster performance, up to 9x higher throughput, and a new record in STAC-M3.ß1.1T.YRHIBID-2.TIME. We also published a whitepaper on how we designed and optimized the cluster for API-driven cloud resources.
    • Security Command Center (SCC) released new finding types that alert customers when SCC is either misconfigured or configured in a way that prevents it from operating as expected. These findings provide remediation steps to return SCC to an operational state. Learn more and see examples.

    Week of May 2 - May 6, 2022

    • As part of Anthos release 1.11, Anthos Clusters on Azure and Anthos Clusters on AWS now support Kubernetes versions 1.22.8-gke.200 and 1.21.11-gke.100. As a preview feature, you can now choose Windows as your node pool image type when you create node pools with Kubernetes version 1.22.8. For more information, check out the Anthos multi-cloud website.
    • The Google Cloud Future of Data whitepaper explores why the future of data will involve three key themes: unified, flexible, and accessible.
    • Learn about BigQuery BI Engine and how to analyze large and complex datasets interactively with sub-second query response time and high concurrency. Now generally available.
    • Announcing the launch of the second series of the Google Cloud Technical Guides for Startups, a video series for technical enablement aimed at helping startups to start, build and grow their businesses.
    • Solving for food waste with data analytics in Google Cloud. Explore why it is so necessary as a retailer to bring your data to the cloud to apply analytics to minimize food waste.
    • Mosquitoes get the swat with new Mosquito Forecast built by OFF! Insect Repellents and Google Cloud. Read how SC Johnson built an app that predicts mosquito outbreaks in your area.

    Week of April 25 - April 29, 2022

    Week of April 18 - April 22, 2022 

    Week of April 11 - April 15, 2022 

    • Machine learning company Moloco uses Cloud Bigtable to process 5+ million ad bid requests per second. Learn how Moloco uses Bigtable to keep up in a speedy market and process ad requests at unmatched speed and scale.
    • The Broad Institute of MIT and Harvard speeds scientific research with Cloud SQL. One of our customers, the Broad Institute, shares how they used Cloud SQL to accelerate scientific research. In this customer story, you will learn how the Broad Institute was able to get Google’s database services up and running quickly and lower their operational burden by using Cloud SQL.
    • Data Cloud Summit ‘22 recap blog on April 12: Didn’t get a chance to watch the Google Data Cloud Summit this year? Check out our recap to learn the top five takeaways - learn more about product announcements, customer speakers, partners, product demos and check out more resources on your favorite topics.
    • The new Professional Cloud Database Engineer certification in beta is here. By participating in this beta, you will directly influence and enhance the learning and career path for Cloud Database Engineers globally. Learn more and sign up today.
    • Learn how to use Kubernetes Jobs and cost-optimized Spot VMs to run and manage fault-tolerant AI/ML batch workloads on Google Kubernetes Engine.
    • Expanding Eventarc presence to 4 new regions—asia-south2, australia-southeast2, northamerica-northeast2, southamerica-west1. You can now create Eventarc resources in 30 regions.
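      The Kubernetes Jobs on Spot VMs pattern mentioned above can be sketched as a Job whose pods are pinned to Spot nodes via a node selector; the image, project path, and command below are hypothetical placeholders:

```shell
# A fault-tolerant batch Job on GKE Spot VMs. backoffLimit lets the Job
# retry pods that are evicted when a Spot VM is preempted.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-inference
spec:
  backoffLimit: 10
  template:
    spec:
      nodeSelector:
        cloud.google.com/gke-spot: "true"
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: us-docker.pkg.dev/my-project/jobs/worker:latest   # placeholder
        command: ["python", "run_batch.py"]                      # placeholder
EOF
```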

    Week of April 4 - April 8, 2022 

    • Join us at the Google Data Cloud Summit on Wednesday, April 6, at 9 AM PDT.  Learn how Google Cloud technologies across AI, machine learning, analytics, and databases have helped organizations such as Exabeam, Deutsche Bank, and PayPal to break down silos, increase agility, derive more value from data, and innovate faster. Register today for this no cost digital event.
    • Announcing the first Data Partner Spotlight, on May 11th.
      We saved you a seat at the table to learn about the Data Cloud Partners in the Google Cloud ecosystem. We will spotlight technology partners and dive deep into their solutions so business leaders can make smarter decisions and solve complex data challenges with Google Cloud. Register today for this digital event.
    • Introducing Vertex AI Model Registry, a central repository to manage and govern the lifecycle of your ML models. Designed to work with any type of model and deployment target, including BigQuery ML, Vertex AI Model Registry makes it easy to manage and deploy models. Learn more about Google’s unified data and AI offering.
    • Vertex AI Workbench is now GA, bringing together Google Cloud’s data and ML systems into a single interface so that teams have a common toolset across data analytics, data science, and machine learning. With native integrations across BigQuery, Spark, Dataproc, and Dataplex, data scientists can build, train and deploy ML models 5X faster than with traditional notebooks. Don’t miss this ‘How to’ session from the Data Cloud Summit.

    Week of Mar 28 - April 1, 2022

    • Learn how Google Cloud’s network and Network Connectivity Center can transform the private wires used for voice trading.
    • Anthos bare metal 1.11 minor release is available now. Containerd is the default runtime in Anthos clusters on bare metal in this release. Feature enhancements include:
        • Upgraded Anthos clusters on bare metal to use Kubernetes version 1.22;

        • Added Egress Network Address Translation (NAT) gateway capability to provide persistent, deterministic routing for egress traffic from clusters;

        • Enabled IPv4/IPv6 dual-stack support.

        • Additional enhancements can be found in the release notes here.

    Week of Mar 21 - Mar 25, 2022

    • Google Cloud’s Behnaz Kibria reflects on a recent fireside chat that she moderated with Google Cloud’s Phil Moyer and former SEC Commissioner Troy Paredes at FIA Boca. The discussion focused on the future of markets and policy, the new technologies that are already paving the way for greater speed and transparency, and what it will take to ensure greater resiliency, performance and security over the longer term. Read the blog.
    • Eventarc adds support for Firebase Alerts. Now you can create Eventarc triggers to send Firebase Alerts events to your favorite destinations that Eventarc supports.
    • Now you can control how your alerts handle missing data from telemetry data streams using Alert Policies in the Cloud Console or via the API. Cloud ecosystems contain millions of data sources, and their telemetry streams often pause or break. Configure how this missing data influences your open incidents:

      • Option 1: Missing data is treated as “above the threshold,” and your incidents stay open.

      • Option 2: Missing data is treated as “below the threshold,” and the incident closes after your retest window period.
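      In the alerting API, this choice is expressed through the condition’s evaluationMissingData field; a hedged sketch follows (the metric filter and threshold values are placeholders, and the “close the incident” option maps to the ..._INACTIVE enum value):

```shell
# Create an alert policy that closes incidents when data goes missing.
# Filter and threshold values are illustrative only.
cat > policy.json <<'EOF'
{
  "displayName": "High latency",
  "combiner": "OR",
  "conditions": [{
    "displayName": "Latency above threshold",
    "conditionThreshold": {
      "filter": "metric.type=\"example.com/latency\" AND resource.type=\"gce_instance\"",
      "comparison": "COMPARISON_GT",
      "thresholdValue": 500,
      "duration": "300s",
      "evaluationMissingData": "EVALUATION_MISSING_DATA_INACTIVE"
    }
  }]
}
EOF
gcloud alpha monitoring policies create --policy-from-file=policy.json
```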

    Week of Mar 14 - Mar 18, 2022

    • Natural language processing is a critical AI tool for understanding unstructured, often technical healthcare information, like clinical notes and lab reports. See how leading healthcare organizations are exploring NLP to unlock hidden value in their data.
    • A handheld lab: Read how Cue Health is revolutionizing healthcare diagnostics for COVID-19 and beyond—all from the comfort of home.
    • Providing reliable technical support for an increasingly distributed, hybrid workforce is becoming all the more crucial, and challenging. Cloud Customer Care has added a range of new offerings and features for businesses of all sizes to help you find the Google Cloud technical support services that are best for your needs and budget.
    • #GoogleforGames Dev Summit is NOW LIVE. Watch the keynote followed by over 20 product sessions on-demand to help you build high quality games and reach audiences around the world. Watch → g.co/gamedevsummit
    • Meeting (and ideally, exceeding) consumer expectations today is often a heavy lift for many companies—especially those running modern apps on legacy, on-premises databases. Read how Google Cloud database services provide you the best options for industry-leading reliability, global scale & open standards, enabling you to make your next big idea a reality. Read this blog.

    Week of Mar 07 - Mar 11, 2022

    • Learn how Google Cloud Partner Advantage partners help customers solve real-world business challenges in retail and ecommerce through data insights.
    • Introducing Community Security Analytics, an open-source repository of queries for self-service security analytics. Get started analyzing your own Google Cloud logs with BigQuery or Chronicle to detect potential threats to your workloads, and to audit usage of your data. Learn more.
    • On a mission to accelerate the world's adoption of a modern approach to threat management through Autonomic Security Operations, our latest update expands our ASO technology stack with Siemplify, offers a solution to the latest White House Executive Order 14028, introduces a community-based security analytics repository, and announces key R&D initiatives that we’re investing in to bolster threat-informed defenses worldwide. Read more here
    • Account defender, available today in public preview, is a feature in reCAPTCHA Enterprise that takes behavioral detection a step further. It analyzes the patterns of behavior for an individual account, in addition to the patterns of behavior of all user accounts associated with your website. Read more here.
    • Maximize your Cloud Spanner savings with new committed use discounts. Get up to a 40% discount on Spanner compute capacity by purchasing committed use discounts. Once you make a commitment to spend a certain amount on an hourly basis on Spanner from a billing account, you can get discounts on instances in different instance configurations, regions, and projects associated with that billing account. This flexibility helps you achieve a high utilization rate of your commitment across regions and projects without manual intervention, saving you time and money. Learn more.
    • In many places across the globe, March is celebrated as Women’s History Month, and March 8th, specifically, marks the day known around the world as International Women’s Day. Google Cloud, in partnership with Women Techmakers, has created an opportunity to bridge the gaps in the credentialing space by offering a certification journey for Ambassadors of the Women Techmakers community. Learn more.
    • Learn how to accelerate vendor due diligence on Google Cloud by leveraging third party risk management providers.
    • Hybrid work should not derail DEI efforts. If you’re moving to a hybrid work model, here’s how to make diversity, equity and inclusion central to it.
    • Learn how Cloud Data Fusion provides scalable data integration pipelines to help consolidate a customer’s SAP and non-SAP datasets within BigQuery.
    • Hong Kong–based startup TecPal builds and manages smart hardware and software for household appliances all over the world using Google Cloud. Find out how.
    • Eventarc adds support for Firebase Remote Config and Test Lab in preview. Now you can create Eventarc triggers to send Firebase Remote Config or Firebase Test Lab events to your favorite destinations that Eventarc supports. 
    • Anthos Service Mesh Dashboard is now available (public preview) on Anthos clusters on Bare Metal and Anthos clusters on VMware. Customers can now get out-of-the-box telemetry dashboards that provide a services-first view of their application in the Cloud Console.
    • Micro Focus Enterprise Server Google Cloud blueprint performs an automated deployment of Enterprise Server inside a new VPC or existing VPC. Learn more.
    • Learn how to wire your application logs with more information without adding a single line of code and get more insights with the new version of the Java library.
    • Pacemaker alerts in Google Cloud cluster alerting notify system administrators about critical events in enterprise workloads on GCP, such as SAP solutions.

    Week of Feb 28 - Mar 04, 2022

    • Announcing the Data Cloud Summit, April 6th!—Ready to dive deep into data? Join us at the Google Data Cloud Summit on Wednesday, April 6, at 9 AM PDT. This three-hour digital event is packed with content and experiences designed to help you unlock innovation in your organization. Learn how Google Cloud technologies across AI, machine learning, analytics, and databases have helped organizations such as Exabeam, Deutsche Bank, and PayPal to break down silos, increase agility, derive more value from data, and innovate faster. Register today for this no cost digital event.
    • Google Cloud addresses concerns about how its customers might be impacted by the invasion of Ukraine. Read more.
    • Eventarc is now HIPAA compliant— Eventarc is covered under the Google Cloud Business Associate Agreement (BAA), meaning it has achieved HIPAA compliance. Healthcare and life sciences organizations can now use Eventarc to send events that require HIPAA compliance.
    • Eventarc trigger for Workflows is now available in Preview. You can now select Workflows as a destination for events originating from any supported event provider.
    • Error Reporting automatically captures exceptions found in logs ingested by Cloud Logging from the following languages: Go, Java, Node.js, PHP, Python, Ruby, and .NET, aggregates them, and then notifies you of their existence.
    • Learn more about how USAA partnered with Google Cloud to transform their operations by leveraging AI to drive efficiency in vehicle insurance claims estimation.
    • Learn how Google Cloud and NetApp’s ability to “burst to cloud”, seamlessly spinning up compute and storage on demand, accelerates EDA design testing.
    • Google Cloud CISO Phil Venables shares his thoughts on the latest security updates from the Google Cybersecurity Action Team.
    • The results are in for the Google Cloud Easy as Pie Hackathon.
    • VPC Flow Logs Org Policy Constraints allow users to enforce VPC Flow Logs enablement across their organization, and impose minimum and maximum sampling rates. VPC Flow Logs are used to understand network traffic for troubleshooting, optimization and compliance purposes.
    • Google Cloud Managed Service for Prometheus is now generally available. Get all of the benefits of open source-compatible monitoring with the ease of use of Google-scale managed services. 
    • Google Cloud Deploy now supports Anthos clusters bringing opinionated, fully managed continuous delivery for hybrid and multicloud workloads. Cloud Deploy provides integrated best practices, security, and metrics from a centralized control plane.
    • Learn Google Workspace’s vision for frontline workers and how our Frontline solution innovations can bridge collaboration and productivity across in-office and remote workforces.
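      The Eventarc trigger for Workflows noted above can be sketched with gcloud; every name below is a placeholder, and the flags reflect the Preview surface:

```shell
# Route Cloud Storage audit-log events to a Workflows execution.
gcloud eventarc triggers create storage-to-workflow \
  --location=us-central1 \
  --destination-workflow=my-workflow \
  --destination-workflow-location=us-central1 \
  --event-filters="type=google.cloud.audit.log.v1.written" \
  --event-filters="serviceName=storage.googleapis.com" \
  --event-filters="methodName=storage.objects.create" \
  --service-account=trigger-sa@my-project.iam.gserviceaccount.com
```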

    Week of Feb 21 - Feb 25, 2022

    • Read how Paerpay promotes bigger tabs and faster, more pleasant transactions with Google Cloud  and the Google for Startups Cloud Program.
    • Learn about the advancements we’ve released for our Google Cloud Marketplace customers and partners in the last few months.
    • BBVA collaborated with Google Cloud to create one of the most successful Google Cloud training programs for employees to date. Read how they did it.
    • Google for Games Developer Summit returns March 15 at 9AM PT! Learn about our latest games solutions and product innovations. It’s online and open to all. Check out the full agenda g.co/gamedevsummit 
    • Build a data mesh on Google Cloud with Dataplex (now GA 🎉). Read how Dataplex enables customers to centrally manage, monitor, and govern distributed data, and makes it securely accessible to a variety of analytics and data science tools.
    • While understanding what is happening now has great business value, forward-thinking companies like Tyson Foods are taking things a step further, using real-time analytics integrated with artificial intelligence (AI) and business intelligence (BI) to answer the question, “what might happen in the future?”
    • Join us for the first Google Cloud Security Talks of 2022, happening on March 9th. Modernizing SecOps is a top priority for so many organizations. Register to attend and learn how you can enhance your approach to threat detection, investigation and response!
    • Google Cloud introduces their Data Hero series with a profile on Lynn Langit, a data cloud architect, educator, and developer on GCP.
    • Building ML solutions? Check out these guidelines for ensuring quality in each process of the MLOps lifecycle.
    • Eventarc is now Payment Card Industry Data Security Standard (PCI DSS)-compliant.

    Week of Feb 14 - Feb 18, 2022

    • The Google Cloud Retail Digital Pulse - Asia Pacific is an ongoing annual assessment carried out in partnership with IDC Retail Insights to understand the maturity of retail digital transformation in the Asia Pacific region. The study covers 1,304 retailers across eight markets and sub-segments to investigate their digital maturity across five dimensions - strategy, people, data, technology and process - arriving at a 4-stage Digital Pulse Index, with 4 being the most mature. It provides great insights into the various stages of digital maturity of Asian retailers, their drivers for digitisation, challenges, innovation hotspots and focus areas with respect to use cases and technologies.
    • Deploying Cloud Memorystore for Redis for any scale: Learn how you can scale Cloud Memorystore for high volume use cases by leveraging client-side sharding. This blog provides a step by step walkthrough which demonstrates how you can adapt your existing application to scale to the highest levels with the help of the Envoy Proxy. Read our blog to learn more.
    • Check out how six SAP customers are driving value with BigQuery.
    • This Black History Month, we're highlighting Black-led startups using Google Cloud to grow their businesses. Check out how DOSS and its co-founder, Bobby Bryant, disrupts the real estate industry with voice search tech and analytics on Google Cloud.
    • Vimeo leverages managed database services from Google Cloud to serve up billions of views around the world each day. Read how it uses Cloud Spanner to deliver a consistent and reliable experience to its users no matter where they are.
    • How can serverless best be leveraged? Can cloud credits be maximized? Are all managed services equal? We dive into top questions for startups.
    • Google introduces a Sustainability value pillar in the GCP Active Assist solution to accelerate our industry leadership in CO2 reduction and environmental protection efforts. An intelligent carbon footprint reduction tool has launched in preview.
    • Central States health insurance CIO Pat Moroney shares highs and lows from his career transforming IT. Read more
    • Traffic Director client authorization for proxyless gRPC services is now generally available. Combine with managed mTLS credentials in GKE to centrally manage access between workloads using Traffic Director. Read more.
    • Cloud Functions (2nd gen) is now in public preview. The next generation of our Cloud Functions Functions-as-a-Service platform gives you more features, control, performance, scalability, and event sources. Learn more.

    Week of Feb 7 - Feb 11, 2022

    • Now announcing the general availability of the newest instance series in our Compute Optimized family, C2D—powered by 3rd Gen AMD EPYC processors. Read how C2D provides larger instance types and memory-per-core configurations ideal for customers with performance-intensive workloads.
    • Digital health startup expands its impact on healthcare equity and diversity with Google Cloud Platform and the Google for Startups Accelerator for Black Founders. Read more.
    • Storage Transfer Service support for agent pools is now generally available (GA). You can use agent pools to create isolated groups of agents as a source or sink entity in a transfer job. This enables you to transfer data from multiple data centers and filesystems concurrently, without creating multiple projects for a large transfer spanning multiple filesystems and data centers. This option is available via the API, Console, and gcloud transfer CLI.
    • The five trends driving healthcare and life sciences in 2022 will be powered by accessible data, AI, and partnerships.
    • Learn how COLOPL, Minna Bank and 7-Eleven Japan use Cloud Spanner to solve their scalability, performance and digital transformation challenges.
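      The Storage Transfer Service agent-pool setup described above might look like this with the gcloud transfer CLI (pool name, display name, and agent count are illustrative):

```shell
# One pool per data center isolates its agents from other transfers.
gcloud transfer agent-pools create dc-east-pool \
  --display-name="East data center"

# Install three transfer agents on a machine with access to that filesystem.
gcloud transfer agents install --pool=dc-east-pool --count=3
```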

    Week of Jan 31 - Feb 4, 2022

    • Pub/Sub Lite goes regional. Pub/Sub Lite is a high-volume messaging service with ultra-low cost that now offers regional Lite topics, in addition to existing zonal Lite topics. Unlike zonal topics, which are located in a single zone, regional topics are asynchronously replicated across two zones. Multi-zone replication protects against zonal failures in the service. Read about it here.
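      Creating a regional Lite topic is a matter of passing a region rather than a zone as the location; the topic name and values below are illustrative, not recommendations:

```shell
# A regional Lite topic, replicated across two zones in us-central1.
gcloud pubsub lite-topics create my-lite-topic \
  --location=us-central1 \
  --partitions=1 \
  --per-partition-bytes=30GiB
```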

    • Google Workspace is making it easy for employees to bring modern collaboration to work, even if their organizations are still using legacy tools. Essentials Starter is a no-cost offer designed to help people bring the apps they know and love to use in their personal lives to their work life. Learn more.

    • We’re now offering 30 days free access to role-based Google Cloud training with interactive labs and opportunities to earn skill badges to demonstrate your cloud knowledge. Learn more.

    • Security Command Center (SCC) Premium adds support for additional compliance benchmarks, including CIS Google Cloud Computing Foundations 1.2 and OWASP Top 10 2017 & 2021. Learn more about how SCC helps manage and improve your cloud security posture.

    • Storage Transfer Service now offers Preview support for transfers from self-managed object storage systems via user-managed agents. With this new feature, customers can seamlessly copy PBs of data from cloud or on-premises object storage to Google Cloud Storage. Object storage sources must be compatible with Amazon S3 APIs. For customers migrating from AWS S3 to GCS, this feature gives an option to control network routes to Google Cloud. Fill out this signup form to access this STS feature.

    Week of Jan 24-Jan 28, 2022

    • Learn how Sabre leveraged a 10-year partnership with Google Cloud to power the travel industry with innovative technology. As Sabre embarked on a cloud transformation, it sought managed database services from Google Cloud that enabled low latency and improved consistency. Sabre discovered how the strengths of both Cloud Spanner and Bigtable supported unique use cases and led to high performance solutions.

    • Storage Transfer Service now offers Preview support for moving data between two filesystems and keeping them in sync on a periodic schedule. This launch offers a managed way to migrate from a self-managed filesystem to Filestore. If you have on-premises systems generating massive amounts of data that needs to be processed in Google Cloud, you can now use Storage Transfer Service to accelerate data transfer from an on-prem filesystem to a cloud filesystem. See Transfer data between POSIX file systems for details.
    • Storage Transfer Service now offers Preview support for preserving POSIX attributes and symlinks when transferring to, from, and between POSIX filesystems. Attributes include the user ID of the owner, the group ID of the owning group, the mode or permissions, the modification time, and the size of the file. See Metadata preservation for details.
    • Bigtable Autoscaling is Generally Available (GA): Bigtable Autoscaling automatically adds or removes capacity in response to the changing demand for your applications. With autoscaling, you only pay for what you need and you can spend more time on your business instead of managing infrastructure.  Learn more.
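      Enabling Bigtable autoscaling on an existing cluster can be sketched as follows (cluster, instance, and bounds are placeholders):

```shell
# Let Bigtable scale between 1 and 5 nodes, targeting 60% CPU utilization.
gcloud bigtable clusters update my-cluster \
  --instance=my-instance \
  --autoscaling-min-nodes=1 \
  --autoscaling-max-nodes=5 \
  --autoscaling-cpu-target=60
```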

    Week of Jan 17-Jan 21, 2022

    • Sprinklr and Google Cloud join forces to help enterprises reimagine their customer experience management strategies. Hear more from Nirav Sheth, Director of ISV/Marketplace & Partner Sales.
    • Firestore Key Visualizer is Generally Available (GA): Firestore Key Visualizer is an interactive, performance monitoring tool that helps customers observe and maximize Firestore’s performance. Learn more.
    • Like many organizations, Wayfair faced the challenge of deciding which cloud databases they should migrate to in order to modernize their business and operations. Ultimately, they chose Cloud SQL and Cloud Spanner because of the databases’ clear path for shifting workloads as well as the flexibility they both provide. Learn how Wayfair was able to migrate quickly while still being able to serve production traffic at scale.

    Week of Jan 10-Jan 14, 2022

    • Start your 2022 New Year’s resolutions by learning at no cost how to use Google Cloud. Read more to find out how to take advantage of these training opportunities.
    • 8 megatrends drive cloud adoption—and improve security for all. Google Cloud CISO Phil Venables explains the eight major megatrends powering cloud adoption, and why they’ll continue to make the cloud more secure than on-prem for the foreseeable future. Read more.

    Week of Jan 3-Jan 7, 2022

    • Google Transfer Appliance announces General Availability of online mode. Customers collecting data at edge locations (e.g. cameras, cars, sensors) can offload to Transfer Appliance and stream that data to a Cloud Storage bucket. Online mode can be toggled on to send data to Cloud Storage over the network, or off to transfer data by shipping the appliance. Customers can monitor their online transfers from the Cloud Console.

    Week of Dec 27-Dec 31, 2021

    • The most-read blogs about Google Cloud compute, networking, storage and physical infrastructure in 2021. Read more.

    • Top Google Cloud managed container blogs of 2021.

    • Four cloud security trends that organizations and practitioners should be planning for in 2022—and what they should do about them. Read more.

    • Google Cloud announces the top data analytics stories from 2021 including the top three trends and lessons they learned from customers this year. Read more.

    • Explore Google Cloud’s Contact Center AI (CCAI) and its momentum in 2021. Read more.

    • An overview of the innovations that Google Workspace delivered in 2021 for Google Meet. Read more.

    • Google Cloud’s top artificial intelligence and machine learning posts from 2021. Read more.

    • How we’ve helped break down silos, unearth the value of data, and apply that data to solve big problems. Read more.

    • A recap of the year’s infrastructure progress, from impressive Tau VMs, to industry-leading storage capabilities, to major networking leaps. Read more.

    • Google Cloud CISO Phil Venables shares his thoughts on the latest security updates from the Google Cybersecurity Action Team. Read more.

    • Google Cloud - A cloud built for developers — 2021 year in review. Read more.

    • API management continued to grow in importance in 2021, and Apigee continued to innovate capabilities for customers, new solutions, and partnerships. Read more.

    • Recapping Google’s progress in 2021 toward running on 24/7 carbon-free energy by 2030 — and decarbonizing the electricity system as a whole. Read more.

    Week of Dec 20-Dec 24, 2021

    • And that’s a wrap! After engaging in countless customer interviews, we’re sharing our top 3 lessons learned from our data customers in 2021. Learn what customer data journeys inspired our top picks and what made the cut here.
    • Cloud SQL now shows you minor version information. For more information, see our documentation.
    • Cloud SQL for MySQL now allows you to select your MySQL 8.0 minor version when creating an instance and upgrade MySQL 8.0 minor version. For more information, see our documentation.
    • Cloud SQL for MySQL now supports database auditing. Database auditing lets you track specific user actions in the database, such as table updates, read queries, user privilege grants, and others. To learn more, see MySQL database auditing.
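      Assuming the audit plugin is controlled by the cloudsql_mysql_audit database flag (the instance name below is a placeholder), enabling database auditing might look like:

```shell
# Turn on the MySQL audit plugin; audit rules are then defined
# inside the database itself.
gcloud sql instances patch my-mysql-instance \
  --database-flags=cloudsql_mysql_audit=ON
```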

    Week of Dec 12-Dec 17, 2021

    • A critical vulnerability in a widely used logging library, Apache’s Log4j, has become a global security incident. Security researchers around the globe warn that it could have serious repercussions. Two Google Cloud Blog posts describe how Cloud Armor and Cloud IDS both help mitigate the threat.
    • Take advantage of these ten no-cost trainings before 2022. Check them out here.
    • Deploy Task Queues alongside your Cloud Application: Cloud Tasks is now available in 23 GCP Regions worldwide. Read more.
    • Managed Anthos Service Mesh support for GKE Autopilot (Preview): GKE Autopilot with Managed ASM provides ease of use and simplified administration capabilities, allowing customers to focus on their application, not the infrastructure. Customers can now let Google handle the upgrade and lifecycle tasks for both the cluster and the service mesh. Configure Managed ASM with asmcli in a GKE Autopilot cluster.
    • Policy Troubleshooter for BeyondCorp Enterprise is now generally available! Using this feature, admins can triage access failure events and perform the necessary actions to unblock users quickly. Learn more by registering for Google Cloud Security Talks on December 15 and attending the BeyondCorp Enterprise session. The event is free to attend and sessions will be available on-demand.
    • Google Cloud Security Talks, Zero Trust Edition: This week, we hosted our final Google Cloud Security Talks event of the year, focused on all things zero trust. Google pioneered the implementation of zero trust in the enterprise over a decade ago with our BeyondCorp effort, and we continue to lead the way, applying this approach to most aspects of our operations. Check out our digital sessions on-demand to hear the latest updates on Google’s vision for a zero trust future and how you can leverage our capabilities to protect your organization in today’s challenging threat environment.
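As a complement to the network-level Log4j mitigations above, many teams also inventoried their dependencies directly. A minimal sketch of such a version check, assuming 2.17.0 as the fully patched Log4j 2.x release at the time (the inventory list and threshold are illustrative; check current advisories):

```python
# Flag Log4j 2.x versions older than a patched release (illustrative threshold).
PATCHED = (2, 17, 0)

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '2.14.1' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(version: str) -> bool:
    """Only Log4j 2.x releases below the patched threshold are flagged here."""
    parsed = parse_version(version)
    return parsed[0] == 2 and parsed < PATCHED

inventory = ["2.14.1", "2.16.0", "2.17.0"]
flagged = [v for v in inventory if is_vulnerable(v)]
print(flagged)  # ['2.14.1', '2.16.0']
```

This only screens declared versions; shaded or vendored copies of the library still need a deeper scan.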

    Week of Dec 6-Dec 10, 2021

    • 5 key metrics to measure cloud FinOps impact in 2022 and beyond - Learn about the 5 key metrics to effectively measure the impact of Cloud FinOps across your organization, and leverage them to gain insights, prioritize strategic goals, and drive enterprise-wide adoption. Learn more
    • We announced that Cloud IDS, our new network security offering, is now generally available. Cloud IDS, built with Palo Alto Networks’ technologies, delivers easy-to-use, cloud-native, managed, network-based threat detection with industry-leading breadth and security efficacy. To learn more, and to request a 30-day trial credit, see the Cloud IDS webpage.

    Week of Nov 29-Dec 3, 2021

    • Join Cloud Learn, happening from Dec. 8-9: This interactive learning event will have live technical demos, Q&As, career development workshops, and more covering everything from Google Cloud fundamentals to certification prep. Learn more.

    • Get a deep dive into BigQuery Administrator Hub – With BigQuery Administrator Hub, you can better manage BigQuery at scale using Resource Charts and the Slot Estimator. Learn more about these tools and just how easy they are to use here.

    • New data and AI in Media blog - How data and AI can help media companies better personalize, and what to watch out for. We interviewed Googlers Gloria Lee, Executive Account Director of Media & Entertainment, and John Abel, Technical Director for the Office of the CTO, who shared exclusive insights on how media organizations should think about their data and make the most of it in the new era of direct-to-consumer. Watch our video interview with Gloria and John and read more.

    • Datastream is now generally available (GA): Datastream, a serverless change data capture (CDC) and replication service, allows you to synchronize data across heterogeneous databases, storage systems, and applications reliably and with minimal latency to support real-time analytics, database replication, and event-driven architectures. Datastream currently supports CDC ingestion from Oracle and MySQL to Cloud Storage, with additional sources and destinations coming in the future. Datastream integrates with Dataflow and Cloud Data Fusion to deliver real time replication to a wide range of destinations, including BigQuery, Cloud Spanner and Cloud SQL. Learn more.

    Week of Nov 22 - Nov 26, 2021

    • Security Command Center (SCC) launches new mute findings capability: We’re excited to announce a new “Mute Findings” capability in SCC that helps you gain operational efficiencies by effectively managing the findings volume based on your organization’s policies and requirements. SCC presents potential security risks in your cloud environment as ‘findings’ across misconfigurations, vulnerabilities, and threats. With the launch of the ‘mute findings’ capability, you gain a way to reduce findings volume and focus on the security issues that are highly relevant to you and your organization. To learn more, read this blog post and watch this short demo video.

    Week of Nov 15 - Nov 19, 2021

    • Cloud Spanner is our distributed, globally scalable SQL database service that decouples compute from storage, which makes it possible to scale processing resources separately from storage. This means that horizontal upscaling is possible with no downtime for achieving higher performance on dimensions such as operations per second for both reads and writes. The distributed scaling nature of Spanner’s architecture makes it an ideal solution for unpredictable workloads such as online games. Learn how you can get started developing global multiplayer games using Spanner.

    • New Dataflow templates for Elasticsearch released to help customers process and export Google Cloud data into their Elastic Cloud. You can now push data from Pub/Sub, Cloud Storage or BigQuery into your Elasticsearch deployments in a cloud-native fashion. Read more for a deep dive on how to set up a Dataflow streaming pipeline to collect and export your Cloud Audit logs into Elasticsearch, and analyze them in the Kibana UI.

    • We’re excited to announce the public preview of Google Cloud Managed Service for Prometheus, a new monitoring offering designed for scale and ease of use that maintains compatibility with the open-source Prometheus ecosystem. While Prometheus works well for many basic deployments, managing Prometheus can become challenging at enterprise scale. Learn more about the service in our blog and on the website.

    Week of Nov 8 - Nov 12, 2021

    Week of Nov 1 - Nov 5, 2021

    • Time to live (TTL) reduces storage costs, improves query performance, and simplifies data retention in Cloud Spanner by automatically removing unneeded data based on user-defined policies. Unlike custom scripts or application code, TTL is fully managed and designed for minimal impact on other workloads. TTL is generally available today in Spanner at no additional cost. Read more.
    • New whitepaper available: Migrating to .NET Core/5+ on Google Cloud - This free whitepaper, written for .NET developers and software architects who want to modernize their .NET Framework applications, outlines the benefits and things to consider when migrating .NET Framework apps to .NET Core/5+ running on Google Cloud. It also offers a framework with suggestions to help you build a strategy for migrating to a fully managed Kubernetes offering or to Google serverless. Download the free whitepaper.
    • Export from Google Cloud Storage: Storage Transfer Service now offers Preview support for exporting data from Cloud Storage to any POSIX file system. You can use this bidirectional data movement capability to move data in and out of Cloud Storage, on-premises clusters, and edge locations including Google Distributed Cloud. The service provides built-in capabilities such as scheduling, bandwidth management, retries, and data integrity checks that simplify the data transfer workflow. For more information, see Download data from Cloud Storage.
    • Document Translation is now GA! Translate documents in real-time in 100+ languages, and retain document formatting. Learn more about new features and see a demo on how Eli Lilly translates content globally.
    • Announcing the general availability of Cloud Asset Inventory console - We’re excited to announce the general availability of the new Cloud Asset Inventory user interface. In addition to all the capabilities announced earlier in Public Preview, the general availability release provides powerful search and easy filtering capabilities. These capabilities enable you to view details of resources and IAM policies, machine type and policy statistics, and insights into your overall cloud footprint. Learn more about these new capabilities by using the searching resources and searching IAM policies guides. You can get more information about Cloud Asset Inventory using our product documentation.
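The Spanner TTL feature announced above is configured declaratively in DDL via a row deletion policy. A sketch that assembles such a statement (the table and column names are hypothetical):

```python
def ttl_ddl(table: str, timestamp_column: str, days: int) -> str:
    """Build Spanner DDL attaching a TTL row deletion policy to a table."""
    return (
        f"ALTER TABLE {table} ADD ROW DELETION POLICY "
        f"(OLDER_THAN({timestamp_column}, INTERVAL {days} DAY))"
    )

# Hypothetical table: delete rows 30 days after their CreatedAt timestamp.
print(ttl_ddl("Sessions", "CreatedAt", 30))
```

Because the policy is part of the schema, Spanner enforces it in the background with no custom scripts or application code.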

    Week of Oct 25 - Oct 29, 2021

    • BigQuery table snapshots are now generally available. A table snapshot is a low-cost, read-only copy of a table's data as it was at a particular time.
    • By establishing a robust value measurement approach to track and monitor business value metrics against business goals, the discipline of Cloud FinOps brings technology, finance, and business leaders together to show how digital transformation enables the organization to create new innovative capabilities and generate top-line revenue. Learn more.
    • We’ve announced BigQuery Omni, a new multicloud analytics service that allows data teams to perform cross-cloud analytics - across AWS, Azure, and Google Cloud - all from one viewpoint. Learn how BigQuery Omni works and what data and business challenges it solves here.
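BigQuery table snapshots, noted above, are created with DDL and can be pinned to a past point within the time-travel window. A sketch that builds the statement (the dataset and table names are hypothetical):

```python
def snapshot_sql(source: str, snapshot: str, hours_ago: int = 0) -> str:
    """Build BigQuery DDL for a read-only snapshot of a table, optionally
    as of a point in the past (within the time-travel window)."""
    stmt = f"CREATE SNAPSHOT TABLE `{snapshot}` CLONE `{source}`"
    if hours_ago > 0:
        stmt += (
            " FOR SYSTEM_TIME AS OF "
            f"TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL {hours_ago} HOUR)"
        )
    return stmt

# Snapshot the table as it looked one hour ago.
print(snapshot_sql("mydataset.orders", "mydataset.orders_snap", hours_ago=1))
```

Run the resulting statement in the BigQuery console or client of your choice; you pay only for storage of data that diverges from the base table.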

    Week of Oct 18 - Oct 22, 2021

    • Available now: our newest T2D VM family, based on 3rd Generation AMD EPYC processors. Learn more.
    • In case you missed it — top AI announcements from Google Cloud Next. Catch up on what’s new, see demos, and hear from our customers about how Google Cloud is making AI more accessible, more focused on business outcomes, and fast-tracking the time-to-value.
    • Too much to take in at Google Cloud Next 2021? No worries - here’s a breakdown of the biggest announcements at the 3-day event.
    • Check out the second revision of Architecture Framework, Google Cloud’s collection of canonical best practices.

    Week of Oct 4 - Oct 8, 2021

    • We’re excited to announce Google Cloud’s new goal of equipping more than 40 million people with Google Cloud skills. To help achieve this goal, we’re offering no-cost access to all our training content this month. Find out more here.
    • Support for language repositories in Artifact Registry is now generally available. Artifact Registry allows you to store all your language-specific artifacts in one place. Supported package types include Java, Node and Python. Additionally, support for Linux packages is in public preview. Learn more.
    • Want to know the latest on Active Assist, Google Cloud’s ML-powered intelligence service, and how to learn more about it at Next ’21? Check out this blog.

    Week of Sept 27 - Oct 1, 2021

    • Announcing the launch of Speaker ID. In 2020, customer preference for voice calls increased by 10 percentage points (to 43%) and was by far the most preferred service channel. But most callers still need to pass through archaic authentication processes, which slow down time to resolution and burn through valuable agent time. Speaker ID, from Google Cloud, brings ML-based speaker identification directly to customers and contact center partners, allowing callers to authenticate over the phone, using their own voice. Learn more.
    • Your guide to all things AI & ML at Google Cloud Next. Google Cloud Next is coming October 12–14, and if you’re interested in AI & ML, we’ve got you covered. Tune in to hear about real use cases from companies like Twitter, Eli Lilly, Wayfair, and more. We’re also excited to share product news and hands-on AI learning opportunities. Learn more about AI at Next and register for free today!
    • It is now simple to use Terraform to configure Anthos features on your GKE clusters. Check out part two of this series which explores adding Policy Controller audits to our Config Sync managed cluster. Learn more.

    Week of Sept 20 - Sept 24, 2021

    • Announcing the webinar, Powering market data through cloud and AI/ML. We’re sponsoring a Coalition Greenwich webinar on September 23rd where we’ll discuss the findings of our upcoming study on how market data delivery and consumption is being transformed by cloud and AI. Moderated by Coalition Greenwich, the panel will feature Trey Berre from CME Group, Brad Levy from Symphony, and Ulku Rowe representing Google Cloud. Register here.
    • New research from Google Cloud reveals five innovation trends for market data. Together with Coalition Greenwich we surveyed exchanges, trading systems, data aggregators, data producers, asset managers, hedge funds, and investment banks to examine both the distribution and consumption of market data and trading infrastructure in the cloud. Learn more about our findings here.
    • If you are looking for a more automated way to manage quotas over a high number of projects, we are excited to introduce a Quota Monitoring Solution from Google Cloud Professional Services. This solution benefits customers who have many projects or organizations and are looking for an easy way to monitor the quota usage in a single dashboard and use default alerting capabilities across all quotas.

      Week of Sept 13 - Sept 17, 2021

      • New storage features help ensure data is never lost. We are announcing extensions to our popular Cloud Storage offering, and introducing two new services, Filestore Enterprise and Backup for Google Kubernetes Engine (GKE). Together, these new capabilities will make it easier for you to protect your data out of the box, across a wide variety of applications and use cases. Read the full article.
      • API management powers sustainable resource management. Water, waste, and energy solutions company, Veolia, uses APIs and API Management platform Apigee to build apps and help their customers build their own apps, too. Learn from their digital and API-first approach here.
      • To support our expanding customer base in Canada, we’re excited to announce that the new Google Cloud Platform region in Toronto is now open. Toronto is the 28th Google Cloud region connected via our high-performance network, helping customers better serve their users and customers throughout the globe. In combination with Montreal, customers now benefit from improved business continuity planning with distributed, secure infrastructure needed to meet IT and business requirements for disaster recovery, while maintaining data sovereignty.
      • Cloud SQL now supports custom formatting controls for CSVs. When performing admin exports and imports, users can now select custom characters for field delimiters, quotes, escapes, and other characters. For more information, see our documentation.
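The effect of custom CSV formatting controls like those above can be previewed locally with Python's csv module, which exposes analogous knobs. A small illustration (the sample rows are made up):

```python
import csv
import io

rows = [["id", "note"], ["1", 'contains "quotes" and | pipes']]

buf = io.StringIO()
# Custom field delimiter and quote character, analogous to the controls
# Cloud SQL now offers for admin imports and exports.
writer = csv.writer(buf, delimiter="|", quotechar="'", quoting=csv.QUOTE_ALL)
writer.writerows(rows)
print(buf.getvalue())

# Reading back with the same dialect round-trips the data.
parsed = list(csv.reader(io.StringIO(buf.getvalue()), delimiter="|", quotechar="'"))
assert parsed == rows
```

Whatever characters you choose, the exporter and importer must agree on the same dialect, or fields containing the delimiter will be split incorrectly.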

      Week of Sept 6 - Sept 10, 2021

      • Hear how Lowe’s SRE was able to reduce their Mean Time to Recovery (MTTR) by over 80% after adopting Google’s Site Reliability Engineering practices and Google Cloud’s operations suite.

      Week of Aug 30 - Sept 3, 2021

      • A what’s new blog in the what’s new blog? Yes, you read that correctly. Google Cloud data engineers are always hard at work maintaining the hundreds of dataset pipelines that feed into our public datasets repository, but they’re also regularly bringing new ones into the mix. Check out our newest featured datasets and catch a few best practices in our living blog: What are the newest datasets in Google Cloud?
      • Migration success with Operational Health Reviews from Google Cloud’s Professional Service Organization - Learn how Google Cloud’s Professional Services Org is proactively and strategically guiding customers to operate effectively and efficiently in the Cloud, both during and after their migration process.
      • Learn how we simplified monitoring for Google Cloud VMware Engine and Google Cloud operations suite. Read more.

      Week of Aug 23 - Aug 27, 2021

      • Google Transfer Appliance announces preview of online mode. Customers are increasingly collecting data that needs to be transferred to the cloud quickly. Transfer Appliances are being used to quickly offload data from sources (e.g. cameras, cars, sensors) and can now stream that data to a Cloud Storage bucket. Online mode can be toggled as data is copied into the appliance, so you can either send the data offline by shipping the appliance to Google or copy the data to Cloud Storage over the network. Read more.
      • Topic retention for Cloud Pub/Sub is now Generally Available. Topic retention is the most comprehensive and flexible way available to retain Pub/Sub messages for message replay. In addition to backing up all subscriptions connected to the topic, new subscriptions can now be initialized from a timestamp in the past. Learn more about the feature here.
      • Vertex Predictions now supports private endpoints for online prediction. Through VPC Peering, Private Endpoints provide increased security and lower latency when serving ML models. Read more.
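The topic-retention replay semantics described above can be simulated locally: a new subscription initialized from a past timestamp receives all retained messages published at or after that time. A sketch (the message log is made up; real replay goes through Pub/Sub's seek functionality):

```python
from dataclasses import dataclass

@dataclass
class Message:
    publish_time: float  # epoch seconds
    data: str

# Illustrative retained topic backlog, in publish order.
topic_log = [
    Message(100.0, "order-created"),
    Message(200.0, "order-paid"),
    Message(300.0, "order-shipped"),
]

def init_subscription_from(timestamp: float) -> list:
    """A subscription seeded at `timestamp` receives all retained messages
    published at or after that time, in publish order."""
    return [m.data for m in topic_log if m.publish_time >= timestamp]

print(init_subscription_from(150.0))  # ['order-paid', 'order-shipped']
```

The key difference from per-subscription retention is that the topic keeps the backlog, so subscriptions created after the fact can still recover history.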

      Week of Aug 16 - Aug 20, 2021

      • Look for us to take security one step further by adding authorization features for service-to-service communications for gRPC proxyless services, as well as to support other deployment models, where proxyless gRPC services are running somewhere other than GKE, for example Compute Engine. We hope you'll join us and check out the setup guide and give us feedback.
      • Cloud Run now supports VPC Service Controls. You can now protect your Cloud Run services against data exfiltration by using VPC Service Controls in conjunction with Cloud Run’s ingress and egress settings. Read more.
      • Read how retailers are leveraging Google Cloud VMware Engine to move their on-premises applications to the cloud, where they can achieve the scale, intelligence, and speed required to stay relevant and competitive. Read more.
      • A series of new features for BeyondCorp Enterprise, our zero trust offering. We now offer native support for client certificates for eight types of VPC-SC resources. We are also announcing general availability of the on-prem connector, which allows users to secure HTTP- or HTTPS-based on-premises applications outside of Google Cloud. Additionally, three new BeyondCorp attributes are available in Access Context Manager as part of a public preview. Customers can configure custom access policies based on time and date, credential strength, and/or Chrome browser attributes. Read more about these announcements here.
      • We are excited to announce that Google Cloud, working with its partners NAG and DDN, demonstrated the highest performing Lustre file system on the IO500 ranking of the fastest HPC storage systems — quite a feat considering Lustre is one of the most widely deployed HPC file systems in the world.  Read the full article.
      • The Storage Transfer Service for on-premises data API is now available in Preview. Now you can use RESTful APIs to automate your on-prem-to-cloud transfer workflows. Storage Transfer Service is a software service to transfer data over a network. The service provides built-in capabilities such as scheduling, bandwidth management, retries, and data integrity checks that simplify the data transfer workflow.
      • It is now simple to use Terraform to configure Anthos features on your GKE clusters. This is the first part of a three-part series that describes using Terraform to enable Config Sync. For platform administrators, this natural, infrastructure-as-code (IaC) approach improves auditability and transparency and reduces the risk of misconfigurations or security gaps. Read more.
      • In this commissioned study, “Modernize With AIOps To Maximize Your Impact”, Forrester Consulting surveyed organizations worldwide to better understand how they’re approaching artificial intelligence for IT operations (AIOps) in their cloud environments, and what kind of benefits they’re seeing. Read more.
      • If your organization or development environment has strict security policies which don’t allow for external IPs, it can be difficult to set up a connection between a Private Cloud SQL instance and a Private IP VM. This article contains clear instructions on how to set up a connection from a private Compute Engine VM to a private Cloud SQL instance using a private service connection and the mysqlsh command line tool.

      Week of Aug 9 - Aug 13, 2021

      • Compute Engine users have a new, updated set of VM-level “in-context” metrics, charts, and logs to correlate signals for common troubleshooting scenarios across CPU, Disk, Memory, Networking, and live Processes.  This brings the best of Google Cloud’s operations suite directly to the Compute Engine UI. Learn more.
      • Pub/Sub to Splunk Dataflow template has been updated to address multiple enterprise customer asks, from improved compatibility with the Splunk Add-on for Google Cloud Platform, to more extensibility with user-defined functions (UDFs), and general pipeline reliability enhancements to tolerate failures like transient network issues when delivering data to Splunk. Read more to learn how to take advantage of these latest features.
      • Google Cloud and NVIDIA have teamed up to make VR/AR workloads easier and faster to create, and tetherless! Read more.
      • Register for the Google Cloud Startup Summit, September 9, 2021 at goo.gle/StartupSummit for a digital event filled with inspiration, learning, and discussion. This event will bring together our startup and VC community to discuss the latest trends and insights, headlined by a keynote by Astro Teller, Captain of Moonshots at X the moonshot factory. Additionally, learn from a variety of technical and business sessions to help take your startup to the next level.
      • Google Cloud and Harris Poll healthcare research reveals COVID-19 impacts on healthcare technology. Learn more.
      • Partial SSO is now available for public preview. If you use a third-party identity provider to single sign on into Google services, Partial SSO allows you to designate a subset of your users to use Google / Cloud Identity as their SAML SSO identity provider (short video and demo).

      Week of Aug 2-Aug 6, 2021

      • Gartner named Google Cloud a Leader in the 2021 Magic Quadrant for Cloud Infrastructure and Platform Services, formerly Infrastructure as a Service. Learn more.
      • Private Service Connect is now generally available. Private Service Connect lets you create private and secure connections to Google Cloud and third-party services with service endpoints in your VPCs. Read more.
      • 30 migration guides designed to help you identify the best way to migrate, covering common organizational goals like minimizing time and risk during your migration, identifying the most enterprise-grade infrastructure for your workloads, picking a cloud that aligns with your organization’s sustainability goals, and more. Read more.

      Week of Jul 26-Jul 30, 2021

      • This week we’re hosting our Retail & Consumer Goods Summit, a digital event dedicated to helping leading retailers and brands digitally transform their business. Read more about our consumer packaged goods strategy and a guide to key summit content for brands in this blog from Giusy Buonfantino, Google Cloud’s Vice President of CPG.

      • See how IKEA uses Recommendations AI to provide customers with more relevant product information. Read more.

      • Google Cloud launches a career program for people with autism, designed to hire and support more talented people with autism in the rapidly growing cloud industry. Learn more.

      • Google Cloud follows new API stability tenets that work to minimize unexpected deprecations to our Enterprise APIs. Read more.

      Week of Jul 19-Jul 23, 2021

      • Register and join us for Google Cloud Next, October 12-14, 2021 at g.co/CloudNext for a fresh approach to digital transformation, as well as a few surprises. Next ‘21 will be a fully customizable digital adventure for a more personalized learning journey. Find the tools and training you need to succeed, from live, interactive Q&As and informative breakout sessions to educational demos and real-life applications of the latest tech from Google Cloud. Get ready to plug into your cloud community, get informed, and be inspired. Together we can tackle today’s greatest business challenges, and start solving for what’s next.
      • "Application Innovation" takes a front row seat this year – To stay ahead of rising customer expectations and the digital and in-person hybrid landscape, enterprises must know what application innovation means and how to deliver this type of innovation with a small piece of technology that might surprise you. Learn more about the three pillars of app innovation here.
      • We announced Cloud IDS, our new network security offering, which is now available in preview. Cloud IDS delivers easy-to-use, cloud-native, managed, network-based threat detection. With Cloud IDS, customers can enjoy a Google Cloud-integrated experience, built with Palo Alto Networks’ industry-leading threat detection technologies to provide high levels of security efficacy. Learn more.
      • Key Visualizer for Cloud Spanner is now generally available. Key Visualizer is a new interactive monitoring tool that lets developers and administrators analyze usage patterns in Spanner. It reveals trends and outliers in key performance and resource metrics for databases of any size, helping to optimize queries and reduce infrastructure costs. See it in action.
      • The market for healthcare cloud is projected to grow 43%, driving the need for better tech infrastructure, digital transformation, and cloud tools. Learn how Google Cloud Partner Advantage partners help customers solve business challenges in healthcare.

      Week of Jul 12-Jul 16, 2021

      • Simplify VM migrations with Migrate for Compute Engine as a Service: delivers a Google-managed cloud service that enables simple, frictionless, and large-scale enterprise migrations of virtual machines to Google Compute Engine with minimal downtime and risk. API-driven and integrated into your Google Cloud console for ease of use, this service uses agent-less replication to copy data without manual intervention and without VPN requirements. It also enables you to launch non-disruptive validations of your VMs prior to cutover. Rapidly migrate a single application, or execute a sprint of hundreds of systems using migration groups, with confidence. Read more here.
      • The Google Cloud region in Delhi NCR is now open for business, ready to host your workloads. Learn more and watch the region launch event here.
      • Introducing Quilkin: the open-source game server proxy. Developed in collaboration with Embark Studios, Quilkin is an open source UDP proxy, tailor-made for high performance real-time multiplayer games. Read more.
      • We’re making Google Glass on Meet available to a wider network of global customers. Learn more.
      • Transfer Appliance supports Google Managed Encryption Keys — We’re announcing support for Google Managed Encryption Keys with Transfer Appliance, in addition to the currently available Customer Managed Encryption Keys feature. Customers have asked for the Transfer Appliance service to create and manage encryption keys for transfer sessions to improve usability and maintain security. The Transfer Appliance service can now manage the encryption keys for customers who do not wish to handle a key themselves. Learn more about Using Google Managed Encryption Keys.

      • UCLA builds a campus-wide API program – With Google Cloud's API management platform, Apigee, UCLA created a unified and strong API foundation that removes the data friction that students, faculty, and administrators alike face. This foundation not only simplifies how various personas connect to data, but also encourages more innovation in the future. Learn their story.

      • An enhanced region picker makes it easy to choose a Google Cloud region with the lowest CO2 output. Learn more.
      • Amwell and Google Cloud explore five ways telehealth can help democratize access to healthcare. Read more.
      • Major League Baseball and Kaggle launch ML competition to learn about fan engagement. Batter up!
      • We’re rolling out general support of Brand Indicators for Message Identification (BIMI) in Gmail within Google Workspace. Learn more.

      • Learn how DeNA Sports Business created an operational status visualization system that helps determine whether live event attendees have correctly installed Japan’s coronavirus contact tracing app COCOA.

      • Google Cloud CAS provides a highly scalable and available private CA to address the unprecedented growth in certificates in the digital world. Read more about CAS.

      Week of Jul 5-Jul 9, 2021

      • Google Cloud and Call of Duty League launch ActivStat to bring fans, players, and commentators the power of competitive statistics in real-time. Read more.
      • Building applications is a heavy lift due to technical complexity, including the backend services used to manage and store data. Firestore changes this by having Google Cloud manage your backend complexity through a complete backend-as-a-service! Learn more.
      • Google Cloud’s new Native App Development skills challenge lets you earn badges that demonstrate your ability to create cloud-native apps. Read more and sign up.

      Week of Jun 28-Jul 2, 2021

      • Storage Transfer Service now offers preview support for integration with AWS Security Token Service. Security-conscious customers can now use Storage Transfer Service to perform transfers from AWS S3 without passing any security credentials. This release alleviates the security burden associated with passing long-term AWS S3 credentials, which have to be rotated or explicitly revoked when they are no longer needed. Read more.
      • The most popular and surging Google Search terms are now available in BigQuery as a public dataset. View the Top 25 and Top 25 rising queries from Google Trends from the past 30 days, including 5 years of historical data across the 210 Designated Market Areas (DMAs) in the US. Learn more.
      • A new predictive autoscaling capability lets you add additional Compute Engine VMs in anticipation of forecasted demand. Predictive autoscaling is generally available across all Google Cloud regions. Read more or consult the documentation for more information on how to configure, simulate and monitor predictive autoscaling.
      • Messages by Google is now the default messaging app for all AT&T customers using Android phones in the United States. Read more.
      • TPU v4 Pods will soon be available on Google Cloud, providing the most powerful publicly available computing platform for machine learning training. Learn more.
      • Cloud SQL for SQL Server has addressed multiple enterprise customer asks with the GA releases of both SQL Server 2019 and Active Directory integration, as well as the Preview release of Cross Region Replicas. This set of releases works in concert to allow customers to set up a more scalable and secure managed SQL Server environment to address their workloads’ needs. Read more.
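The Google Trends public dataset mentioned above can be queried directly in BigQuery. A sketch of such a query (the table name follows the announcement; the refresh_date and rank columns are assumed from the dataset's published schema, so verify against the current listing):

```python
# SQL against the public Google Trends dataset; run it in the BigQuery
# console or with the client library of your choice.
TOP_TERMS_QUERY = """
SELECT term, rank, dma_name
FROM `bigquery-public-data.google_trends.top_terms`
WHERE refresh_date = (
  SELECT MAX(refresh_date)
  FROM `bigquery-public-data.google_trends.top_terms`
)
ORDER BY rank
LIMIT 25
"""

print(TOP_TERMS_QUERY.strip())
```

The inner subquery pins the results to the most recent refresh, so the query keeps returning the latest Top 25 as the dataset is updated.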

      Week of Jun 21-Jun 25, 2021

      • Simplified return-to-office with no-code technology: We've just released a solution to your most common return-to-office headaches: a no-code app customized to solve your business-specific challenges. Learn how to create an automated app where employees can see office room occupancy, check what desks are reserved or open, review disinfection schedules, and more in this blog tutorial.
      • New technical validation whitepaper for running ecommerce applications—Enterprise Strategy Group's analyst outlines the challenges of organizations running ecommerce applications and how Google Cloud helps to mitigate those challenges and handle changing demands with global infrastructure solutions. Download the whitepaper.
      • The full agenda for the Google for Games Developer Summit on July 12th-13th, 2021 is now available. A free digital event with announcements from teams including Stadia, Google Ads, AdMob, Android, Google Play, Firebase, Chrome, YouTube, and Google Cloud. Hear more about how Google Cloud technology creates opportunities for gaming companies to make lasting enhancements for players and creatives. Register at g.co/gamedevsummit
      • BigQuery row-level security is now generally available, giving customers a way to control access to subsets of data in the same table for different groups of users. Row-level security (RLS) extends the principle of least privilege access and enables fine-grained access control policies in BigQuery tables. BigQuery currently supports access controls at the project-, dataset-, table- and column-level. Adding RLS to the portfolio of access controls now enables customers to filter and define access to specific rows in a table based on qualifying user conditions—providing much needed peace of mind for data professionals.
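As a sketch (the dataset, table, column, and group names below are hypothetical), a row-level security policy is defined with DDL along these lines:

```sql
-- Only members of the named group see rows where region = "US";
-- other users see no rows from this table by default.
CREATE ROW ACCESS POLICY us_sales_only
ON mydataset.sales
GRANT TO ("group:us-sales@example.com")
FILTER USING (region = "US");
```

The filter predicate is evaluated per query, so the same table can safely serve multiple audiences without maintaining separate filtered copies.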
      • Transfer from Azure ADLS Gen 2: Storage Transfer Service offers Preview support for transferring data from Azure ADLS Gen 2 to Google Cloud Storage. Take advantage of a scalable, serverless service to handle data transfer. Read more.
      • reCAPTCHA V2 and V3 customers can now migrate site keys to reCAPTCHA Enterprise in under 10 minutes and without making any code changes. Watch our Webinar to learn more. 
      • Bot attacks are the biggest threat to your business that you probably haven’t addressed yet. Check out our Forbes article to see what you can do about it.

      Week of Jun 14-Jun 18, 2021

      • A new VM family for scale-out workloads—New AMD-based Tau VMs offer 56% higher absolute performance and 42% higher price-performance compared to general-purpose VMs from any of the leading public cloud vendors. Learn more.
      • New whitepaper helps customers plot their cloud migrations—Our new whitepaper distills the conversations we’ve had with CIOs, CTOs, and their technical staff into several frameworks that can help cut through the hype and the technical complexity to help devise the strategy that empowers both the business and IT. Read more or download the whitepaper.
      • Ubuntu Pro lands on Google Cloud—The general availability of Ubuntu Pro images on Google Cloud gives customers an improved Ubuntu experience, expanded security coverage, and integration with critical Google Cloud features. Read more.
      • Navigating hybrid work with a single, connected experience in Google Workspace—New additions to Google Workspace help businesses navigate the challenges of hybrid work, such as Companion Mode for Google Meet calls. Read more.
      • Arab Bank embraces Google Cloud technology—This Middle Eastern bank now offers innovative apps and services to their customers and employees with Apigee and Anthos. In fact, Arab Bank reports over 90% of their new-to-bank customers are using their mobile apps. Learn more.
      • Google Workspace for the Public Sector events—This June, learn about Google Workspace tips and tricks to help you get things done. Join us for one or more of our learning events tailored for government and higher education users. Learn more.

      Week of Jun 7-Jun 11, 2021

      • The top cloud capabilities industry leaders want for sustained innovation—Multicloud and hybrid cloud approaches, coupled with open-source technology adoption, enable IT teams to take full advantage of the best cloud has to offer. Our recent study with IDG shows just how much of a priority this has become for business leaders. Read more or download the report.
      • Announcing the Firmina subsea cable—Planned to run from the East Coast of the United States to Las Toninas, Argentina, with additional landings in Praia Grande, Brazil, and Punta del Este, Uruguay, Firmina will be the longest open subsea cable in the world capable of running entirely from a single power source at one end of the cable if its other power source(s) become temporarily unavailable—a resilience boost at a time when reliable connectivity is more important than ever. Read more.
      • New research reveals what’s needed for AI acceleration in manufacturing—According to our survey of more than 1,000 senior manufacturing executives across seven countries, 76% have turned to digital enablers and disruptive technologies—such as data and analytics, cloud, and artificial intelligence (AI)—because of the pandemic. And 66% of manufacturers who use AI in their day-to-day operations report that their reliance on AI is increasing. Read more or download the report.
      • Cloud SQL offers even faster maintenance—Cloud SQL maintenance is zippier than ever. MySQL and PostgreSQL planned maintenance typically lasts less than 60 seconds and SQL Server maintenance typically lasts less than 120 seconds. You can learn more about maintenance here.
      • Simplifying Transfer Appliance configuration with Cloud Setup Application—We’re announcing the availability of the Transfer Appliance Cloud Setup Application, which uses the information you provide through simple prompts to configure your Google Cloud permissions, preferred Cloud Storage bucket, and Cloud KMS key for your transfer. Several manual Cloud Console steps are now simplified into a command-line experience. Read more.
      • Google Cloud VMware Engine is now HIPAA compliant—As of April 1, 2021, Google Cloud VMware Engine is covered under the Google Cloud Business Associate Agreement (BAA), meaning it has achieved HIPAA compliance. Healthcare organizations can now migrate and run their HIPAA-compliant VMware workloads in a fully compatible VMware Cloud Verified stack running natively in Google Cloud with Google Cloud VMware Engine, without changes or re-architecture to tools, processes, or applications. Read more.
      • Introducing container-native Cloud DNS—Kubernetes networking almost always starts with a DNS request. DNS has broad impacts on your application and cluster performance, scalability, and resilience. That is why we are excited to announce the release of container-native Cloud DNS—the native integration of Cloud DNS with Google Kubernetes Engine (GKE) to provide in-cluster Service DNS resolution with Cloud DNS, our scalable and full-featured DNS service. Read more.
      • Welcoming the EU’s new Standard Contractual Clauses for cross-border data transfers—Learn how we’re incorporating the new Standard Contractual Clauses (SCCs) into our contracts to help protect our customers’ data and meet the requirements of European privacy legislation. Read more.
      • Lowe’s meets customer demand with Google SRE practices—Learn how Lowe’s has been able to increase the number of releases they can support by adopting Google’s Site Reliability Engineering (SRE) framework and leveraging their partnership with Google Cloud. Read more.
      • What’s next for SAP on Google Cloud at SAPPHIRE NOW and beyond—As SAP’s SAPPHIRE conference begins this week, we believe businesses have a more significant opportunity than ever to build for their next decade of growth and beyond. Learn more on how we’re working together with our customers, SAP, and our partners to support this transformation. Read more.
      • Support for Node.js, Python and Java repositories for Artifact Registry now in Preview–With today’s announcement, you can use Artifact Registry not only to secure and distribute container images, but also to manage and secure your other software artifacts. Read more.
      • Google named a Leader in The Forrester Wave: Streaming Analytics, Q2 2021 report–Learn about the criteria where Google Dataflow was rated 5 out of 5 and why this matters for our customers here.
      • Applied ML Summit this Thursday, June 10–Watch our keynote to learn about predictions for machine learning over the next decade. Engage with distinguished researchers, leading practitioners, and Kaggle Grandmasters during our live Ask Me Anything session. Take part in our modeling workshops to learn how you can iterate faster, and deploy and manage your models with confidence–no matter your level of formal computer science training. Learn how to develop and apply your professional skills, grow your abilities at the pace of innovation, and take your career to the next level. Register now.

      Week of May 31-Jun 4, 2021

      • Security Command Center now supports CIS 1.1 benchmarks and granular access control–Security Command Center (SCC) now supports CIS benchmarks for Google Cloud Platform Foundation v1.1, enabling you to monitor and address compliance violations against industry best practices in your Google Cloud environment. Additionally, SCC now supports fine-grained access control for administrators that allows you to easily adhere to the principles of least privilege—restricting access based on roles and responsibilities to reduce risk and enabling broader team engagement to address security. Read more.
      • Zero-trust managed security for services with Traffic Director–We created Traffic Director to bring to you a fully managed service mesh product that includes load balancing, traffic management and service discovery. And now, we’re happy to announce the availability of a fully-managed zero-trust security solution using Traffic Director with Google Kubernetes Engine (GKE) and Certificate Authority (CA) Service. Read more.
      • How one business modernized their data warehouse for customer success–PedidosYa migrated from their old data warehouse to Google Cloud's BigQuery. Now with BigQuery, the Latin American online food ordering company has reduced the total cost per query by 5x. Learn more.
      • Announcing new Cloud TPU VMs–New Cloud TPU VMs make it easier to use our industry-leading TPU hardware by providing direct access to TPU host machines, offering a new and improved user experience to develop and deploy TensorFlow, PyTorch, and JAX on Cloud TPUs. Read more.
      • Introducing logical replication and decoding for Cloud SQL for PostgreSQL–We’re announcing the public preview of logical replication and decoding for Cloud SQL for PostgreSQL. By releasing those capabilities and enabling change data capture (CDC) from Cloud SQL for PostgreSQL, we strengthen our commitment to building an open database platform that meets critical application requirements and integrates seamlessly with the PostgreSQL ecosystem. Read more.
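A hedged configuration sketch (the instance name is hypothetical) of how logical decoding might be switched on for a Cloud SQL for PostgreSQL instance before setting up CDC:

```shell
# Enable logical decoding on the instance; replication slots and
# publications are then created from psql as with any PostgreSQL server.
gcloud sql instances patch my-postgres-instance \
  --database-flags=cloudsql.logical_decoding=on
```

Patching database flags can restart the instance, so this is typically done in a maintenance window; consult the release documentation for the exact flag name and any Preview caveats.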
      • How 6 businesses are transforming with SAP on Google Cloud–Thousands of organizations globally rely on SAP for their most mission critical workloads. And for many Google Cloud customers, part of a broader digital transformation journey has included accelerating the migration of these essential SAP workloads to Google Cloud for greater agility, elasticity, and uptime. Read six of their stories.

      Week of May 24-May 28, 2021

      • Google Cloud for financial services: driving your transformation cloud journey–As we welcome the industry to our Financial Services Summit, we’re sharing more on how Google Cloud accelerates a financial organization’s digital transformation through app and infrastructure modernization, data democratization, people connections, and trusted transactions. Read more or watch the summit on demand.
      • Introducing Datashare solution for financial services–We announced the general availability of Datashare for financial services, a new Google Cloud solution that brings together the entire capital markets ecosystem—data publishers and data consumers—to exchange market data securely and easily. Read more.
      • Announcing Datastream in Preview–Datastream, a serverless change data capture (CDC) and replication service, allows enterprises to synchronize data across heterogeneous databases, storage systems, and applications reliably and with minimal latency to support real-time analytics, database replication, and event-driven architectures. Read more.
      • Introducing Dataplex: An intelligent data fabric for analytics at scale–Dataplex provides a way to centrally manage, monitor, and govern your data across data lakes, data warehouses and data marts, and make this data securely accessible to a variety of analytics and data science tools. Read more.
      • Announcing Dataflow Prime–Available in Preview in Q3 2021, Dataflow Prime is a new platform based on a serverless, no-ops, auto-tuning architecture built to bring unparalleled resource utilization and radical operational simplicity to big data processing. Dataflow Prime builds on Dataflow and brings new user benefits with innovations in resource utilization and distributed diagnostics. The new capabilities in Dataflow significantly reduce the time spent on infrastructure sizing and tuning tasks, as well as time spent diagnosing data freshness problems. Read more.
      • Secure and scalable sharing for data and analytics with Analytics Hub–With Analytics Hub, available in Preview in Q3, organizations get a rich data ecosystem by publishing and subscribing to analytics-ready datasets; control and monitoring over how their data is being used; a self-service way to access valuable and trusted data assets; and an easy way to monetize their data assets without the overhead of building and managing the infrastructure. Read more.
      • Cloud Spanner trims entry cost by 90%–Coming soon to Preview, granular instance sizing in Spanner lets organizations run workloads at as low as 1/10th the cost of regular instances, equating to approximately $65/month. Read more.
      • Cloud Bigtable lifts SLA and adds new security features for regulated industries–Bigtable instances with a multi-cluster routing policy across 3 or more regions are now covered by a 99.999% monthly uptime percentage under the new SLA. In addition, new Data Access audit logs can help determine whether sensitive customer information has been accessed in the event of a security incident, and if so, when, and by whom. Read more.
      • Build a no-code journaling app–In honor of Mental Health Awareness Month, Google Cloud's no-code application development platform, AppSheet, demonstrates how you can build a journaling app complete with titles, time stamps, mood entries, and more. Learn how with this blog and video here.
      • New features in Security Command Center—On May 24th, Security Command Center Premium launched the general availability of granular access controls at project- and folder-level and Center for Internet Security (CIS) 1.1 benchmarks for Google Cloud Platform Foundation. These new capabilities enable organizations to improve their security posture and efficiently manage risk for their Google Cloud environment. Learn more.
      • Simplified API operations with AI–Google Cloud's API management platform Apigee applies Google's industry leading ML and AI to your API metadata. Understand how it works with anomaly detection here.
      • This week: Data Cloud and Financial Services Summits–Our Google Cloud Summit series begins this week with the Data Cloud Summit on Wednesday May 26 (Global). At this half-day event, you’ll learn how leading companies like PayPal, Workday, Equifax, and many others are driving competitive differentiation using Google Cloud technologies to build their data clouds and transform data into value that drives innovation. The following day, Thursday May 27 (Global & EMEA) at the Financial Services Summit, discover how Google Cloud is helping financial institutions such as PayPal, Global Payments, HSBC, Credit Suisse, AXA Switzerland and more unlock new possibilities and accelerate business through innovation. Read more and explore the entire summit series.
      • Announcing the Google for Games Developer Summit 2021 on July 12th-13th–With a surge of new gamers and an increase in time spent playing games in the last year, it’s more important than ever for game developers to delight and engage players. To help developers with this opportunity, the games teams at Google are back to announce the return of the Google for Games Developer Summit 2021 on July 12th-13th. Hear from experts across Google about new game solutions they’re building to make it easier for you to continue creating great games, connecting with players and scaling your business. Registration is free and open to all game developers. Register for the free online event at g.co/gamedevsummit to get more details in the coming weeks. We can’t wait to share our latest innovations with the developer community. Learn more.

      Week of May 17-May 21, 2021

      • Best practices to protect your organization against ransomware threats–For more than 20 years Google has been operating securely in the cloud, using our modern technology stack to provide a more defensible environment that we can protect at scale. While the threat of ransomware isn’t new, our responsibility to help protect you from existing or emerging threats never changes. In our recent blog post, we shared guidance on how organizations can increase their resilience to ransomware and how some of our Cloud products and services can help. Read more.

      • Forrester names Google Cloud a Leader in Unstructured Data Security Platforms–Forrester Research has named Google Cloud a Leader in The Forrester Wave: Unstructured Data Security Platforms, Q2 2021 report, and rated Google Cloud highest in the current offering category among the providers evaluated. Read more or download the report.
      • Introducing Vertex AI: One platform, every ML tool you need–Vertex AI is a managed machine learning (ML) platform that allows companies to accelerate the deployment and maintenance of artificial intelligence (AI) models. Read more.
      • Transforming collaboration in Google Workspace–We’re launching smart canvas, a new product experience that delivers the next evolution of collaboration for Google Workspace. Between now and the end of the year, we’re rolling out innovations that make it easier for people to stay connected, focus their time and attention, and transform their ideas into impact. Read more.
      • Developing next-generation geothermal power–At I/O this week, we announced a first-of-its-kind, next-generation geothermal project with clean-energy startup Fervo that will soon begin adding carbon-free energy to the electric grid that serves our data centers and infrastructure throughout Nevada, including our Cloud region in Las Vegas. Read more.
      • Contributing to an environment of trust and transparency in Europe–Google Cloud was one of the first cloud providers to support and adopt the EU GDPR Cloud Code of Conduct (CoC). The CoC is a mechanism for cloud providers to demonstrate how they offer sufficient guarantees to implement appropriate technical and organizational measures as data processors under the GDPR. This week, the Belgian Data Protection Authority, based on a positive opinion by the European Data Protection Board (EDPB), approved the CoC, a product of years of constructive collaboration between the cloud computing community, the European Commission, and European data protection authorities. We are proud to say that Google Cloud Platform and Google Workspace already adhere to these provisions. Learn more.
      • Announcing Google Cloud datasets solutions–We're adding commercial, synthetic, and first-party data to our Google Cloud Public Datasets Program to help organizations increase the value of their analytics and AI initiatives, and we're making available an open source reference architecture for a more streamlined data onboarding process to the program. Read more.
      • Introducing custom samples in Cloud Code–With new custom samples in Cloud Code, developers can quickly access your enterprise’s best code samples via a versioned Git repository directly from their IDEs. Read more.
      • Retention settings for Cloud SQL–Cloud SQL now allows you to configure backup retention settings to protect against data loss. You can retain between 1 and 365 days’ worth of automated backups and between 1 and 7 days’ worth of transaction logs for point-in-time recovery. See the details here.
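A minimal sketch (the instance name and retention values are hypothetical, chosen within the documented 1-365 backup and 1-7 log-day ranges) of how these settings might be applied:

```shell
# Keep 14 automated backups and 5 days of transaction logs,
# extending the window available for point-in-time recovery.
gcloud sql instances patch my-instance \
  --retained-backups-count=14 \
  --retained-transaction-log-days=5
```

Longer retention increases storage cost, so the values are usually tuned per instance rather than set globally.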
      • Cloud developer’s guide to Google I/O 2021–Google I/O may look a little different this year, but don’t worry, you’ll still get the same first-hand look at the newest launches and projects coming from Google. Best of all, it’s free and available to all (virtually) on May 18-20. Read more.

      Week of May 10-May 14, 2021

      • APIs and Apigee power modern day due diligence–With APIs and Google Cloud's Apigee, business due diligence company DueDil revolutionized the way they harness and share their Big Information Graph (B.I.G.) with partners and customers. Get the full story.
      • Cloud CISO Perspectives: May 2021–It’s been a busy month here at Google Cloud since our inaugural CISO perspectives blog post in April. Here, VP and CISO of Google Cloud Phil Venables recaps our cloud security and industry highlights, a sneak peek of what’s ahead from Google at RSA, and more. Read more.
      • 4 new features to secure your Cloud Run services–We announced several new ways to secure Cloud Run environments to make developing and deploying containerized applications easier for developers. Read more.
      • Maximize your Cloud Run investments with new committed use discounts–We’re introducing self-service spend-based committed use discounts for Cloud Run, which let you commit for a year to spending a certain amount on Cloud Run and benefiting from a 17% discount on the amount you committed. Read more.
      • Google Cloud Armor Managed Protection Plus is now generally available–Cloud Armor, our Distributed Denial of Service (DDoS) protection and Web-Application Firewall (WAF) service on Google Cloud, leverages the same infrastructure, network, and technology that has protected Google’s internet-facing properties from some of the largest attacks ever reported. These same tools protect customers’ infrastructure from DDoS attacks, which are increasing in both magnitude and complexity every year. Deployed at the very edge of our network, Cloud Armor absorbs malicious network- and protocol-based volumetric attacks, while mitigating the OWASP Top 10 risks and maintaining the availability of protected services. Read more.
      • Announcing Document Translation for Translation API Advanced in preview–Translation is critical to many developers and localization providers, whether you’re releasing a document, a piece of software, training materials or a website in multiple languages. With Document Translation, now you can directly translate documents in 100+ languages and formats such as Docx, PPTx, XLSx, and PDF while preserving document formatting. Read more.
      • Introducing BeyondCorp Enterprise protected profiles–Protected profiles enable users to securely access corporate resources from an unmanaged device with the same threat and data protections available in BeyondCorp Enterprise–all from the Chrome Browser. Read more.
      • How reCAPTCHA Enterprise protects unemployment and COVID-19 vaccination portals–With so many people visiting government websites to learn more about the COVID-19 vaccine, make vaccine appointments, or file for unemployment, these web pages have become prime targets for bot attacks and other abusive activities. But reCAPTCHA Enterprise has helped state governments protect COVID-19 vaccine registration portals and unemployment claims portals from abusive activities. Learn more.
      • Day one with Anthos? Here are 6 ideas for how to get started–Once you have your new application platform in place, there are some things you can do to immediately get value and gain momentum. Here are six things you can do to get you started. Read more.
      • The era of the transformation cloud is here–Google Cloud’s president Rob Enslin shares how the era of the transformation cloud has seen organizations move beyond data centers to change not only where their business is done but, more importantly, how it is done. Read more.

      Week of May 3-May 7, 2021

      • Transforming hard-disk drive maintenance with predictive ML–In collaboration with Seagate, we developed a machine learning system that can forecast the probability of a recurring failing disk—a disk that fails or has experienced three or more problems in 30 days. Learn how we did it.
      • Agent Assist for Chat is now in public preview–Agent Assist provides your human agents with continuous support during their calls, and now chats, by identifying the customers’ intent and providing them with real-time recommendations such as articles and FAQs as well as responses to customer messages to more effectively resolve the conversation. Read more.
      • New Google Cloud, AWS, and Azure product map–Our updated product map helps you understand similar offerings from Google Cloud, AWS, and Azure, and you can easily filter the list by product name or other common keywords. Read more or view the map.
      • Join our Google Cloud Security Talks on May 12th–We’ll share expert insights into how we’re working to be your most trusted cloud. Find the list of topics we’ll cover here.
      • Databricks is now GA on Google Cloud–Deploy or migrate Databricks Lakehouse to Google Cloud to combine the benefits of an open data cloud platform with greater analytics flexibility, unified infrastructure management, and optimized performance. Read more.
      • HPC VM image is now GA–The CentOS-based HPC VM image makes it quick and easy to create HPC-ready VMs on Google Cloud that are pre-tuned for optimal performance. Check out our documentation and quickstart guide to start creating instances using the HPC VM image today.
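A hedged sketch (instance name, zone, and machine type are hypothetical; the image family and project follow the public HPC image documentation) of creating an HPC-tuned VM:

```shell
# Create a compute-optimized VM from the CentOS-based HPC image family,
# which ships pre-tuned for tightly coupled HPC workloads.
gcloud compute instances create my-hpc-node \
  --zone=us-central1-a \
  --machine-type=c2-standard-60 \
  --image-family=hpc-centos-7 \
  --image-project=cloud-hpc-image-public
```

Check the quickstart guide for the current image family names before relying on these values.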
      • Take the 2021 State of DevOps survey–Help us shape the future of DevOps and make your voice heard by completing the 2021 State of DevOps survey before June 11, 2021. Read more or take the survey.
      • OpenTelemetry Trace 1.0 is now available–OpenTelemetry has hit a key milestone: the OpenTelemetry Tracing Specification has reached version 1.0. API and SDK release candidates are available for Java, Erlang, Python, Go, Node.js, and .NET. Additional languages will follow over the next few weeks. Read more.
      • New blueprint helps secure confidential data in AI Platform Notebooks–We’re adding to our portfolio of blueprints with the publication of our Protecting confidential data in AI Platform Notebooks blueprint guide and deployable blueprint, which can help you apply data governance and security policies that protect your AI Platform Notebooks containing confidential data. Read more.
      • The Liquibase Cloud Spanner extension is now GA–Liquibase, an open-source library that works with a wide variety of databases, can be used for tracking, managing, and automating database schema changes. By providing the ability to integrate databases into your CI/CD process, Liquibase helps you more fully adopt DevOps practices. The Liquibase Cloud Spanner extension allows developers to use Liquibase's open-source database library to manage and automate schema changes in Cloud Spanner. Read more.
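As an illustration (table and column names are hypothetical), a Liquibase changelog applied against Spanner might look like this, with Spanner types such as `INT64` and `STRING(n)` in the column definitions:

```xml
<!-- A single changeset that creates a table; Liquibase tracks whether
     it has already been applied, enabling repeatable CI/CD migrations. -->
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.0.xsd">
  <changeSet id="1" author="dev">
    <createTable tableName="Singers">
      <column name="SingerId" type="INT64">
        <constraints primaryKey="true"/>
      </column>
      <column name="FirstName" type="STRING(255)"/>
    </createTable>
  </changeSet>
</databaseChangeLog>
```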
      • Cloud computing 101: Frequently asked questions–There are a number of terms and concepts in cloud computing, and not everyone is familiar with all of them. To help, we’ve put together a list of common questions, and the meanings of a few of those acronyms. Read more.

      Week of Apr 26-Apr 30, 2021

      • Announcing the GKE Gateway controller, in Preview–GKE Gateway controller, Google Cloud’s implementation of the Gateway API, manages internal and external HTTP/S load balancing for a GKE cluster or a fleet of GKE clusters and provides multi-tenant sharing of load balancer infrastructure with centralized admin policy and control. Read more.
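A hedged sketch of the Preview-era Gateway API resources (names, labels, and the Service are hypothetical; the `v1alpha1` API surface and GatewayClass naming may change as the API evolves):

```yaml
# A Gateway managed by the GKE Gateway controller, with an HTTPRoute
# selected by label that forwards traffic to a backend Service.
kind: Gateway
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: external-http
spec:
  gatewayClassName: gke-l7-gxlb
  listeners:
  - protocol: HTTP
    port: 80
    routes:
      kind: HTTPRoute
      selector:
        matchLabels:
          gateway: external-http
---
kind: HTTPRoute
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: store
  labels:
    gateway: external-http
spec:
  rules:
  - forwardTo:
    - serviceName: store-v1
      port: 8080
```

The split between Gateway (owned by platform admins) and HTTPRoute (owned by app teams) is what enables the multi-tenant sharing described above.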
      • See Network Performance for Google Cloud in Performance Dashboard–The Google Cloud performance view, part of the Network Intelligence Center, provides packet loss and latency metrics for traffic on Google Cloud. It allows users to do informed planning of their deployment architecture, as well as determine in real time the answer to the most common troubleshooting question: "Is it Google or is it me?" The Google Cloud performance view is now open for all Google Cloud customers as a public preview. Check it out.
      • Optimizing data in Google Sheets allows users to create no-code apps–Format columns and tables in Google Sheets to best position your data to transform into a fully customized, successful app–no coding necessary. Read our four best Google Sheets tips.
      • Automation bots with AppSheet Automation–AppSheet recently released AppSheet Automation, infusing Google AI capabilities into AppSheet's trusted no-code app development platform. Learn step by step how to build your first automation bot on AppSheet here.
      • Google Cloud announces a new region in Israel–Our new region in Israel will make it easier for customers to serve their own users faster, more reliably and securely. Read more.
      • New multi-instance NVIDIA GPUs on GKE–We’re launching support for multi-instance GPUs in GKE (currently in Preview), which will help you drive better value from your GPU investments. Read more.
      • Partnering with NSF to advance networking innovation–We announced our partnership with the U.S. National Science Foundation (NSF), joining other industry partners and federal agencies, as part of a combined $40 million investment in academic research for Resilient and Intelligent Next-Generation (NextG) Systems, or RINGS. Read more.
      • Creating a policy contract with Configuration as Data–Configuration as Data is an emerging cloud infrastructure management paradigm that allows developers to declare the desired state of their applications and infrastructure, without specifying the precise actions or steps for how to achieve it. However, declaring a configuration is only half the battle: you also want policy that defines how a configuration is to be used. This post shows you how.
      • Google Cloud products deliver real-time data solutions–Seven-Eleven Japan built Seven Central, its new platform for digital transformation, on Google Cloud. Powered by BigQuery, Cloud Spanner, and Apigee API management, Seven Central presents easy to understand data, ultimately allowing for quickly informed decisions. Read their story here.

      Week of Apr 19-Apr 23, 2021

      • Extreme PD is now GA–On April 20th, Google Cloud’s Persistent Disk launched general availability of Extreme PD, a high performance block storage volume with provisioned IOPS and up to 2.2 GB/s of throughput. Learn more.
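A minimal sketch (disk name, VM name, zone, size, and IOPS target are hypothetical) of provisioning an Extreme PD volume:

```shell
# Create an Extreme PD with explicitly provisioned IOPS,
# then attach it to an existing VM.
gcloud compute disks create my-extreme-disk \
  --zone=us-central1-a \
  --type=pd-extreme \
  --size=1TB \
  --provisioned-iops=50000

gcloud compute instances attach-disk my-vm \
  --zone=us-central1-a \
  --disk=my-extreme-disk
```

Because IOPS are provisioned rather than derived from size, the volume can be sized for capacity and tuned for performance independently.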

      • Research: How data analytics and intelligence tools will play a key role post-COVID-19–A recent Google-commissioned study by IDG highlighted the role of data analytics and intelligent solutions when it comes to helping businesses separate from their competition. The survey of 2,000 IT leaders across the globe reinforced the notion that the ability to derive insights from data will go a long way towards determining which companies win in this new era. Learn more or download the study.

      • Introducing PHP on Cloud Functions–We’re bringing support for PHP, a popular general-purpose programming language, to Cloud Functions. With the Functions Framework for PHP, you can write idiomatic PHP functions to build business-critical applications and integration layers. And with Cloud Functions for PHP, now available in Preview, you can deploy functions in a fully managed PHP 7.4 environment, complete with access to resources in a private VPC network. Learn more.
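A sketch of an idiomatic HTTP function under the Functions Framework for PHP (the function name and greeting logic are hypothetical; the PSR-7 request signature follows the framework's documented style):

```php
<?php
// An HTTP-triggered function: receives a PSR-7 request,
// returns the response body as a string.
use Psr\Http\Message\ServerRequestInterface;

function helloHttp(ServerRequestInterface $request): string
{
    $name = $request->getQueryParams()['name'] ?? 'World';
    return sprintf('Hello, %s!', htmlspecialchars($name));
}
```

Deployment would then name this function as the entry point, e.g. `gcloud functions deploy helloHttp --runtime=php74 --trigger-http` (flags hedged; see the Preview documentation).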

      • Delivering our 2020 CCAG pooled audit–As our customers increased their use of cloud services to meet the demands of teleworking and aid in COVID-19 recovery, we’ve worked hard to meet our commitment to being the industry’s most trusted cloud, despite the global pandemic. We’re proud to announce that Google Cloud completed an annual pooled audit with the CCAG in a completely remote setting, and were the only cloud service provider to do so in 2020. Learn more.

      • Anthos 1.7 now available–We recently released Anthos 1.7, our run-anywhere Kubernetes platform that’s connected to Google Cloud, delivering an array of capabilities that make multicloud more accessible and sustainable. Learn more.

      • New Redis Enterprise for Anthos and GKE–We’re making Redis Enterprise for Anthos and Google Kubernetes Engine (GKE) available in the Google Cloud Marketplace in private preview. Learn more.

      • Updates to Google Meet–We introduced a refreshed user interface (UI), enhanced reliability features powered by the latest Google AI, and tools that make meetings more engaging—even fun—for everyone involved. Learn more.

      • DocAI solutions now generally available–The Document AI (DocAI) platform, Lending DocAI, and Procurement DocAI, built on decades of AI innovation at Google, bring powerful and useful solutions across lending, insurance, government and other industries. Learn more.

      • Four consecutive years of 100% renewable energy–In 2020, Google again matched 100 percent of its global electricity use with purchases of renewable energy. All told, we’ve signed agreements to buy power from more than 50 renewable energy projects, with a combined capacity of 5.5 gigawatts–about the same as a million solar rooftops. Learn more.

      • Announcing the Google Cloud region picker–The Google Cloud region picker lets you assess key inputs like price, latency to your end users, and carbon footprint to help you choose which Google Cloud region to run on. Learn more.

      • Google Cloud launches new security solution WAAP–Web App and API Protection (WAAP) combines Google Cloud Armor, Apigee, and reCAPTCHA Enterprise to deliver improved threat protection, consolidated visibility, and greater operational efficiencies across clouds and on-premises environments. Learn more about WAAP here.
      • New in no-code–As discussed in our recent article, no-code hackathons are trending among innovative organizations. Since then, we've outlined how you can host one yourself specifically designed for your unique business innovation outcomes. Learn how here.
      • Google Cloud Referral Program now available—Now you can share the power of Google Cloud and earn product credit for every new paying customer you refer. Once you join the program, you’ll get a unique referral link that you can share with friends, clients, or others. Whenever someone signs up with your link, they’ll get a $350 product credit—that’s $50 more than the standard trial credit. When they become a paying customer, we’ll reward you with a $100 product credit in your Google Cloud account. Available in the United States, Canada, Brazil, and Japan. Apply for the Google Cloud Referral Program.

      Week of Apr 12-Apr 16, 2021

      • Announcing the Data Cloud Summit, May 26, 2021–At this half-day event, you’ll learn how leading companies like PayPal, Workday, Equifax, Zebra Technologies, Commonwealth Care Alliance and many others are driving competitive differentiation using Google Cloud technologies to build their data clouds and transform data into value that drives innovation. Learn more and register at no cost.
      • Announcing the Financial Services Summit, May 27, 2021–In this two-hour event, you’ll learn how Google Cloud is helping financial institutions including PayPal, Global Payments, HSBC, Credit Suisse, and more unlock new possibilities and accelerate business through innovation and better customer experiences. Learn more and register for free: Global & EMEA.
      • How Google Cloud is enabling vaccine equity–In our latest update, we share more on how we’re working with US state governments to help produce equitable vaccination strategies at scale. Learn more.
      • The new Google Cloud region in Warsaw is open–The Google Cloud region in Warsaw is now ready for business, opening doors for organizations in Central and Eastern Europe. Learn more.
      • AppSheet Automation is now GA–Google Cloud’s AppSheet launches general availability of AppSheet Automation, a unified development experience for citizen and professional developers alike to build custom applications with automated processes, all without coding. Learn how companies and employees are reclaiming their time and talent with AppSheet Automation here.
      • Introducing SAP Integration with Cloud Data Fusion–Google Cloud native data integration platform Cloud Data Fusion now offers the capability to seamlessly get data out of SAP Business Suite, SAP ERP and S/4HANA. Learn more.

      Week of Apr 5-Apr 9, 2021

      • New Certificate Authority Service (CAS) whitepaper–“How to deploy a secure and reliable public key infrastructure with Google Cloud Certificate Authority Service” (written by Mark Cooper of PKI Solutions and Anoosh Saboori of Google Cloud) covers security and architectural recommendations for the use of the Google Cloud CAS by organizations, and describes critical concepts for securing and deploying a PKI based on CAS. Learn more or read the whitepaper.
      • Active Assist’s new feature, predictive autoscaling, helps improve response times for your applications–When you enable predictive autoscaling, Compute Engine forecasts future load based on your Managed Instance Group’s (MIG) history and scales it out in advance of predicted load, so that new instances are ready to serve when the load arrives. Without predictive autoscaling, an autoscaler can only scale a group reactively, based on observed changes in load in real time. With predictive autoscaling enabled, the autoscaler works with real-time data as well as with historical data to cover both the current and forecasted load. That makes predictive autoscaling ideal for those apps with long initialization times and whose workloads vary predictably with daily or weekly cycles. For more information, see How predictive autoscaling works or check if predictive autoscaling is suitable for your workload, and to learn more about other intelligent features, check out Active Assist.
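      Enabling it on an existing MIG is one flag on the autoscaler. An illustrative sketch with placeholder names and targets (predictive mode applies to CPU-utilization-based scaling):

```shell
gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone=us-central1-a \
  --max-num-replicas=20 \
  --target-cpu-utilization=0.65 \
  --cpu-utilization-predictive-method=optimize-availability
```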
      • Introducing Dataprep BigQuery pushdown–BigQuery pushdown gives you the flexibility to run jobs using either BigQuery or Dataflow. If you select BigQuery, then Dataprep can automatically determine if data pipelines can be partially or fully translated in a BigQuery SQL statement. Any portions of the pipeline that cannot be run in BigQuery are executed in Dataflow. Utilizing the power of BigQuery results in highly efficient data transformations, especially for manipulations such as filters, joins, unions, and aggregations. This leads to better performance, optimized costs, and increased security with IAM and OAuth support. Learn more.
      • Announcing the Google Cloud Retail & Consumer Goods Summit–The Google Cloud Retail & Consumer Goods Summit brings together technology and business insights, the key ingredients for any transformation. Whether you're responsible for IT, data analytics, supply chains, or marketing, please join! Building connections and sharing perspectives cross-functionally is important to reimagining yourself, your organization, or the world. Learn more or register for free.
      • New IDC whitepaper assesses multicloud as a risk mitigation strategy–To better understand the benefits and challenges associated with a multicloud approach, we supported IDC’s new whitepaper that investigates how multicloud can help regulated organizations mitigate the risks of using a single cloud vendor. The whitepaper looks at different approaches to multi-vendor and hybrid clouds taken by European organizations and how these strategies can help organizations address concentration risk and vendor-lock in, improve their compliance posture, and demonstrate an exit strategy. Learn more or download the paper.
      • Introducing request priorities for Cloud Spanner APIs–You can now specify request priorities for some Cloud Spanner APIs. By assigning a HIGH, MEDIUM, or LOW priority to a specific request, you can now convey the relative importance of workloads, to better align resource usage with performance objectives. Learn more.
      • How we’re working with governments on climate goals–Google Sustainability Officer Kate Brandt shares more on how we’re partnering with governments around the world to provide our technology and insights to drive progress in sustainability efforts. Learn more.

      Week of Mar 29-Apr 2, 2021

      • Why Google Cloud is the ideal platform for Block.one and other DLT companies–Late last year, Google Cloud joined the EOS community, a leading open-source platform for blockchain innovation and performance, and is taking steps to support the EOS Public Blockchain by becoming a block producer (BP). At the time, we outlined how our planned participation underscores the importance of blockchain to the future of business, government, and society. We're sharing more on why Google Cloud is uniquely positioned to be an excellent partner for Block.one and other distributed ledger technology (DLT) companies. Learn more.
      • New whitepaper: Scaling certificate management with Certificate Authority Service–As Google Cloud’s Certificate Authority Service (CAS) approaches general availability, we want to help customers understand the service better. Customers have asked us how CAS fits into our larger security story and how CAS works for various use cases. Our new white paper answers these questions and more. Learn more and download the paper.
      • Build a consistent approach for API consumers–Learn the differences between REST and GraphQL, as well as how to apply REST-based practices to GraphQL. No matter the approach, discover how to manage and treat both options as API products here.

      • Apigee X makes it simple to apply Cloud CDN to APIs–With Apigee X and Cloud CDN, organizations can expand their API programs' global reach. Learn how to deploy APIs across 24 regions and 73 zones here.

      • Enabling data migration with Transfer Appliances in APAC—We’re announcing the general availability of Transfer Appliances TA40/TA300 in Singapore. Customers are looking for fast, secure, and easy-to-use options to migrate their workloads to Google Cloud, and we are addressing their needs with Transfer Appliances globally in the US, EU and APAC. Learn more about Transfer Appliances TA40 and TA300.

      • Windows Authentication is now supported on Cloud SQL for SQL Server in public preview—We’ve launched seamless integration with Google Cloud’s Managed Service for Microsoft Active Directory (AD). This capability is a critical requirement to simplify identity management and streamline the migration of existing SQL Server workloads that rely on AD for access control. Learn more or get started.
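      A sketch with hypothetical names (while the feature is in public preview, the flag may require the gcloud beta track):

```shell
gcloud sql instances create my-sqlserver \
  --database-version=SQLSERVER_2017_STANDARD \
  --region=us-central1 \
  --root-password=change-me \
  --active-directory-domain=ad.example.com
```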

      • Using Cloud AI to whip up new treats with Mars Maltesers—Maltesers, a popular British candy made by Mars, teamed up with our own AI baker and ML engineer extraordinaire, Sara Robinson, to create a brand new dessert recipe with Google Cloud AI. Find out what happened (recipe included).

      • Simplifying data lake management with Dataproc Metastore, now GA—Dataproc Metastore, a fully managed, serverless technical metadata repository based on the Apache Hive metastore, is now generally available. Enterprises building and migrating open source data lakes to Google Cloud now have a central and persistent metastore for their open source data analytics frameworks. Learn more.

      • Introducing the Echo subsea cable—We announced our investment in Echo, the first-ever cable to directly connect the U.S. to Singapore with direct fiber pairs over an express route. Echo will run from Eureka, California to Singapore, with a stop-over in Guam, and plans to also land in Indonesia. Additional landings are possible in the future. Learn more.

      Week of Mar 22-Mar 26, 2021

      • 10 new videos bring Google Cloud to life—The Google Cloud Tech YouTube channel’s latest video series explains cloud tools for technical practitioners in about 5 minutes each. Learn more.
      • BigQuery named a Leader in the 2021 Forrester Wave: Cloud Data Warehouse, Q1 2021 report—Forrester gave BigQuery a score of 5 out of 5 across 19 different criteria. Learn more in our blog post, or download the report.
      • Charting the future of custom compute at Google—To meet users’ performance needs at low power, we’re doubling down on custom chips that use System on a Chip (SoC) designs. Learn more.
      • Introducing Network Connectivity Center—We announced Network Connectivity Center, which provides a single management experience to easily create, connect, and manage heterogeneous on-prem and cloud networks leveraging Google’s global infrastructure. Network Connectivity Center serves as a vantage point to seamlessly connect VPNs, partner and dedicated interconnects, as well as third-party routers and Software-Defined WANs, helping you optimize connectivity, reduce operational burden and lower costs—wherever your applications or users may be. Learn more.
      • Making it easier to get Compute Engine resources for batch processing—We announced a new method of obtaining Compute Engine instances for batch processing that accounts for availability of resources in zones of a region. Now available in preview for regional managed instance groups, you can do this simply by specifying the ANY value in the API. Learn more.
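      With gcloud, the same opt-in looks roughly like this for a regional MIG (placeholder names; while the feature is in preview, the flag may sit under the beta track):

```shell
gcloud compute instance-groups managed create batch-workers \
  --region=us-central1 \
  --template=batch-template \
  --size=50 \
  --target-distribution-shape=any
```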
      • Next-gen virtual automotive showrooms are here, thanks to Google Cloud, Unreal Engine, and NVIDIA—We teamed up with Unreal Engine, the open and advanced real-time 3D creation game engine, and NVIDIA, inventor of the GPU, to launch new virtual showroom experiences for automakers. Taking advantage of the NVIDIA RTX platform on Google Cloud, these showrooms provide interactive 3D experiences, photorealistic materials and environments, and up to 4K cloud streaming on mobile and connected devices. Today, in collaboration with MHP, the Porsche IT consulting firm, and MONKEYWAY, a real-time 3D streaming solution provider, you can see our first virtual showroom, the Pagani Immersive Experience Platform. Learn more.
      • Troubleshoot network connectivity with Dynamic Verification (public preview)—You can now check packet loss rate and one-way network latency between two VMs on GCP. This capability is an addition to existing Network Intelligence Center Connectivity Tests which verify reachability by analyzing network configuration in your VPCs. See more in our documentation.
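      A connectivity test between two VMs can also be created from the CLI. A sketch with made-up project and instance names:

```shell
gcloud network-management connectivity-tests create vm-to-vm \
  --source-instance=projects/my-proj/zones/us-central1-a/instances/vm-a \
  --destination-instance=projects/my-proj/zones/us-central1-b/instances/vm-b \
  --protocol=TCP \
  --destination-port=443
```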
      • Helping U.S. states get the COVID-19 vaccine to more people—In February, we announced our Intelligent Vaccine Impact solution (IVIs) to help communities rise to the challenge of getting vaccines to more people quickly and effectively. Many states have deployed IVIs, and have found it able to meet demand and easily integrate with their existing technology infrastructures. Google Cloud is proud to partner with a number of states across the U.S., including Arizona, the Commonwealth of Massachusetts, North Carolina, Oregon, and the Commonwealth of Virginia to support vaccination efforts at scale. Learn more.

      Week of Mar 15-Mar 19, 2021

      • A2 VMs now GA: The largest GPU cloud instances with NVIDIA A100 GPUs—We’re announcing the general availability of A2 VMs based on the NVIDIA Ampere A100 Tensor Core GPUs in Compute Engine. This means customers around the world can now run their NVIDIA CUDA-enabled machine learning (ML) and high performance computing (HPC) scale-out and scale-up workloads more efficiently and at a lower cost. Learn more.
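      Creating a single-A100 VM looks roughly like this (placeholder names; the Deep Learning VM image family shown is one convenient, CUDA-preinstalled option, not the only one):

```shell
gcloud compute instances create a100-vm \
  --zone=us-central1-a \
  --machine-type=a2-highgpu-1g \
  --image-family=common-cu110 \
  --image-project=deeplearning-platform-release \
  --maintenance-policy=TERMINATE \
  --boot-disk-size=200GB
```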
      • Earn the new Google Kubernetes Engine skill badge for free—We’ve added a new skill badge this month, Optimize Costs for Google Kubernetes Engine (GKE), which you can earn for free when you sign up for the Kubernetes track of the skills challenge. The skills challenge provides 30 days free access to Google Cloud labs and gives you the opportunity to earn skill badges to showcase different cloud competencies to employers. Learn more.
      • Now available: carbon free energy percentages for our Google Cloud regions—Google first achieved carbon neutrality in 2007, and since 2017 we’ve purchased enough solar and wind energy to match 100% of our global electricity consumption. Now we’re building on that progress to target a new sustainability goal: running our business on carbon-free energy 24/7, everywhere, by 2030. Beginning this week, we’re sharing data about how we are performing against that objective so our customers can select Google Cloud regions based on the carbon-free energy supplying them. Learn more.
      • Increasing bandwidth to C2 and N2 VMs—We announced the public preview of 100, 75, and 50 Gbps high-bandwidth network configurations for General Purpose N2 and Compute Optimized C2 Compute Engine VM families as part of continuous efforts to optimize our Andromeda host networking stack. This means we can now offer higher-bandwidth options on existing VM families when using the Google Virtual NIC (gVNIC). These VMs were previously limited to 32 Gbps. Learn more.
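      Opting in means selecting gVNIC at VM creation, using an image that ships the gVNIC driver (Container-Optimized OS shown here; names are placeholders):

```shell
gcloud compute instances create hi-bw-vm \
  --zone=us-central1-a \
  --machine-type=n2-standard-128 \
  --network-interface=nic-type=GVNIC \
  --image-family=cos-stable \
  --image-project=cos-cloud
```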
      • New research on how COVID-19 changed the nature of IT—To learn more about the impact of COVID-19 and the resulting implications to IT, Google commissioned a study by IDG to better understand how organizations are shifting their priorities in the wake of the pandemic. Learn more and download the report.

      • New in API security—Google Cloud Apigee API management platform's latest release, Apigee X, works with Cloud Armor to protect your APIs with advanced security technology including DDoS protection, geo-fencing, OAuth, and API keys. Learn more about our integrated security enhancements here.

      • Troubleshoot errors more quickly with Cloud Logging—The Logs Explorer now automatically breaks down your log results by severity, making it easy to spot spikes in errors at specific times. Learn more about our new histogram functionality here.

      Week of Mar 8-Mar 12, 2021

      • Introducing #AskGoogleCloud on Twitter and YouTube—Our first segment on March 12th features Developer Advocates Stephanie Wong, Martin Omander and James Ward to answer questions on the best workloads for serverless, the differences between “serverless” and “cloud native,” how to accurately estimate costs for using Cloud Run, and much more. Learn more.
      • Learn about the value of no-code hackathons—Google Cloud’s no-code application development platform, AppSheet, makes it possible for “non-technical” employees to compete in hackathons, with no coding required. Learn about Globe Telecom’s no-code hackathon as well as their winning AppSheet app here.
      • Introducing Cloud Code Secret Manager Integration—Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud. Integrating Cloud Code with Secret Manager brings the powerful capabilities of both these tools together so you can create and manage your secrets right from within your preferred IDE, whether that be VS Code, IntelliJ, or Cloud Shell Editor. Learn more.
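      Both tools drive the same Secret Manager primitives, which you can also exercise directly from the CLI (placeholder secret name and value):

```shell
# Create a secret, add a version, then read it back
gcloud secrets create api-key --replication-policy=automatic
echo -n "s3cr3t-value" | gcloud secrets versions add api-key --data-file=-
gcloud secrets versions access latest --secret=api-key
```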
      • Flexible instance configurations in Cloud SQL—Cloud SQL for MySQL now supports flexible instance configurations which offer you the extra freedom to configure your instance with the specific number of vCPUs and GB of RAM that fits your workload. To set up a new instance with a flexible instance configuration, see our documentation here.
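      For example (illustrative values; memory must fall within the supported per-vCPU range and be a multiple of 256 MB):

```shell
gcloud sql instances create my-mysql \
  --database-version=MYSQL_8_0 \
  --region=us-central1 \
  --cpu=6 \
  --memory=20GB
```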
      • The Cloud Healthcare Consent Management API is now generally available—The Healthcare Consent Management API is now GA, giving customers the ability to greatly scale the management of consents to meet increasing need, particularly amidst the emerging task of managing health data for new care and research scenarios. Learn more.

      Week of Mar 1-Mar 5, 2021

      • Cloud Run is now available in all Google Cloud regions. Learn more.
      • Introducing Apache Spark Structured Streaming connector for Pub/Sub Lite—We’re announcing the release of an open source connector to read streams of messages from Pub/Sub Lite into Apache Spark. The connector works in all Apache Spark 2.4.X distributions, including Dataproc, Databricks, or manual Spark installations. Learn more.
      • Google Cloud Next ‘21 is October 12-14, 2021—Join us and learn how the most successful companies have transformed their businesses with Google Cloud. Sign up at g.co/cloudnext for updates. Learn more.
      • Hierarchical firewall policies now GA—Hierarchical firewalls provide a means to enforce firewall rules at the organization and folder levels in the GCP Resource Hierarchy. This allows security administrators at different levels in the hierarchy to define and deploy consistent firewall rules across a number of projects so they're applied to all VMs in currently existing and yet-to-be-created projects. Learn more.
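      A sketch of the typical flow, using made-up organization, folder, and policy names (the rule below allows SSH ingress from the IAP range, as an example):

```shell
# Create an org-level policy, add a rule, then associate it with a folder
gcloud compute firewall-policies create \
  --short-name=org-baseline --organization=123456789
gcloud compute firewall-policies rules create 1000 \
  --firewall-policy=org-baseline --organization=123456789 \
  --action=allow --direction=INGRESS \
  --src-ip-ranges=35.235.240.0/20 --layer4-configs=tcp:22
gcloud compute firewall-policies associations create \
  --firewall-policy=org-baseline --organization=123456789 \
  --folder=987654321
```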
      • Announcing the Google Cloud Born-Digital Summit—Over this half-day event, we’ll highlight proven best-practice approaches to data, architecture, diversity & inclusion, and growth with Google Cloud solutions. Learn more and register for free.
      • Google Cloud products in 4 words or less (2021 edition)—Our popular “4 words or less Google Cloud developer’s cheat sheet” is back and updated for 2021. Learn more.
      • Gartner names Google a leader in its 2021 Magic Quadrant for Cloud AI Developer Services report—We believe this recognition is based on Gartner’s evaluation of Google Cloud’s language, vision, conversational, and structured data services and solutions for developers. Learn more.
      • Announcing the Risk Protection Program—The Risk Protection Program offers customers peace of mind through the technology to secure their data, the tools to monitor the security of that data, and an industry-first cyber policy offered by leading insurers. Learn more.
      • Building the future of work—We’re introducing new innovations in Google Workspace to help people collaborate and find more time and focus, wherever and however they work. Learn more.

      • Assured Controls and expanded Data Regions—We’ve added new information governance features in Google Workspace to help customers control their data based on their business goals. Learn more.

      Week of Feb 22-Feb 26, 2021

      • 21 Google Cloud tools explained in 2 minutes—Need a quick overview of Google Cloud core technologies? Quickly learn these 21 Google Cloud products—each explained in under two minutes. Learn more.

      • BigQuery materialized views now GA—Materialized views (MVs) are precomputed views that periodically cache the results of a query to provide customers increased performance and efficiency. Learn more.

      • New in BigQuery BI Engine—We’re extending BigQuery BI Engine to work with any BI or custom dashboarding applications that require sub-second query response times. In this preview, BI Engine will work seamlessly with Looker and other popular BI tools such as Tableau and Power BI without requiring any change to the BI tools. Learn more.

      • Dataproc now supports Shielded VMs—All Dataproc clusters created using Debian 10 or Ubuntu 18.04 operating systems now use Shielded VMs by default and customers can provide their own configurations for secure boot, vTPM, and Integrity Monitoring. This feature is just one of the many ways customers that have migrated their Hadoop and Spark clusters to GCP experience continued improvements to their security postures without any additional cost.

      • New Cloud Security Podcast by Google—Our new podcast brings you stories and insights on security in the cloud, delivering security from the cloud, and, of course, on what we’re doing at Google Cloud to help keep customer data safe and workloads secure. Learn more.

      • New in Conversational AI and Apigee technology—Australian retailer Woolworths provides seamless customer experiences with their virtual agent, Olive. Apigee API Management and Dialogflow technology allows customers to talk to Olive through voice and chat. Learn more.

      • Introducing GKE Autopilot—GKE already offers an industry-leading level of automation that makes setting up and operating a Kubernetes cluster easier and more cost effective than do-it-yourself and other managed offerings. Autopilot represents a significant leap forward. In addition to the fully managed control plane that GKE has always provided, using the Autopilot mode of operation automatically applies industry best practices and can eliminate all node management operations, maximizing your cluster efficiency and helping to provide a stronger security posture. Learn more.
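      Getting a cluster in Autopilot mode is a single command (placeholder name and region):

```shell
# Node provisioning, scaling, and upgrades are handled for you
gcloud container clusters create-auto my-autopilot-cluster \
  --region=us-central1
```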

      • Partnering with Intel to accelerate cloud-native 5G—As we continue to grow cloud-native services for the telecommunications industry, we’re excited to announce a collaboration with Intel to develop reference architectures and integrated solutions for communications service providers to accelerate their deployment of 5G and edge network solutions. Learn more.

      • Veeam Backup for Google Cloud now available—Veeam Backup for Google Cloud automates Google-native snapshots to securely protect VMs across projects and regions with ultra-low RPOs and RTOs, and store backups in Google Object Storage to enhance data protection while ensuring lower costs for long-term retention.

      • Migrate for Anthos 1.6 GA—With Migrate for Anthos, customers and partners can automatically migrate and modernize traditional application workloads running in VMs into containers running on Anthos or GKE. Included in this new release: 

        • In-place modernization for Anthos on AWS (Public Preview) to help customers accelerate on-boarding to Anthos AWS while leveraging their existing investment in AWS data sources, projects, VPCs, and IAM controls.

        • Additional Docker registries and artifacts repositories support (GA) including AWS ECR, basic-auth docker registries, and AWS S3 storage to provide further flexibility for customers using Anthos Anywhere (on-prem, AWS, etc.).

        • HTTPS Proxy support (GA) to enable M4A functionality (access to external image repos and other services) where a proxy is used to control external access.

      Week of Feb 15-Feb 19, 2021

      • Introducing Cloud Domains in preview—Cloud Domains simplify domain registration and management within Google Cloud, improve the custom domain experience for developers, increase security, and support stronger integrations around DNS and SSL. Learn more.

      • Announcing Databricks on Google Cloud—Our partnership with Databricks enables customers to accelerate Databricks implementations by simplifying their data access, by jointly giving them powerful ways to analyze their data, and by leveraging our combined AI and ML capabilities to impact business outcomes. Learn more.

      • Service Directory is GA—As the number and diversity of services grows, it becomes increasingly challenging to maintain an inventory of all of the services across an organization. Last year, we launched Service Directory to help simplify the problem of service management. Today, it’s generally available. Learn more.

      Week of Feb 8-Feb 12, 2021

      • Introducing Bare Metal Solution for SAP workloads—We’ve expanded our Bare Metal Solution—dedicated, single-tenant systems designed specifically to run workloads that are too large or otherwise unsuitable for standard, virtualized environments—to include SAP-certified hardware options, giving SAP customers great options for modernizing their biggest and most challenging workloads. Learn more.

      • 9TB SSDs bring ultimate IOPS/$ to Compute Engine VMs—You can now attach 6TB and 9TB Local SSD to second-generation general-purpose N2 Compute Engine VMs, for great IOPS per dollar. Learn more.

      • Supporting the Python ecosystem—As part of our longstanding support for the Python ecosystem, we are happy to increase our support for the Python Software Foundation, the non-profit behind the Python programming language, ecosystem and community. Learn more.

      • Migrate to regional backend services for Network Load Balancing—We now support backend services with Network Load Balancing—a significant enhancement over the prior approach, target pools, providing a common unified data model for all our load-balancing family members and accelerating the delivery of exciting features on Network Load Balancing. Learn more.
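      The migration target looks roughly like this (a sketch with placeholder names; an existing instance group serves as the backend):

```shell
gcloud compute health-checks create tcp my-hc \
  --region=us-central1 --port=80
gcloud compute backend-services create nlb-bs \
  --region=us-central1 --load-balancing-scheme=EXTERNAL \
  --protocol=TCP --health-checks=my-hc --health-checks-region=us-central1
gcloud compute backend-services add-backend nlb-bs \
  --region=us-central1 \
  --instance-group=my-mig --instance-group-zone=us-central1-a
gcloud compute forwarding-rules create nlb-fr \
  --region=us-central1 --load-balancing-scheme=EXTERNAL \
  --ports=80 --backend-service=nlb-bs
```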

      Week of Feb 1-Feb 4, 2021

      • Apigee launches Apigee X—Apigee celebrates its 10 year anniversary with Apigee X, a new release of the Apigee API management platform. Apigee X harnesses the best of Google technologies to accelerate and globalize your API-powered digital initiatives. Learn more about Apigee X and digital excellence here.
      • Celebrating the success of Black founders with Google Cloud during Black History Month—February is Black History Month, a time for us to come together to celebrate and remember the important people and history of African heritage. Over the next four weeks, we will highlight four Black-led startups and how they use Google Cloud to grow their businesses. Our first feature highlights TQIntelligence and its founder, Yared.

      Week of Jan 25-Jan 29, 2021

      • BeyondCorp Enterprise now generally available—BeyondCorp Enterprise is a zero trust solution, built on Google’s global network, which provides customers with simple and secure access to applications and cloud resources and offers integrated threat and data protection. To learn more, read the blog post, visit our product homepage, and register for our upcoming webinar.

      Week of Jan 18-Jan 22, 2021

      • Cloud Operations Sandbox now available—Cloud Operations Sandbox is an open-source tool that helps you learn SRE practices from Google and apply them on cloud services using Google Cloud’s operations suite (formerly Stackdriver), with everything you need to get started in one click. You can read our blog post, or get started by visiting cloud-ops-sandbox.dev, exploring the project repo, and following along in the user guide.

      • New data security strategy whitepaper—Our new whitepaper shares our best practices for how to deploy a modern and effective data security program in the cloud. Read the blog post or download the paper.   

      • WebSockets, HTTP/2 and gRPC bidirectional streams come to Cloud Run—With these capabilities, you can deploy new kinds of applications to Cloud Run that were not previously supported, while taking advantage of serverless infrastructure. These features are now available in public preview for all Cloud Run locations. Read the blog post or check out the WebSockets demo app or the sample h2c server app.
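      Deploying with the new capabilities is mostly a matter of flags. A sketch with a placeholder image (--use-http2 switches the service to end-to-end HTTP/2, which gRPC bidirectional streaming needs):

```shell
gcloud run deploy ws-demo \
  --image=gcr.io/my-project/ws-demo \
  --timeout=900 \
  --use-http2
```

      WebSockets need no extra flag, but long-lived connections are still bounded by the request timeout, so raising it as shown is usually advisable.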

      • New tutorial: Build a no-code workout app in 5 steps—Looking to crush your new year’s resolutions? Using AppSheet, Google Cloud’s no-code app development platform, you can build a custom fitness app that can do things like record your sets, reps and weights, log your workouts, and show you how you’re progressing. Learn how.

      Week of Jan 11-Jan 15, 2021

      • State of API Economy 2021 Report now available—Google Cloud details the changing role of APIs in 2020 amidst the COVID-19 pandemic, informed by a comprehensive study of Apigee API usage behavior across industry, geography, enterprise size, and more. Discover these 2020 trends along with a projection of what to expect from APIs in 2021. Read our blog post here or download and read the report here.
      • New in the state of no-code—Google Cloud's AppSheet looks back at the key no-code application development themes of 2020. AppSheet contends the rising number of citizen developer app creators will ultimately change the state of no-code in 2021. Read more here.

      Week of Jan 4-Jan 8, 2021

      • Last year's most popular API posts—In an arduous year, thoughtful API design and strategy are critical to empowering developers and companies to use technology for global good. Google Cloud looks back at the must-read API posts in 2020. Read it here.

      Week of Dec 21-Dec 25, 2020

      Week of Dec 14-Dec 18, 2020

      • Memorystore for Redis enables TLS encryption support (Preview)—With this release, you can now use Memorystore for applications requiring sensitive data to be encrypted between the client and the Memorystore instance. Read more here.
      • Monitoring Query Language (MQL) for Cloud Monitoring is now generally available—Monitoring Query Language provides developers and operators on IT and development teams powerful metric querying, analysis, charting, and alerting capabilities. This functionality is needed for Monitoring use cases that include troubleshooting outages, root cause analysis, custom SLI / SLO creation, reporting and analytics, complex alert logic, and more. Learn more.

      Week of Dec 7-Dec 11, 2020

      • Memorystore for Redis now supports Redis AUTH—With this release you can now use OSS Redis AUTH feature with Memorystore for Redis instances. Read more here.
      • New in serverless computing—Google Cloud API Gateway and its service-first approach to developing serverless APIs helps organizations accelerate innovation by eliminating scalability and security bottlenecks for their APIs. Discover more benefits here.
      • Environmental Dynamics, Inc. makes a big move to no-code—The environmental consulting company EDI built and deployed 35+ business apps with no coding skills necessary with Google Cloud’s AppSheet. This no-code effort not only empowered field workers, but also saved employees over 2,550 hours a year. Get the full story here.
      • Introducing Google Workspace for Government—Google Workspace for Government is an offering that brings the best of Google Cloud’s collaboration and communication tools to the government with pricing that meets the needs of the public sector. Whether it’s powering social care visits, employment support, or virtual courts, Google Workspace helps governments meet the unique challenges they face as they work to provide better services in an increasingly virtual world. Learn more.

      Week of Nov 30-Dec 4, 2020

      • Google enters agreement to acquire Actifio—Actifio, a leader in backup and disaster recovery (DR), offers customers the opportunity to protect virtual copies of data in their native format, manage these copies throughout their entire lifecycle, and use these copies for scenarios like development and test. This planned acquisition further demonstrates Google Cloud’s commitment to helping enterprises protect workloads on-premises and in the cloud. Learn more.
      • Traffic Director can now send traffic to services and gateways hosted outside of Google Cloud—Traffic Director support for Hybrid Connectivity Network Endpoint Groups (NEGs), now generally available, enables services in your VPC network to interoperate more seamlessly with services in other environments. It also enables you to build advanced solutions based on Google Cloud's portfolio of networking products, such as Cloud Armor protection for your private on-prem services. Learn more.
      • Google Cloud launches the Healthcare Interoperability Readiness Program—This program, powered by APIs and Google Cloud’s Apigee, helps patients, doctors, researchers, and healthcare technologists alike by making patient data and healthcare data more accessible and secure. Learn more here.
      • Container Threat Detection in Security Command Center—We announced the general availability of Container Threat Detection, a built-in service in Security Command Center. This release includes multiple detection capabilities to help you monitor and secure your container deployments in Google Cloud. Read more here.
      • Anthos on bare metal now GA—Anthos on bare metal opens up new possibilities for how you run your workloads, and where. You can run Anthos on your existing virtualized infrastructure, or eliminate the dependency on a hypervisor layer to modernize applications while reducing costs. Learn more.

      Week of Nov 23-27, 2020

      • Tuning control support in Cloud SQL for MySQL—We’ve made all 80 flags that were previously in preview now generally available (GA), empowering you with the controls you need to optimize your databases. See the full list here.
      • New in BigQuery ML—We announced the general availability of boosted trees using XGBoost, deep neural networks (DNNs) using TensorFlow, and model export for online prediction. Learn more.
      • New AI/ML in retail report—We recently commissioned a survey of global retail executives to better understand which AI/ML use cases across the retail value chain drive the highest value and returns in retail, and what retailers need to keep in mind when going after these opportunities. Learn more  or read the report.

      Week of Nov 16-20, 2020

      • New whitepaper on how AI helps the patent industry—Our new paper outlines a methodology to train a BERT (bidirectional encoder representation from transformers) model on over 100 million patent publications from the U.S. and other countries using open-source tooling. Learn more or read the whitepaper.
      • Google Cloud support for .NET 5.0—Learn more about our support of .NET 5.0, as well as how to deploy it to Cloud Run.
      • .NET Core 3.1 now on Cloud Functions—With this integration you can write cloud functions using your favorite .NET Core 3.1 runtime with our Functions Framework for .NET for an idiomatic developer experience. Learn more.
      • Filestore Backups in preview—We announced the availability of the Filestore Backups preview in all regions, making it easier to migrate your business continuity, disaster recovery and backup strategy for your file systems in Google Cloud. Learn more.
      • Introducing Voucher, a service to help secure the container supply chain—Developed by the Software Supply Chain Security team at Shopify to work with Google Cloud tools, Voucher evaluates container images created by CI/CD pipelines and signs those images if they meet certain predefined security criteria. Binary Authorization then validates these signatures at deploy time, ensuring that only explicitly authorized code that meets your organizational policy and compliance requirements can be deployed to production. Learn more.
      • 10 most watched from Google Cloud Next ‘20: OnAir—Take a stroll through the 10 sessions that were most popular from Next OnAir, covering everything from data analytics to cloud migration to no-code development. Read the blog.
      • Artifact Registry is now GA—With support for container images, Maven, npm packages, and additional formats coming soon, Artifact Registry helps your organization benefit from scale, security, and standardization across your software supply chain. Read the blog.

      Week of Nov 9-13, 2020

      • Introducing the Anthos Developer Sandbox—The Anthos Developer Sandbox gives you an easy way to learn to develop on Anthos at no cost, available to anyone with a Google account. Read the blog.
      • Database Migration Service now available in preview—Database Migration Service (DMS) makes migrations to Cloud SQL simple and reliable. DMS supports migrations of self-hosted MySQL databases—either on-premises or in the cloud, as well as managed databases from other clouds—to Cloud SQL for MySQL. Support for PostgreSQL is currently available for limited customers in preview, with SQL Server coming soon. Learn more.
      • Troubleshoot deployments or production issues more quickly with new logs tailing—We’ve added support for a new API to tail logs with low latency. Using gcloud, it allows you the convenience of tail -f with the powerful query language and centralized logging solution of Cloud Logging. Learn more about this preview feature.
      • Regionalized log storage now available in 5 new regions in preview—You can now select where your logs are stored from one of five regions in addition to global—asia-east1, europe-west1, us-central1, us-east1, and us-west1. When you create a logs bucket, you can set the region in which you want to store your logs data. Get started with this guide.

      Week of Nov 2-6, 2020

      • Cloud SQL adds support for PostgreSQL 13—Shortly after its community GA, Cloud SQL has added support for PostgreSQL 13. You get access to the latest features of PostgreSQL while Cloud SQL handles the heavy operational lifting, so your team can focus on accelerating application delivery. Read more here.
      • Apigee creates value for businesses running on SAP—Google Cloud’s API Management platform Apigee is optimized for data insights and data monetization, helping businesses running on SAP innovate faster without fear of SAP-specific challenges to modernization. Read more here.
      • Document AI platform is live—The new Document AI (DocAI) platform, a unified console for document processing, is now available in preview. You can quickly access all parsers, tools and solutions (e.g. Lending DocAI, Procurement DocAI) with a unified API, enabling an end-to-end document solution from evaluation to deployment. Read the full story here or check it out in your Google Cloud console.
      • Accelerating data migration with Transfer Appliances TA40 and TA300—We’re announcing the general availability of new Transfer Appliances. Customers are looking for fast, secure and easy to use options to migrate their workloads to Google Cloud and we are addressing their needs with next generation Transfer Appliances. Learn more about Transfer Appliances TA40 and TA300.

      Week of Oct 26-30, 2020

      • B.H., Inc. accelerates digital transformation—The Utah-based contracting and construction company BHI eliminated its IT backlog when non-technical employees were empowered to build equipment inspection, productivity, and other custom apps using Google Workspace and the no-code app development platform AppSheet. Read the full story here.
      • Globe Telecom embraces no-code development—Google Cloud’s AppSheet empowers Globe Telecom employees to do more innovating with less code. The global communications company kickstarted their no-code journey by combining the power of AppSheet with a unique adoption strategy. As a result, AppSheet helped Globe Telecom employees build 59 business apps in just 8 weeks. Get the full story.
      • Cloud Logging now allows you to control access to logs via Log Views—Building on the control offered via Log Buckets (blog post), you can now configure who has access to logs based on the source project, resource type, or log name, all using standard IAM controls. Logs views, currently in Preview, can help you build a system using the principle of least privilege, limiting sensitive logs to only users who need this information. Learn more about Log Views.
      • Document AI is HIPAA compliant—Document AI now enables HIPAA compliance, so healthcare and life science customers such as health care providers, health plans, and life science organizations can unlock insights by quickly extracting structured data from medical documents while safeguarding individuals’ protected health information (PHI). Learn more about Google Cloud’s nearly 100 products that support HIPAA compliance.

      Week of Oct 19-23, 2020

      • Improved security and governance in Cloud SQL for PostgreSQL—Cloud SQL for PostgreSQL now integrates with Cloud IAM (preview) to provide simplified and consistent authentication and authorization. Cloud SQL has also enabled PostgreSQL Audit Extension (preview) for more granular audit logging. Read the blog.
      • Announcing the AI in Financial Crime Compliance webinar—Our executive digital forum will feature industry executives, academics, and former regulators who will discuss how AI is transforming financial crime compliance on November 17. Register now.
      • Transforming retail with AI/ML—New research provides insights on high value AI/ML use cases for food, drug, mass merchant and speciality retail that can drive significant value and build resilience for your business. Learn what the top use cases are for your sub-segment and read real world success stories. Download the ebook here and view this companion webinar which also features insights from Zulily.
      • New release of Migrate for Anthos—We’re introducing two important new capabilities in the 1.5 release of Migrate for Anthos, Google Cloud's solution to easily migrate and modernize applications currently running on VMs so that they instead run on containers in Google Kubernetes Engine or Anthos. The first is GA support for modernizing IIS apps running on Windows Server VMs. The second is a new utility that helps you identify which VMs in your existing environment are the best targets for modernization to containers. Start migrating or check out the assessment tool documentation (Linux | Windows).
      • New Compute Engine autoscaler controls—New scale-in controls in Compute Engine let you limit the VM deletion rate by preventing the autoscaler from reducing a MIG's size by more VM instances than your workload can tolerate to lose. Read the blog.
      • Lending DocAI in preview—Lending DocAI is a specialized solution in our Document AI portfolio for the mortgage industry that processes borrowers’ income and asset documents to speed up loan applications. Read the blog, or check out the product demo.

      Week of Oct 12-16, 2020

      • New maintenance controls for Cloud SQL—Cloud SQL now offers maintenance deny period controls, which allow you to prevent automatic maintenance from occurring during a 90-day time period. Read the blog.
      • Trends in volumetric DDoS attacks—This week we published a deep dive into DDoS threats, detailing the trends we’re seeing and giving you a closer look at how we prepare for multi-terabit attacks so your sites stay up and running. Read the blog.
      • New in BigQuery—We shared a number of updates this week, including new SQL capabilities, more granular control over your partitions with time unit partitioning, the general availability of Table ACLs, and BigQuery System Tables Reports, a solution that aims to help you monitor BigQuery flat-rate slot and reservation utilization by leveraging BigQuery’s underlying INFORMATION_SCHEMA views. Read the blog.
      • Cloud Code makes YAML easy for hundreds of popular Kubernetes CRDs—We announced authoring support for more than 400 popular Kubernetes CRDs out of the box, any existing CRDs in your Kubernetes cluster, and any CRDs you add from your local machine or a URL. Read the blog.
      • Google Cloud’s data privacy commitments for the AI era—We’ve outlined how our AI/ML Privacy Commitment reflects our belief that customers should have both the highest level of security and the highest level of control over data stored in the cloud. Read the blog.

      • New, lower pricing for Cloud CDN—We’ve reduced the price of cache fill (content fetched from your origin) charges across the board, by up to 80%, along with our recent introduction of a new set of flexible caching capabilities, to make it even easier to use Cloud CDN to optimize the performance of your applications. Read the blog.

      • Expanding the BeyondCorp Alliance—Last year, we announced our BeyondCorp Alliance with partners that share our Zero Trust vision. Today, we’re announcing new partners to this alliance. Read the blog.

      • New data analytics training opportunities—Throughout October and November, we’re offering a number of no-cost ways to learn data analytics, with trainings for beginners to advanced users. Learn more.

      • New BigQuery blog series—BigQuery Explained provides overviews on storage, data ingestion, queries, joins, and more. Read the series.

      Week of Oct 5-9, 2020

      • Introducing the Google Cloud Healthcare Consent Management API—This API gives healthcare application developers and clinical researchers a simple way to manage individuals’ consent of their health data, particularly important given the new and emerging virtual care and research scenarios related to COVID-19. Read the blog.

      • Announcing Google Cloud buildpacks—Based on the CNCF buildpacks v3 specification, these buildpacks produce container images that follow best practices and are suitable for running on all of our container platforms: Cloud Run (fully managed), Anthos, and Google Kubernetes Engine (GKE). Read the blog.

      • Providing open access to the Genome Aggregation Database (gnomAD)—Our collaboration with Broad Institute of MIT and Harvard provides free access to one of the world's most comprehensive public genomic datasets. Read the blog.

      • Introducing HTTP/gRPC server streaming for Cloud Run—Server-side HTTP streaming for your serverless applications running on Cloud Run (fully managed) is now available. This means your Cloud Run services can serve larger responses or stream partial responses to clients during the span of a single request, enabling quicker server response times for your applications. Read the blog.

      • New security and privacy features in Google Workspace—Alongside the announcement of Google Workspace we also shared more information on new security features that help facilitate safe communication and give admins increased visibility and control for their organizations. Read the blog.

      • Introducing Google Workspace—Google Workspace includes all of the productivity apps you know and use at home, at work, or in the classroom—Gmail, Calendar, Drive, Docs, Sheets, Slides, Meet, Chat and more—now more thoughtfully connected. Read the blog.

      • New in Cloud Functions: languages, availability, portability, and more—We extended Cloud Functions—our scalable pay-as-you-go Functions-as-a-Service (FaaS) platform that runs your code with zero server management—so you can now use it to build end-to-end solutions for several key use cases. Read the blog.

      • Announcing the Google Cloud Public Sector Summit, Dec 8-9—Our upcoming two-day virtual event will offer thought-provoking panels, keynotes, customer stories and more on the future of digital service in the public sector. Register at no cost.

    • Marketing Analytics With Google Cloud Wed, 30 Nov 2022 20:00:00 -0000

      When multiple siloed systems and platforms are used to collect marketing data, marketers often struggle to create a holistic view of their performance and the business impact of marketing initiatives. If this scenario sounds familiar to you, you’ll be happy to hear that Google Cloud has marketing analytics tools to help marketers bring together data and increase marketing ROI. These tools can be used to break down data silos and decrease time to insights. This blog post details how you can use Google Cloud to help you transform marketing analytics in your organization by creating audience segments, gleaning marketing insights, and enhancing your customer experience. You can also watch this video to learn more about marketing analytics at Google Cloud. 

      Create Audience Segmentation

      A core capability in marketing analytics is analyzing audience data and audience segments. This analysis is often complicated by audience data that is distributed across multiple systems, including CRM solutions and web analytics platforms like Google Analytics 360 or Adobe Analytics. Many organizations have a tremendous amount of audience data but have difficulty unlocking its value because they are unsure about where it lives, how to access it, and how to best harness it. 

      Example of data sources and ML capabilities of BigQuery

      If you’re a marketer in this situation, you can use BigQuery – a serverless, highly scalable, and cost-effective multicloud data warehouse – to ingest audience data from various sources like Google Ads, Facebook, Salesforce, and more. You can then use the built-in machine learning and AI capabilities to build and train ML models for segmenting your audiences into meaningful marketing targets. You can create audience segments like high customer lifetime value, propensity to buy for new customers, and propensity to churn for customer retention. With the ability to create audience segments, you can gain deeper insights on your audience. You can also activate these audiences back into your ad channels through Google Marketing Platform.

      Glean Marketing Insights

      As a marketer, you can use BigQuery and Looker, our modern business intelligence solution, to create a one-stop console for all of your marketing performance data – from ad impressions to on-site traffic and customer data – with native connectors to Google Marketing Platform for easy activation. This can help you not only uncover marketing insights but also share them easily with stakeholders across your organization to democratize marketing performance, enable insight-powered decision-making, and give everyone access to a single source of truth.

      Example Looker Marketing Analytics Dashboard

      Google Cloud can also help enable attribution modeling beyond your current ads platform or demand-side platform (DSP). This is an ideal solution for teams that find the data-driven attribution built into Analytics 360 / Display & Video 360 suboptimal for their specific use cases or data limitations. In addition, with Google Cloud you can tap into the power of Google Trends data to gain new consumer insights and identify opportunities for product innovation early on.

      Enhance Customer Experiences 

      Beyond improved audience segmentation and marketing insights, Google Cloud marketing analytics solutions also help you deliver enhanced customer experiences through consumer sentiment analysis. This starts with BigQuery, where you can aggregate online comments and then analyze the sentiment with our Natural Language API to better understand how your brand and marketing messages are resonating with your customers. BigQuery, with its built-in ML capabilities, also enables scaled creative analysis, generating insights from successful creatives to understand their impact on ad performance. You can also build unified app analytics with BigQuery and Looker, centralizing common data sources for app-centric organizations to build better app experiences by unlocking consumer and app insights across marketing channels.

      Next Steps

      The examples covered in this post are just the starting point of what is possible with Google Cloud for marketing organizations. Google Cloud marketing analytics solutions also support many advanced use cases like customer data platforms, dynamic pricing, and much more. To learn more about how to use BigQuery and Looker, check out this video. Keep in mind, you don’t have to wait for a real-world project to try out these solutions. You can sign up today for the BigQuery sandbox, which lets you explore public datasets like Google Trends and run queries without a credit card.

    • Load testing I/O Adventure with Cloud Run Wed, 30 Nov 2022 17:00:00 -0000

      In 2021 and 2022, Google I/O invited remote attendees to meet each other and explore a virtual world in the I/O Adventure online conference experience powered by Google Cloud technologies.  (For more details on how this was done, see my previous post.)

      When building an online experience like I/O Adventure, it's important to decide on a provisioning strategy early on. Before we could make that decision, however, we needed to have a reasonably accurate estimate of how many attendees to expect.

      Estimating the actual number of attendees in advance is easier for an in-person event than it is for a (free) online event. We recognized that this number could vary wildly from our best guesses, in either direction. The only safe strategy was to design a server architecture that would be able to handle much more traffic than we actually expected. Since the I/O Adventure experience was going to be live for only a few days during the conference, we determined that it would be affordable to overprovision by spinning up many server instances before the event started.

      To further ensure that heavy traffic would not degrade the attendees’ experience, we decided to implement a queue. Our system would welcome as many potential simultaneous attendees as it could smoothly support, and steer additional users to a waiting queue. In the unlikely event that the actual traffic exceeded the large allocated capacity, the queue would prevent the system from becoming overly congested.
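
      The admission-plus-queue behavior described above can be sketched in a few lines of Python. This is an illustrative model, not the production implementation; the class name and capacity value are made up here:

```python
from collections import deque

class AdmissionController:
    """Admit attendees up to a fixed capacity; overflow waits in a FIFO queue."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.active = set()      # attendees currently in the world
        self.waiting = deque()   # attendees queued for admission

    def arrive(self, attendee_id):
        # Admit immediately if there is free capacity, otherwise enqueue.
        if len(self.active) < self.capacity:
            self.active.add(attendee_id)
            return "admitted"
        self.waiting.append(attendee_id)
        return "queued"

    def leave(self, attendee_id):
        # When an active attendee disconnects, admit the next queued one.
        self.active.discard(attendee_id)
        if self.waiting and len(self.active) < self.capacity:
            self.active.add(self.waiting.popleft())
```

      With this policy the system stays at (but never above) capacity, and queued attendees are admitted as soon as others disconnect.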

      Designing a scalable cloud architecture for our project was one thing. Making sure that it would actually be able to support a heavy load is quite another! This post describes how we performed load testing on the cloud backend as a whole system, rather than on individual cloud components such as a single VM.

      When load testing an entire cloud backend, you need to address several concerns that are not necessarily accounted for in component-level testing. Quotas are a good example. Quota issues are difficult to foresee before you actually hit them. And, we needed to consider many quotas! Do we have enough quota to spin up 200 VMs? More specifically, do we have enough for the type of machine (E2, N2, etc.) that we use? And even more specifically, in the cloud region where the project is deployed?

      Pouring thousands of bots into the I/O Adventure world


      To design the load test and simulate thousands of attendees, we had to take two key factors into account:

      • Attendees communicate from their browser with the cloud backend using WebSockets
      • A typical attendee session lasts for at least 15 minutes without disconnecting, which is long-lived compared to some common load testing methodologies more focused on individual HTTP requests
      I/O Adventure attendee-server communication between the attendee's browser and a GKE server pod, using WebSockets

      While it is possible to set up a load test suite by provisioning and managing VMs, I have a strong preference for serverless solutions like Cloud Functions and Cloud Run. So, I wanted to know if we could use one of them to simulate user sessions – to essentially play the role of a load injector. Does Google Cloud's serverless infrastructure support the necessary protocol – WebSockets – for this use case?

      Yes! It turns out that Cloud Run supports WebSockets for egress and ingress, and has a configurable per-request timeout up to 1 hour.

      Load test client-server communication: a Cloud Run injector communicating via WebSockets with GKE pods

      In the load test we mimicked a typical attendee session, in which a WebSocket connection transmits thousands of messages over several minutes, without disconnecting.


      On the backend, the I/O Adventures servers handle thousands of simultaneous attendees by:

      • Accepting 500 attendees in each “shard”, where each shard is a server representing a part of the whole conference world, in which attendees can interact with each other;

      • Having hundreds of independent, preprovisioned shards;

      • Running several shards in each GKE Node;

      • Routing each incoming attendee to a shard with free capacity.
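
      The last step above, routing, amounts to a linear scan for a shard with spare capacity. A minimal sketch (the function name and data shape are ours; the 500-attendee capacity is the figure mentioned above):

```python
SHARD_CAPACITY = 500  # attendees per shard, as described above

def route_attendee(shard_loads):
    """Return the index of the first shard with free capacity, or None.

    `shard_loads` holds the current attendee count of each
    preprovisioned shard.
    """
    for i, load in enumerate(shard_loads):
        if load < SHARD_CAPACITY:
            return i
    return None  # every shard is full: the attendee waits in the queue
```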

      On the client side (the load injector), we implemented multiple levels of concurrency:

      • Each trigger (e.g. an HTTPS request from my workstation initiated with curl) can launch many concurrent sessions of 15 minutes and wait for their completion.

      • Each Cloud Run instance can handle many concurrent triggering requests (maximum concurrent requests per instance).

      • Cloud Run automatically starts new instances when the existing instances approach their full capacity. A Cloud Run service can scale to hundreds or thousands of container instances as needed.

      • We created an additional Cloud Run service specifically for triggering more simultaneous requests to the main Cloud Run injector as a way to amplify the load test.

      Simulating a single attendee story

      A simulated “user story” is a load test scenario that consists of logging in, being routed to the GKE pod of a shard, making a few hundred random attendee movements for 15 minutes, and disconnecting.

      For this project, I ran the simulation in Cloud Run, and I kicked off the test by issuing a curl command from my laptop. In this setup, the scenario (story) initiates a connection as a WebSocket client, and the pods are WebSocket servers.
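
      As a local sketch of what one story produces, the message sequence can be modeled with a generator. The message shapes here are hypothetical; in the real test each message was sent over the WebSocket connection to the server pod:

```python
import random

def attendee_story(num_moves=300, seed=None):
    """Yield the message sequence of one simulated attendee session:
    a login, a few hundred random movements, then a disconnect."""
    rng = random.Random(seed)
    yield {"type": "login"}
    for _ in range(num_moves):
        yield {"type": "move",
               "dx": rng.choice([-1, 0, 1]),
               "dy": rng.choice([-1, 0, 1])}
    yield {"type": "disconnect"}
```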

      Injecting one attendee story

      Initiating many attendee stories with one trigger

      We created an injector service handler (implemented as a Node.js script) to start many stories in parallel, and wait for their completion.

      Injecting many attendee stories with one trigger

      Injecting many attendee stories with several concurrent triggers

      I can multiply the load by triggering the injector many times concurrently with curl, from a command line terminal:

      for i in {1..10}; do
        curl -X POST "https://fancy-load-test.run.app" &
      done
      wait

      Cloud Run automatically scales up by spinning up new injector instances (i.e., Cloud Run instances) when needed.

      Injecting attendee stories through many curl requests, triggering many Cloud Run injector instances (login service details omitted for clarity)

      Injecting more attendee stories through an extra Cloud Run service

      In the previous setup, my workstation became the bottleneck as I was launching too many long-lived trigger requests with curl. My workstation hit its limit on concurrent TCP connections, and struggled to keep up with the CPU load of all the SSL handshakes.

      We fixed this by creating a new intermediate service specifically for dealing with a large number of triggers. We tried several parameter values (number of stories per trigger, max requests per Cloud Run instance, etc.) to maximize the injector’s throughput.

      Injecting attendee stories through an extra Cloud Run trigger service

      Note the BigQuery component, used for log ingestion on both the injector side and the server side.

      Measuring the success rate

      Unlike HTTP requests, which have an explicit response code, WebSocket messages are unidirectional and by default don’t expect an acknowledgement.

      To keep track of how many stories have run successfully to completion, we wrote a few events (login, start, finish…) to the standard output and activated a logging sink to stream all of the logs to BigQuery. The events were logged from the point of view of the clients (the injector) and from the point of view of the servers (GKE pods).

      This made it very convenient, with aggregate SQL queries, to:

      • make sure that at least 99% of all the stories did finish successfully, and

      • make sure the stories did not take more time than expected.
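
      In BigQuery these checks were plain aggregate SQL; the first one reduces to logic like the following, sketched here in Python over (story_id, event) rows with hypothetical event names:

```python
def story_success_rate(events):
    """Fraction of started stories that also logged a 'finish' event.

    `events` is an iterable of (story_id, event_name) pairs, mimicking
    the rows streamed to BigQuery by the logging sink.
    """
    started, finished = set(), set()
    for story_id, event in events:
        if event == "start":
            started.add(story_id)
        elif event == "finish":
            finished.add(story_id)
    if not started:
        return 0.0
    return len(started & finished) / len(started)
```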

      BigQuery query results

      We also kept an eye on several Grafana dashboards to monitor live metrics of our GKE cluster, and make sure that CPU, memory, and bandwidth resources didn’t get overwhelmed.

      Grafana Monitoring

      Visual check

      As a bonus, it was very fun to connect as a “real” attendee and watch hundreds of bots running everywhere!

      Simulation Watch

      Connecting to the system under stress with a browser also enabled us to assess the subjective experience.  We could see, for example, how smooth the animations were when a shard was hosting 500 attendees and the frontend was rendering dozens of moving avatars.


      With a total of 4000 triggers for 40 stories each, and a max concurrency of 40 requests per Cloud Run instance, our tests used just over 100 instances and successfully injected 160,000 simultaneous active attendees. We ran this load test script several times over a few days, for a total cost of about $100. The test took advantage of the full capacity of all of the server CPU cores (used by the GKE cluster) that our quota allowed. Mission accomplished!
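      The arithmetic behind those numbers is straightforward, and worth sanity-checking when tuning the parameters:

```python
triggers = 4000
stories_per_trigger = 40
max_concurrency = 40  # max concurrent requests per Cloud Run instance

# Total simultaneous attendees injected by the load test.
attendees = triggers * stories_per_trigger

# Lower bound on Cloud Run instances needed to serve all triggers
# (autoscaling actually used "just over" this many).
min_instances = triggers // max_concurrency

print(attendees, min_instances)  # 160000 100
```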

      We learned that:

      • The cost of the load test was acceptable.
      • The quotas we needed to raise were the number of specific CPU cores and the number of external IPv4 addresses.
      • Our platform would successfully sustain a load target of 160K attendees.

      During the actual event, the peak traffic turned out to be less than the maximum supported load.  (As a result, no attendees had to wait in the queue that we had implemented.) Following our tests, we were confident that the backend would handle the target load without any major issues, and it did.

      Of course, Cloud Run and Cloud Run Jobs can handle many types of workloads, not only website backends and load tests. Take some time to explore them further and think about where you can put them to use in your own workflows!

    • Cloud CISO Perspectives: November 2022 Wed, 30 Nov 2022 17:00:00 -0000

      Welcome to November’s Cloud CISO Perspectives. I’d like to celebrate the first year of the Google Cybersecurity Action Team (GCAT) and look ahead to the team’s goals for 2023.  

      As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.

      GCAT, one year later

      We launched the Google Cybersecurity Action Team in October 2021 as a premier security advisory team, with the singular mission of supporting the security and digital transformation of governments, critical infrastructure, enterprises, and small businesses. Its core mission is to guide customers through their security transformation, from their first cloud-adoption roadmap and implementation, through strengthening their cyber-resilience preparedness for potential events, to engineering new solutions in partnership with them as requirements change.


      Readers know that cybersecurity has only become more top-of-mind – yet organizations face continued challenges as they kick off and advance their security transformations. Our desire to help with and accelerate this process is directly tied to Google Cloud’s shared fate model, where we take an active stake in the security posture of our customers by offering secure defaults, capabilities to ensure secure deployments and configurations, opinionated guidance on how to configure cloud workloads for security, and assistance with measuring, reducing, accepting, and transferring risk.

      We’ve gotten very positive feedback on our strategy of deploying the right people with the right expertise at the right moment during customers’ transformation journeys, and doing it in an integrated way so that the handoff from one specialist team to the next is seamless. This may not seem revolutionary, but focusing on making customers more secure from the beginning of their journey helps reduce toil and ingrain better security practices earlier on. 

      We focus heavily on how we build the institutional memory of particular customers on particular teams, so that if a customer comes back, we can deploy the same or adjacent people to work with them. Most organizations are not solely on one cloud platform, so it's helpful to make sure we’ve got people we can re-deploy who understand the customer’s broader multicloud and hybrid environment. We look at the challenges that we see in engagements with customers and use those as a fast feedback loop into which future solutions, blueprints, and products we should be working on.

      Ultimately, GCAT’s role is at the forefront of making these transformations less daunting. We’ve also found that our quarterly Threat Horizons report helps progress towards that goal. Threat Horizons offers a unique fusion of security data and strategic threat intelligence pulled together from across research teams at Google, geared for security leaders and their leadership teams. Many CISOs and other leaders have told us that they find Threat Horizons helpful in part because our research often reflects their own findings, and can help make their arguments stronger.

      As GCAT moves into its second year, we plan on further developing partnerships with our consulting teams (Professional Services and Mandiant) and we’ll continue to scale our offerings through specializations and feedback loops.

      You can also listen here to my conversation with Google Cloud security experts Anton Chuvakin and Timothy Peacock on the Cloud Security podcast on the first year of GCAT and how it fits in with industry trends.  

      Security Talks in December

      Our Google Cloud Security Talks event for Q4 will focus on two topics that we’ve emphasized continuously in our Cloud CISO Perspectives — threat detection and Zero Trust. Join us on December 7 to hear from leaders across Google as well as leading-edge customers on these two critical initiatives. Click here to reserve your spot and we’ll see you there (virtually).

      Google Cybersecurity Action Team highlights

      Here are the latest updates, products, services and resources from our security teams this month: 

      • Securing tomorrow today: We updated our internal encryption-in-transit protocol to protect communications within Google from potential quantum computing threats. Here's why.

      • Making Cobalt Strike harder for threat actors to abuse: We took steps with Cobalt Strike’s vendor to hunt down cracked versions of the popular red team software, which often are used in cyberattacks. Read more.

      • How data embassies can strengthen resiliency with sovereignty: Data embassies extend the concept of using a digital haven to reduce risk, made possible by the flexible, distributed nature of the cloud. Here’s how they work, and how they intersect with Google Cloud. Read more.

      • For a successful cloud transformation, change your culture first: To fully incorporate all the benefits of a cloud transformation, an organization should update its security mindset and culture, along with its technology. Read more.

      • From the FBI to Google Cloud, meet CISO Director MK Palmore: Following three decades in the Marines and the FBI, MK Palmore came to Google Cloud’s Office of the Chief Information Security Officer in 2021 to help Google tackle some of the hardest security problems the industry faces right now. Read more.

      • Does the internet need sunscreen? No, submarine cables are protected from solar storms: A Google team set out to analyze the risks that undersea cables face from solar storms. Here’s what they learned. Read more.

      • CISO Survival Guide: How financial services organizations can more securely move to the cloud: The first day in the cloud can be daunting for financial services organizations. What are the key questions they face, and how can they best respond to them? Read more.

      • Multicloud Mindset: Thinking about open source and security in a multicloud world: Security leaders and architects are shifting away from traditional security models, which are increasingly insufficient for protecting multicloud environments. Here’s what you need to know about the trend. Read more.

      Google Cloud security tips, tricks, and updates

      • 4 more reasons to use Chrome’s cloud-based management: Take a deep dive into recent improvements to the Chrome Browser Cloud Management tool. Read more.

      • Introducing Cloud Armor features to help improve efficacy: Google Cloud Armor can be used more efficiently with two new features, an auto-deploy option for proposed rules generated by Adaptive Protection, and advanced rule tuning. Read more.

      • IAM Deny creates a simple way to harden your security posture at scale: New Identity and Access Management Deny policies can more easily create rules that broadly restrict resource access, a powerful, coarse-grained control to help implement security policies at scale. Read more.

      • Chronicle Security Operations offers new, faster search and investigative experience: A new investigative experience comes to Chronicle Security Operations, with lightning-fast search across any form of structured data, and greater flexibility to pivot and drill-down when conducting complex, open-ended threat investigations. Read more.

      • How to analyze security and compliance of your dependencies with the Open Source Insights dataset: The Open Source Insights project scans millions of open-source packages, computes their dependency graphs, and annotates those graphs with security advisories, license information, popularity metrics, and other metadata. Read more.

      • How to migrate on-premises Active Directory users to Google Cloud Managed Microsoft AD: For organizations operating in Microsoft-centered environments, Google Cloud offers a highly-available, hardened Managed Service for Microsoft Active Directory running on Windows virtual machines. Read more.

      • Announcing Private Marketplace, now in Preview: Looking to reduce employee usage of shadow IT and out-of-date software? IT and cloud administrators can now create a private, curated version of Google Cloud Marketplace for their organizations. Read more.

      • New Mobile SDK can help reCAPTCHA Enterprise protect iOS, Android apps: The reCAPTCHA Enterprise Mobile SDK can help block fake users and bots from accessing mobile apps while allowing legitimate users to proceed, and it’s now generally available to developers. Read more.

      • Practicing the principle of least privilege with Cloud Build and Artifact Registry: How to help reduce the blast radius of misconfigurations and malicious users using Cloud Build and Artifact Registry. Read more.

      • Automate cleanup of unused Google Cloud projects: Part of reducing technological debt means getting rid of abandoned projects, but doing that manually is time-consuming. You can automate that process using Remora, a serverless solution that works with the Unattended Project Recommender. Read more.

      • Should I use Cloud Armor: Cloud Armor provides DDoS defense and additional security for apps and websites running on Google Cloud, on-prem or on other platforms. This guide can help you decide when to use this powerful tool. Read more.

      • How to configure Traffic Director: Traffic Director is a managed Google service that helps solve common networking challenges related to flow, security, and observability. Here’s how to use it. Read more.

      Compliance & Controls

      • Google Cloud completes Korea Financial Security Institute audit: Earlier this year, we worked with South Korean auditors to support a group of leading South Korean FSIs interested in expanding their adoption of Google Cloud. Read more.

      • Google Public Sector announces continuity-of-operations offering for government entities under cyberattack: Every U.S. government agency is now expected to have a Continuity of Operations Plan (COOP) in place. Google Workspace is positioned to help with these business and collaboration continuity needs, ensuring agency teams can continue to work effectively and securely in the event of an incident. Read more.

      • Announcing Assured Workloads for Israel in Preview: Assured Workloads helps customers create and maintain controlled environments. The Assured Workloads Preview for Israel provides data residency in our new Israel Cloud region, cryptographic control over data, and service usage restrictions that help keep organizations in policy compliance. Read more.

      Google Cloud Security Podcasts

      We launched a new weekly podcast focusing on Cloud Security in February 2021. Hosts Anton Chuvakin and Timothy Peacock chat with cybersecurity experts about the most important and challenging topics facing the industry today. This month, they discussed:

      • Google Workspace security, from threats to Zero Trust: Is compliance changing? Have hardware keys really stopped phishing? Which security assumptions do we need to revisit? We discuss these important hybrid workplace security questions and more with Nikhil Sinha and Kelly Anderson of Google Workspace. Listen here.

      • Secrets of cloud security incident response: Cloud transformations also change security standards and protocols, including incident response challenges, creating effective partnerships with cloud service providers, and even the definition of a security incident, with Google security specialists Matt Linton and John Stone. Listen here.

      • A deep dive on the release of detection rules for Cobalt Strike abuse: In this conversation with Greg Sinclair, security engineer at Google Cloud, we discuss his blog post explaining how and why Google Cloud took action to limit the scope of malicious actor abuse of Cobalt Strike. Listen here.

      • Who observes Cloud Security Observability? From improving detection and response to making network communications more secure to its impact on the shift to TLS 1.3, here is everything you wanted to know about “observability data” but were afraid to ask, with Jeff Bollinger, director of incident response and detection engineering at LinkedIn. Listen here.

      • Cloud threats and incidents — RansomOps, misconfigurations, and cryptominers: How are cloud environments attacked and compromised today, and is cloud security a misnomer? With Alicja Cade, director of financial services at Google Cloud’s Office of the CISO, Ken Westin, director of security strategy at Cybereason, and Robert Wallace, senior director at Mandiant. Listen here.

      To have our Cloud CISO Perspectives post delivered every month to your inbox, sign up for our newsletter. We’ll be back next month with more security-related updates.

    • Built with BigQuery: Zeotap uses Google BigQuery to build highly customized audiences at scale Wed, 30 Nov 2022 17:00:00 -0000

      Zeotap’s mission is to help brands monetise customer data in a privacy-first Europe. Today, Zeotap owns three data solutions. Zeotap CDP is the next-generation Customer Data Platform that empowers brands to collect, unify, segment and activate customer data. Zeotap CDP puts privacy and security first while empowering marketers to unlock and derive business value from their customer data with a powerful and marketer-friendly user interface. Zeotap Data delivers quality targeting at scale by enabling the activation of 2,500 tried-and-tested Champion Segments across 100+ programmatic advertising and social platforms. ID+ is a universal marketing ID initiative that paves the way for addressability in the cookieless future. Zeotap’s CDP is a SaaS application that is hosted on Google Cloud. A client can use the Zeotap CDP SaaS product suite to onboard its first-party data, use the provided tools to create audiences, and activate them on marketing channels and advertising platforms.

      Zeotap partnered with Google Cloud to provide a customer data platform that is differentiated in the market with a focus on privacy, security and compliance. Zeotap CDP, built with BigQuery, is empowered with tools and capabilities to democratize AI/ML models to predict customer behavior and personalize the customer experience to enable the next generation digital marketing experts to drive higher conversion rates, return on advertising spend and reduce customer acquisition cost.

      The ability to create actionable audiences that are highly customized on the first attempt, improve speed to market to capture demand, and drive customer loyalty is a differentiating factor. However, as audiences get more specific, it becomes harder to estimate and tune the size of the audience segment. Being able to identify the right customer attributes is critical for building audiences at scale.

      Consider the following example: a fast-fashion retailer has a broken size run and is at risk of taking a large markdown because of an excess of XXS and XS sizes. What if you could instantly build an audience of customers who have a high propensity for this brand or style, tend to purchase at full price, and match the size profile of the remaining inventory, to drive full-price sales and avoid costly markdowns?

      Most CDPs provide size information only after a segment is created and its data processed. If the segment sizes are not relevant and quantifiable, the target audience list has to be recreated, impacting speed to market and the ability to capture customer demand. Estimating and tuning the size of the audience segment is often referred to as the segment size estimation problem. The segment size needs to be estimated, and segments should be available for exploration and processing with sub-second latency, to provide a near real-time user experience.

      Traditional approaches to this problem rely on pre-aggregation database models, which involve sophisticated data ingestion and failure management, wasting many compute hours and requiring extensive pipeline orchestration. This traditional approach has a number of disadvantages:

      1. Higher cost and maintenance as multiple Extract, Transform and Load (ETL) processes are involved

      2. Higher failure rate and re-processing required from scratch in case of failures

      3. Takes hours/days to ingest data at large-scale

      Zeotap CDP relies on the power of Google Cloud to tackle this segment size estimation problem, using BigQuery for processing and estimation, BI Engine to provide the sub-second latency required for online predictions, and the Vertex AI ecosystem with BigQuery ML to provide no-code AI segmentation and lookalike audiences. Zeotap CDP’s strength is to offer this estimation at the beginning of segment creation, before any data processing, using pre-calculated metrics. Any correction to segment parameters can be made in near real time, saving a lot of the user’s time.

      The data cloud, with BigQuery at its core, functions as a data lake at scale and the analytical compute engine that calculates the pre-aggregated metrics. The BI engine is used as a caching and acceleration layer to make these metrics available with near sub-second latency. Compared to the traditional approach this setup does not require a heavy data processing framework like Spark/Hadoop or sophisticated pipeline management. Microservices deployed on the GKE platform are used for orchestration using BigQuery SQL ETL capabilities. This does not require a separate data ingestion in the caching layer as the BI engine works seamlessly in tandem with BigQuery and is enabled using a single setting.

      The below diagram depicts how Zeotap manages the first party data and solves for the segment size estimation problem.


      The API layer, powered by Apigee, provides secure client access to Zeotap’s API infrastructure to read and ingest first-party data in real time. The UI services layer, backed by GKE and Firebase, provides access to Zeotap’s platform, front-ending audience segmentation, real-time workflow orchestration and management, analytics, and dashboards. The stream and batch processing layer manages the core data ingestion using Pub/Sub, Dataflow, and Cloud Run. Google BigQuery, Cloud SQL, Bigtable, and Cloud Storage make up the storage layer.

      The destination platform allows clients to activate their data across various marketing channels, data management, and ad management platforms like Google DDP, TapTap, and TheTradeDesk (more than 150 such integrations). Google BigQuery is at the heart of the audience platform, allowing clients to slice and dice their first-party assets, enhance them with Zeotap’s universal ID graph or its third-party data assets, and push them to downstream destinations for activation and funnel analysis. The predictive analytics layer allows clients to create and activate machine-learned segments (e.g., CLV and RFM modeling) with just a few clicks. Cloud IAM, the Cloud Operations suite, and collaboration tools deliver the cross-sectional needs of security, logging, and collaboration.

      For segment/audience size estimation, the core data (the client’s first-party data) resides in its own Google Cloud project. The first step is to identify low-cardinality columns using BigQuery’s APPROX_COUNT_DISTINCT function. At this time, Zeotap supports sub-second estimation only on low-cardinality dimensions (cardinality being the number of unique values), like Gender with Male/Female/M/N values and Age with limited age buckets. A sample query looks like this:

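      As a rough sketch of how such a cardinality query could be assembled (the table and column names below are hypothetical placeholders, not Zeotap's actual schema):

```python
def cardinality_query(table: str, columns: list[str]) -> str:
    """Build a BigQuery query that estimates per-column cardinality
    with APPROX_COUNT_DISTINCT."""
    selects = ",\n  ".join(
        f"APPROX_COUNT_DISTINCT({col}) AS {col}_cardinality" for col in columns
    )
    return f"SELECT\n  {selects}\nFROM `{table}`"

sql = cardinality_query("my_project.cdp.first_party_data", ["gender", "age_bucket"])
# The query returns one row with one cardinality estimate per column.
```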

      Once pivoted by columns, the results look like this

      Query Pivot Results

      Now that cardinality numbers are available for all columns, they are divided into two groups: one below the threshold (low cardinality) and one above it (high cardinality). The next step is to run a reverse ETL query to create aggregates on the low-cardinality dimensions and corresponding HLL sketches for the user-count (measure) dimensions.

      A sample query looks like this

      GCP Estimator Project

      The resultant data is loaded into a separate estimator Google Cloud project for further processing and analysis. This project contains a metadata store with datasets required for processing client requests and is front ended with BI engine to provide acceleration to estimation queries. With this process, the segment size is calculated using pre-aggregated metrics without processing the entire first party dataset and enables the end user to create and experiment with a number of segments without incurring any delays as in the traditional approach.
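      Putting the two steps together, the grouping logic amounts to a simple threshold split. The threshold and column values below are arbitrary illustrations, not Zeotap's actual figures:

```python
def split_by_cardinality(cardinalities: dict[str, int], threshold: int):
    """Partition columns into low- and high-cardinality groups.

    `cardinalities` maps column name -> estimated distinct count,
    as returned by the APPROX_COUNT_DISTINCT query.
    """
    low = {c: n for c, n in cardinalities.items() if n <= threshold}
    high = {c: n for c, n in cardinalities.items() if n > threshold}
    return low, high

low, high = split_by_cardinality(
    {"gender": 4, "age_bucket": 8, "user_id": 5_000_000}, threshold=1_000
)
# Only the low-cardinality columns (gender, age_bucket) get pre-aggregated;
# user counts are kept as HLL sketches so they remain mergeable.
```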

      This approach eliminates the ETL steps that were required to realize this use case, driving a benefit of over 90% time reduction and 66% cost reduction for segment size estimation. Also, enabling BI Engine on top of BigQuery boosts query speeds by more than 60%, optimizes resource utilization, and improves query response compared to native BigQuery queries. The ability to experiment with audience segmentation is one of the many capabilities that Zeotap CDP provides its customers. The cookieless future will drive experimentation with concepts like Topics for IBA (interest-based advertising) and the development of models that support a wide range of possibilities in predicting customer behavior.

      There is an ever increasing demand for shared data, where customers are requesting access to the finished data in the form of datasets to share both within and across the organization through external channels. These datasets unlock more opportunities where the curated data can be used as-is or coalesced with other datasets to create business centric insights or fuel innovation by enabling ecosystem or develop visualizations. To meet this need, Zeotap is leveraging Google Cloud Analytics Hub to create a rich data ecosystem of analytics-ready datasets. 

      Analytics Hub is powered by Google BigQuery, which provides a self-service approach to securely share data by publishing and subscribing to trusted data sets as listings in Private and Public Exchanges. It allows Zeotap to share the data in place having full control while end customers have access to fresh data without the need to move data at large scale. 

      Click here to learn more about Zeotap’s CDP capabilities or to request a demo.

      The Built with BigQuery advantage for ISVs 

      Google is helping tech companies like Zeotap build innovative applications on Google’s data cloud with simplified access to technology, helpful and dedicated engineering support, and joint go-to-market programs through the Built with BigQuery initiative, launched in April as part of the Google Data Cloud Summit. Participating companies can: 

      • Get started fast with a Google-funded, pre-configured sandbox. 

      • Accelerate product design and architecture through access to designated experts from the ISV Center of Excellence who can provide insight into key use cases, architectural patterns, and best practices. 

      • Amplify success with joint marketing programs to drive awareness, generate demand, and increase adoption.

      BigQuery gives ISVs the advantage of a powerful, highly scalable data warehouse that’s integrated with Google Cloud’s open, secure, sustainable platform. And with a huge partner ecosystem and support for multi-cloud, open source tools and APIs, Google provides technology companies the portability and extensibility they need to avoid data lock-in. 

      Click here to learn more about Built with BigQuery.

      We thank the Google Cloud and Zeotap team members who co-authored the blog:
      Zeotap: Shubham Patil, Engineering Manager; Google: Bala Desikan, Principal Architect and Sujit Khasnis, Cloud Partner Engineering

      Related Article

      Built with BigQuery: How True Fit's data journey unlocks partner growth

      True Fit, a data-driven personalization platform built on Google Data Cloud to provide fit personalization for retailers by sharing curat...

      Read Article
    • 6 common mistakes to avoid in RESTful web API Design Wed, 30 Nov 2022 17:00:00 -0000

      Imagine ordering a “ready-to-assemble” table online, only to find that the delivery package did not include the assembly instructions. You know what the end product looks like, but have little to no clue how to start assembling the individual pieces to get there. A poorly designed API tends to create a similar experience for a consumer developer. Well designed APIs make it easy for consumer developers to find, explore, access, and use them. In some cases, good quality APIs even spark new ideas and open up new use cases for consumer developers. 

      There are methods to improve API design, like following RESTful practices. But time and again we see customers unknowingly program minor inconveniences into their APIs. To help you avoid these pitfalls, here are six of the most common mistakes we have seen developers make while creating APIs, and guidance on how to get it right.

      #1 Thinking inside-out vs outside-in

      Being everything for everybody often means that nothing you do is the best it could be, and that is just as true for APIs. When customers turn to APIs, they are looking for specific solutions to make their work easier and more productive. If there is an API that better fits their needs, they will choose it over yours. This is why it’s so important to know what your customers need to do their work better, and then build to fill those needs. In other words, start thinking outside-in as opposed to inside-out. Specifically:

      • Inside-out refers to designing APIs around internal systems or services you would like to expose.
      • Outside-in refers to designing APIs around customer experiences you want to create. Read more about the outside-in perspective in the API product mindset.

      The first step to this is learning from your customers — be it internal consumer developers or external customers — and their use cases. Ask them about the apps they are building, their pain points, and what would help streamline or simplify their development. Write down their most significant use cases and create a sample API response that only gives them the exact data they need for each case. As you test this, look for overlap between payloads and adapt your designs to genericize them across common or similar use cases.


      If you can’t connect with your customers — because you don’t have direct access, they don’t have time, or they just don’t know what they want — the best approach is to imagine what you would build with your APIs. Think big and think creatively. While you don't want to design your APIs for vaporware, thinking about the big picture can make it easier to build non-breaking changes in the future. For example the image below showcases APIs offered by Google Maps. Even without diving into the documentation, looking at the names like “Autocomplete” or “Address Validation” clearly outlines the purposes and potential fit for a customer’s use case.

      Google Maps Platform APIs

      #2 Making your APIs too complex for users

      Customers turn to APIs to bypass complicated programming challenges so they can get to the part they know how to do well. If using your API means learning a whole new system or language, then it isn’t fitting their needs and they will likely look for something else. It’s up to your team to make an API that is strong and smart enough to do what your customer wants, but also simple enough to hide how complicated the tasks it solves really are. For example, if you know your customers use your APIs to present recently opened restaurants and highly rated pizzerias to their consumers, providing them with a simple API call like the one below would be a great help:

      GET /restaurants?location=Austin&category=Pizzeria&open=true&sort=-priority,created_at

      To see if your API design is simple enough, pretend you are building the whole system from scratch — or if you have a trusted customer who is willing to help, ask them to test it and report their results. If you can complete the workflow without having to stop to figure something out, then you're good to go. On the other hand, if you catch rough edges caused by trying to code around system complexity issues, then keep trying to refactor. The API will be ready when you can say that nothing is confusing and that it either meets your customers’ needs or can easily be updated as needs change.

      #3 Creating “chatty” APIs with too many calls

      Multiple network calls slow down the process and create higher connection overhead, which means higher operational costs. This is why it’s so important to minimize the number of API calls.

      The key to this is outside-in design: simplify. Look for ways to reduce the number of API calls a customer must make in their application's workflow. If your customers are building mobile applications, for example, they often need to minimize their network traffic to reduce battery drain, and requiring a couple calls instead of a dozen can make a big difference. 

      Rather than deciding between building distinct, data-driven microservices and streamlining API usage, consider offering both: fine-grained APIs for specific data types, and “experience APIs” (APIs designed to power user experiences; here is a further theoretical discussion on experience APIs) around common or customer-specific user interfaces. These experience APIs compose multiple smaller domains into a single endpoint, making it much simpler for your customers, especially those building user interfaces, to render their screens easily and quickly.

      Another option here is to use something like GraphQL to allow for this type of customizability. Generally you should avoid building a unique endpoint for every possible screen, but common screens like home pages and user account information can make a world of difference to your API consumers. 
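      To make the idea concrete, here is a minimal sketch of an experience API composing two fine-grained endpoints into one payload. The endpoint names and fields are invented for illustration:

```python
# Fine-grained, data-driven endpoints (stubbed here as plain functions;
# in a real service each would be its own HTTP handler).
def get_user(user_id: int) -> dict:
    return {"id": user_id, "name": "Ada"}

def get_orders(user_id: int) -> list[dict]:
    return [{"order_id": 1, "total": 42.0}]

# Experience API: one endpoint composes both domains, so a mobile
# home screen needs a single request instead of two.
def get_home_screen(user_id: int) -> dict:
    return {
        "user": get_user(user_id),
        "recent_orders": get_orders(user_id),
    }

payload = get_home_screen(7)
```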

      #4 Not allowing for flexibility

      Even if you’ve followed all of the steps above, you may find edge cases that don’t fit your beautifully designed payloads. Maybe your customer needs more data in a single page of results than usual, or the payload has far more data than their app requires. You can’t create a one-size-fits-all solution, but you also don’t want a reputation for building APIs that are limiting. Here are three simple options for making your endpoints more flexible.

      • Filter out response properties: You can either use query parameters, or use GraphQL, which provides this capability natively. By giving customers the option to request only the properties they need, you guarantee that they won’t have to sift through tons of unnecessary data to get what they want. For example, if some of your customers only need the title, author, and bestseller ranking, give them the ability to retrieve only that data with a query string parameter.
      GET /books?fields=title,author,ranking
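      A minimal sketch of how a server might honor that query string, using only the Python standard library. The helper and the sample book record are illustrative, not part of any real API.

```python
from urllib.parse import urlparse, parse_qs

def apply_fields_filter(resource, url):
    """Project a response object onto the fields requested via ?fields=a,b,c."""
    params = parse_qs(urlparse(url).query)
    if "fields" not in params:
        return resource  # no filter requested: return the full object
    wanted = set(params["fields"][0].split(","))
    return {k: v for k, v in resource.items() if k in wanted}

# Hypothetical full resource; the client only wants three of these fields.
book = {"title": "Ulysses", "author": "James Joyce", "ranking": 1,
        "page_count": 730, "publisher": "Shakespeare and Company"}
filtered = apply_fields_filter(book, "/books?fields=title,author,ranking")
# filtered now contains only the title, author, and ranking properties
```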

      • Ability to sort with pagination: Generally, you don't want to guarantee the order of objects in an API response, because minor changes in logic or peculiarities in your data source might change the sort order at some point. In some cases, however, your customers may want to sort by a particular field. Giving them that option, combined with a pagination option, will give them a highly efficient API when they only want the top few results. For example, the Spotify API uses a simple offset and limit parameter set to allow pagination. A sample endpoint, as shown in the documentation, looks like this:

      $ curl "https://api.spotify.com/v1/artists/1vCWHaC5f2uS3yhpwWbIA6/albums?album_type=SINGLE&offset=20&limit=10"
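      The same offset/limit contract can be sketched in a few lines of Python. This is an illustrative helper, not Spotify's implementation; the response shape is invented.

```python
def paginate(items, offset=0, limit=20):
    """Return one page of results plus the metadata a client needs for the next page."""
    page = items[offset:offset + limit]
    next_offset = offset + limit if offset + limit < len(items) else None
    return {"items": page, "offset": offset, "limit": limit,
            "total": len(items), "next": next_offset}

albums = [f"single-{i}" for i in range(35)]
page = paginate(albums, offset=20, limit=10)
# page["items"] holds single-20 through single-29; page["next"] is 30
```

      Returning the next offset (or None at the end) spares clients from computing page boundaries themselves.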

      • Use mature compositions like GraphQL: Since customer data needs can differ, giving them on-the-fly composites lets them build the combinations of data they need, rather than being restricted to a single data type or a pre-set combination of data fields. Using GraphQL can even bypass the need to build experience APIs, but when this isn’t an option, you can use query string parameter options like “expand” to create these more complex queries. Here is a sample response demonstrating a collection of company resources with embedded properties included:

      "data": [
        {
          "CompanyUid": "27e9cf71-fca4",
          "name": "ABCCo",
          "status": "Active",
          "_embedded": {
            "organization": {
              "CompanyUid": "27e9cf71-fca4",
              "name": "ABCCo",
              "type": "Company",
              "taxId": "0123",
              "city": "Portland",
              "notes": ""
            }
          }
        }
      ]

      #5 Making design unreadable to humans 

      Keep It Simple, Stupid when you are designing your API. While APIs are meant for computer-to-computer interaction, the first client of an API is always a human, and the API contract is the first piece of documentation. Developers are more apt to study your payload design before they dig into your docs. Observational studies suggest that developers spend more than 51% of their time in the editor and API client, compared with roughly 18% in reference documentation.

      For example, if you skim the payload below, it takes some time to understand because it uses an “id” where a property name belongs. Even the property name “data” does not suggest anything meaningful aside from being an artifact of the JSON design. A few extra bytes in the payload can save a lot of early confusion and accelerate adoption of your API. Notice how user IDs appearing to the left of the colon (the position where property names ideally sit in JSON) make the payload confusing to read.
      "{id-a}":
      { "data":
        [
          {
            "AirportCode": "LAX",
            "AirportName": "Los Angeles",
            "From": "LAX",
            "To": "Austin",
            "departure": "2014-07-15T15:11:25+0000",
            "arrival": "2014-07-15T16:31:25+0000"
          }
          ... // More data
        ]
      },

      We think that JSON like this is more difficult to learn. To eliminate ambiguity in the words you choose to describe the data, keep the payload simple, and if any label could be interpreted in more than one way, adjust it to be clearer. Here is a sample response from the airlines endpoint of the aviationstack API. Notice how the property names clearly explain the expected result while maintaining a simple JSON structure.

      "data": [
        {
          "airline_name": "American Airlines",
          "iata_code": "AA",
          "iata_prefix_accounting": "1",
          "icao_code": "AAL",
          "callsign": "AMERICAN",
          "type": "scheduled",
          "status": "active",
          "fleet_size": "963",
          "fleet_average_age": "10.9",
          "date_founded": "1934",
          "hub_code": "DFW",
          "country_name": "United States",
          "country_iso2": "US"
        },
        [...]
      ]

      #6 Know when you can break the RESTful rules

      Being true to the RESTful basics — such as using the correct HTTP verbs, status codes, and stateless resource-based interfaces — can make your customers' lives easier because they don't need to learn an all-new lexicon. But remember that the goal is to help them get their job done; if you put RESTful design ahead of user experience, it doesn’t really serve its purpose.

      Your goal should be helping your customers be successful with your data, as quickly and easily as possible. Occasionally, that may mean breaking some "rules" of REST to offer simpler and more elegant interfaces. Just be consistent in your design choices across all of your APIs, and be very clear in your documentation about anything that might be peculiar or nonstandard. 


      Beyond these common pitfalls, we have also created a comprehensive guide packaging up our rich experience designing and managing APIs at incredible scale with Google Cloud's API management product, Apigee. 

      Apigee — Google Cloud’s native API management platform — helps you build, manage, and secure APIs — for any use case, scale or environment. Get started with Apigee today or check out our documentation for additional information.

    • Low-latency fraud detection with Cloud Bigtable Wed, 30 Nov 2022 17:00:00 -0000

      Each time someone makes a purchase with a credit card, financial companies want to determine whether the transaction is legitimate or whether it involves a stolen credit card, an abused promotion, or a hacked user account. Every year, billions of dollars are lost to credit card fraud, so the financial consequences are serious. Companies dealing with these transactions need to balance predicting fraud accurately with predicting fraud quickly.

      In this blog post, you will learn how to build a low-latency, real-time fraud detection system that scales seamlessly by using Bigtable for user attributes, transaction history and machine learning features. We will follow an existing code solution, examine the architecture, define the database schema for this use case, and see opportunities for customizations.

      The code for this solution is on GitHub and includes a simplistic sample dataset, a pre-trained fraud detection model, and a Terraform configuration. The goal of this blog and example is to showcase the end-to-end solution rather than machine learning specifics, since most real-world fraud detection models can involve hundreds of variables. If you want to spin up the solution and follow along, clone the repo and follow the instructions in the README to set up resources and run the code.

      git clone https://github.com/GoogleCloudPlatform/java-docs-samples.git
      cd java-docs-samples/bigtable/use-cases/fraudDetection

      Fraud detection pipeline

      When someone initiates a credit card purchase, the transaction is sent for processing before the purchase can be completed. The processing includes validating the credit card, checking for fraud, and adding the transaction to the user's transaction history. Once those steps are completed, and if there is no fraud identified, the point of sale system can be notified that the purchase can finish. Otherwise, the customer might receive a notification indicating there was fraud, and further transactions can be blocked until the user can secure their account.

      The architecture for this application includes:

      • Input stream of customer transactions

      • Fraud detection model

      • Operational data store with customer profiles and historical data

      • Data pipeline for processing transactions

      • Data warehouse for training the fraud detection model and querying table level analytics

      • Output stream of fraud query results

      The architecture diagram below shows how the system is connected and which services are included in the Terraform setup.

      [Architecture diagram: how the fraud detection system's services are connected]


      Before creating a fraud detection pipeline, you will need a fraud detection model trained on an existing dataset. This solution provides a fraud model to try out, but it is tailored for the simplistic sample dataset. When you're ready to deploy this solution yourself based on your own data, you can follow our blog on how to train a fraud model with BigQuery ML.

      Transaction input stream

      The first step towards detecting fraud is managing the stream of customer transactions. We need an event-streaming service that can horizontally scale to meet the workload traffic, so Cloud Pub/Sub is a great choice. As our system grows, additional services can subscribe to the event stream to add new functionality as part of a microservice architecture. Perhaps the analytics team will subscribe to this pipeline for real-time dashboards and monitoring.

      When someone initiates a credit card purchase, a request from the point of sale system will come in as a Pub/Sub message. This message will have information about the transaction such as location, transaction amount, merchant ID, and customer ID. Collecting all the transaction information is critical for making an informed decision, since we will update the fraud detection model based on purchase patterns over time and accumulate recent data to use as model inputs. The more data points we have, the more opportunities we have to find anomalies and make an accurate decision.

      Transaction pipeline

      Pub/Sub has built-in integration with Cloud Dataflow, Google Cloud's data pipeline tool, which we will use to process the stream of transactions with horizontal scalability. It's common to design Dataflow jobs with multiple sources and sinks, so there is a lot of flexibility in pipeline design. Our pipeline here only fetches data from Bigtable, but you could add additional data sources or even third-party financial APIs as part of the processing. Dataflow is also great for outputting results to multiple sinks, so we can write to databases, publish an event stream with the results, and even call APIs to send emails or texts to users about the fraud activity.

      Once the pipeline receives a message, our Dataflow job does the following:

      • Fetch user attributes and transaction history from Bigtable

      • Request a prediction from Vertex AI

      • Write the new transaction to Bigtable

      • Send the prediction to a Pub/Sub output stream

      [Diagram: the stages of the Dataflow fraud detection job]
      Pipeline pipeline = Pipeline.create(options);

      PCollection<RowDetails> modelOutput =
          pipeline
              .apply(
                  "Read PubSub Messages",
                  PubsubIO.readStrings().fromTopic(options.getInputTopic()))
              .apply("Preprocess Input", ParDo.of(PREPROCESS_INPUT))
              .apply("Read from Cloud Bigtable",
                  ParDo.of(new ReadFromTableFn(config)))
              .apply("Query ML Model",
                  ParDo.of(new QueryMlModelFn(options.getMLRegion())));

      modelOutput
          .apply(
              "TransformParsingsToBigtable",
              ParDo.of(WriteCBTHelper.MUTATION_TRANSFORM))
          .apply(
              "WriteToBigtable",
              CloudBigtableIO.writeToTable(config));

      modelOutput
          .apply(
              "Preprocess Pub/Sub Output",
              ParDo.of(
                  new DoFn<RowDetails, String>() {
                    @ProcessElement
                    public void processElement(
                        @Element final RowDetails modelOutput,
                        final OutputReceiver<String> out)
                        throws IllegalAccessException {
                      out.output(modelOutput.toCommaSeparatedString());
                    }
                  }))
          .apply("Write to PubSub",
              PubsubIO.writeStrings().to(options.getOutputTopic()));

      pipeline.run();

      Operational data store

      To detect fraud in most scenarios, you cannot look at just one transaction in a silo – you need additional context in real time in order to detect an anomaly. The customer's transaction history and user profile are the features we will use for the prediction.

      We'll have lots of customers making purchases, and since we want to validate the transaction quickly, we need a scalable and low-latency database that can act as part of our serving layer. Cloud Bigtable is a horizontally-scalable database service with consistent single-digit millisecond latency, so it aligns great with our requirements. 

      Schema design
      Our database will store customer profiles and transaction history. The historical data provides context that lets us know whether a transaction follows its customer's typical purchase patterns. These patterns can be found by looking at hundreds of attributes. A NoSQL database like Bigtable lets us seamlessly add columns for new features, unlike less flexible relational databases, which would require schema changes.

      Data scientists and engineers can work to evolve the model over time by mixing and matching features to see what creates the most accurate model. They can also use the data in other parts of the application: generating credit card statements for customers or creating reports for analysts. Bigtable as an operational data store here allows us to provide a clean current version of the truth shared by multiple access points within our system.

      For the table design, we can use one column family for customer profiles and another for transaction history since they won't always be queried together. Most users are only going to make a few purchases a day, so we can use the user id for the row key. All transactions can go in the same row since Bigtable's cell versioning will let us store multiple values at different timestamps in row-column intersections. 

      Our table example data includes more columns, but the structure looks like this:

      [Diagram: example table structure with customer profile and transaction history column families]

      Since we are recording every transaction each customer is making, the data could grow very quickly, but garbage collection policies can simplify data management. For example, we might want to keep a minimum of 100 transactions then delete any transactions older than six months. 

      Garbage collection policies apply per column family which gives us flexibility. We want to retain all the information in the customer profile family, so we can use a default policy that won't delete any data. These policies can be managed easily via the Cloud Console and ensure there's enough data for decision making while trimming the database of extraneous data. 
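      As a rough in-memory simulation of the policy described above (this is not the Bigtable API; real policies are configured on the column family), an intersection rule deletes a cell only when it is both beyond the newest N versions and older than the age limit:

```python
from datetime import datetime, timedelta, timezone

def surviving_cells(cells, min_versions=100, max_age=timedelta(days=182)):
    """Simulate an intersection GC rule: a cell is deleted only when it is
    BOTH beyond the newest `min_versions` cells AND older than `max_age`."""
    now = datetime.now(timezone.utc)
    newest_first = sorted(cells, key=lambda c: c["timestamp"], reverse=True)
    return [c for i, c in enumerate(newest_first)
            if i < min_versions or now - c["timestamp"] <= max_age]
```

      With 120 recent transactions and 30 from last year, this keeps all 120 recent cells (they are within the age limit) and removes the 30 old cells that fall beyond the 100-version floor.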

      Bigtable stores timestamps for each cell by default, so if a transaction is incorrectly categorized as fraud/not fraud, we can look back at all of the information to debug what went wrong. There is also the opportunity to use cell versioning to support temporary features. For example, if a customer notifies us that they will be traveling during a certain time, we can update the location with a future timestamp, so they can go on their trip with ease. 

      With our pending transaction, we can extract the customer id and fetch that information from the operational data store. Our schema allows us to do one row lookup to get an entire user's information.

      Table table = getConnection().getTable(TableName.valueOf(options.getCBTTableId()));
      Result row = table.get(new Get(Bytes.toBytes(transactionDetails.getCustomerID())));

      CustomerProfile customerProfile = new CustomerProfile(row);

      Request a prediction

      Now, we have our pending transaction and the additional features, so we can make a prediction. We took the fraud detection model that we trained previously and deployed it to Vertex AI Endpoints. This is a managed service with built-in tooling to track our model's performance.

      [Diagram: requesting a fraud prediction from the Vertex AI endpoint]
      PredictRequest predictRequest =
          PredictRequest.newBuilder()
              .setEndpoint(endpointName.toString())
              .addAllInstances(instanceList)
              .build();

      PredictResponse predictResponse = predictionServiceClient.predict(predictRequest);
      double fraudProbability =
          predictResponse
              .getPredictionsList()
              .get(0)
              .getListValue()
              .getValues(0)
              .getNumberValue();

      LOGGER.info("fraudProbability = " + fraudProbability);

      Working with the result

      We will receive the fraud probability back from the prediction service and then can use it in a variety of ways. 

      Stream the prediction
      We need to pass the result along, so we can send the prediction and the transaction as a Pub/Sub message on a result stream, letting the point of sale service and other services complete processing. Multiple services can react to the event stream, so there is a lot of customization you can add here. One example would be to use the event stream as a Cloud Functions trigger for a custom function that notifies users of fraud via email or text.

      Another customization you could add to the pipeline is a mainframe or a relational database like Cloud Spanner or AlloyDB to commit the transaction and update the account balance. The payment goes through only if the amount fits within the remaining credit limit; otherwise, the customer's card is declined.

      modelOutput
          .apply(
              "Preprocess Pub/Sub Output",
              ParDo.of(
                  new DoFn<RowDetails, String>() {
                    @ProcessElement
                    public void processElement(
                        @Element final RowDetails modelOutput,
                        final OutputReceiver<String> out)
                        throws IllegalAccessException {
                      out.output(modelOutput.toCommaSeparatedString());
                    }
                  }))
          .apply("Write to PubSub",
              PubsubIO.writeStrings().to(options.getOutputTopic()));

      Update operational data store
      We also can write the new transaction and its fraud status to our operational data store in Bigtable. As our system processes more transactions, we can improve the accuracy of our model by updating the transaction history, so we will have more data points for future transactions. Bigtable scales horizontally for reading and writing data, so keeping our operational data store up to date requires minimal additional infrastructure setup.

      Making test predictions

      Now that you understand the entire pipeline and have it up and running, you can send a few transactions from our dataset to the Pub/Sub stream. If you've deployed the codebase, you can generate transactions with gcloud and look through each tool in the Cloud Console to monitor the fraud detection ecosystem in real time.

      Run this bash script from the terraform directory to publish transactions from the testing data:

      NUMBER_OF_LINES=5000
      PUBSUB_TOPIC=$(terraform -chdir=../ output pubsub_input_topic | tr -d '"')
      FRAUD_TRANSACTIONS_FILE="../datasets/testing_data/fraud_transactions.csv"
      LEGIT_TRANSACTIONS_FILE="../datasets/testing_data/legit_transactions.csv"

      for i in $(eval echo "{1..$NUMBER_OF_LINES}")
      do
        # Send a fraudulent transaction
        MESSAGE=$(sed "${i}q;d" $FRAUD_TRANSACTIONS_FILE)
        echo ${MESSAGE}
        gcloud pubsub topics publish ${PUBSUB_TOPIC} --message="${MESSAGE}"
        sleep 5

        # Send a legit transaction
        MESSAGE=$(sed "${i}q;d" $LEGIT_TRANSACTIONS_FILE)
        echo ${MESSAGE}
        gcloud pubsub topics publish ${PUBSUB_TOPIC} --message="${MESSAGE}"
        sleep 5
      done


      In this piece, we've looked at each part of a fraud detection pipeline and how to give each one scale and low latency using the power of Google Cloud. This example is available on GitHub, so explore the code, launch it yourself, and try making modifications to match your needs and data. The included Terraform setup uses dynamically scalable resources like Dataflow, Pub/Sub, and Vertex AI, with an initial one-node Cloud Bigtable instance that you can scale up to match your traffic and system load.

      With millions of daily credit card purchases, how can you detect which transactions are fraudulent before they complete? In this video, developer advocate Billy Jacobson shows how we can use big data processing tools, machine learning, and the scalable Bigtable database to detect fraud in milliseconds. Watch to learn how you can deploy your own fraud detection system with horizontal scalability!

      Related Article

      How Cloud Bigtable helps Ravelin detect retail fraud with low latency

      Detecting fraud with low latency and accepting payments at scale is made easier thanks to Bigtable.

      Read Article
    • BigQuery Geospatial Functions - ST_IsClosed and ST_IsRing Wed, 30 Nov 2022 17:00:00 -0000

      Geospatial data analytics lets you use location data (latitude and longitude) to get business insights. It's used for a wide variety of applications in industry, such as package delivery logistics services, ride-sharing services, autonomous control of vehicles, real estate analytics, and weather mapping. 

      BigQuery, Google Cloud’s large-scale data warehouse, provides support for analyzing large amounts of geospatial data. This blog post discusses two geography functions we've recently added in order to expand the capabilities of geospatial analysis in BigQuery: ST_IsClosed and ST_IsRing.

      BigQuery geospatial functions

      In BigQuery, you can use the GEOGRAPHY data type to represent geospatial objects like points, lines, and polygons on the Earth’s surface. In BigQuery, geographies are based on the Google S2 Library, which uses Hilbert space-filling curves to perform spatial indexing to make the queries run efficiently. BigQuery comes with a set of geography functions that let you process spatial data using standard ANSI-compliant SQL. (If you're new to using BigQuery geospatial analytics, start with Get started with geospatial analytics, a tutorial that uses BigQuery to analyze and visualize the popular NYC Bikes Trip dataset.) 

      The new ST_IsClosed and ST_IsRing functions are boolean accessor functions that help determine whether a geographical object (a point, a line, a polygon, or a collection of these objects) is closed or is a ring. Both of these functions accept a GEOGRAPHY column as input and return a boolean value. 

      The following diagram provides a visual summary of the types of geometric objects.

      [Diagram: visual summary of the types of geometric objects]

      For more information about these geometric objects, see Well-known text representation of geometry in Wikipedia.

      Is the object closed? (ST_IsClosed)

      The ST_IsClosed function examines a GEOGRAPHY object and determines whether each of the elements of the object has an empty boundary. The boundary for each element is defined formally in the ST_Boundary function. The following rules are used to determine whether a GEOGRAPHY object is closed:

      • A point is always closed.

      • A linestring is closed if the start point and end point of the linestring are the same.

      • A polygon is closed only if it's a full polygon.

      • A collection is closed if every element in the collection is closed. 

      • An empty GEOGRAPHY object is not closed. 

      Is the object a ring? (ST_IsRing)

      The other new BigQuery geography function is ST_IsRing. This function determines whether a GEOGRAPHY object is a linestring and whether the linestring is both closed and simple. A linestring is considered closed as defined by the ST_IsClosed function. The linestring is considered simple if it doesn't pass through the same point twice, with one exception: if the start point and end point are the same, the linestring forms a ring. In that case, the linestring is considered simple.
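      To build intuition for these two checks, here is a rough planar sketch in Python. BigQuery operates on spherical geography via the S2 library, so this flat-plane version is only illustrative, and its naive O(n²) simplicity test only catches proper crossings between non-adjacent segments.

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b (orientation test)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def properly_intersect(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 cross at an interior point."""
    d1, d2 = cross(p1, p2, q1), cross(p1, p2, q2)
    d3, d4 = cross(q1, q2, p1), cross(q1, q2, p2)
    return d1 * d2 < 0 and d3 * d4 < 0

def is_closed(line):
    """A linestring is closed when its start and end points coincide."""
    return len(line) > 1 and line[0] == line[-1]

def is_ring(line):
    """Closed and simple: no two segments cross except at shared vertices."""
    if not is_closed(line):
        return False
    segs = list(zip(line, line[1:]))
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            (a, b), (c, d) = segs[i], segs[j]
            if {a, b} & {c, d}:   # adjacent (or closing) segments share a vertex
                continue
            if properly_intersect(a, b, c, d):
                return False
    return True

square = [(2, 2), (4, 2), (4, 4), (2, 4), (2, 2)]
# For this ring, both is_closed(square) and is_ring(square) are True
```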

      Seeing the new functions in action

      The following query shows what the ST_IsClosed and ST_IsRing functions return for a variety of geometric objects. The query creates a series of ad-hoc geography objects and uses the UNION ALL statement to create a set of inputs. The query then calls the ST_IsClosed and ST_IsRing functions to determine whether each of the inputs is closed or is a ring. You can run this query in the BigQuery SQL workspace page in the Google Cloud console.
      WITH example AS (
        SELECT ST_GeogFromText('POINT(1 2)') AS geography
        UNION ALL
        SELECT ST_GeogFromText('LINESTRING(2 2, 4 2, 4 4, 2 4, 2 2)') AS geography
        UNION ALL
        SELECT ST_GeogFromText('LINESTRING(1 2, 4 2, 4 4)') AS geography
        UNION ALL
        SELECT ST_GeogFromText('POLYGON((0 0, 2 2, 4 2, 4 4, 0 0))') AS geography
        UNION ALL
        SELECT ST_GeogFromText('MULTIPOINT(5 0, 8 8, 9 6)') AS geography
        UNION ALL
        SELECT ST_GeogFromText('MULTILINESTRING((0 0, 2 0, 2 2, 0 0), (4 4, 7 4, 7 7, 4 4))') AS geography
        UNION ALL
        SELECT ST_GeogFromText('GEOMETRYCOLLECTION EMPTY') AS geography
        UNION ALL
        SELECT ST_GeogFromText('GEOMETRYCOLLECTION(POINT(1 2), LINESTRING(2 2, 4 2, 4 4, 2 4, 2 2))') AS geography)
      SELECT
        geography,
        ST_IsClosed(geography) AS is_closed,
        ST_IsRing(geography) AS is_ring
      FROM example;

      The console shows the following results. You can see in the is_closed and is_ring columns what each function returns for the various input geography objects.

      [Screenshot: query results showing the is_closed and is_ring columns for each input geography]

      The new functions with real-world geography objects

      In this section, we show queries using linestring objects that represent line segments that connect some of the cities in Europe. We show the various geography objects on maps and then discuss the results that you get when you call ST_IsClosed and ST_IsRing for these geography objects. 

      You can run the queries by using the BigQuery Geo Viz tool. The maps are the output of the tool. In the tool you can click the Show results button to see the values that the functions return for the query.

      [Screenshot: the BigQuery Geo Viz tool]

      Start point and end point are the same, no intersection

      In the first example, the query creates a linestring object that has three segments. The segments are defined by using four sets of coordinates: the longitude and latitude for London, Paris, Amsterdam, and then London again, as shown in the following map created by the Geo Viz tool:

      [Map: linestring from London to Paris to Amsterdam and back to London]

      The query looks like the following:

      WITH example AS (
        SELECT ST_GeogFromText('LINESTRING(-0.2420221 51.5287714, 2.2768243 48.8589465, 4.763537 52.3547921, -0.2420221 51.5287714)') AS geography)
      SELECT
        geography,
        ST_IsClosed(geography) AS is_closed,
        ST_IsRing(geography) AS is_ring
      FROM example;

      In the example table that's created by the query, the columns with the function values show the following:

      • ST_IsClosed returns true. The start point and end point of the linestring are the same.

      • ST_IsRing returns true. The geography is closed, and it's also simple because there are no self-intersections.

      Start point and end point are different, no intersection

      Another scenario is when the start and end points are different. For example, imagine two segments that connect London to Paris and then Paris to Amsterdam, as in this map:

      [Map: segments connecting London to Paris and Paris to Amsterdam]

      The following query represents this set of coordinates:

      WITH example AS (
        SELECT ST_GeogFromText('LINESTRING(-0.2420221 51.5287714, 2.2768243 48.8589465, 4.763537 52.3547921)') AS geography)
      SELECT
        geography,
        ST_IsClosed(geography) AS is_closed,
        ST_IsRing(geography) AS is_ring
      FROM example;

      This time, the ST_IsClosed and ST_IsRing functions return the following values:

      • ST_IsClosed returns false. The start point and end point of the linestring are different.

      • ST_IsRing returns false. The linestring is not closed. It's simple because there are no self-intersections, but ST_IsRing returns true only when the geometry is both closed and simple.

      Start point and end point are the same, with intersection

      The third example is a query that creates a more complex geography. In the linestring, the start point and end point are the same. However, unlike the earlier example, the line segments of the linestring intersect. A map of the segments shows connections that go from London to Zürich, then to Paris, then to Amsterdam, and finally back to London:

      [Map: self-intersecting linestring from London to Zürich to Paris to Amsterdam and back to London]

      In the following query, the linestring object has five sets of coordinates that define the four segments:

      WITH example AS (
        SELECT ST_GeogFromText('LINESTRING(-0.2420221 51.5287714, 8.393389 47.3774686, 2.2768243 48.8589465, 4.763537 52.3547921, -0.2420221 51.5287714)') AS geography)
      SELECT
        geography,
        ST_IsClosed(geography) AS is_closed,
        ST_IsRing(geography) AS is_ring
      FROM example;

      In the query, ST_IsClosed and ST_IsRing return the following values:

      • ST_IsClosed returns true. The start point and end point are the same, and the linestring is closed despite the self-intersection.

      • ST_IsRing returns false. The linestring is closed, but it's not simple because of the intersection.

      Start point and end point are different, with intersection

      In the last example, the query creates a linestring that has three segments that connect four points: London, Zürich, Paris, and Amsterdam. On a map, the segments look like the following:

      The query is as follows:

      WITH example AS (
        SELECT ST_GeogFromText('LINESTRING(-0.2420221 51.5287714, 8.393389 47.3774686, 2.2768243 48.8589465, 4.763537 52.3547921)') AS geography)
      SELECT
        geography,
        ST_IsClosed(geography) AS is_closed,
        ST_IsRing(geography) AS is_ring
      FROM example;

      The new functions return the following values:

      • ST_IsClosed returns false. The start point and end point are not the same.  

      • ST_IsRing returns false. The linestring is not closed and it's not simple.
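      The closed-and-simple logic behind all of these results can be sketched in a few lines of Python. This is only a rough planar approximation for intuition, not how BigQuery evaluates geography (BigQuery treats edges as geodesics on a sphere), and the function and variable names here are illustrative: is_closed checks that the first and last vertices coincide, and is_ring additionally checks that no two non-adjacent segments cross.

```python
def is_closed(coords):
    # Closed: at least two vertices, and the first equals the last.
    return len(coords) >= 2 and coords[0] == coords[-1]

def _cross(o, a, b):
    # 2D cross product of vectors o->a and o->b.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _segments_cross(p1, p2, p3, p4):
    # True if segment p1-p2 properly crosses segment p3-p4.
    d1, d2 = _cross(p3, p4, p1), _cross(p3, p4, p2)
    d3, d4 = _cross(p1, p2, p3), _cross(p1, p2, p4)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def is_ring(coords):
    # Ring: closed AND simple (no self-intersections).
    if not is_closed(coords):
        return False
    segs = [(coords[i], coords[i + 1]) for i in range(len(coords) - 1)]
    n = len(segs)
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # first and last segments share the closing vertex
            if _segments_cross(*segs[i], *segs[j]):
                return False
    return True

# The four cities from the article, as (longitude, latitude) pairs.
london = (-0.2420221, 51.5287714)
zurich = (8.393389, 47.3774686)
paris = (2.2768243, 48.8589465)
amsterdam = (4.763537, 52.3547921)

# Closed with a self-intersection: closed, but not a ring.
crossing_loop = [london, zurich, paris, amsterdam, london]
print(is_closed(crossing_loop), is_ring(crossing_loop))  # True False
```

      Running the four linestring shapes from the article through this sketch reproduces the same truth table: only a geometry that is both closed and free of self-intersections yields a ring.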

      Try it yourself

      Now that you've got an idea of what you can do with the new ST_IsClosed and ST_IsRing functions, you can explore more on your own. For details about the individual functions, read the ST_IsClosed and ST_IsRing entries in the BigQuery documentation. To learn more about the rest of the geography functions available in BigQuery Geospatial, take a look at the BigQuery geography functions page.

      Thanks to Chad Jennings, Eric Engle, and Jing Jing Long for their valuable support in adding more functions to BigQuery Geospatial, and to Mike Pope for reviewing this article.

    Computerworld » Google

    • Hey, Google: It's time to step up your Pixel upgrade promise Fri, 02 Dec 2022 02:45:00 -0800

      Look, it's no big secret that I'm a fan of Google's Pixel program.

      I've personally owned Pixel phones since the first-gen model graced our gunk-filled pockets way back in 2016. And Pixels have been the only Android devices I've wholeheartedly recommended for most folks ever since.

      There's a reason. And more than anything, it comes down to the software and the overall experience Google's Pixel approach provides.

      • Part of that is the Pixel's interface and the lack of any unnecessary meddling and complication — including the absence of confusing (and often privacy-compromising) duplicative apps and services larded onto the phone for the manufacturer's business benefit and at the expense of your user experience.
      • Part of it is the unmatched integration of exceptional Google services and exclusive Google intelligence that puts genuinely useful stuff you'll actually benefit from front and center and makes it an integrated part of the Pixel package.
      • And, yes, part of it is the Pixel upgrade promise and the fact that Pixel phones are still the only Android devices where both timely and reliable software updates are a built-in feature and guarantee.

      [Psst: Got a Pixel? Any Pixel? Check out my free Pixel Academy e-course to uncover all sorts of advanced intelligence lurking within your phone!]

    • Apple, Google face legal pressure over UK mobile services dominance Tue, 22 Nov 2022 06:44:00 -0800

      Apple faces yet more regulation as the UK’s competition watchdog launches an investigation into how Apple and Google dominate the market for mobile services.

      Control the internet by controlling the browsers

      The Competition and Markets Authority (CMA) has said it will now investigate both companies for their dominance around browsers, app stores, and cloud gaming.

      For insight into that dominance, the CMA points out that 97% of all UK mobile web browsing makes use of either Apple or Google’s browser engine.

    • Is ChromeOS right for you? A 4-question quiz to find out Tue, 22 Nov 2022 03:00:00 -0800

      Google's ChromeOS is one of the world's most misunderstood computing platforms. Chromebooks are foundationally different from traditional PCs, after all — and consequently, there are a lot of misconceptions about how they work and what they can and can't do.

      Since people are always asking me whether a Chromebook might be right for their needs, I thought I'd put together a quick guide to help any such wonderers figure it out. Whether it's you or someone you know who's curious, the following four questions should help shed some light on what the platform's all about and for whom it makes sense.

    • 11 out-of-sight Pixel Watch superpowers Fri, 04 Nov 2022 02:45:00 -0700

      Having a Googley gadget on your wrist can be a great way to stay on top of stuff — but with Google's Pixel Watch in particular, the best productivity-boosting possibilities are the ones you can't see.

      A smartwatch, after all, is a tiny screen. And that means it isn't especially optimal for intricate, extended interactions revolving around touch (unless you're a Tinkerbell-sized creature with teensy fingies, in which case the Pixel Watch is probably bigger than your entire being — so, yeah, good luck with that).

      It seems like the understatement of the century, I realize. And yet, device-maker after device-maker continues to emphasize those very sorts of painfully awkward touch-based interactions and app-centric experiences with the smartwatch form.

    • iPhone to Android: The ultimate switching guide Fri, 28 Oct 2022 03:00:00 -0700

      So, you're ready to leave your iPhone for greener pastures — specifically, the bright green hue of Google's Android ecosystem.

      It's a major move, to be sure, but it doesn't have to be daunting. Beneath the surface-level differences, Android and iOS actually have a lot in common — and with the right steps, you can switch from an iPhone to an Android device without losing anything significant (including your sanity).

      Make your way through this easy-to-follow guide, and you'll be happily settled in your new high-tech home in no time.

      All-in-one iOS-to-Android switching tools

      First things first: Google itself now offers a universal iOS-to-Android switching service that works with any device running 2020's Android 12 software or higher. That's hands-down the simplest way to get everything from your old iPhone onto your new Android device in one fell swoop and with the least amount of hassle possible.

    • Google execs knew 'Incognito mode' failed to protect privacy, suit claims Thu, 27 Oct 2022 13:47:00 -0700

      A federal judge in California is considering motions to dismiss a lawsuit against Google that alleges the company misled users into believing their privacy was being protected while using Incognito mode in the Chrome browser.

      The lawsuit, filed in the Northern District Court of California by five users more than two years ago, now awaits a decision on the plaintiffs' recent motion for two class-action certifications.

      The first would cover all Chrome users with a Google account who accessed a non-Google website containing Google tracking or advertising code and who were in “Incognito mode”; the second covers all Safari, Edge, and Internet Explorer users with a Google account who accessed a non-Google website containing Google tracking or advertising code while in “private browsing mode.” 

    • Got a Google Pixel Watch? Get this watch face Wed, 26 Oct 2022 03:00:00 -0700

      I'll admit it: I've really been diggin' Google's purty new Pixel Watch.

      I've been wearing the watch for the past few weeks, bathing and slumber-time notwithstanding, and here's the twist: I absolutely did not expect to enjoy it.

      I was actually really into Wear OS and Google's smartwatch odyssey early on, too, way back when the platform first came out in the prehistoric era of 2014. But then, well, a couple things happened:

      1. I reached a point where I wanted the ability to be less tethered to technology, and wearing a smartwatch made me feel perpetually on the grid and tied to my electronic obligations.
      2. Google gave up on its thoughtfully conceived original vision for wearables, with a focus on contextual info and glanceable nuggets, and instead started chasing Apple's more popular philosophy — with an emphasis on complicated standalone apps and intricate interactions. That stuff looks impressive in ads but doesn't make for a great real-world experience in my eyes, and it just didn't jibe with the way I wanted to use a watch.

      So suffice it to say, I assumed my journey with the new Pixel Watch would be an interesting technological diversion but a short-lived adventure I'd be eager to abandon.

    • 9 more out-of-sight settings for your Google Pixel 7 Fri, 21 Oct 2022 02:45:00 -0700

      This week, we're pawing our way through the giddily good Google Pixel 7 and exploring some of the phone's more easily overlooked options.

      The Pixel 7 and its plus-sized Pixel 7 Pro sibling are practically overflowing with awesome stuff, y'see, but some of their best experience-enhancing possibilities require a wee bit of spelunking to surface and set up.

      So following up on our first set of hidden Pixel 7 settings, today, we're gonna dive into even more out-of-sight switches you'll absolutely want to dig up and adjust on your glitzy new Googley phone.

    • 7 helpful hidden settings for your Google Pixel 7 Tue, 18 Oct 2022 03:00:00 -0700

      All right, Pixel pals: So you've given into temptation and picked up a Google Pixel 7 or Pixel 7 Pro. Maybe you upgraded from a previous Pixel, or maybe it was an entirely new path for you after years in the land o' Samsung or perhaps even (gasp!) that other smartphone operating system.

      However you got there, congratulations: You've now got the greatest Googley gadget on this girthy green Earth. I've been living with the Pixel 7 and its plus-sized Pixel 7 Pro sibling for a solid couple weeks now, and the devices really are fantastic. They're the key to experiencing Android at its finest, and they're arguably among the best overall devices you could buy on any platform right now.

    • How to choose the best Android phones for business Mon, 17 Oct 2022 03:00:00 -0700

      Android dominates smartphone usage throughout the world — in every region except North America and Oceania. Thus, businesses in many regions are likely to support and issue Android devices to employees as their mainstay mobile devices. Even in areas where Apple’s iPhone dominates or is comparable in market share, businesses are likely to support or issue Android devices at least as a secondary option.

      Google has a certification called Android Enterprise Recommended that focuses on enterprise concerns around performance, device management, bulk device enrollment, and security update commitments. Google publishes a tool to help IT see which devices meet that certification in various regions, as well as explore supported Android versions and end dates for security updates.

    • Google Pixel 7 vs. every past Pixel: To upgrade or not to upgrade? Fri, 14 Oct 2022 03:00:00 -0700

      So you've got a Pixel — any ol' Pixel. Or maybe even a relatively new Pixel. And you're thinking about getting Google's glowing new Pixel 7 phone or perhaps its Pixel 7 Pro sibling.

      It's tempting, I know. (We all love shiny stuff!) But is it actually worth your while to get the Pixel 7, or are you better off hanging onto your current Pixel for a while longer?

      Having lived with both the Google Pixel 7 and the Pixel 7 Pro for a full week now — and coming from the perspective of someone who personally owns a Pixel 6, has owned plenty of Pixels before that, and has spent a significant amount of time with every single Pixel model — lemme tell ya: There isn't a simple, one-size-fits-all answer.

    • Will the CHIPS Act really bring back semiconductor production and tech jobs? Wed, 12 Oct 2022 03:00:00 -0700

      The US, where semiconductors were invented, was producing 37% of the world's supply of chips as recently as the 1990s. But only about 12% of all computer chips are produced domestically now. 

      That decline in domestic chip production was exposed by the worldwide supply chain crisis, and that has led to calls for reshoring microprocessor manufacturing in the US. With the federal government spurring them on, the likes of Intel, Samsung, and TSMC have unveiled plans for a flurry of new US fabrication plants. (Qualcomm, in partnership with GlobalFoundries, also said it would invest $4.2 billion to double chip production in its Malta, New York fabrication facility.)

    • What everyone's getting wrong about the Google Pixel 7 Tue, 11 Oct 2022 03:00:00 -0700

      Do a little leisure reading about Google's shiny new Pixel 7 phones, and you're bound to encounter a handful of common conclusions:

      1. The Pixel 7 and Pixel 7 Pro are mostly meant to be reference devices and demo-like showcases for Google's software.
      2. Google doesn't expect many people (or businesses!) to buy 'em.
      3. Pixel phones in general have been total commercial flops.

      As someone who's studied, written about, and personally owned Pixels since the start — and the same with Google's self-developed Nexus phones before 'em — lemme tell ya: These fly-by analyses couldn't be more inaccurate.

      And, fittingly enough, they're almost always put out there by people who don't use Pixels themselves, have little to no connection to the thriving community of Pixel owners and enthusiasts, and more often than not are iPhone owners who try on their Android philosopher hats two to three times a year — only while observing the platform's most high-profile and impossible-to-miss launches.

    • A wild new way to use Android widgets Wed, 05 Oct 2022 03:00:00 -0700

      When we Androidians think about widgets, we tend to think about our humble home screens.

      Makes sense, right? That's where widgets have traditionally existed here in the land o' Android (with one short-lived exception, anyway, but Google's convinced we've forgotten about that).

      Hold the phone, though — 'cause it turns out there's a whole other way to interact with widgets on your favorite Googley gadget. Few mere mortals are aware, but at some point, Google quietly started offering the ability to call up Android widgets on demand, as you need 'em, via a simple spoken command.

    • Mozilla: Apple, Google, and Microsoft lock you into their browsers Wed, 28 Sep 2022 03:00:00 -0700

      Apple, Google, Microsoft and others have essentially locked users into their web browsers through default settings in their OS platforms, giving the platform makers an unfair advantage over competitors, according to a new report by Firefox maker Mozilla.

      Mozilla researchers found each platform maker “wants to keep people within its walled garden” by steering mobile and desktop users to Apple Safari, Google Chrome, or Microsoft Edge. “All five major platforms today (Google, Apple, Meta, Amazon, Microsoft) bundle their respective browsers with their operating systems and set them as the operating system default in the prime home screen or dock position,” Mozilla wrote in a 66-page report.

    • 3 smart settings for better Google Pixel battery life Fri, 23 Sep 2022 02:45:00 -0700

      If there's one feeling all of us phone-carrying cuttlefish can relate to, it's the sense of anxiety when that dreaded low-battery warning shows up on our screens.

      Both Android itself and Google's Pixel phones, specifically, have gotten much better at managing battery life over the years. But some of the Pixel's most intelligent systems for safeguarding your stamina are options in your phone's software — and that means it's up to you to find 'em.

      Google's Pixel software is absolutely overflowing with those sorts of out-of-sight treasures, so to continue our ongoing Pixel settings explorations, I want to spelunk our way into some of your device's most advanced options for stretching your battery life to the max.

    • Android's underappreciated design revolution Tue, 20 Sep 2022 03:00:00 -0700

      Over the past couple years, those of us who pay close attention to mobile-tech matters have been watching a whole new paradigm of design shape up right before our overly moist eyeballs.

      And you know I have to be talking about something important here, 'cause I'm using big words like "paradigm" and, erm, "eyeballs."

      The subject in question is something core to the Android experience — particularly for anyone who's palming a Google-made Pixel phone, where the core Android software exists in its most undiluted form.

      It's a little somethin' called Material You, and having lived with a Pixel through a full year of Android 12 and now the beginning of Android 13, I'm here to tell you it's one of the most shape-shifting and underappreciated advancements we've seen in modern tech — even if hardly anyone seems to be giving it the credit it deserves.

    • Google’s failure to quash EU antitrust ruling has broad implications for tech companies Thu, 15 Sep 2022 10:11:00 -0700

      The EU General Court's decision Wednesday to largely uphold the ruling of the European Commission that fined Google €4 billion (US$3.9 billion) for antitrust violations could have wide-ranging implications for other tech companies.

      The case dates back to 2018, when the EU’s competition chief, Margrethe Vestager, issued a ruling that Google used its Android mobile operating system to undermine competitors.

      The ruling dealt with three types of agreements that involved Google’s mobile application distribution agreements (MADAs), antifragmentation agreements (AFAs), and revenue sharing agreements (RSAs).

    • As telehealth use plummets, the healthcare industry faces a crossroads Mon, 12 Sep 2022 03:00:00 -0700

      After reaching historically high adoption rates during the height of the COVID-19 pandemic, the use of telehealth services has plummeted since the beginning of the year.

      Experts say that places the healthcare industry at a fork in the road, where providers, payors, and tech companies must choose whether to embrace an effective and convenient healthcare medium or be left behind as telehealth marches forward.

      The road toward adoption of telehealth — the use of electronic communications to provide care and other services — has been long. Before the COVID-19 pandemic took hold in 2020, the adoption rate in the US, nearly 60 years after telehealth technology was first introduced, was just 0.9% of outpatient visits.

    • Got a Google Pixel? Flip this secret Android 13 switch Wed, 31 Aug 2022 03:00:00 -0700

      Friends, Android-appreciators, fellow Pixel-persons — listen up, for what I'm about to tell you may very well change how you think about getting around your favorite Googley phone:

      No matter what you've seen with Android 13 so far or how much digging you've done to unearth its many buried treasures, you almost certainly haven't experienced the software's most significant and shape-shifting addition. And there's good reason for that: The addition isn't technically available on your device.

      Android 13, as you may know by now, is a tale of two different operating systems. And while the improvements on the standard Pixel phone front are certainly not insignificant, the advancement that has the potential to make the most meaningful difference in your day-to-day life isn't intended to be used in that environment. It's limited only to foldable phones and tablets (for now, at least).
