

Google LLC is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, a search engine, cloud computing, software, and hardware. Google was launched in September 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California. Some of Google’s products are Google Docs, Google Sheets, Google Slides, Gmail, Google Search, Google Duo, Google Maps, Google Translate, Google Earth, and Google Photos.

Google began in January 1996 as a research project by Larry Page and Sergey Brin when they were both PhD students at Stanford University in California. The project initially involved an unofficial "third founder", Scott Hassan, the original lead programmer who wrote much of the code for the original Google Search engine, but he left before Google was officially founded as a company.


Google Blog



Google Ads
Many books were created to help people understand how Google works, its corporate culture and how to use its services and products. The following books are available:

  • Ultimate Guide to Google Ads
  • The Ridiculously Simple Guide to Google Docs: A Practical Guide to Cloud-Based Word Processing
  • Mastering Google Adwords: Step-by-Step Instructions for Advertising Your Business (Including Google Analytics)
  • Google Classroom: Definitive Guide for Teachers to Learn Everything About Google Classroom and Its Teaching Apps. Tips and Tricks to Improve Lessons’ Quality.
  • 3 Months to No.1: The "No-Nonsense" SEO Playbook for Getting Your Website Found on Google
  • Google AdSense Made Easy: Monetize Your Website and Blogs Instantly With These Proven Google Adsense Techniques
  • Ultimate Guide to Google AdWords: How to Access 100 Million People in 10 Minutes (Ultimate Series)


Google Cloud Blog

  • In a pioneering agreement, Google Public Sector and UC Riverside launch new model for research access Mon, 20 Mar 2023 18:00:00 -0000

    On March 9, we announced a collaboration with the University of California, Riverside (UCR) to modernize its enterprise infrastructure and support its cutting-edge research program. Our ambitious three-year agreement allows UCR to access Google Cloud’s state-of-the-art cloud computing resources at a fixed subscription rate. 

    I believe this could be a paradigm shift for research in higher education. By moving from a pay-per-use model for cloud computing access to a subscription model, Google Cloud and UCR are pioneering a new way of supporting research and enterprise infrastructure.

    “UCR is making a major strategic investment in secure, agile, and scalable enterprise infrastructure and research computing services to facilitate innovation and opportunity for those who learn, teach, work, and do research at UCR.”
    Matthew Gunkel, Associate Vice Chancellor and CIO of Information Technology Solutions

    Trailblazing a new way to support research

    Too often, researchers all over the nation are stymied by the costs and queues of using supercomputer clusters. Now, UCR researchers can run workloads on several thousand processors whenever they want and the cost is included in the agreement with Google Cloud. When researchers can better predict and contain costs, easily access scalable resources, and seamlessly collaborate with colleagues, they can focus on envisioning solutions to our most pressing scientific challenges. That's a win-win for everyone.

    For research faculty like Dr. Bryan Wong, a Professor of Materials Science and Engineering at UCR, this is good news. Wong needs easy access to high-performance computing for his everyday research. To test the behavior and reactivity of materials used to make solar cells or other electronics, he runs quantum simulations on hundreds of central processing units (CPUs) simultaneously.

    Like researchers nationwide, he typically spends hours writing 10-15 page proposals requesting computer resources at supercomputer clusters, then waits three months for approval. Sometimes he doesn’t get as much time as he requested, and the allotted resources are always expensive. “ITS’ new approach to research computing services is much easier and there’s no lag time,” Wong says.

    “Through this new service structure we aim to empower faculty and students to focus on their research by removing administrative barriers and providing quick access to infrastructure and service,” says Gunkel. “The goal is to fuel more discoveries and grants, which in turn will help UCR attract top talent.”

    The subscription agreement is part of a broader IT modernization effort with UCR, which is an important economic engine in Southern California and a leader in scientific research nationally. Its 26,000 students and 1,100 faculty include two Nobel Prize winners and 15 members of the National Academies of Science and Medicine.  

    Gunkel says, “as part of our new strategic plan, ITS is investing in technology and services that will drive increased research output and a subsequent increase in campus funding, global recognition, and prestigious academic researchers coming to work for the University.” 

    Fueling 2-3x growth in computing and storage capacity

    Gunkel predicts that this move to the cloud will improve the ability of ITS to provide scaled infrastructure, business intelligence, and research computing in a secure environment. With a flexible, fixed cost subscription model, ITS can respond quickly and scale to enterprise and research demands while offloading the costs and headaches of server maintenance, access, and disaster recovery. “We will see a 2-3x growth in our overall available computing and storage capacity,” he says. With a location-agnostic strategy, ITS will be able to seamlessly shift and deploy computing and storage infrastructure into any data warehouse (on-prem or co-location). In their first pilot initiative, Gunkel reports that ITS helped the UCR University Extension center migrate all their data to Google Cloud in just two and a half weeks.

    Transforming higher education – one step at a time

    Our higher education system has a unique opportunity to leverage cloud computing to advance research. With access to advanced computational resources, researchers may be able to analyze data sets at petabyte scale in minutes, perform more complex simulations and modeling, and develop new technologies faster and more efficiently. This can help unlock the power of scientific and engineering research to solve complex problems, fueling American innovation.

    Today’s research discoveries will be tomorrow’s next big breakthroughs in the understanding of our universe, the health of our communities, and the development of our technologies. I’m excited to see what will come from UCR in the years to come. 

    To learn more about how Google Cloud is driving innovation in research: University of California, Riverside Enters Into First-of-its-kind Subscription-based Service with Google Cloud to Transform Research and IT

  • NEAR: Growing Web3 adoption through usability Mon, 20 Mar 2023 16:00:00 -0000

    Editor’s note: Today we hear from NEAR, an open-source collective behind the NEAR blockchain protocol, ecosystem, and foundation, which is building its infrastructure services on Google Cloud.


    Web3 is a paradigm shift where identity, data, and assets are controlled and owned by users and the web’s services are decentralized. To enable this vision, there’s a requirement for a foundational infrastructure layer that can scale to billions of users, maintain a decentralized and secure environment, and be independently sustainable over time.   

    That infrastructure is a necessary condition for Web3 adoption, but it isn’t sufficient by itself. In particular, usability is one of the biggest hurdles to Web3 adoption. The technology is still relatively new – about ten years old – and the data, tooling, and product layers are even newer. Millions of people use Web3 today, but that’s a long way from the billions on Web2. 

    NEAR was created to address this challenge by providing an open-source Layer 1 blockchain with a scalable and dynamically sharded network, making it easier for developers to build usable products.

    NEAR was founded by Illia Polosukhin – a former Google and TensorFlow developer – and Alex Skidanov, formerly of MemSQL and an AI researcher. They needed a way to pay contributors worldwide to help them in their AI startup and discovered that it wasn’t practically possible with existing blockchains like Ethereum. They pivoted to focus on this vision in 2018 and released NEAR’s mainnet in 2020.

    Since then, the chain has grown dramatically, with over 1,000 active projects and 22 million users. However, NEAR’s main purpose is not just to enable the Web3 vision technically, but to do so in a way that enables mass adoption by both developers and users — a challenge it has partnered with Google Cloud’s broader Web3 initiatives to help solve.

    Supporting continuous growth

    To support the continued adoption of NEAR, there are multiple upgrades coming in our roadmap that culminate with the arrival of Nightshade Sharding, the final phase of NEAR’s architecture roadmap. Sharding scales the network via parallelization so it can handle surges in demand and eventually support billions of users. Divided into four phases, the chain’s sharding mechanism is currently in the second phase with a fixed set of shards. The next stage will shard both state and processing, and the fourth and final stage introduces dynamic resharding to have the network automatically split and merge shards as needed based on usage. 

    Pagoda, the engineering team leading technical development of NEAR Protocol, selected Google Cloud to power the infrastructure services used by the NEAR ecosystem. A core challenge that we face as an infrastructure provider is ensuring the automatic scalability of these services to meet the growing demands of builders in the ecosystem. Google Cloud lets our developers focus on the things that matter. They can save time in development and reduce time to delivering value by eliminating the need to reinvent infrastructure with each new project. At its core, providing this ready-to-use infrastructure supports the usability and accessibility vision of NEAR.

    Furthermore, Google Cloud supports new builders on the NEAR ecosystem through a dedicated technical support program for NEAR grant recipients. This partnership lets us provide high-quality resources and support to new projects that are taking their first steps in the ecosystem. 

    Usability as a core differentiator

    NEAR prioritizes usability as one of its core values and has always treated it as an all-encompassing philosophy. A blockchain that is not scalable or safe is not usable. Similarly, if a technology is difficult to use and build with, the apps created with that technology will require heavier polishing and sacrifice either usability or features in that process. 

    NEAR seeks to address this by embedding usability and accessibility into the core protocol itself. That removes complexities introduced by Web3 and abstracts them away for developers, allowing them to build simpler and friendlier interfaces for their users. 

    One example of this approach can be seen in NEAR’s account model, which is known for its flexibility and use of human-readable addresses (e.g. jane.near) over complex hexadecimal strings. This is a unique feature for NEAR among Layer 1 blockchains. 

    The focus on usability is also at the core of NEAR’s two-year technical roadmap, which is about removing barriers for builders at the protocol level. 

    Meta-transactions, one of the upcoming features, will enable accounts and decentralized applications to send transactions on behalf of users and pay necessary network usage fees for them. This removes potential confusion while interacting with an application, and more closely resembles existing end-user web experiences. 

    Another example of accessibility embedded in the protocol is the addition of support for Secp256r1 keys, which enable certain mobile devices such as iPhones to have implicit accounts on-chain. This simplifies the onboarding process for mobile users by removing the need to go through the process of creating an account, which can include dozens of steps on other popular blockchains and often requires purchasing cryptocurrency on an exchange.

    Enhancing usability and accessibility are among the benefits that NEAR is gaining by partnering closely with Google Cloud. We continue to see trust in this relationship across the board, reflected in some of our validators choosing Google Cloud as the home for their nodes.

  • Mr. Cooper is improving the home-buyer experience with AI and ML Mon, 20 Mar 2023 16:00:00 -0000

    As one of the largest home loan servicers in the country, Mr. Cooper has been helping people with homeownership since 1994. Believing that the process could be streamlined, we saw an opportunity to revolutionize the way people bought their dream homes, starting with transforming paper-based processes. And we believed that digitizing and automating as much of that journey as possible was the way forward. 

    Traditionally, mortgage lenders require borrowers to submit various documents such as payslips and W-2 wage and tax statements for loan applications. Since each document requires manual classification and verification, the process creates significant delays in the borrower’s home-buying journey.

    Adding to the complexity is the fact that there are more than 3,000 counties across the 50 U.S. states. Each county has its own set of fees to record a deed when someone purchases a home. These fees often change and are difficult to determine. In some counties, you have to call the county office and discuss the fee amount over the phone. Every mortgage company must disclose fees, including the county recording fee, in every closing statement for every mortgage.

    To improve the customer experience and efficiency, we needed to streamline the mortgage process, from pre-approval to closing and post-closing to servicing. Our solution is to digitize and optimize business processes (such as classifying documents, extracting data, and predicting fees) using machine learning (ML) throughout the entire lifecycle of the mortgage process.

    Improving process efficiency using mortgage ML document classification and extraction

    When we started this project in 2018, there wasn’t an off-the-shelf solution that could meet our functional and technical needs. We decided to build Pyro, our document management solution on Google Cloud. It is based on products like BigQuery and Vertex AI, which enabled us to quickly scale resources in the cloud to meet changes in demand.

    Fast forward to today, and Document AI is a product suite that provides simple and cost-effective solutions to help manage the document lifecycle. These include pre-trained processors to classify and extract data from business documents.

    Using Cloud AutoML on Vertex AI, we could quickly build and deploy models with minimal effort at a low cost. With Cloud AutoML, even business analysts with no ML programming background can train models and create endpoints with high confidence scores and accuracy.
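
    To make this concrete, here is a minimal, hypothetical sketch of training and deploying an AutoML text-classification model with the Vertex AI Python SDK. The project, bucket, display names, and instance format are illustrative assumptions, not Mr. Cooper's actual Pyro setup.

    ```python
    # Hypothetical sketch only: training an AutoML text-classification model on Vertex AI.
    # Project, bucket, and display names are placeholders, not Mr. Cooper's actual setup.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Labeled examples (page text + document-type label) imported from Cloud Storage.
    dataset = aiplatform.TextDataset.create(
        display_name="mortgage-doc-pages",
        gcs_source=["gs://my-bucket/labels/doc_pages.jsonl"],
        import_schema_uri=aiplatform.schema.dataset.ioformat.text.single_label_classification,
    )

    # AutoML training job: no custom model code or Dockerfile required.
    job = aiplatform.AutoMLTextTrainingJob(
        display_name="mortgage-doc-classifier",
        prediction_type="classification",
    )
    model = job.run(dataset=dataset, model_display_name="mortgage-doc-classifier-v1")

    # Deploy to an endpoint for real-time classification of incoming pages.
    endpoint = model.deploy()
    result = endpoint.predict(
        instances=[{"content": "W-2 Wage and Tax Statement 2022 ...", "mimeType": "text/plain"}]
    )
    print(result.predictions)
    ```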

    Our ML model processes more than 2,200 pages per minute and classifies documents into predetermined categories with more than 90% accuracy, so customer service agents have accurate and real-time information when they speak to customers. The goal is to digitize as much of the mortgage process as possible so our customer service agents can focus on the customer, not paperwork. Our agents provide a human touch with sympathy and empathy to help customers overcome challenges in the mortgage process. 

    Within a year of launch, Pyro processed more than 932 million pages of mortgage documents, including a backlog of documents that would have taken 4.5 years to process manually. 

    Engaging with Google Cloud early in our product lifecycle journey helped us build our ML team in a meticulously planned manner. They provided the resources we lacked in-house. That allowed us to take our time to hire the right ML talent, rather than adding people too fast. We now have people on our team with different skill sets, ranging from subject matter experts who understand mortgage workflows to data engineers who build data pipelines by bringing data from multiple sources.

    Since the launch of Pyro, we have built a library of more than 300 mortgage-specific machine learning models on Google Cloud.

    Moving forward into county fee recording estimation process

    We wanted to broaden our horizons outside the documents world and build use cases and solutions that benefit a larger audience. We worked closely with our business team to identify challenges that we can solve with AI. One such area identified was county fee recording (CFR) estimation during the payoff quote process.

    After the payoff funds are received, the Lien Release is sent for recording, which incurs a fee determined at the county level. Estimating this recording fee is difficult because the fee varies by county and depends on many factors: everything from loan-level county rules and property information to lien release page length, borrower information (including the number of borrowers), and more. CFR may also change over time. Since there’s no standard formula to calculate the recording fee, customers are sometimes undercharged or overcharged. Any error in CFR calculation adds to the cost of Mr. Cooper servicing the loan because the mortgage lender has to absorb the difference.

    Every day, our loan servicing system generates a list of loans along with the loan information, property information, and county information. Typically, we need to calculate recording fees for anywhere from thousands of loans on weekdays to millions of loans on weekends. In the past, the business team calculated the fee manually using spreadsheet-based tools. Our solution was to create an ML pipeline using regression models: it reads the loan, property, county, and customer information, predicts the CFR, and feeds the result back to our loan servicing system for faster and more accurate estimates. Here’s how it works:

    • The loan servicing system generates a list of loan estimate requests that are automatically sent to Cloud Storage via our Secure File Transfer Protocol (SFTP) server.

    • A Cloud Function is triggered, invoking a Vertex AI inference pipeline that pre-processes the input information, runs predictions against our recording fee ML regression model, post-processes the predicted results into CSV files, and stores them in Cloud Storage.
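
    As an illustration of the trigger step above, here is a minimal sketch of a Cloud Storage-triggered Cloud Function that launches a Vertex AI pipeline run. The bucket, pipeline template path, and parameter names are hypothetical, not Mr. Cooper's actual pipeline definition.

    ```python
    # Hypothetical sketch of the trigger: a new file in Cloud Storage starts a
    # Vertex AI pipeline run. Bucket, template path, and parameter names are
    # illustrative placeholders.
    import functions_framework
    from google.cloud import aiplatform


    @functions_framework.cloud_event
    def on_estimate_request(cloud_event):
        """Triggered when the loan servicing system drops a request file into Cloud Storage."""
        data = cloud_event.data
        gcs_uri = f"gs://{data['bucket']}/{data['name']}"

        aiplatform.init(project="my-project", location="us-central1")

        # Kick off the pre-process -> predict -> post-process pipeline for this batch of loans.
        job = aiplatform.PipelineJob(
            display_name="cfr-inference",
            template_path="gs://my-pipelines/cfr_inference_pipeline.json",
            parameter_values={
                "input_file": gcs_uri,
                "output_prefix": "gs://my-bucket/cfr-results/",
            },
        )
        job.submit()  # returns immediately; predictions land in Cloud Storage as CSV files
    ```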

    Using historical data to fuel the future

    Our initial training data for CFR is close to five million records, representing a subset of our historical data. To ensure that information remains up to date, we refresh our ML model by creating a training pipeline on Vertex AI to capture any changes in property information and county fees. We then expose the retrained model through the inference pipeline. The model currently runs at 96% accuracy.

    Another key challenge we had to solve was, “how do we know how many pages the document will have before we generate it?” The solution is a regression model that looks at the past five years of historical data to identify patterns. For example, if there are two borrowers with this particular property information in a specific county, the recording fee is estimated to be between $50 and $65.

    Vertex AI brings MLOps capabilities, so we don’t need to build ML pipelines from scratch. As a result, it took us less than 45 days to build this ML model. Since going live in December 2021, we’ve achieved around 66% greater annual savings compared to previous years, when the process was run manually.

    We’re now looking to extend the value of AI into our call center to further enhance the customer experience. By analyzing call transcripts, we want to identify call cohorts based on the primary reason for the call and determine how we can resolve customers’ queries much faster.

    Our journey from manual, paper processes to the digital world has been transformative. Google Cloud is our partner of choice for creating change and bringing value to our customers, and we look forward to how we can further innovate together.

  • Using GKE workload rightsizing to find — and fix — resource utilization Mon, 20 Mar 2023 16:00:00 -0000

    Kubernetes has become the leading platform for efficiently managing and scaling containerized applications, particularly as more businesses transition to the cloud. While migrating to Kubernetes, organizations often face common challenges such as incorrect workload sizing. This can result in decreased reliability and performance of workloads, and idle clusters that waste resources and drive up costs.

    GKE has built-in tools for workload rightsizing for clusters within a project. In this blog, we'll explore how to use GKE's built-in tools to optimize cost, performance, and reliability at scale across projects and clusters. We'll focus on workload rightsizing and identifying idle clusters.

    Workload rightsizing recommendations at scale

    The Vertical Pod Autoscaler (VPA) recommendations intelligence dashboard uses intelligent recommendations available in Cloud Monitoring combined with  GKE cost optimization best practices to help your organization answer these questions:

    • Is it worth it to invest in workload rightsizing?

    • How much effort does it take to rightsize your workloads?

    • Where should you focus initially?

    • How many workloads are over-provisioned and how many are under-provisioned?

    • How many workloads are under reliability or performance risk due to incorrectly requested resources?

    • Are you getting better over time at workload rightsizing?

    The dashboard presents an overview of all clusters across all projects in one place. Before adjusting resource requests, ensure you have a solid grasp on the actual needs of your workloads for optimal resource utilization.


    The following section walks you through the dashboard and discusses how to use the data on your optimization journey.

    Best-effort workloads (reliability risk)

    Workloads with no resource requests or limits configured are at a high risk of failing due to out-of-memory terminations or having their CPU throttled to zero.

    How to improve?

    1. Navigate to the VPA container recommendations detail view dashboard. 

    2. Set the Memory ‘QoS’ filter to ‘BestEffort’.


    The resulting list identifies which workloads’ resources require an update.


    The best practice for updating CPU and memory resources is to do it gradually, in small increments, while monitoring the performance of your applications and services. To update workloads listed:

    3. Use the memory (MiB) request recommendation and memory (MiB) limit recommendation to set memory requests and limits. The best practice for memory is to set the same amount of memory for requests and limits. Note: All values are in mebibytes for memory units.


    To identify CPU BestEffort workloads:

    4. Set the CPU ‘QoS’ filter to ‘BestEffort’, taking note of the mCPU request recommendation and mCPU limit recommendation values. Note: All values are in millicores for the CPU.


    5. In your deployment, set CPU requests equal to or greater than the mCPU request recommendation and use mCPU limit recommendation to set the CPU limit or leave it unbound.
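
    For teams that prefer to apply the recommended values programmatically rather than editing manifests by hand, the sketch below shows one possible approach using the official Kubernetes Python client. The deployment name, namespace, container name, and resource values are placeholders standing in for the dashboard's recommendations.

    ```python
    # Hypothetical sketch: applying VPA-recommended requests/limits to a Deployment
    # with the official Kubernetes Python client. Names and values are placeholders
    # taken from the dashboard columns, not real recommendations.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    apps = client.AppsV1Api()

    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {
                            "name": "app",
                            "resources": {
                                # Best practice: memory request == memory limit (values in MiB).
                                "requests": {"memory": "512Mi", "cpu": "250m"},
                                # CPU limit at/above the mCPU limit recommendation, or omit to leave it unbound.
                                "limits": {"memory": "512Mi", "cpu": "500m"},
                            },
                        }
                    ]
                }
            }
        }
    }

    apps.patch_namespaced_deployment(name="my-app", namespace="default", body=patch)
    ```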

    Burstable workloads (reliability or performance risk)

    We recommend running CPU burstable workloads in environments where CPU requests are configured to be less than the CPU limit. This enables workloads to experience sudden spikes (during initialization or unexpected high demand) without hindrance. 

    However, some workloads consistently exceed their CPU request, which can lead to both performance issues and disruptions in your system.

    How to improve?

    1. Use the VPA container recommendations detail view. For example, to address memory workloads at reliability risk, use the Memory filter ‘QoS’ set to Burstable and Provisioning Risk filter for reliability.


    As discussed previously, the best practice for setting your container resources is to use the same amount of memory for requests and limits, and a larger or unbounded CPU limit. 

    To update Burstable workloads at performance risk:

    1. Use the VPA container recommendations detail view. For example, to address CPU workloads at performance risk, use the CPU filter ‘QoS’ set to Burstable and the Provisioning Risk filter for performance.


    2. Use the mCPU request recommendation and mCPU limit recommendation columns to edit your workload’s CPU requests and limits at or above the VPA recommendations.

    Potential CPU savings and Potential Memory savings tiles

    A positive value in these tiles indicates cost-savings potential. If the value is negative, the current container settings are under-provisioned, and correcting the workload resources will improve performance and reliability.

    How to improve?

    1. Navigate to the VPA container recommendations detail view dashboard. 

    2. To find opportunities to reduce cost, use the Provisioning Risk filter and filter for “cost”. These workloads are determined to be over-provisioned and waste resources.


    3. Update your workload’s resource configurations to rightsize your containers for CPU and memory. This will reduce the value displayed in these tiles. 

    Note: It's essential to address reliability or performance risks to ensure workloads remain reliable and performant.

    Top over-provisioned workloads list

    Over-provisioning workloads will add cost to your GKE bill. This list will organize the workloads to prioritize cost reduction. Workloads with the most significant difference between what's currently requested for CPU and memory and what is recommended will be listed first.

    How to improve?

    1. On the Recommendation Overview page, use the “Top over-provisioned workloads” list to identify workloads. 

    2. Review recommendations for requests and limits for memory and CPU listed in the table.


    3. Update the container configuration’s requests and limits to be closer to the recommended values.

    For more details on the workloads, use the VPA container recommendations detail view and filter ‘over’ as the  Provisioning Status. 

    Top under-provisioned workloads list

    Under-provisioning workloads can lead to containers failing or throttling. This list organizes workloads where the CPU and memory requests are below VPA recommendations.

    How to improve?

    1. Similar to the "Top over-provisioned workloads" section, the "Top under-provisioned workloads" list identifies workloads that are under-provisioned.

    2. Review the recommended requests and limits for CPU and memory listed in the table.


    3. Update the container configuration’s requests and limits to be closer to the recommended values.

    Note: Concentrating solely on either over-provisioned or under-provisioned resources can lead to inaccurate cost savings or unnecessary costs. To achieve both cost-effectiveness and reliability, consider alternating between addressing the top over-provisioned and top under-provisioned workloads. By toggling between the lists, you can optimize your application's savings and reliability.

    VPA container recommendations detail view

    This view provides detailed recommendations for all clusters across all projects, as shown below.


    To help you get started on your optimization journey, the table contains three columns to assist in prioritizing which workloads should be tackled first.


    The formula takes into account the difference between the requested resources and the recommended resources for CPU and memory, and then uses a ratio of predefined vCPUs and predefined memory to adjust the priority value.

    The “Total Memory" and "Total CPU" columns represent the difference between the current configurations for Memory/CPU and the VPA recommendations for Memory/CPU. A negative value shows the workload is under-provisioned in its respective resource, and a positive value indicates a workload is over-provisioned for that resource.

    Cloud Monitoring Active/Idle cluster dashboard

    Pay-as-you-go is one crucial benefit of cloud computing. It's critical for cost optimization to identify active and idle GKE clusters, so you can shut them down if they are no longer used. One way to do that is to import the 'GKE Active/Idle clusters' dashboard from the sample library in Cloud Monitoring. 

    The dashboard provides two charts. One counts the running containers in the user namespaces in a certain period. The other shows the CPU usage time the user containers consume in the same time window. It's probably safe to identify those clusters as idle if the container count is zero and the CPU usage is low, such as less than 1%.

    Below is a screenshot of an example. You can read the note panel for additional details. After importing the dashboard, you can edit the charts based on your use cases in Cloud Monitoring. Alternatively, you can modify the source file and import it directly. You can find all GKE-related sample dashboards on GitHub in our sample repository.
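
    If you prefer to query the underlying data directly rather than importing the dashboard, the sketch below shows one way to pull per-cluster container CPU usage with the Cloud Monitoring Python client. The project ID is a placeholder, and treating low CPU usage as "idle" follows the dashboard's rule of thumb rather than an official threshold.

    ```python
    # Hypothetical sketch: listing per-cluster container CPU usage over the last day
    # with the Cloud Monitoring API, to help spot idle clusters. The project ID is
    # a placeholder; the metric is the standard GKE container CPU usage metric.
    import time
    from google.cloud import monitoring_v3

    project = "projects/my-project"
    client = monitoring_v3.MetricServiceClient()

    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": now}, "start_time": {"seconds": now - 24 * 3600}}
    )

    results = client.list_time_series(
        request={
            "name": project,
            "filter": 'metric.type = "kubernetes.io/container/cpu/core_usage_time"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )

    # Sum CPU-seconds per cluster; clusters near zero are candidates for shutdown.
    usage_by_cluster = {}
    for series in results:
        cluster = series.resource.labels.get("cluster_name", "unknown")
        total = sum(point.value.double_value for point in series.points)
        usage_by_cluster[cluster] = usage_by_cluster.get(cluster, 0.0) + total

    for cluster, cpu_seconds in sorted(usage_by_cluster.items(), key=lambda kv: kv[1]):
        print(f"{cluster}: {cpu_seconds:.1f} CPU-seconds in the last 24h")
    ```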


    Get started today

    To deploy the VPA recommendation dashboard and begin your optimization journey, check out this tutorial with step-by-step instructions.

    Please consider using the solution to rightsize your workloads on GKE, and navigate to Cloud Monitoring in the Console to try the GKE dashboards. For more information on GKE optimization, check out Best Practices for Running Cost Effective Kubernetes Applications, the accompanying YouTube series, and the GKE best practices to lessen over-provisioning.

    We welcome your feedback and questions. Please consider joining the Cloud Operations group in Google Cloud Communities.

  • Pub/Sub schema evolution is now GA Mon, 20 Mar 2023 14:00:00 -0000

    Pub/Sub schemas are designed to allow safe, structured communication between publishers and subscribers. In particular, the use of schemas guarantees that any published message adheres to a schema and encoding, which the subscriber can rely on when reading the data.

    Schemas tend to evolve over time. For example, a retailer capturing web events and sending them to Pub/Sub for downstream analytics in BigQuery may need to add fields to the schema and propagate them through Pub/Sub. Until now, Pub/Sub has not allowed the schema associated with a topic to be altered; instead, customers had to create new topics. That limitation changes today: the Pub/Sub team is excited to introduce schema evolution, designed to allow the safe and convenient update of schemas with zero downtime for publishers or subscribers.

    Schema revisions

    A new schema revision can now be created by updating an existing schema. Most often, schema updates only involve adding or removing optional fields, which is considered a compatible change.
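
    As a rough illustration, the sketch below commits a new revision of an existing Avro schema using the Pub/Sub Python client. The project ID, schema ID, and Avro definition are placeholders; see the Pub/Sub documentation for the authoritative samples.

    ```python
    # Hypothetical sketch: committing a new schema revision that adds an optional field.
    # Project ID, schema ID, and the Avro definition are placeholders.
    from google.cloud.pubsub import SchemaServiceClient
    from google.pubsub_v1.types import Schema

    project_id, schema_id = "my-project", "web-events"
    schema_client = SchemaServiceClient()
    schema_path = schema_client.schema_path(project_id, schema_id)

    # Adding an optional (defaulted) field is a compatible change.
    avsc = """
    {
      "type": "record",
      "name": "WebEvent",
      "fields": [
        {"name": "page", "type": "string"},
        {"name": "referrer", "type": ["null", "string"], "default": null}
      ]
    }
    """

    revision = schema_client.commit_schema(
        request={
            "name": schema_path,
            "schema": Schema(name=schema_path, type_=Schema.Type.AVRO, definition=avsc),
        }
    )
    print(f"Committed revision {revision.revision_id} of {revision.name}")
    ```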


    All versions of the schema are available on the schema details page. You can delete one or more revisions from a schema; however, you cannot delete a revision if it is the schema’s only revision. You can also quickly compare two revisions by using the view diff functionality.


    Topic changes

    Currently, you can attach an existing schema or create a new schema to associate with a topic, so that all messages published to the topic are validated against the schema by Pub/Sub. With the schema evolution capability, you can now update a topic to specify a range of schema revisions against which Pub/Sub will try to validate messages, starting with the last revision and working towards the first. If the first revision is not specified, any revision <= the last revision is allowed, and if the last revision is not specified, then any revision >= the first revision is allowed.
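
    A hypothetical sketch of setting that revision range on a topic with the Pub/Sub Python publisher client follows; the topic path, schema path, and revision IDs are placeholders, and the exact update-mask paths may differ from this sketch.

    ```python
    # Hypothetical sketch: pinning the range of schema revisions a topic will accept.
    # Topic path, schema path, and revision IDs are placeholders.
    from google.cloud.pubsub import PublisherClient
    from google.pubsub_v1.types import Encoding, SchemaSettings, Topic
    from google.protobuf.field_mask_pb2 import FieldMask

    publisher = PublisherClient()
    topic_path = publisher.topic_path("my-project", "web-events-topic")
    schema_path = "projects/my-project/schemas/web-events"

    topic = Topic(
        name=topic_path,
        schema_settings=SchemaSettings(
            schema=schema_path,
            encoding=Encoding.BINARY,
            first_revision_id="abc123",  # oldest revision still accepted
            last_revision_id="def456",   # newest revision accepted
        ),
    )

    # Only touch the revision range; leave the rest of the topic unchanged.
    publisher.update_topic(
        request={
            "topic": topic,
            "update_mask": FieldMask(
                paths=["schema_settings.first_revision_id", "schema_settings.last_revision_id"]
            ),
        }
    )
    ```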


    Schema evolution example

    Let's take a look at a typical way schema evolution may be used. You have a topic T that has a schema S associated with it. Publishers publish to the topic and subscribers subscribe to a subscription on the topic.

    Now you wish to add a new field to the schema and you want publishers to start including that field in messages. As the topic and schema owner, you may not necessarily have control over updates to all of the subscribers nor the schedule on which they get updated. You may also not be able to update all of your publishers simultaneously to publish messages with the new schema. You want to update the schema and allow publishers and subscribers to be updated at their own pace to take advantage of the new field. With schema evolution, you can perform the following steps to ensure a zero-downtime update to add the new field:

    1. Create a new schema revision that adds the field.

    2. Ensure the new revision is included in the range of revisions accepted by the topic.

    3. Update publishers to publish with the new schema revision.

    4. Update subscribers to accept messages with the new schema revision.

    Steps 3 and 4 can be interchanged since all schema updates ensure backwards and forwards compatibility. Once your migration to the new schema revision is complete, you may choose to update the topic to exclude the original revision, ensuring that publishers only use the new schema.

    These steps work for both protocol buffer and Avro schemas. However, some extra care needs to be taken when using Avro schemas. Your subscriber likely has a version of the schema compiled into it (the "reader" schema), but messages must be parsed with the schema that was used to encode them (the "writer" schema). Avro defines the rules for translating from the writer schema to the reader schema. Pub/Sub only allows schema revisions where both the new schema and the old schema could be used as the reader or writer schema. However, you may still need to fetch the writer schema from Pub/Sub using the attributes passed in to identify the schema and then parse using both the reader and writer schema. Our documentation provides examples on the best way to do this.
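
    As a rough sketch of that last step, the helper below decodes a binary-encoded message using both the writer and reader schemas with the avro package. Fetching the writer schema (for example, by the revision ID attached to the message) is assumed to have happened already, and both schema strings are placeholders.

    ```python
    # Hypothetical sketch: decoding a binary-encoded Avro message with separate
    # writer and reader schemas, using the `avro` package. Both schema strings
    # are placeholders supplied by the caller.
    import io

    import avro.io
    import avro.schema


    def decode(message_data: bytes, writer_avsc: str, reader_avsc: str) -> dict:
        writer_schema = avro.schema.parse(writer_avsc)  # schema the publisher encoded with
        reader_schema = avro.schema.parse(reader_avsc)  # schema compiled into the subscriber
        decoder = avro.io.BinaryDecoder(io.BytesIO(message_data))
        # DatumReader applies Avro's schema-resolution rules between writer and reader.
        reader = avro.io.DatumReader(writer_schema, reader_schema)
        return reader.read(decoder)
    ```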

    BigQuery subscriptions

    Pub/Sub schema evolution is also powerful when combined with BigQuery subscriptions, which allow you to write messages published to Pub/Sub directly to BigQuery. When using the topic schema to write data, Pub/Sub ensures that at least one of the revisions associated with the topic is compatible with the BigQuery table. If you want to update your messages to add a new field that should be written to BigQuery, you should do the following:

    1. Add the OPTIONAL field to the BigQuery table schema.

    2. Add the field to your Pub/Sub schema.

    3. Ensure the new revision is included in the range of revisions accepted by the topic.

    4. Start publishing messages with the new schema revision.

    With these simple steps, you can evolve the data written to BigQuery as your needs change.

    Quotas and limits

    The schema evolution feature comes with the following limits:

    • 20 revisions per schema name at any time are allowed.

    • Each individual schema revision does not count against the maximum 10,000 schemas per project.

    Additional resources

    Please check out the additional resources to explore this feature further.

  • Solving for what’s next in Data and AI at this year’s Gartner Data & Analytics Summit Mon, 20 Mar 2023 13:00:00 -0000

    The largest gathering of data and analytics leaders in North America is happening March 20 - 22nd in Orlando, Florida. Over 4,000 attendees will join in person to learn and network with peers at the 2023 Gartner® Data & Analytics Summit. This year’s conference is expected to be bigger than ever, as is Google Cloud’s presence!

    We simply can’t wait to share the lessons we’ve learned from customers, partners and analysts! We expect that many of you will want to talk about data governance, analytics, AI, BI, data management, data products, data fabrics and everything in between!

    We’re going big!

    That’s why we’ve prepared a program that is bound to create opportunities for you to learn and network with the industry’s best data innovators.  Our presence at this event is focused on creating meaningful connections for you with the many customers and partners who make the Google Cloud Data community so great.

    We’ll kick off with a session featuring Equifax’s Chief Product & Data Analytics Officer, Bryson Koehler and Google Cloud’s Ritika Gunnar.  Bryson will share how Equifax drove data transformation with the Equifax Cloud™.  That session is on Monday, 3/20 at 4pm. After you attend it, you will realize why Bryson’s team earned the Google Cloud Customer of the Year award twice!

    That night, from 7:30PM on, we will host a social gathering so you can meet with Googlers, SAP leaders and our common customers at the “Nothing But Net Value with Google Cloud and SAP” event.

    On Tuesday, you’ll have at least 4 opportunities to catch me and the rest of the team:

    • At 10:35am, Starburst’s Head of Product, Vishal Singh & I will cover how companies can turn Data Into Value with Data Products.  We’ll discuss the maturity phases organizations graduate through and will even give you a demo live! 

    • At 12:25pm, our panel of experts, LiveRamp’s Kannan D.R and Quantum Metric’s Russell Efird, will join Google Cloud’s Stuart Moncada to discuss how companies can build intelligent data applications and how our “Built with BigQuery” program can help your team do the same.

    That night, from 7PM on, I will be speaking at the CDO Club Networking event hosted by Team8, Dremio, data.world and Manta.  Register here to attend! 

    But wait, there is more!  

    On Wednesday, our community will continue to feature great customer success stories and I’ll be there to support them.  

    And if all of this is not enough, you will find some of our partners present inside the Google Cloud booth (#434). LiveRamp, Neo4j, Nexla, Quantum Metric, and Striim have all prepared innovative lightning talks that are bound to make you want to ask questions.

    There are over 900 software companies who have built data products on our platform, and while we don't have 900 sessions at the event (we tried!), you can stop by our booth to inquire about the recent integrations we announced with Collibra, Elastic, MongoDB, Palantir, ServiceNow, Sisu, Reltio and more!

    Top 5 Gartner sessions

    I can’t wait to see all of you in person and our team looks forward to hearing how we can help you and your company succeed with data.

    Beyond the above, there are of course many Gartner sessions that you should put on your schedule.  In my opinion, there are at least 5 you can’t afford to miss:  

    1. Financial Governance and Recession Proofing Your Data Strategy with Adam Ronthal

    2. What You Can't Ignore in Machine Learning with Svetlana Sicular. I still remember attending her first session on this topic years ago — it’s always full of great stats and customer stories.

    3. Ten Great Examples of Analytics in Action with Gareth Herschel. If you're looking for case studies in success, sign up for this one!

    4. Ask the Experts series, particularly the one on Cost Optimization with Allison Adams

    5. Data Team Organizations and Efficiencies with Jorgen Heizenberg, Jim Hare and Debra Logan

    I hope you’ve found this post useful.  If there is anything we can do to help, stop by the Google Data Cloud booth (#434).


    GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

  • Join us at RSA Conference to transform cybersecurity with frontline intelligence and cloud innovation Fri, 17 Mar 2023 19:30:00 -0000

    The promise of digital transformation is being challenged by the increasingly disruptive threat landscape. More sophisticated and capable adversaries have proliferated as nation-states pivot from cyber-espionage to compromise of private industry for financial gain. Their tactics shift and evolve rapidly as workloads and workforces become distributed and enterprises’ attack surface grows. And the security talent needed to help remains scarce and stubbornly grounded in ineffective, toil-based legacy approaches and tooling.


    Join Mandiant and Google Cloud together for the first time at RSA Conference 2023. We’re excited to bring our joint capabilities, products, and expertise together, so you can defend your organization against today’s threats with:

    • Unique, up-to-date, and actionable threat intelligence that can only come from ongoing, frontline engagement with the world’s most sophisticated and dangerous adversaries. 

    • Comprehensive visibility across your attack surface and infrastructure, delivered by a modern security operations platform that empowers you to rapidly detect, investigate, respond to, and remediate security incidents in your environment. 

    • A secure-by-design, secure-by-default cloud platform to drive your organization’s digital transformation.

    • Proven expertise and assistance to extend your team with the help you need - before, during, and after security incidents.

    Join us for insightful keynotes and sessions, new technology demos, and one-to-one conversations with Mandiant | Google Cloud experts at our booth N6058.

    Meet with Mandiant | Google Cloud

    • Join Sandra Joyce, VP, Mandiant Intelligence at Google Cloud and her team for an exclusive, invitation-only threat intelligence briefing Monday, April 24, from 4 p.m. until 5 p.m. to hear about the current threat landscape and engage with the analyst team that conducts the research on the frontlines.

    • Join keynote speaker Heather Adkins, VP of Security Engineering at Google Cloud at the Elevate Keynote and Breakfast on Wednesday, April 26, from 8:30 a.m. until 10 a.m. to hear about her more than 16 years in security at Google, and share insights from “Hacking Google,” where she built resilience within herself and her team while engineering world-class security.

    • Visit booth N6058 for a personal demonstration of our security solutions or book one-on-one time with a Mandiant and Google Cloud cybersecurity expert.

    • Reserve your spot at the Mandiant and Google Cloud Happy Hours

      • Taco Tuesday - Tuesday, April 25 from 6 p.m. until 7:30 p.m.

      • Wine Down Wednesday - Wednesday, April 26, from 6 p.m. until 7:30 p.m.

    Hear from Mandiant and Google Cloud

    Come join us at the Mandiant and Google Cloud booth N6058 to hear from a variety of experts and special guests on topics ranging from Autonomic Security Operations, to threat hunting with VirusTotal, to the latest Mandiant threat landscape trends, to how to protect your apps and APIs from fraud and abuse. You can also hear from our thought leaders during sessions including (all times Pacific Standard Time):

    • The Role of Partnerships in Advancing Cyber Diplomacy

      • Panelists: Niloofar Howe, Sr. Operating Partner, Energy Impact Partners; Kevin Mandia, CEO, Mandiant; H.E. Nathalie Jaarsma, Ambassador at-Large for Security Policy & Cyber, Ministry of Foreign Affairs of the Netherlands; H.E. Nathaniel Fick, Ambassador at Large for Cyberspace and Digital Policy, U.S. Department of State; Wendi Whitmore, Senior Vice President, Palo Alto Networks

      • Wednesday, Apr. 26, 2023 | 1:15 p.m.

    • How a Secure Enterprise Starts with the Web

      • Speakers: Andrew Whalley, Director of Chrome Security, Google; Toni Gidwani, Security Engineering Manager of the Threat Analysis Group, Google; Katie Purchase, Executive Sales Director, Tech, Media & Telco, The Wall Street Journal

      • Wednesday, Apr. 26, 2023 | 1:15 p.m.

    • The Promise and Peril of a UN Cybercrime Treaty

      • Panelists: Katelyn Ringrose, Global Law Enforcement and Government Access, Google; John Hering, Senior Government Affairs Manager, Microsoft; Megan Stifel, Chief Strategy Officer, Institute for Security and Technology

      • Thursday, Apr. 27, 2023 | 10:50 a.m.

    Experience frontline intelligence and cloud innovation from Mandiant and Google Cloud 

    As your security transformation partner, Mandiant | Google Cloud can help you:

    • Understand threat actors and their potential attack vectors

    • Detect, investigate and respond to threats faster

    • Build on a secure-by-design, secure-by-default cloud platform

    • Extend your team with the expertise you need - before, during, and after a security incident

    • And so much more…

    Come experience Mandiant and Google Cloud frontline intelligence and cloud innovation at RSAC booth N6058, and sign up for one-to-one conversations with Mandiant and Google Cloud experts at our booth.

    We look forward to seeing you at the RSA Conference and helping you defend your most critical data, applications, and communications.

  • What’s new with Google Cloud Fri, 17 Mar 2023 19:00:00 -0000

    Want to know the latest from Google Cloud? Find it here in one handy location. Check back regularly for our newest updates, announcements, resources, events, learning opportunities, and more. 


    Tip: Not sure where to find what you’re looking for on the Google Cloud blog? Start here: Google Cloud blog 101: Full list of topics, links, and resources.


    Week of March 13 - 17

    • A new era for AI and Google Workspace- Google Workspace is using AI to become even more helpful, starting with new capabilities in Docs and Gmail to write and refine content. Learn more.
    • Building the most open and innovative AI ecosystem - In addition to the news this week on AI products, Google Cloud has also announced new partnerships, programs, and resources. This includes bringing the best of Google’s infrastructure, AI products, and foundation models to partners at every layer of the AI stack: chipmakers, companies building foundation models and AI platforms, technology partners enabling companies to develop and deploy machine learning (ML) models, app-builders solving customer use-cases with generative AI, and global services and consulting firms that help enterprise customers implement all of this technology at scale. Learn more.
    • From Microbrows to Microservices - Ulta Beauty is building their digital store of the future, but to maintain control over their new modernized application they turned to Anthos and GKE - Google Cloud’s managed container services, to provide an eCommerce experience as beautiful as their guests. Read our blog to see how a newly-minted Cloud Architect learnt Kubernetes and Google Cloud to provide the best possible architecture for his developers. Learn more.
    • Now generally available, understand and trust your data with Dataplex data lineage - a fully managed Dataplex capability that helps you understand how data is sourced and transformed within the organization. Dataplex data lineage automatically tracks data movement across BigQuery, BigLake, Cloud Data Fusion (Preview), and Cloud Composer (Preview), eliminating operational hassles around manual curation of lineage metadata. Learn more here.
    • Rapidly expand the reach of Spanner databases with read-only replicas and zero-downtime moves. Configurable read-only replicas let you add read-only replicas to any Spanner instance to deliver low latency reads to clients in any geography. Alongside Spanner’s zero-downtime instance move service, you have the freedom to move your production Spanner instances from any configuration to another on the fly, with zero downtime, whether it’s regional, multi-regional, or a custom configuration with configurable read-only replicas. Learn more here.

    Week of March 6 - 10

    • Automatically blocking project SSH keys in Dataflow is now GA. This service option allows Dataflow users to prevent their Dataflow worker VMs from accepting SSH keys that are stored in project metadata, and results in improved security. Getting started is easy: enable the block-project-ssh-keys service option while submitting your Dataflow job.
    • Celebrate International Women’s Day: Learn about the leaders driving impact at Google Cloud and creating pathways for other women in their industries. Read more.
    • Google Cloud Deploy now supports Parallel Deployment to GKE and Cloud Run workloads. This feature is in Preview. Read more.
    • Sumitovant doubles medical research output in one year using Looker
      Sumitovant is a leading biopharma research company that has doubled their research output in one year alone. By leveraging modern cloud data technologies, Sumitovant supports their globally distributed workforce of scientists to develop next generation therapies using Google Cloud’s Looker for trusted self-service data research. To learn more about Looker check out https://cloud.google.com/looker

    Week of Feb 27 - Mar 3, 2023

    • Add geospatial intelligence to your Retail use cases by leveraging the CARTO platform on top of your data in BigQuery
      Location data will add a new dimension to your Retail use cases, like site selection, geomarketing, and logistics and supply chain optimization. Read more about the solution and various customer implementations in the CARTO for Retail Reference Guide, and see a demonstration in this blog.
    • Google Cloud Deploy support for deployment verification is now GA!  Read more or Try the Demo

    Week of Feb 20 - Feb 24, 2023

    • Logs for Network Load Balancing and logs for Internal TCP/UDP Load Balancing are now GA!
      Logs are aggregated per-connection and exported in near real-time, providing useful information, such as 5-tuples of the connection, received bytes, and sent bytes, for troubleshooting and monitoring the pass-through Google Cloud Load Balancers. Further, customers can include additional optional fields, such as annotations for client-side and server-side GCE and GKE resources, to obtain richer telemetry.
    • The newly published Anthos hybrid cloud architecture reference design guide provides opinionated guidance to deploy Anthos in a hybrid environment to address some common challenges that you might encounter. Check out the architecture reference design guide here to accelerate your journey to hybrid cloud and containerization.

    Week of Feb 13- Feb 17, 2023

    • Deploy PyTorch models on Vertex AI in a few clicks with prebuilt PyTorch serving containers - which means less code, no need to write Dockerfiles, and faster time to production.
    • Confidential GKE Nodes on Compute-Optimized C2D VMs are now GA.  Confidential GKE Nodes help to increase the security of your GKE clusters by leveraging hardware to ensure your data is encrypted in memory, helping to defend against accidental data leakage, malicious administrators and “curious neighbors”.  Getting started is easy, as your existing GKE workloads can run confidentially with no code changes required.
    • Announcing Google’s Data Cloud & AI Summit, March 29th!
      Can your data work smarter? How can you use AI to unlock new opportunities? Register for Google Data Cloud & AI Summit, a digital event for data and IT leaders, data professionals, developers, and more to explore the latest breakthroughs.  Join us on Wednesday, March 29, to gain expert insights, new solutions, and strategies to reveal opportunities hiding in your company’s data. Find out how organizations are using Google Cloud data and AI solutions to transform customer experiences, boost revenue, and reduce costs. Register today for this no cost digital event.

    • Running SAP workloads on Google Cloud? Upgrade to our newly released Agent for SAP to gain increased visibility into your infrastructure and application performance. The new agent consolidates several of our existing agents for SAP workloads, which means less time spent on installation and updates, and more time for making data-driven decisions. In addition, there is new optional functionality that powers exciting products like Workload Manager, a way to automatically scan your SAP workloads against best-practices. Learn how to install or upgrade the agent here.

    • Leverege uses BigQuery as a key component of its data and analytics pipeline to deliver innovative IoT solutions at scale. As part of the Built with BigQuery program, this blog post goes into detail about Leverege IoT Stack that runs on Google Cloud to power business-critical enterprise IoT solutions at scale. 

    • Download white paper Three Actions Enterprise IT Leaders Can Take to Improve Software Supply Chain Security to learn how and why high-profile software supply chain attacks like SolarWinds and Log4j happened, the key lessons learned from these attacks, as well as actions you can take today to prevent similar attacks from happening to your organization.

    Week of Feb 3 - Feb 10, 2023

    • Immersive Stream for XR leverages Google Cloud GPUs to host, render, and stream high-quality photorealistic experiences to millions of mobile devices around the world, and is now generally available. Read more here.

    • Reliable and consistent data presents an invaluable opportunity for organizations to innovate, make critical business decisions, and create differentiated customer experiences. But poor data quality can lead to inefficient processes and possible financial losses. Today we announce new Dataplex features: automatic data quality (AutoDQ) and data profiling, available in public preview. AutoDQ offers automated rule recommendations, built-in reporting, and serverless execution to construct high-quality data. Data profiling delivers richer insight into the data by identifying its common statistical characteristics. Learn more.

    • Cloud Workstations now supports Customer Managed Encryption Keys (CMEK), which provides user encryption control over Cloud Workstation Persistent Disks. Read more.

    • Google Cloud Deploy now supports Cloud Run targets in General Availability. Read more.

    • Learn how to use NetApp Cloud Volumes Service as datastores for Google Cloud VMware Engine for expanding storage capacity. Read more

    Week of Jan 30 - Feb 3, 2023

    • Oden Technologies uses BigQuery to provide real-time visibility, efficiency recommendations and resiliency in the face of network disruptions in manufacturing systems. As part of the Built with BigQuery program, this blog post describes the use cases, challenges, solution and solution architecture in great detail.
    • Manage table- and column-level access permissions using attribute-based policies in Dataplex. The Dataplex attribute store provides a unified place where you can create and organize a data class hierarchy to classify your distributed data and assign behaviors such as table ACLs and column ACLs to the classified data classes. Dataplex propagates IAM roles to tables across multiple Google Cloud projects according to the attribute(s) assigned to them, and a single, merged policy tag to columns according to the attribute(s) attached to them. Read more.
    • Lytics is a next-generation composable CDP that enables companies to deploy a scalable CDP around their existing data warehouses and lakes. As part of the Built with BigQuery program for ISVs, Lytics leverages Analytics Hub to launch a secure data sharing and enrichment solution for media and advertisers. This blog post goes over Lytics Conductor on Google Cloud and its architecture in great detail.
    • Now available in public preview, Dataplex business glossary offers users a cloud-native way to maintain and manage business terms and definitions for data governance, establishing consistent business language, improving trust in data, and enabling self-serve use of data. Learn more here.
    • Security Command Center (SCC), Google Cloud’s native security and risk management solution, is now available via self-service to protect individual projects from cyber attacks. It’s never been easier to secure your Google Cloud resources with SCC. Read our blog to learn more. To get started today, go to Security Command Center in the Google Cloud console for your projects.
    • Global External HTTP(S) Load Balancer and Cloud CDN now support advanced traffic management using flexible pattern matching in public preview. This allows you to use wildcards anywhere in your path matcher. You can use this to customize origin routing for different types of traffic, request and response behaviors, and caching policies. In addition, you can now use results from your pattern matching to rewrite the path that is sent to the origin.
    • Run large pods on GKE Autopilot with the Balanced compute class. When you need computing resources on the larger end of the spectrum, we’re excited that the Balanced compute class, which supports Pod resource sizes up to 222 vCPU and 851 GiB, is now GA.

    Week of Jan 23 - Jan 27, 2023

    • Starting with Anthos version 1.14, Google supports each Anthos minor version for 12 months after the initial release of the minor version, or until the release of the third subsequent minor version, whichever is longer. We plan to have three Anthos minor releases a year, around April, August, and December in 2023, with a monthly patch release (for example, z in version x.y.z) for supported minor versions. For more information, read here.
    • Anthos Policy Controller enables the enforcement of fully programmable policies for your clusters across environments. We are thrilled to announce the launch of our new built-in Policy Controller Dashboard, a powerful tool that makes it easy to manage and monitor the policy guardrails applied to your fleet of clusters. New policy bundles are available to help audit your cluster resources against Kubernetes standards, industry standards, or Google recommended best practices. The easiest way to get started with Anthos Policy Controller is to install Policy Controller and try applying a policy bundle to audit your fleet of clusters against a standard such as the CIS benchmark.
    • Dataproc is an important service in any data lake modernization effort. Many customers begin their journey to the cloud by migrating their Hadoop workloads to Dataproc and continue to modernize their solutions by incorporating the full suite of Google Cloud’s data offerings. Check out this guide that demonstrates how you can optimize Dataproc job stability, performance, and cost-effectiveness.
    • Eventarc adds support for 85+ new direct events from the following Google services in Preview: API Gateway, Apigee Registry, BeyondCorp, Certificate Manager, Cloud Data Fusion, Cloud Functions, Cloud Memorystore for Memcached, Database Migration, Datastream, Eventarc, Workflows. This brings the total pre-integrated events offered in Eventarc to over 4000 events from 140+ Google services and third-party SaaS vendors.
    • The mFit 1.14.0 release adds support for JBoss and Apache workloads by including fit analysis and framework analytics for these workload types in the assessment report. See the release notes for important bug fixes and enhancements.
    • Google Cloud Deploy - Skaffold version 2.0 is now supported. Release notes
    • Cloud Workstations - Labels can now be applied to Cloud Workstations resources. Release notes
    • Cloud Build - Cloud Build repositories (2nd gen) let you easily create and manage repository connections, not only through the Cloud Console but also through gcloud and the Cloud Build API. Release notes

    Week of Jan 17 - Jan 20, 2023

    • Cloud CDN now supports private origin authentication for Amazon Simple Storage Service (Amazon S3) buckets and compatible object stores in Preview. This capability improves security by allowing only trusted connections to access the content on your private origins and preventing users from directly accessing it.

    Week of Jan 9 - Jan 13, 2023

    • Revionics partnered with Google Cloud to build a data-driven pricing platform for speed, scale, and automation with BigQuery, Looker, and more. As part of the Built with BigQuery program, this blog post describes the use cases, problems solved, solution architecture, and key outcomes of hosting Revionics’ product, Platform Built for Change, on Google Cloud.
    • Comprehensive guide for designing reliable infrastructure for your workloads in Google Cloud. The guide combines industry-leading reliability best practices with the knowledge and deep expertise of reliability engineers across Google. Understand the platform-level reliability capabilities of Google Cloud, the building blocks of reliability in Google Cloud and how these building blocks affect the availability of your cloud resources. Review guidelines for assessing the reliability requirements of your cloud workloads. Compare architectural options for deploying distributed and redundant resources across Google Cloud locations, and learn how to manage traffic and load for distributed deployments. Read the full blog here.
    • GPU Pods on GKE Autopilot are now generally available. Customers can now run ML training, inference, video encoding, and all other workloads that need a GPU, with the convenience of GKE Autopilot’s fully managed Kubernetes environment (see the sketch after this list).
    • Kubernetes v1.26 is now generally available on GKE. GKE customers can now take advantage of the many new features in this exciting release. This release continues Google Cloud’s goal of making Kubernetes releases available to Google customers within 30 days of the Kubernetes OSS release.
    • Event-driven transfer for Cloud Storage: Customers have told us they need an asynchronous, scalable service to replicate data between Cloud Storage buckets for a variety of use cases, including aggregating data in a single bucket for data processing and analysis and keeping buckets across projects/regions/continents in sync. Google Cloud now offers Preview support for event-driven transfer - a serverless, real-time replication capability to move data from AWS S3 to Cloud Storage and copy data between multiple Cloud Storage buckets. Read the full blog here.
    • Pub/Sub Lite now offers export subscriptions to Pub/Sub. This new subscription type writes Lite messages directly to Pub/Sub - no code development or Dataflow jobs needed. Great for connecting disparate data pipelines and migration from Lite to Pub/Sub. See here for documentation.
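
      As promised above, here is a minimal sketch of what requesting a GPU on GKE Autopilot can look like; the accelerator type, image, and names are placeholders, not recommendations.

# Illustrative Pod spec: request one NVIDIA T4 GPU on a GKE Autopilot cluster.
# Autopilot provisions a suitable GPU node automatically.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-t4
  containers:
  - name: cuda
    image: nvidia/cuda:11.8.0-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF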

    • BigQuery under the hood: Behind the serverless storage and query optimizations that supercharge performance Fri, 17 Mar 2023 17:00:00 -0000

      Customers love the way BigQuery makes it easy for them to do hard things — from BigQuery Machine Learning (BQML) SQL turning data analysts into data scientists, to rich text analytics using the SEARCH function that unlocks ad-hoc text searches on unstructured data. A key reason for BigQuery’s ease of use is its underlying serverless architecture, which supercharges your analytical queries while making them run faster over time, all without changing a single line of SQL. 

      In this blog, we lift the curtain and share the magic behind BigQuery’s serverless architecture, such as storage and query optimizations as well as ecosystem improvements, and how they enable customers to work without limits in BigQuery to run their data analytics, data engineering and data science workloads.

      Storage optimization

      Improve query performance with adaptive storage file sizing 

      BigQuery stores table data in a columnar file store called Capacitor. These Capacitor files initially had a fixed file size, on the order of hundreds of megabytes, to support BigQuery customers’ large data sets. The larger file sizes enabled fast and efficient querying of petabyte-scale data by reducing the number of files a query had to scan. But as customers moving from traditional data warehouses started bringing in smaller data sets — on the order of gigabytes and terabytes — the default “big” file sizes were no longer the optimal form factor for these smaller tables. Recognizing that the solution would need to scale for users with big and smaller query workloads, the BigQuery team came up with the concept of adaptive file sizing for Capacitor files to improve small query performance.

      The BigQuery team developed an adaptive algorithm to dynamically assign the appropriate file size, ranging from tens to hundreds of megabytes, to new tables being created in BigQuery storage. For existing tables, the BigQuery team added a background process to gradually migrate existing “fixed” file size tables into adaptive tables, to migrate customers’ existing tables to the performance-efficient adaptive tables. Today, the background Capacitor process continues to scan the growth of all tables and dynamically resizes them to ensure optimal performance.

      “We have seen a greater than 90% reduction in the number of analytic queries in production that take more than one minute to run.” - Emily Pearson, Associate Director, Data Access and Visualization Platforms, Wayfair

      Big metadata for performance boost

      Reading from and writing to BigQuery tables maintained in storage files can quickly become inefficient if workloads have to scan all the files for every table. BigQuery, like most large data processing systems, has developed a rich store of information on the file contents, which is stored in the header of each Capacitor file. This information about data, called metadata, allows query planning, streaming and batch ingest, transaction processing, and other read-write processes in BigQuery to quickly identify the relevant files within storage on which to perform the necessary operations, without wasting time reading non-relevant data files.

      But while reading metadata for small tables is relatively simple and fast, large (petabyte-scale) fact tables can generate millions of metadata entries. For these queries to generate results quickly, the query optimizer needs a highly performant metadata storage system.

      Based on the concepts proposed in their 2021 VLDB paper, “Big Metadata: When Metadata is BigData,” the BigQuery team developed a distributed metadata system, called CMETA, that features fine-grained column and block-level metadata that is capable of supporting very large tables and that is organized and accessible as a system table. When the query optimizer receives a query, it rewrites the query to apply a semi-join (WHERE EXISTS or WHERE IN) with the CMETA system tables. By adding the metadata lookup to the query predicate, the query optimizer dramatically increases the efficiency of the query.

      In addition to managing metadata for BigQuery’s Capacitor-based storage, CMETA also extends to external tables through BigLake, improving the performance of lookups of large numbers of Hive partitioned tables.

      The results shared in the VLDB paper demonstrate that query runtimes are accelerated by 5× to 10× for queries on tables ranging from 100GB to 10TB using the CMETA metadata system.

      The three Cs of optimizing storage data: compact, coalesce, cluster

      BigQuery has a built-in storage optimizer that continuously analyzes and optimizes data stored in storage files within Capacitor using various techniques:

      Compact and Coalesce: BigQuery supports fast INSERTs using SQL or API interfaces. When data is initially inserted into tables, depending on the size of the inserts, there may be too many small files created. The Storage Optimizer merges many of these individual files into one, allowing efficient reading of table data without increasing the metadata overhead.

      The files used to store table data over time may not be optimally sized. The storage optimizer analyzes this data and rewrites the files into the right-sized files so that queries can scan the appropriate number of these files, and retrieve data most efficiently. Why is the right size important? If the files are too big, then there’s overhead in eliminating unwanted rows from the larger files. If the files are too small, there’s overhead in reading and managing the metadata for the larger number of small files being read.

      Cluster: Tables with user-defined column sort orders are called clustered tables; when you cluster a table using multiple columns, the column order determines which columns take precedence when BigQuery sorts and groups the data into storage blocks. BigQuery clustering accelerates queries that filter or aggregate by the clustered columns by scanning only the relevant files and blocks based on the clustered columns rather than the entire table or table partition. As data changes within the clustered table, the BigQuery storage optimizer automatically performs reclustering to ensure consistent query performance.
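
      As a small illustration from the user’s side (the dataset, table, and column names here are hypothetical), a clustered table can be created with standard DDL, for example through the bq CLI:

# Hypothetical example: create a partitioned, clustered copy of an orders table.
# Queries that filter on customer_id and region can then prune storage blocks.
bq query --use_legacy_sql=false '
CREATE TABLE mydataset.orders_clustered
PARTITION BY DATE(order_ts)
CLUSTER BY customer_id, region
AS SELECT * FROM mydataset.orders'

      A later query that filters on customer_id and region then reads only the matching blocks rather than the whole partition.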

      Query optimization

      Join skew processing to reduce delays in analyzing skewed data

      When a query begins execution in BigQuery, the query optimizer converts the query into a graph of execution, broken down into stages, each of which has steps. BigQuery uses dynamic query execution, which means the execution plan can evolve dynamically to adapt to different data sizes and key distributions, ensuring fast query response time and efficient resource allocation. When querying large fact tables, there is a strong likelihood that data may be skewed, meaning data is distributed asymmetrically over certain key values, creating an unequal distribution of the data. A query of a skewed fact table therefore returns many more records for the skewed key values than for others. When the query engine distributes the work to workers to query skewed tables, certain workers may take longer to complete their task because there are excess rows for certain key values, i.e., skew, creating uneven wait times across the workers.

      Let’s consider data that can show skew in its distribution. Cricket is an international team sport. However, it is only popular in certain countries around the world. If we were to maintain a list of cricket fans by country, the data will show that it is skewed to fans from full Member countries of the International Cricket Council and is not equally distributed across all countries.

      Traditional databases have tried to handle this by maintaining data distribution statistics. However, in modern data warehouses, data distribution can change rapidly, and data analysts run increasingly complex queries, rendering these statistics obsolete and thus less useful. Depending on the tables being queried and the join columns, the skew may be on the table column referenced on the left side of the join or on the right side.

      [Figure: more worker capacity is allocated to the left or right side of the join depending on where data skew is detected (the left side has data skew in task 2; the right side has data skew in task 1)]

      The BigQuery team addressed data skew by developing join skew processing: the query engine detects data skew and allocates work proportionally, so that more workers are assigned to process the join over the skewed data. While processing joins, the query engine keeps monitoring join inputs for skewed data. If skew is detected, the query engine changes the plan and further splits the skewed data, creating an equal distribution of processing across skewed and non-skewed data. This ensures that at execution time, the workers processing data from the table with data skew are proportionally allocated according to the detected skew. This allows all workers to complete their tasks at roughly the same time, accelerating query runtime by eliminating the delays caused by waiting on skewed data.

      “The ease to adopt BigQuery in the automation of data processing was an eye-opener. We don’t have to optimize queries ourselves. Instead, we can write programs that generate the queries, load them into BigQuery, and seconds later get the result.” - Peter De Jaeger, Chief Information Officer, AZ Delta

      Dynamic concurrency with queuing

      BigQuery’s documentation on Quotas and limits for Query jobs states “Your project can run up to 100 concurrent interactive queries.” BigQuery used the default setting of 100 for concurrency because it met requirements for 99.8% of customer workloads. Since it was a soft limit, the administrator could always increase this limit through a request process to increase the maximum concurrency. To support the ever-expanding range of workloads, such as data engineering, complex analysis, Spark and AI/ML processing, the BigQuery team developed dynamic concurrency with query queues to remove all practical limits on concurrency and eliminate the administrative burden. Dynamic concurrency with query queues is achieved with the following features:

      1. Dynamic maximum concurrency setting: Customers start receiving the benefits of dynamic concurrency by default when they set the target concurrency to zero. BigQuery will automatically set and manage the concurrency based on reservation size and usage patterns. Experienced administrators who need the manual override option can specify the target concurrency limit, which replaces the dynamic concurrency setting. Note that the target concurrency limit is a function of available slots in the reservation and the admin-specified limit can’t exceed that. For on-demand workloads, this limit is computed dynamically and is not configurable by administrators.

      2. Queuing for queries over concurrency limits: BigQuery now supports Query Queues to handle overflow scenarios when peak workloads generate a burst of queries that exceed the maximum concurrency limit. With Query Queues enabled, BigQuery can queue up to 1000 interactive queries so that they get scheduled for execution rather than being terminated due to concurrency limits, as they were previously. Now, users no longer have to scan for idle time periods or periods of low usage to optimize when to submit their workload requests. BigQuery automatically runs their requests or schedules them on a queue to run as soon as current running workloads have completed. You can learn about Query Queues here.
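
      For administrators who do want the manual override described in (1) above, the setting lives on the reservation. A minimal sketch with the bq CLI might look like the following; the flag name is an assumption on our side, so verify it against bq update --help and the query queues documentation.

# Sketch: explicitly set (or reset) target concurrency on a reservation.
# Assumes the flag is named --target_job_concurrency; a value of 0 hands
# concurrency management back to BigQuery's dynamic behavior.
bq update --reservation \
    --project_id=admin-project --location=US \
    --target_job_concurrency=0 \
    my_reservation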

      “BigQuery outperforms particularly strongly in very short and very complex queries. Half (47%) of the queries tested in BigQuery finished in less than 10 sec compared to only 20% on alternative solutions. Even more starkly, only 5% of the thousands of queries tested took more than 2 minutes to run on BigQuery whereas almost half (43%) of the queries tested on alternative solutions took 2 minutes or more to complete.” - Nikhil Mishra, Sr. Director of Engineering, Yahoo!

      Colossus Flash Cache to serve data quickly and efficiently

      Most distributed processing systems make a tradeoff between cost (querying data on hard disk) and performance (querying data in memory). The BigQuery team believes that users can have both low cost and high performance, without having to choose between them. To achieve this, the team developed a disaggregated intermediate cache layer called Colossus Flash Cache which maintains a cache in flash storage for actively queried data. Based on access patterns, the underlying storage infrastructure caches data in Colossus Flash Cache. This way, queries rarely need to go to disk to retrieve data; the data is served up quickly and efficiently from Colossus Flash Cache.

      Optimized Shuffle to prevent excess resource usage

      BigQuery achieves its highly scalable data processing capabilities through in-memory execution of queries. These in-memory operations bring data from disk and store intermediate results of the various stages of query processing in another in-memory distributed component called Shuffle. Analytical queries containing WITH clauses that define common table expressions (CTEs) often reference the same table through multiple subqueries, which can cause the same work to be repeated. To address this, the BigQuery team built a duplicate CTE detection mechanism into the query optimizer. This algorithm reduces resource usage substantially, leaving more shuffle capacity available to be shared across queries.

      To further help customers understand their shuffle usage, the team also added PERIOD_SHUFFLE_RAM_USAGE_RATIO metrics to the JOBS INFORMATION_SCHEMA view and to Admin Resource Charts. You should see fewer Resource Exceeded errors as a result of these improvements and now have a tracking metric to take preemptive actions to prevent excess shuffle resource usage.
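
      To make that concrete, a monitoring query along the following lines can surface job stages whose shuffle memory usage runs high. The view and column names reflect the INFORMATION_SCHEMA jobs timeline as we understand it, so check them against the current schema reference.

# Sketch: list recent job stages with a high shuffle RAM usage ratio.
# Assumes the metric is exposed as period_shuffle_ram_usage_ratio on the
# region-qualified JOBS_TIMELINE_BY_PROJECT view.
bq query --use_legacy_sql=false '
SELECT job_id, period_start, period_shuffle_ram_usage_ratio
FROM `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
WHERE period_start > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND period_shuffle_ram_usage_ratio > 0.8
ORDER BY period_shuffle_ram_usage_ratio DESC
LIMIT 20'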

      “Our teams wanted to do more with data to create better products and services, but the technology tools we had weren’t letting us grow and explore. And that data was growing continually. Just one of our data warehouses had grown 300% from 2014 to 2018. Cloud migration choices usually involve either re-engineering or lift-and-shift, but we decided on a different strategy for ours: move and improve. This allowed us to take full advantage of BigQuery’s capabilities, including its capacity and elasticity, to help solve our essential problem of capacity constraints.” - Srinivas Vaddadi, Delivery Head, Data Services Engineering, HSBC

      Ecosystem optimization

      Faster ingest, faster egress, faster federation

      The performance improvements BigQuery users experience are not limited to BigQuery’s query engine. We know that customers use BigQuery with other cloud services to allow data analysts to ingest from or query other data sources with their BigQuery data. To enable better interoperability, the BigQuery team works closely with other cloud services teams on a variety of integrations:

      1. BigQuery JDBC/ODBC drivers: The new versions of the ODBC / JDBC drivers support faster user account authentication using OAuth 2.0 (OAuthType=1) by processing authentication token refreshes in the background.

      2. BigQuery with Bigtable: The GA release of Cloud Bigtable to BigQuery federation supports pushdown of queries for specific row keys to avoid full table scans.

      3. BigQuery with Spanner: Federated queries against Spanner in BigQuery now allow users to specify the execution priority, giving them control over whether federated queries compete with transaction traffic (when run at high priority) or complete at lower-priority settings.

      4. BigQuery with Pub/Sub: BigQuery now supports direct ingest of Pub/Sub events through a purpose-built “BigQuery subscription” that allows events to be directly written to BigQuery tables (see the example after this list).

      5. BigQuery with Dataproc: The Spark connector for BigQuery supports the DIRECT write method, using the BigQuery Storage Write API, avoiding the need to write the data to Cloud Storage.
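
      As an example of item 4, a BigQuery subscription takes a single command to create; the topic, subscription, project, and table names below are placeholders.

# Placeholders throughout: my-bq-sub, my-topic, my-project, my_dataset.events.
# Each Pub/Sub message is written directly into the BigQuery table; add
# --use-topic-schema if the topic has a schema you want mapped to columns.
gcloud pubsub subscriptions create my-bq-sub \
    --topic=my-topic \
    --bigquery-table=my-project:my_dataset.events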

      What can BigQuery do for you? 

      Taken together, these improvements to BigQuery translate into tangible performance results and business gains for customers around the world. For example, Camanchaca drove 6x faster data processing, Telus drove 20x faster data processing and reduced costs by $5M, Vodafone saw a 70% reduction in data ops and engineering costs, and Crux achieved 10x faster load times.

      “Being able to very quickly and efficiently load our data into BigQuery allows us to build more product offerings, makes us more efficient, and allows us to offer more value-added services. Having BigQuery as part of our toolkit enables us to think up more products that help solve our customers’ challenges.” - Ryan Haggerty, Head of Infrastructure and Operations, Crux

      Want to hear more about how you can use BigQuery to drive similar results for your business? Join us at the Data Cloud and AI Summit ‘23 to learn what’s new in BigQuery and check out our roadmap of performance innovations using the power of serverless.

    • Early-bird registration for Google Cloud Next ‘23 is open now Fri, 17 Mar 2023 16:00:00 -0000

      San Francisco, here we come. Starting today, you can register at the Early Bird rate of $899 USD* for Google Cloud Next ‘23, taking place in person, August 29-31, 2023.

      This year’s Next conference comes at an exciting time. The emergence of generative AI is a transformational opportunity that some say may be as meaningful as the cloud itself. Beyond generative AI, there are breakthroughs in cybersecurity, better and smarter ways to gather and gain insights from data, advances in application development, and so much more. It’s clear that there has never been a better time to work in the cloud industry. And there’s no better time to get together to learn from one another, while we explore and imagine what all of this innovation will bring.

      Returning to a large, in-person, three-day event opens up so much opportunity for rich experiences like hands-on previews, exclusive training and on-site boot camps, and face-to-face engagement with each other and our partners across our open ecosystem.

      Our teams are busy designing experiences for you focused on six key topics:

      • Data cloud

      • AI & ML

      • Open infrastructure

      • Cybersecurity

      • Collaboration

      • DEI

      And of course, in addition to dedicated AI and ML sessions, we’ll weave AI, including generative AI, throughout the event sessions to reflect the role these technologies play in innovations across nearly everything in cloud.  

      No matter your role or subject matter expertise, Next has content curated especially for you, including tracks for:

      • Application developers

      • Architects and IT professionals

      • Data and database engineers

      • Data scientists and data analysts

      • DevOps, SREs, IT Ops, and platform engineers

      • IT managers and business leaders

      • Productivity and collaboration app makers

      • Security professionals

      Register for Next ’23 before May 31 to take advantage of the $899 USD early bird price – that’s $700 USD off the full ticket price of $1,599.*

      We can’t wait to come back together as a community to welcome you to Next ’23. 


      *The $899 USD early bird price is valid through 11:59 PM PT on Wednesday, May 31, or until it’s sold out.

    • Extending Cloud Code with custom templates Fri, 17 Mar 2023 16:00:00 -0000

      Cloud Code is a set of IDE plugins for popular IDEs that make it easier to create, deploy and integrate applications with Google Cloud. Cloud Code provides an excellent extension mechanism through custom templates. In this post, I show you how you can create and use your own custom templates to add some features beyond those supported natively in Cloud Code, such as .NET functions, event triggered functions and more. 

      As a recap, in my Introducing Cloud Functions support in Cloud Code post, I pointed out some limitations of the current Cloud Functions support in Cloud Code:

      • Only four languages are supported (Node.js, Python, Go, and Java) in Cloud Functions templates. I’ve especially missed the .NET support.

      • Templates for Cloud Run and Cloud Functions are only for HTTP triggered services. No templates for event triggered services.

      • Testing only works against deployed HTTP triggered services. No testing support for locally running services or event triggered services.

      Let’s see how we can add these features with custom templates!

      Custom sample repositories

      In Cloud Code, when you create a new application with Cloud Code → New Application, it asks you to choose the type of the application you want to create:

      [Image: Create new application dialog]

      For Kubernetes, Cloud Run, and Cloud Functions applications, it uses the templates defined in the cloud-code-samples repo to give you starter projects in one of the supported languages for those application types. 

      It gets more interesting when you choose the Custom application option. There, you can point to a GitHub repository with your own templates and Cloud Code will use those templates as starter projects. This is how you can extend Cloud Code – pretty cool! 

      On the Manage custom sample repositories in Cloud Code for VS Code page, there’s a detailed description on how a custom templates repository should look. There’s also a cloud-code-custom-samples-example GitHub repo and a nice video explaining custom sample templates:

      Getting started with custom samples and Cloud Code

      Basically, it boils down to creating a public GitHub repository with your samples and having a .cctemplate file to catalog each template. That’s it! 

      Our custom sample repository

      We initially wanted to add support for only HTTP triggered .NET Cloud Functions, as this is currently missing from Cloud Code. However, we enjoyed creating these templates so much that we ended up completing a longer wish list:

      1. Added templates for HTTP triggered Cloud Functions and Cloud Run services in multiple languages (.NET, Java, Node.js, Python). 

      2. Added templates for CloudEvents triggered (Pub/Sub, Cloud Storage, AuditLogs) Cloud Functions and Cloud Run services in multiple languages (.NET, Java, Node.js, Python).

      3. Added lightweight gcloud based scripts to do local testing, deployment and cloud testing for each template. 

      You can check out my cloud-code-custom-templates repository for the list of templates. 

      To use these templates as starter projects:

      1. Click on Cloud Code in VS Code

      2. Select New Application → Custom Application → Import Sample from Repo

      3. Point to my cloud-code-custom-templates repository

      Choose a template as a starter project and follow the README.md instructions of the template.

      [Image: Import samples from a GitHub repository]

      Let’s take a look at some of these templates in more detail. 

      HTTP triggered Cloud Functions templates

      As an example, there’s a .NET: Cloud Functions - hello-http template. It’s an HTTP triggered .NET 6 Cloud Functions template. When you first install the template, the sample code is installed and a README.md guides you through how to use the template:

      [Image: HTTP triggered .NET Cloud Functions template]

      The code itself is a simple HelloWorld app that responds to HTTP GET requests. It’s not that interesting, but the template also comes with a scripts folder, which is more interesting. 

      In that scripts folder, there’s a test_local.sh file to test the function running locally. This is possible because Cloud Functions code uses Functions Framework, which enables Cloud Functions to run locally. Testing that function is just a matter of sending an HTTP request with the right format. In this case, it’s simply an HTTP GET request but it gets more complicated with event triggered functions. More on that later.
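
      For instance, a local run-and-test pass for this template can be as simple as the following; the project name and port are assumptions, and the template’s actual test_local.sh may differ.

# Start the .NET function locally via the Functions Framework (it listens on
# port 8080 by default), then send it a plain GET request.
# HelloHttp is a placeholder project name.
dotnet run --project HelloHttp &
sleep 5                        # give the local server a moment to start
curl -s http://localhost:8080/
kill $!                        # stop the background function when done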

      There’s also setup.sh to enable the right APIs before deploying the function, deploy.sh to deploy the function, and test_cloud.sh to test the deployed function using gcloud. I had to add these scripts as there’s no support in Cloud Code right now to deploy and test a function for .NET. As you see, however, it’s very easy to do with scripts installed as part of the template. 
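
      A deploy.sh for such a template can boil down to a single gcloud command along these lines; the function name, region, and entry point are placeholders, and the template’s own script is authoritative.

# Deploy an HTTP-triggered .NET 6 function (Cloud Functions 2nd gen).
# Placeholders: hello-http, us-central1, HelloHttp.Function.
gcloud functions deploy hello-http \
    --gen2 \
    --runtime=dotnet6 \
    --region=us-central1 \
    --source=. \
    --entry-point=HelloHttp.Function \
    --trigger-http \
    --allow-unauthenticated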

      Event triggered Cloud Functions templates

      As you might know, Cloud Functions also support various event triggered functions. These events are powered by Eventarc in Cloud Functions gen2. In Cloud Code, there are no templates right now to help with the code and setup of event triggered functions. 

      In Eventarc, events come directly from sources (e.g. Cloud Storage, Pub/Sub, etc.) or they come via AuditLogs. We have templates in various languages that showcase these different event sources.

      The event envelope is in CloudEvents format and the payload (the data field) contains the actual event. In .NET, the templates are based on the Google.Cloud.Functions.Templates package (which you can install and use with the dotnet command-line tool to generate Cloud Functions samples). The templates handle the parsing of CloudEvents envelopes and payloads into strong types using the Functions Framework for each language.

      As before, each template includes scripts to test locally, deploy to the cloud, and test in the cloud. As an example, test_local.sh for the Cloud Storage template creates and sends the right CloudEvent for a Cloud Storage event:

curl localhost:8080 -v \
  -X POST \
  -H "Content-Type: application/json" \
  -H "ce-id: 123451234512345" \
  -H "ce-specversion: 1.0" \
  -H "ce-time: 2020-01-02T12:34:56.789Z" \
  -H "ce-type: google.cloud.storage.object.v1.finalized" \
  -H "ce-source: //storage.googleapis.com/projects/_/buckets/MY-BUCKET-NAME" \
  -H "ce-subject: objects/MY_FILE.txt" \
  -d '{
        "bucket": "MY_BUCKET",
        "contentType": "text/plain",
        "kind": "storage#object",
        "md5Hash": "...",
        "metageneration": "1",
        "name": "MY_FILE.txt",
        "size": "352",
        "storageClass": "MULTI_REGIONAL",
        "timeCreated": "2020-04-23T07:38:57.230Z",
        "timeStorageClassUpdated": "2020-04-23T07:38:57.230Z",
        "updated": "2020-04-23T07:38:57.230Z"
      }'

      This is very useful for local testing. 

      Cloud Run templates

      We have similar templates for Cloud Run as well, covering the same event sources.

      Since these are Cloud Run services, they can’t use the Functions Framework. That means it’s up to you to parse the CloudEvents format using the CloudEvents SDK and the payload (the actual event) using the Google CloudEvents library. The templates take care of all these details and include the right SDKs and libraries for you out of the box.

      Using Cloud Code and templates from Cloud Shell

      At this point, you might be wondering: This is all great but I don’t have .NET or Node.js installed locally, how do I try all these templates? 

      I was pleased to learn that Cloud Code is available in Cloud Shell Editor. You can use Cloud Code and import these custom templates from your browser:

      [Image: Using Cloud Code and templates from Cloud Shell]

      Moreover, since Cloud Shell already comes with .NET or Node.js pre-installed, you can build, run, test, and deploy all the samples using the scripts in the templates, right in your browser. This is pretty neat!

      [Image: Running .NET 6 templates in Cloud Shell]

      What’s next?

      I’m impressed at how easy it is to extend Cloud Code with custom templates. It’s also pretty cool that you can use Cloud Code and any custom templates you create right in your browser without having to install anything, thanks to Cloud Shell. 

      If you’re interested in helping out with templates for other languages (I’d love Go support!), feel free to reach out to me on Twitter @meteatamel or simply send me a pull request in my cloud-code-custom-templates repo and I’ll be happy to collaborate. Thanks to Marc Cohen for contributing Python templates and GitHub Actions to share scripts between templates.

      To learn more about Cloud Functions support in Cloud Code, try our new Create and deploy a function with Cloud Code tutorial.

    • Verify POST endpoint availability with Uptime Checks Fri, 17 Mar 2023 16:00:00 -0000

      An unreliable app is no fun to use. Ensuring that users experience high levels of availability, consistency, and correctness can go a long way in establishing user trust and positive business outcomes. Over time, as new features are added to applications and modifications made to underlying web services, the ability to comprehensively monitor at the application level becomes increasingly critical.

      Google Cloud Monitoring’s uptime checks are a lightweight observability tool that enables application owners to easily monitor the performance of an application’s critical user journeys. Uptime checks continuously perform validations on resources to track availability, latency, and other key performance indicators. They can be paired with alerts to track the quality of service, detect product degradation, and proactively reduce negative impact on users.

      Uptime checks for POST requests

      HTTP POST is the standard way to create or update a REST resource. Some common examples of this operation include creating an account, purchasing an item online, and posting on a message board. Monitoring changes and updates to resources is crucial to ensuring product features are working as intended. That’s why we’re excited to announce expanded support for POST requests to allow all content types, including custom content types.

      Previously, Uptime checks only supported POST requests containing `application/x-www-form-urlencoded` bodies. Now, request bodies can be of any type, including but not limited to: `application/json`, `application/xml`, `application/text`, and custom content types. This functionality can be paired with response validation matching (JSON path, regex, response codes, etc.) to ensure POST endpoints are appropriately modifying all resources. Additionally, alerts can be added to notify service owners when their POST endpoints are behaving atypically.
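
      If you prefer automation over the console flow described below, the same kind of check can be created through the Cloud Monitoring API. The sketch below uses field names from the v3 UptimeCheckConfig resource as we understand them; my-project, api.example.com, /v1/orders, and BASE64_ENCODED_BODY are placeholders, so double-check it against the API reference.

# Hedged sketch: create a POST uptime check with a JSON body via the
# Cloud Monitoring v3 REST API.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://monitoring.googleapis.com/v3/projects/my-project/uptimeCheckConfigs" \
  -d '{
    "displayName": "orders-post-check",
    "monitoredResource": {
      "type": "uptime_url",
      "labels": { "project_id": "my-project", "host": "api.example.com" }
    },
    "httpCheck": {
      "path": "/v1/orders",
      "port": 443,
      "useSsl": true,
      "requestMethod": "POST",
      "contentType": "USER_PROVIDED",
      "customContentType": "application/json",
      "body": "BASE64_ENCODED_BODY"
    },
    "period": "300s",
    "timeout": "10s"
  }'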

      Creating a new uptime check

      To get started, you can head to Monitoring > Uptime, select “+ Create Uptime Check”, view advanced target options, then populate the new Content Type field.


      More information

      Visit our documentation for creating uptime checks, where you can get additional information and step-by-step instructions for creating your first uptime check.

      Lastly, if you have questions or feedback about this new feature, head to the Cloud Operations Community page and let us know!

    • Helping secure global collaboration at the first federally regulated crypto bank Fri, 17 Mar 2023 16:00:00 -0000

      Editor’s note: Today we hear from Anchorage Digital, a Web3 company with offerings designed for institutions to participate in crypto directly and for those looking to integrate crypto into their own products and services. 


      Launched in San Francisco in 2017, Anchorage Digital is a regulated crypto platform that makes crypto accessible to institutions such as family offices, hedge funds, venture capital firms, fintechs, banks, corporations, and more. Our goal is to provide a safe, regulated way for institutions to participate in crypto. 

      We started with custody, the storage model that’s the basis for all our other services, including: trading, financing, staking, governance, and building crypto-access for institutions. Protecting billions of dollars in digital assets without compromise to access is core to everything we do. Because of that, when it came to daily operations, we wanted a similarly aligned secure and accessible approach.

      Selecting Google Cloud and Google Workspace

      We began in traditional startup fashion at a living room table, but our team grew quickly, first to a San Francisco office and then other offices. Amidst the early stages of the pandemic, we increasingly went remote. Today, with a remote-friendly team of more than 300 employees, we’ve seen the same solution that worked for us as a startup in a single office work across the globe. 

      The combination of Google Cloud and Google Workspace empowers us to collaborate securely and asynchronously on a global scale. Our decision to use Google Cloud centered around practicality and aligned security values. Google Cloud’s open-source tools and pricing appealed to us. By using Google's BeyondCorp framework, we bypassed the expense and complexity of a traditional centralized corporate network infrastructure. This helped us prevent a whole class of security risks and costs that come with the territory, while making it possible to scale the company efficiently.

      BeyondCorp, which provides a Zero Trust security framework that shifts access controls from the perimeter to individual users and devices, meaningfully lowers the cost of maintaining straightforward daily management and auditing of access to the secure environment. For example, we can prevent the introduction of untrusted extensions and applications on our endpoints through simple organizational policies. We can vet all extension requests and allow them only after we have inspected their security posture. The extensive logging and monitoring that Google Workspace provides simplified compliance, delivering the governance and oversight necessary for a regulated financial institution.

      Our digital asset platform is built on infrastructure-as-code, harnessing Google Kubernetes Engine. With GKE we can focus on growing the platform with minimal overhead spent on maintaining the underlying infrastructure. Combined with powerful support for Terraform, we can embrace the latest infrastructure patches and features, often with only a single line of code change. BigQuery and Looker also allow us to visualize our data and derive meaningful insights about our platform and its security. Google Cloud’s continuous improvement in security and consistency enables us to primarily focus on building our platform. 

      Using Chromebooks for additional security

      Our decision to use Chromebooks as our standard employee workstation was, at the time, an unconventional choice. We knew our hardware and operating systems would require a balance of security, developer productivity, and scalability as we rapidly grew. Using Chromebooks with BeyondCorp Enterprise allows us to cleanly restrict internal application and workspace access to company-owned, policy-managed, approved devices only. 

      We took steps to further deepen our resiliency against credentials-based attacks by deploying hardware-based multi-factor authentication, which we require for all our Google Workspace accounts. Plus, device and data management tools like remote wiping and Zero-Touch enrollment make onboarding and offboarding employees reliable and straightforward.

      Additionally, the guiding security principles underlying ChromeOS reduce the common attack surfaces that security teams all too often struggle to deploy at scale: hardening of the OS with defense in depth, default native sandboxing capabilities, and strong hardware-based device integrity features. This baseline security level allows our security engineers to focus on building secure code and deploying new features instead of spending countless hours with incident response and recovery.

      Within Chromebooks and the ChromeOS operating system, using Google Workspace allows us to perform common daily functions that would otherwise require expensive annual software renewals. We also benefit from the immediate, continually updated versions of the tools we use from Meet to shared drives, docs, presentations, and sheets. Lastly, we benefit from new ChromeOS updates that keep our hardware hardened from security risks, since employees are prompted to restart their devices upon each new rollout.

      Google Cloud, Google Workspace, and Chromebooks help us achieve our business goals as we operate under strong security principles. Even as we faced the challenges of a remote-based team during the business-continuity conditions of a pandemic, we grew our company in terms of both clients and employees. Our employees have shown flexibility in adjusting to this novel system, and we’ve been pleased with the progress and continual improvements in security posture of the ecosystem. We look forward to continued partnership and day-to-day use of these technologies.

    • Announcing higher VM-to-internet throughput for several Compute Engine families Fri, 17 Mar 2023 16:00:00 -0000

      We are excited to announce the General Availability of higher egress bandwidth for VM-to-internet traffic on the Compute Engine N2, N2D, C2D, and M3 families, as part of per VM Tier_1 networking performance. With this enhancement, VMs in these families that have enabled per VM Tier_1 networking performance have a higher VM-to-internet egress limit — up to 25 Gbps. The default egress bandwidth limit for these VMs is up to 7 Gbps when sending traffic from a VM to the internet.

      This feature enables workloads that need higher VM-to-internet throughput to operate with fewer VMs, resulting in significant cost-savings. Workloads such as multi-session WebRTC, media processing and streaming, firewall appliances and many others, can now take advantage of this feature.

      Bandwidth options for N2, N2D, C2D and M3 families

      [Table image: bandwidth options and vCPU requirements for the N2, N2D, C2D, and M3 families]

      You can expect the same bandwidth options for high-memory, high-cpu and custom configurations in N2, N2D, C2D, and M3 VMs as long as they meet the above vCPU requirements.

      This higher VM-to-internet configuration is included as part of per VM Tier_1 networking performance at no additional charge, and as such, does not appear as a separate line item in the Billing report besides the line item for per VM Tier_1 networking performance. For more details, please visit our pricing page.1

      Here’s an example using the gcloud CLI to create an N2 instance with per VM Tier_1 networking performance, which enables up to 75 Gbps of VM-to-VM network bandwidth and up to 25 Gbps of VM-to-internet network bandwidth (documentation):

gcloud compute instances create instance_name_1 \
    --project=project_name \
    --zone=us-central1-a \
    --machine-type=n2-standard-64 \
    --image=projects/project_name/global/images/image_name \
    --network-interface=nic-type=GVNIC \
    --network-performance-configs=total-egress-bandwidth-tier=TIER_1

      Using the TIER_1 setting on the network-performance-configs flag automatically upgrades your instance with increased network bandwidth.
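
      To confirm the setting took effect, you can describe the instance afterwards; the field path below reflects our understanding of the instance resource, so adjust it if your output differs.

# Check the configured egress bandwidth tier on the instance (field path
# assumed to be networkPerformanceConfig.totalEgressBandwidthTier).
gcloud compute instances describe instance_name_1 \
    --zone=us-central1-a \
    --format="value(networkPerformanceConfig.totalEgressBandwidthTier)"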

      Adding more value to per VM Tier_1 networking

      This feature is made possible by leveraging Andromeda's unique architecture, allowing us to upgrade our SDN infrastructure to scale out the processing of Internet-bound traffic "under the hood," adding value — at no extra cost — to our 100Gbps VM-to-VM Tier_1 networking product on existing VM families.

      Whether you need enhanced VM-to-VM performance or additional VM-to-internet throughput, Tier_1 networking enables you to upgrade your VMs to use increased bandwidth in any zone where you can create a supported instance.

      At Google Cloud, we always strive to offer best-in-class networking features based on customer input. Please stay tuned as we bring out more improvements to our service and offerings. We would love to hear suggestions from you on improving our products and offerings.

      Happy networking!


      1. This feature is currently not available for Tier_1 networking performance on C2 families.

      Related Article

      Turbo boost your Compute Engine workloads with new 100 Gbps networking

      Updates to Google Cloud’s Andromeda host networking stack bring higher bandwidth connectivity to Compute Engine C2 and N2 VM families.

      Read Article
    • Google Cloud and FS-ISAC team up to advance financial services security Thu, 16 Mar 2023 16:00:00 -0000

      Google Cloud is committed to strengthening the security and resiliency of financial services organizations and making the Internet a safer place for all organizations to conduct transactions and business. While building a secure and resilient ecosystem is a joint responsibility, we want to ensure that we’re working together to build a community of trust.

      To advance this mission and strengthen our commitment to the financial sector, Google Cloud is announcing today that we have joined the Financial Services Information Sharing and Analysis Center’s (FS-ISAC) Critical Providers Program. While Google Cloud has been a long-standing supporter of FS-ISAC, as have our colleagues at Mandiant, we are the first and only major cloud provider to join FS-ISAC’s Critical Providers Program.


      As a Critical Provider, Google Cloud will bring experts and resources – including unique insights from our Threat Horizons reports – to partner with the financial services community and its leadership. Googlers will work with defenders and leaders in the global financial services sector, sharing knowledge we’ve learned building and deploying secure technology at Google.

      We’ll also offer services from our Google Cybersecurity Action Team, which is composed of former industry CISOs and global leaders that understand the complex financial ecosystems and the challenges that the industry faces. This makes us uniquely positioned to support these financial services organizations through their most challenging problems – because we’ve been there. 

      “This partnership is a crucial step in the development of the Critical Providers Program and forging deeper relationships between financial services institutions and the critical providers of network infrastructure and security,” said Steve Silberstein, CEO, FS-ISAC. “Our partnership will enhance overall resilience and strengthen the security of the financial sector’s supply chain through access to Google’s unique perspective and expertise, as well as timely threat intelligence to help anticipate, defend, and respond to cyber incidents.”

      In August 2021, Google announced its commitment to invest at least $10 billion over the next 5 years to advance cybersecurity. This announcement advances our commitment to support the security and digital transformation of high-risk sectors through community partnerships and other means.

      “We take our responsibility as one of the world’s largest tech providers very seriously – and working with organizations like FS-ISAC and those on the forefront of building communities and protecting societies is a critical component of this,” said Phil Venables, CISO, Google Cloud. “Cybersecurity and resilience are a team sport. In joining FS-ISAC in this capacity, we are proud to be on a team working to protect an essential part of society.” 

      At Google Cloud, we adhere to a shared fate model, in which we are active partners with our customers in their security journey. This partnership between Google Cloud and FS-ISAC is part of the solution to achieve better security for the broader technology ecosystem. We look forward to expanding our partnerships and working closely with industry partners like FS-ISAC to help build more resiliency and safety in the financial services sector.

    • Coop reduces food waste by forecasting with Google’s AI and Data Cloud Thu, 16 Mar 2023 16:00:00 -0000

      Although Coop has a rich history spanning nearly 160 years, the machine learning (ML) team supporting its modern operations is quite young. Its story began in 2018 with one simple mission: to leverage ML-powered forecasting to help inform business decisions, such as demand planning based on supply chain seasonality and expected customer demand. The end goal? By having insight into not only current data but also projections of what could happen in the future, the business can optimize operations to keep customers happy, save costs, and support its sustainability goals (more on that later!).

      Coop’s initial forecasting environment was one on-premises workstation that leveraged open-source frameworks such as PyTorch and TensorFlow. Fine tuning and scaling models to a larger number of CPUs or GPUs was cumbersome. In other words, the infrastructure couldn’t keep up with their ideas.

      So when the question arose of how to solve these challenges and operationalize the produced outcomes beyond those local machines, Coop leveraged the company’s wider migration to Google Cloud to find a solution that could stand the test of time.

      Setting up new grounds for innovation

      Over a two-day workshop with the Google Cloud team, Coop kicked things off by ingesting data from its vast data pipelines and SAP systems into BigQuery. At the same time, Coop’s ML team implemented queues to accumulate incoming new information and sorted out what kind of information it was. The team was relieved not to have to worry about setting up infrastructure and new instances.

      Next, the Coop team turned to Vertex AI Workbench to further develop its data science workflow, finding it surprisingly fast to get started. The goal was to train forecasting models to support Coop’s distribution centers so they could optimize their stock of fresh produce based on accurate numbers. 

      Achieving higher accuracy, faster, to better meet customer demand

      During the proof-of-concept (POC) phase, Coop’s ML team pitted two custom-built models, a single Extreme Gradient Boosting model and a Temporal Fusion Transformer in PyTorch, against an AutoML-powered Vertex AI Forecast model, which the team ultimately operationalized on Vertex AI. The team established that using Vertex AI Forecast was faster and more accurate than training a model manually on a custom virtual machine (VM).

      On the test set in the POC, the team reached a WAPE (Weighted Average Percentage Error) of 14.5, which means Vertex AI Forecast provided a 43% performance improvement relative to the models trained in-house on a custom VM.

      After a successful POC and several internal tests, Coop is building a small-scale pilot (to be put live in production for one distribution center) that will conclude with the Coop ML team streaming the forecasting insights back to SAP, where processes such as carrying out orders to importers and distributors take place. Upon successful completion and evaluation of the small-scale pilot in production in the next few months, Coop could scale it out to full-blown production across distribution centers throughout Switzerland. The architecture diagram below approximately illustrates the steps involved in both stages. The vision is, of course, to leverage Google’s data and AI services, including forecasting and post-forecasting optimization, to support all of Coop’s distribution centers in Switzerland in the near future.

      [Architecture diagram illustrating both stages]

      Leveraging Google Cloud to increase the relative forecasting accuracy by 43% over custom models trained by the Coop team can significantly affect the retailer’s supply chain. By taking this POC to pilot and possibly production, the Coop ML team hopes to improve its forecasting model to better support wider company goals, such as reducing food waste.

      Driving sustainability by reducing food waste

      Coop believes that sustainability must be a key component of its business activity. With the aim to become a zero-waste company, its sustainability strategy feeds into all corporate divisions, from how it selects suppliers of organic, animal-friendly, and fair-trade products to efforts for reducing energy, CO2 emissions, waste materials, and water usage in its supply chains. 

      Achieving these goals boils down to an optimal control problem. This is known as a Bayesian framework: Coop must carry out quantile inference to determine the scope of its distributions. For example, is it expecting to sell between 35 and 40 tomatoes on a given day, or is its confidence interval between 20 and 400? Reducing this amount of uncertainty with more specific and accurate numbers means Coop can order the precise number of units for distribution centers, ensuring customers can always find the products they need. At the same time, it prevents ordering in excess, which reduces food waste. 
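
      For readers who want the math behind that intuition: quantile forecasts of this kind are commonly trained with the pinball (quantile) loss. This is a standard formulation shown for illustration, not a statement about Coop’s or Vertex AI’s exact implementation. For a target quantile $\tau$, actual demand $y$, and forecast $\hat{y}$:

$$
L_\tau(y, \hat{y}) =
\begin{cases}
\tau\,(y - \hat{y}) & \text{if } y \ge \hat{y} \\
(1 - \tau)\,(\hat{y} - y) & \text{if } y < \hat{y}
\end{cases}
$$

      Minimizing this loss at, say, $\tau = 0.05$ and $\tau = 0.95$ yields the lower and upper bounds of a 90% prediction interval, which is exactly the difference between expecting to sell “between 35 and 40” tomatoes and “between 20 and 400.”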

      Pushing the envelope of what can be achieved company-wide

      Having challenged its in-house models against the Vertex AI Forecast model in the POC, Coop is rolling out a production pilot to one distribution center in the coming months, and possibly to all distribution centers across Switzerland thereafter. Along the way, one of the most rewarding realizations was that the ML team behind the project could combine different Google Cloud tools, such as Google Kubernetes Engine, BigQuery, and Vertex AI, to create its own ML platform. Beyond using pre-trained Vertex AI models, the team can automate and create data science workflows quickly so it’s not always dependent on infrastructure teams.

      Next, Coop’s ML team aims to use BigQuery as a pre-stage for Vertex AI. This will allow the entire data streaming process to flow more efficiently, serving data to any part of Vertex AI when needed. “The two tools integrate seamlessly, so we look forward to trying that combination for our forecasting use cases and potentially new use cases, too. We are also exploring the possibility of deploying different types of natural language processing-based solutions to other data science departments within Coop that are relying heavily on TensorFlow models,” says Martin Mendelin, Head of AI/ML Analytics, Coop. 
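
      As a rough sketch of that integration (illustrative only; the project, region, and BigQuery table names below are hypothetical), the Vertex AI SDK can register a BigQuery table directly as a time-series dataset, which is essentially what using BigQuery as a pre-stage for Vertex AI looks like in code.

      ```python
      from google.cloud import aiplatform

      # Hypothetical project, region, and BigQuery table
      aiplatform.init(project="my-retail-project", location="europe-west6")

      # BigQuery acts as the pre-stage: the table is registered as a
      # Vertex AI time-series dataset without the data leaving Google Cloud.
      dataset = aiplatform.TimeSeriesDataset.create(
          display_name="produce-demand",
          bq_source="bq://my-retail-project.sales.daily_demand",
      )
      print(dataset.resource_name)
      ```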

      “By creating and customizing our own ML platform on Google Cloud, we’re creating a standard for other teams to follow, with the flexibility to work with open-source programs but in a stable, reliable environment where their ingenuity can flourish,” Mendelin adds. “The Google team went above and beyond with its expertise and customer focus to help us make this a reality. We’re confident that this will be a nice differentiator for our business.”

    • Peacock: Tackling ML challenges by accelerating skills Thu, 16 Mar 2023 16:00:00 -0000

      At Peacock, we are acutely aware of global trends and changes in adopting machine learning (ML) techniques, particularly in the field of media and entertainment. We anticipate that, within several years, most software applications will have an element of ML and will require fine-tuning of a model, putting increasing demand on ML training infrastructure.

      As the Director of Analytics Tooling at Peacock, I head up a team of engineers whose primary aim is to build scalability into our tools and processes, enabling us to keep up with the ever-changing field of ML engineering and create a better user experience. We use ML for a wide range of tasks, such as learning more about users’ viewing preferences and interests so we can provide more personalized content recommendations.

      But the discipline of ML engineering is unique. It requires people from two significantly different backgrounds, applied ML and software engineering, to meet in the middle, collaborate, and learn from each other. Both groups face challenges they have not experienced before, regularly pushing them out of their comfort zones and catalyzing true innovation.

      Continuous education is empowering and beneficial in any walk of life, but when it comes to remaining competitive in our industry, it is a necessity. As we build, scale, and iterate our tools and processes, we rely on training, resources, and education from Google Cloud. Here’s a look at how we invest in upskilling initiatives to scale our data science organization.

      Accelerated training with Google Cloud Advanced Solutions Lab

      As a relatively new and complex discipline, ML engineering has not yet seen established patterns and standards, making ongoing training essential for keeping up with industry changes. We have worked closely with Google Cloud for several years to develop our data analytics based on solutions like BigQuery, so when it came to providing ongoing education to our data scientists and engineers, Google Cloud was the natural choice.

      We encourage our engineers and data scientists to earn Google Cloud certifications, such as the Professional Cloud Architect and Professional Data Engineer. In addition, Google Cloud offers on-demand training to understand, implement, and scale data science and ML tools, like these role-based learning paths for Data Engineers and ML Engineers. We also use Cloud Hero, a gamified Google Cloud training experience that uses hands-on labs to teach skills in an interactive learning environment.

      Perhaps the most impactful offering has been the Advanced MLOps workshop from the Google Cloud Advanced Solutions Lab. Participants were split roughly fifty-fifty between engineers and data scientists, from six or seven teams across multiple organizations. Being in the same workshop allowed us to establish a baseline among multidisciplinary teams, and to create a common vocabulary and a shared understanding of the problems we face.

      This immersive MLOps deep dive offered something for everyone, no matter the participant’s background. The first week focused on containers, Kubernetes, CI/CD, and ML pipelines—in other words, the topics that would have been a refresher to a good engineer but unfamiliar to some data scientists. This changed in the second week, as we moved on to building models and the basics of TensorFlow and TFX components. At this point, the data scientists felt more in their element, while some engineers were experiencing it for the first time. The Advanced Solutions Lab served as a melting pot for teams that previously had not collaborated with each other, helping everyone learn and grow together. 

      "Peacock’s ML Engineering team has done an amazing job in creating and implementing a robust training program and we are proud to be their Partner of choice. The team members have thoroughly adopted a growth mindset and are continually pursuing strategic opportunities to learn and grow their technical capabilities, and we’re excited to be a part of it."—Heather Remek, Head of Customer Experience - Telecommunications Industry, Google Cloud

      Educating today to build for tomorrow

      We are operating in a new and fast-changing industry, but tooling, processes, or compute resources should not stand in the way of creating better ML models. With the right tools, techniques, and mindset, data scientists and ML engineers can develop the skills necessary to excel and progress in our field.

      As ML evolves, we’ll continue to stay informed, practice on real-world problems, and invest in our upskilling programs. Our collaboration with the Google Cloud team on our ML journey has allowed us to increase the knowledge and competency of our data science teams as they build, productionalize, and scale end-to-end ML solutions. By educating today, we can empower our organization to build for tomorrow and make better data-driven decisions.

      Learn more about the Google Cloud Advanced Solutions Lab and explore on-demand role based training for data scientists and ML engineers.

    • Mitigate mainframe migration risks with Dual Run Thu, 16 Mar 2023 16:00:00 -0000

      CIOs are again evaluating their mainframe investments, balancing rising operational costs and difficulty finding talent with the perceived costs and risks of moving critical applications to the cloud. Increased business agility, technical innovation, computing elasticity, customer insights, and a growing talent pool all encourage CIOs to migrate and modernize from “Big Iron” onto public cloud platforms. 

      Google Cloud recently announced Dual Run, a new mainframe modernization solution, to help customers mitigate the risk involved in mainframe migrations and accelerate their move to the cloud. Leaders can leverage Dual Run in their quest to ensure their mainframe modernization projects succeed and pay off, so let’s dive a little deeper into what Dual Run is, how it works, and how it can help you. 

      Mainframe modernization with a proven technology

      Because so many businesses still run mainframes, we decided to partner with Banco Santander, one of the largest banks in the world, to bring Dual Run to our enterprise customers. In fact, Dual Run is built on top of Banco Santander's unique technology, which has already demonstrated proven results in the regulated financial services industry. Now that Dual Run is available, Banco Santander has been using it to bring its data and workloads onto Google Cloud's trusted infrastructure.

      The concept is not new, but the solution is unique

      Dual Run enables you to run a parallel production system, allowing you to simultaneously run workloads on your mainframes and on Google Cloud. While many enterprises running mainframes have considered the parallel production concept, and a few have even tried it, Google Cloud is unique among the hyperscalers in providing such a solution as an offering to its customers. 

      With a parallel production run, you can perform real-time testing of your applications on Google Cloud and quickly gather data on performance and stability with no disruption to your business. Once you’re satisfied with the functional and performance equivalence of the two systems, you can make the new Google Cloud environment your system of record, while existing mainframe systems can be used as a backup or decommissioned. 

      In addition to the transformative benefits you get from moving to Google Cloud, such as AI-based scalability, speed, and security, migrating mainframes with Dual Run offers you even more benefits:

      Mitigate migration risk: Dual Run reduces risk during the migration by running your business critical systems in parallel with powerful reporting to track the difference between your current and target systems. This ensures there is no impact or risk to your existing mainframes while migrating to Google Cloud. 

      Secure migration investments: Avoid costly migration mistakes by basing your decisions and actions on empirical data acquired from your production system.

      Reduce business testing effort: Compare the functional equivalence of outcomes in the current and target system with production data and drastically reduce the testing cycles of your migrated workload. 

      Accelerate migration: Speed up the entire mainframe migration process with a well-defined framework, automation components, predefined dashboards, and a tested approach.  Empirical reporting available in Dual Run also enables customers in regulated industries to more readily respond to regulator reviews and requests for information.

      Your migration journey with Dual Run

      Dual Run is packaged with several automation components to aid your migration journey, from assessment all the way through to production. 

      This chart shows how Dual Run plays a key role throughout your mainframe modernization journey:

      [Image: 1 Exploring Dual Run.jpg, chart of Dual Run across the mainframe modernization journey]

      Let’s explore this illustration in a bit more detail, phase by phase: 

      Current state: This is your starting point, when your production workloads are still running in your mainframe. Dual Run helps you assess your mainframe workload for compatibility on Google Cloud. 

      After this assessment, Dual Run's conversion engine helps address the incompatibilities in your current application and then migrates the application and data to Google Cloud. At this stage, you have your current application executing on the mainframe and your migrated application is ready to be executed or tested on Google Cloud.

      Dual Run state: In this state, the migrated workload will be executed in two stages.

      Dual Run stage 1:

      In the first stage of Dual Run, your mainframe will remain as the “primary” system — meaning the response and outputs to other systems are sent from your mainframe — while the migrated workload will be executed in parallel in Google Cloud as “secondary.”

      Dual Run performs the following actions as a cyclical process, repeated until your desired migrated-application quality is achieved (a conceptual sketch of the outcome comparison follows the list):

      • All workloads — batch & transactions — executed in the mainframe are replicated in Google Cloud 

      • The outcomes from both systems are validated to report any differences, enabling you to take corrective actions in migrated applications

      • The functional and performance differences between the two systems will be observed, and the mainframe and Google Cloud data are periodically synchronized to bring both systems into sync 
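
      Conceptually, that validation step amounts to comparing keyed outcomes from the primary and secondary runs and surfacing every divergence for correction. The sketch below is purely illustrative; Dual Run's internal implementation is not public, and the record structure shown is hypothetical.

      ```python
      from typing import Any

      def compare_outcomes(primary: dict[str, dict[str, Any]],
                           secondary: dict[str, dict[str, Any]]) -> list[str]:
          """Compare records keyed by transaction ID from the primary (mainframe)
          run and the secondary (cloud) run, reporting any divergence."""
          differences = []
          for txn_id, expected in primary.items():
              actual = secondary.get(txn_id)
              if actual is None:
                  differences.append(f"{txn_id}: missing from secondary run")
              elif actual != expected:
                  differences.append(f"{txn_id}: {expected} != {actual}")
          return differences

      # Hypothetical outputs from one batch cycle
      mainframe_run = {"TXN-001": {"balance": 100.00}, "TXN-002": {"balance": 250.50}}
      cloud_run     = {"TXN-001": {"balance": 100.00}, "TXN-002": {"balance": 250.75}}
      for line in compare_outcomes(mainframe_run, cloud_run):
          print(line)  # TXN-002: {'balance': 250.5} != {'balance': 250.75}
      ```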

      Typically, most of your migration time will be spent in the first stage of Dual Run until you are satisfied with the results. The key goal for this stage is that your primary, business-critical mainframe workload is not disturbed while your migrated application is tuned to provide the exact same results as your current application.

      Dual Run stage 2:

      When the Dual Run reporting and results confirm that the migrated application matches your mainframe system, Google Cloud then becomes the “primary” system, while your mainframe will still be executed in parallel as “secondary.” Dual Run will enable you to do a smooth switch between primary and secondary systems through a configuration management system.

      Target state: In this final state, the mainframe can be decommissioned and the Dual Run components removed, enabling optimal and efficient business execution on Google Cloud.

      Summary

      For any business or organization that has to migrate or modernize their mainframes, Dual Run offers a unique solution to achieve this with reduced risk and time. In fact, what we’re seeing from our customers is that Dual Run offers the right combination of proven experience, engineering, and strategic partnership that is essential for mainframe migration success. If you would like to learn more, check out our mainframe modernization website.

    • Accelerating Ulta Beauty’s modernization with managed containerized microservices Thu, 16 Mar 2023 16:00:00 -0000

      Founding the digital store of the future

      With a perfectly symmetrical cat-eye toward tomorrow, Ulta Beauty has been laying the foundation for its upcoming Digital Store. This transformational, digital touchpoint delivers a redesigned, highly personalized, and compelling e-commerce experience. To do this right, Ulta Beauty requires a modern, containerized platform, an agile infrastructure to support rapid application development, and managed services to reduce operational burden on an already busy team. With Google Cloud, Ulta Beauty is delivering an e-commerce platform that scales globally and efficiently to deliver an even more engaging, enjoyable, and accessible shopping experience for all. 

      In 2019, Ulta Beauty began moving away from its legacy e-commerce platform that ran in their own on-prem data centers. The monolithic platform had become increasingly difficult to update and upgrade, hampering new features and capabilities. With such bottlenecks in mind, Ulta Beauty decided to refactor into multiple, distinct microservices to perform unique functions so development teams can create and test new features and release fixes faster. Microservices can scale independently yet are interconnected and visible to one another, and they can introduce new system complexities at scale. It was critical for Ulta Beauty to deploy microservices quickly, to avoid unnecessary distractions in integrating services with one another — or managing the underlying infrastructure themselves.

      Ulta Beauty chose Google Cloud for its leadership and expertise with containers and Kubernetes and its unified managed container services, Google Kubernetes Engine (GKE) and Anthos. Google Cloud’s cloud-based managed services, knowledgeable technical support, and cost-efficient pricing were added perks. 

      A blushing partnership

      Upon joining Ulta Beauty, Senior Cloud Architect Michael Alderson had no experience with Kubernetes or Google Cloud, but he understood containers. To begin this modernization journey, Michael sought a new way to create and manage infrastructure, putting it in developers’ hands quickly. 

      Recognizing the many ways productivity could be impacted, Michael needed to ensure his environment was ready with properly configured cloud ‘landing zones’ whenever developers needed them. His developers worked hard to build containers but lacked production-like dev or test environments. Michael said, “They couldn’t test containers with the new APIs needed for integration. With GKE and Anthos, developers have dev environments on Google Cloud whenever needed, making them ten times more productive.” 

      As a newly minted Cloud Architect, Michael familiarized himself with Google Cloud’s training pathways and certifications. With his experimental mindset, he got to work studying Kubernetes, containers, and the role of a service mesh to manage large fleets of containerized microservices at scale.

      First up was putting GKE to work. Michael and team soon discovered a key benefit of ephemeral container environments, managed by Kubernetes: they could quickly try something, learn, and try again — without a huge upfront investment. 

      With Google Cloud and GKE, the Ulta Beauty IT team could now create, manage, optimize, and secure container platforms for developers in record time. “We probably built over a thousand clusters and burnt them down learning Kubernetes — this would have taken months or years to accomplish before.” GKE allowed them to stand up and tear down new environments for developers. “We replicated what five teams would have to do with a monumental effort to integrate vendors and services — and we did that as a single, cohesive unit. This technology accelerated our efforts beyond what we were able to achieve previously.” Michael added, “With ephemeral container development environments on GKE, spun up on demand, we can deploy a new feature to a microservice in about 10 minutes — globally — a fraction of the time required in the past. And importantly, the risk factor in any update is drastically reduced with microservices.”
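
      As a rough sketch of that spin-up/tear-down pattern (illustrative only; the project, region, and cluster names are hypothetical, and Ulta Beauty's actual automation is not public), the GKE API lets a platform team create and later delete ephemeral dev clusters programmatically.

      ```python
      from google.cloud import container_v1

      client = container_v1.ClusterManagerClient()
      PARENT = "projects/my-retail-project/locations/us-central1"  # hypothetical

      def spin_up(name: str) -> None:
          """Create a small, short-lived dev cluster for a feature team."""
          op = client.create_cluster(
              parent=PARENT,
              cluster=container_v1.Cluster(name=name, initial_node_count=1),
          )
          print("creating cluster, operation:", op.name)

      def tear_down(name: str) -> None:
          """Delete the ephemeral cluster once the team is done with it."""
          op = client.delete_cluster(name=f"{PARENT}/clusters/{name}")
          print("deleting cluster, operation:", op.name)

      spin_up("dev-ephemeral-123")
      # ...developers build and test against the cluster, then later:
      # tear_down("dev-ephemeral-123")
      ```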

      Yet, Michael still needed to maintain control over his growing, increasingly complex web of interconnected e-commerce microservices. 

      Maximizing engineering resources with Anthos 

      Enter Anthos, Google Cloud’s managed platform for consistent, holistic management, observability, and security for distributed containerized apps wherever they are built or run. Anthos includes a managed service mesh, which dramatically streamlines service delivery, eases traffic management, secures communication between services, and speeds up troubleshooting with deep visibility into inter-service networking. A service mesh also streamlines inter-app communication, asserts rules over which services can talk to each other, and assures high availability of services in the event of a failure.

      Anthos Service Mesh is Google’s fully managed implementation of open-source Istio, which alone is powerful, but can be difficult to install, configure, and maintain. Anthos Service Mesh allows Michael and team to rely on Google Cloud to manage the Istio components, so they can focus on optimizing and troubleshooting Ulta Beauty’s apps. “The fact that we have metrics built in and can use those metrics to auto-scale and auto-heal, which is native with Anthos Service Mesh, fuels a better guest experience.” 

      “We can better quantify guest experiences because we see errors, reporting, and where we haven’t gotten it right yet. It enables our software development and release processes (DevOps) to mature, leading to better business choices and better guest experiences. Today, this is a competitive advantage for Ulta Beauty.” Michael added that each microservice is dynamically instrumented by Anthos, so developers don’t have to learn the entire system to debug one component or to track monitoring and log data.

      Anthos Config Management, another managed service of Anthos, allows the team to spin up new environments from a standard template with predefined networking and security policies. Leveraging the declarative power of Kubernetes, Ulta Beauty can quickly deploy uniform configurations across its fleet of clusters, enforce consistent security guardrails, and share load balancing across clusters and regions. To Michael, Anthos let him define and then “...establish environments and automatically keep them the same, from dev to prod, which was a necessity. We have 70-80 different integrations to test in our new ecommerce platform — without Anthos, we’d need to spend a week every month sorting them across dev and test environments. Anthos keeps everything automatically up to date.” 

      Because of Anthos Multi Cluster Ingress’ built-in traffic routing, Ulta Beauty can eliminate the need for certain third-party services to assist orchestration. When the company’s security team requested an upgrade to the existing firewall to harden the organization’s security posture, Michael showed them why it was unnecessary and how it would actually hamper performance. “Rather than significantly invest in a next-generation firewall we didn’t need, we can solve it within the mesh, as it should be. It should be part of the network, not a separate element that slows us down.” Ulta Beauty saved several hundred thousand dollars in additional licensing fees by using Anthos’ built-in traffic routing.

      For Michael, the flexibility of GKE and Anthos to spin up servers means his developers can learn hands-on by testing code, tearing it down, and trying again. “It’s important to make mistakes and not be crucified; we all need the ability to learn by doing, and that’s what Google Cloud allows.” 

      Anthos’ ease of use is accelerating Ulta Beauty’s e-commerce modernization and already improving the organization’s business workflows. The shift to Anthos cloud services, which are available anywhere, reduces risk and the severity of errors by offloading platform availability to Google Cloud. When Michael was forced to work remotely for over two months, his team ran without a problem and continued innovating. 

      Since implementing its new service mesh, Ulta Beauty’s modernization has been progressing rapidly, especially because the development team no longer needs to focus on security or platform operations; they’re free to concentrate on building engaging guest services and apps. Mean time to recover (MTTR) from errors is now measured in seconds versus hours, at least two orders of magnitude faster. Errors are now traced quickly to the clusters or pods affected, and can be remediated without impacting any other components – importantly, without impacting guest experiences. With faster response times and much shorter MTTR, Ulta Beauty’s guests enjoy a superior experience on its website and throughout the purchase process. 

      Primer for the future

      As Ulta Beauty looks ahead, it's taking stock of how far it has come in such a short time and will continue leveraging Anthos Service Mesh as a critical component. Over the next year, Michael and team will leverage Google Cloud’s multi-region capabilities to improve application availability and disaster recovery in the event of a cluster failure. Anthos and GKE seamlessly handle workload redistribution when infrastructure is unavailable. As the company continues innovating and deploying new services and experiences, Google will continue to manage and administer its environment as developers make the shopping experience as beautiful as its guests. Google Cloud is proud to partner with Ulta Beauty, supporting the company’s developers through thick, thin, and other eyebrow stages.

      Related Article

      How Ulta Beauty manages holiday surges and supports year-round innovation

      Ulta Beauty enjoys stability amid holiday season surges in demand, as well as flexibility to innovate, thanks to Google Cloud and MongoDB...

      Read Article
    • Helping Arapahoe County operate more efficiently with Google Cloud Thu, 16 Mar 2023 12:44:00 -0000

      Arapahoe, Colorado's first county, has been focused on cultivating growth opportunities for its residents for more than 150 years [1]. Home to roughly 655,000 people from diverse backgrounds, Arapahoe County is projected to be one of the largest counties in Colorado within ten years [2]. The County supports a variety of industries, including ranching, technology, and aviation with Centennial Airport – one of the 10 busiest general aviation airports in the nation. To serve its growing population and economy at scale, Arapahoe County needed a better way to manage its rising data center maintenance costs while balancing flat year-over-year budgets.

      The County recognized that maintaining its data centers would increase operational expenditures for the next five years, a problem that would undermine its mission. To help operate more efficiently, Arapahoe County partnered with Google Cloud – announcing a five-year agreement to migrate two costly data centers to Google Cloud VMware Engine (GCVE). This transition will enable Arapahoe County to save millions of dollars and help solidify the county's financial viability for years to come.

      Modernizing infrastructure for more cost-effective & efficient services

      Arapahoe County’s migration to the cloud is part of its shift toward embracing digital technologies as a way to bring government services to residents in a more efficient and effective way. The County will work with Google Cloud and premier cloud partner SADA to support a successful migration to Google Cloud throughout its five-year journey. 

      The County chose GCVE because it offered a risk-free way to move workloads into the cloud in a rapid and agile fashion, and a Google-provided subscription agreement guaranteed cost predictability. Moving to GCVE will also allow the County to leverage its existing IT talent for disaster recovery and production workloads.

      A familiar platform lets IT teams focus on mission-critical work

      Arapahoe County employees are learning the Google Cloud platform and tools, which will help cut down on training and speed up the timeline for implementation. Working on the Google Cloud platform will also help free up IT teams to work on other critical county initiatives. 

      Google Cloud’s VMware Engine allows agencies to easily lift and shift their VMware-based applications to Google Cloud without changes to existing apps, tools, or processes. This means the migration will maintain the same level of security, compliance, and performance agencies expect from their existing VMware environment. It also unlocks access to Google Cloud’s global network, which offers advanced security features and machine learning capabilities, nurturing growth opportunities for Arapahoe County’s digital future.

      Powering the future of Arapahoe County

      Arapahoe County’s community-focused drive to cultivate sustainable and diverse growth opportunities will create a bright future for its residents. Google Public Sector is proud to partner with them and looks forward to powering new ways to reimagine serving both current and future residents.

      To learn more about our solutions for state and local government, visit the Google Cloud for state and local government page.

      1: https://www.arapahoegov.com/94/About
      2: https://www.arapahoegov.com/2241/Arapahoe-County-Where-Good-Things-Grow



    Google has many products and the following is a list of its products: Android Auto, Android OS, Android TV, Calendar, Cardboard, Chrome, Chrome Enterprise, Chromebook, Chromecast, Connected Home, Contacts, Digital Wellbeing, Docs, Drive, Earth, Finance, Forms, Gboard, Gmail, Google Alerts, Google Analytics, Google Arts & Culture, Google Assistant, Google Authenticator, Google Chat, Google Classroom, Google Duo, Google Expeditions, Google Family Link, Google Fi, Google Files, Google Find My Device, Google Fit, Google Flights, Google Fonts, Google Groups, Google Home App, Google Input Tools, Google Lens, Google Meet, Google One, Google Pay, Google Photos, Google Play, Google Play Books, Google Play Games, Google Play Pass, Google Play Protect, Google Podcasts, Google Shopping, Google Street View, Google TV, Google Tasks, Hangouts, Keep, Maps, Measure, Messages, News, PhotoScan, Pixel, Pixel Buds, Pixelbook, Scholar, Search, Sheets, Sites, Slides, Snapseed, Stadia, Tilt Brush, Translate, Travel, Trusted Contacts, Voice, Waze, Wear OS by Google, YouTube, YouTube Kids, YouTube Music, YouTube TV, YouTube VR


    Google News
    TwitterFacebookInstagramYouTube



    Think with Google
    TwitterFacebookInstagramYouTube

    Google AI BlogAndroid Developers BlogGoogle Developers Blog




    GoogBlogs.com
    TwitterFacebookInstagramYouTube



    ZDNet » Google
    TwitterFacebookInstagramYouTube



    9to5Google » Google
    TwitterFacebookInstagramYouTube



    Computerworld » Google
    TwitterFacebookInstagramYouTube

    • With ChromeOS in 2023, Google's got its eye on the enterprise Tue, 14 Mar 2023 03:00:00 -0700

      Google's ChromeOS is the Rodney Dangerfield of modern operating systems: It's been around for ages now, and it pops up in all sorts of unexpected places. No matter how hard it works, though — boy, oh boy — it just can't get no respect.

      Much like Dangerfield, ChromeOS's struggle to be taken seriously dates back to its childhood. When the software first entered the universe in the simpler tech times of 2010, it truly was a barebones effort. The entire platform was essentially just a browser in a box — a full-screen view of Google's Chrome browser, with no real apps to speak of and not much else around it.

      To read this article in full, please click here

    • 13 hidden tricks for making the most of Android gestures Fri, 03 Mar 2023 02:45:00 -0800
    • Q&A: ChatGPT isn't sentient, it’s a next-word prediction engine Mon, 27 Feb 2023 03:00:00 -0800

      ChatGPT has taken the world by storm with its ability to generate textual answers that can be indistinguishable from human responses. The chatbot platform and its underlying large language model — GPT-3 — can be valuable tools to automate functions, help with creative ideas, and even suggest new computer code and fixes for broken apps. 

      The generative AI technology — or chatbots — has been overhyped and in some cases even claimed to have sentience or a form of consciousness. The technology has also had its share of embarrassing missteps. Google's Bard stumbled out of the gate this month by providing wrong answers to questions posed by users.
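
      To make “next-word prediction engine” concrete, here is a deliberately tiny illustration. It is a bigram frequency model, nothing like the transformer behind ChatGPT, but it shares the same basic objective of predicting a likely next token.

      ```python
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count which word follows each word in the training text
      next_counts: dict[str, Counter] = defaultdict(Counter)
      for current, nxt in zip(corpus, corpus[1:]):
          next_counts[current][nxt] += 1

      def predict_next(word: str) -> str:
          """Return the most frequent continuation seen in the training text."""
          candidates = next_counts.get(word)
          return candidates.most_common(1)[0][0] if candidates else "<unknown>"

      print(predict_next("the"))  # 'cat' (seen twice after 'the')
      print(predict_next("cat"))  # 'sat' (ties resolve to the first word seen)
      ```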

      To read this article in full, please click here

    • 5 smart secrets for a better Google Tasks experience Wed, 22 Feb 2023 03:00:00 -0800

      Quick: When's the last time you used Google Tasks?

      If you're like a lot of folks I know, the answer to that question might be: "Wait a sec — what? Google has a Tasks app?!"

      Tasks is one of those services that's all too easy to forget about — or maybe even overlook entirely. Sure, there's an Android app for it (and even an iOS app, if you know anyone who swings that way). And Google's in the midst of reframing its cross-service reminders system so that it relies on Tasks as a primary hub for all those things you tell your Android phone or Smart Display to help you remember.

      To read this article in full, please click here

    • Android 13 Upgrade Report Card: Surprise! Thu, 16 Feb 2023 03:00:00 -0800

      It's funny how much a popular narrative can differ from reality once you start bringing cold, hard data into the equation.

      Over the past couple years, Samsung has worked hard to improve its reputation as a reliable provider of software updates in the Android arena. And to be sure, it's made some meaningful strides along the way.

      But when you go beyond the broad impressions and objectively measure the company's commitment to timely Android operating system rollouts — well, the story gets a teensy bit more complicated.

      That's precisely why I started doing these Android Upgrade Report Cards way back in the platform's prehistoric ages. From the get-go, we've seen some wild variance in how well different device-makers support their products after you've finished paying for 'em — and as an average phone-owner, there's no great way to know what's gonna happen six months or a year after you shell out your dollars for a top-of-the-line phone.

      To read this article in full, please click here

    • Download: UEM vendor comparison chart 2023 Wed, 15 Feb 2023 03:00:00 -0800

      Unified endpoint management (UEM) is a strategic IT approach that consolidates how enterprises secure and manage an array of deployed devices including phones, tablets, PCs, and even IoT devices.

      As remote and hybrid work models have become the norm over the past two years, “mobility management” has come to mean management of not just mobile devices, but all devices used by mobile employees wherever they are. UEM tools incorporate existing enterprise mobility management (EMM) technologies, such as mobile device management (MDM) and mobile application management (MAM), with tools used to manage desktop PCs and laptops.

      To read this article in full, please click here

    • Bard, Bing, and the 90% problem Wed, 15 Feb 2023 03:00:00 -0800

      I don't know if you've heard, but Search As We Know It™ is, like, totally on the brink of being changed forever.

      Forever, gersh dern it! Did you process that?! THINGS MAY NEVER BE THE SAME!!

      In all seriousness, if you've read much tech news over the past few weeks, you've probably been inundated with endless certain-sounding statements about how Microsoft's move toward an AI-powered search setup and Google's slightly more awkward stumbling in the same direction are about to usher in an entire new era of online searching — one where the tried and true pattern of hunting for information is completely replaced by a simple chat interface. Here, you just ask a bot a single question and then instantly get the answer you need.

      To read this article in full, please click here

    • Bing vs. Google: the new AI-driven search wars are on Mon, 13 Feb 2023 04:55:00 -0800

      Not so long ago, in the 1990s, online users had their choice of a variety of search engines. They included Excite, WebCrawler, Lycos, and my favorite at the time, AltaVista.

      Then, along came Google and PageRank. With PageRank, Google rates the relevancy of web pages to queries based not only on whether the pages contain the search terms (the technique used by all search engines) but also on how many relevant pages link to them. It made Google's results much better than its rivals'.
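
      To make the idea concrete, here is a toy power-iteration version of the original PageRank formulation (four made-up pages, nothing like Google's production ranking system).

      ```python
      import numpy as np

      # Toy web: page index -> indices of the pages it links to
      links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
      n = len(links)

      # Column-stochastic matrix: M[j, i] = 1/outdegree(i) if page i links to page j
      M = np.zeros((n, n))
      for src, targets in links.items():
          for dst in targets:
              M[dst, src] = 1.0 / len(targets)

      damping = 0.85
      rank = np.full(n, 1.0 / n)
      for _ in range(100):  # power iteration converges quickly on a graph this small
          rank = (1 - damping) / n + damping * M @ rank

      print(np.round(rank, 3))  # page 2 ranks highest: every other page links to it
      ```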

      To read this article in full, please click here

    • 12 Gboard shortcuts that'll change how you type on Android Fri, 10 Feb 2023 02:45:00 -0800

      If there's one thing we Android-totin' pterodactyls take for granted, it's just how good we've got it when it comes to typing out text on our pocket-sized phone machines.

      It's all too easy to lose sight of over time, but Goog almighty, lemme tell ya: Typing on Android is an absolute delight. And all it takes is 10 seconds of trying to wrestle with the on-screen keyboards on that other smartphone platform to appreciate our advantage.

      We've got plenty of exceptional keyboard choices 'round these parts, too, but Google's Gboard keyboard has really risen up as the best all-around option for Android input as of late. That's in large part because of its top-notch typing basics and its seamless integration of tasty Google intelligence, but it's also because of all the clever little shortcuts it has lurking beneath its surface.

      To read this article in full, please click here

    • Google Maps gets more immersive live views, even from above Wed, 08 Feb 2023 09:57:00 -0800

      Google today announced several upgrades to its search, translate and maps applications that use artificial intelligence (AI) and augmented reality (AR) to provide expanded answers and live search features that can instantaneously identify objects and locations around a user.

      At an event in Paris, the company demonstrated upgrades for its Google Maps Immersive View and Live View technology, along with new features for electric vehicle (EV) drivers and people who walk, bike, or ride public transit that show 3D routes in real time. Live View, announced in 2020, allows users to get directions placed in the real world and on a mini map at the bottom of a mobile screen.

      To read this article in full, please click here

    • Microsoft Bing is about to get chatty with OpenAI Tue, 07 Feb 2023 13:18:00 -0800

      Microsoft today unveiled a new AI-powered Edge browser and Bing search engine with a “chat” functionality.

      The new search engine allows users to ask questions and receive answers from GPT-4, the latest version of the artificial intelligence (AI) language model built by research lab OpenAI. The new, AI-powered search engine and Edge browser are available in preview now at Bing.com.

      The announcement today highlighted Microsoft’s partnership with OpenAI, the research venture that created ChatGPT, a chatbot that can generate natural language, essay-like answers to user-submitted text questions. The new AI-powered search capability will generate answers similar to how ChatGPT does, the company said.

      To read this article in full, please click here

    • A game-changing Gboard Android discovery Tue, 07 Feb 2023 03:00:00 -0800

      I'll admit it: I've spent far too much time tapping, swiping, and generally just ogling Gboard on Android. Hey, what can I say? We've gotten really close over the years.

      It may seem like an unhealthy obsession to some (hi, honey!), but it really isn't as crazy as you'd think. After all, for us Android-adoring animals, Gboard is the gateway to gettin' stuff done on our modern mobile devices — and as anyone who reads Android Intelligence regularly knows, it's practically overflowing with out-of-sight time-savers.

      To read this article in full, please click here

    • Q&A: Fintech expert: digital wallets need this tech 'magic' or they'll fail Mon, 06 Feb 2023 03:00:00 -0800

      Bank after bank has unsuccessfully tried to compete with the likes of Apple Pay, Google Pay, PayPal, and other established digital wallet players. And now, a consortium of US banks, including three of the largest, hopes to cash in on digital wallets again.

      The problem: what they’re apparently pitching doesn’t offer any real advantage for consumers, according to analysts who are relying on details given exclusively to The Wall Street Journal.

      To read this article in full, please click here

    • 5 more out-of-sight options to supercharge Google Assistant on Android Fri, 03 Feb 2023 02:45:00 -0800

      Google Assistant may not be the shiny new A.I. superstar of the moment, but it's a surprisingly useful resource just waiting to hop in and help on any Android phone you're carrying.

      And some of its most helpful options are buried deep within the service's virtual bowels.

      Continuing on the theme of hidden settings for a smarter Assistant Android experience, today, we'll pick up where we left off on Wednesday and explore another five easily overlooked Google Assistant Android options. Dig 'em up, check 'em out, and add 'em into your own personal Assistant setup, and you'll find your favorite familiar helper growing ever more helpful and tuned into your needs.

      To read this article in full, please click here

    • 5 hidden settings for a smarter Google Assistant Android experience Wed, 01 Feb 2023 03:00:00 -0800
    • Google Forms cheat sheet: How to get started Fri, 27 Jan 2023 03:00:00 -0800

      Need to make a quiz, survey, registration form, order form, or other web page that gathers feedback from co-workers, customers, or others? You can design and deploy one right from your web browser with Google Forms. It’s integrated with Google Drive to store your forms in the cloud.

      Anyone with a Google account can use Forms for free. It’s also part of Google Workspace, Google's subscription-based collection of online office apps for business and enterprise customers that includes Google Docs, Sheets, Slides, Gmail, and more. Forms is lesser known than these other productivity apps, but it's a useful tool to know how to use. This guide takes you through designing a form, deploying it online, and viewing the responses it gathers.

      To read this article in full, please click here

    • 9 handy hidden features in Google Docs on Android Fri, 27 Jan 2023 02:45:00 -0800

      Few apps are as essential to mobile productivity as the humble word processor. I think I've probably spent a solid seven years of my life staring at Google Docs on one device or another at this point, and those minutes only keep ticking up with practically every passing day.

      While we can't do much about the need to gaze at that word-filled white screen, what we can do is learn how to make every moment spent within Docs count — and in the Docs Android app, specifically, there are some pretty spectacular tucked-away time-savers just waiting to be discovered.

      Make a mental note of these advanced shortcuts and options, and put 'em to good use the next time you find yourself staring at Docs on your own device.

      To read this article in full, please click here

    • How layoffs at Google could affect enterprise cloud services Wed, 25 Jan 2023 08:52:00 -0800

      An investor with a $6 billion stake in Google parent Alphabet is calling for more layoffs at the company, although it has already cut 12,000 jobs.

      The managing partner of London-based TCI Capital Fund Management wrote to Alphabet’s chief executive, Sundar Pichai, asking him to cut thousands more jobs and to reduce the compensation of its remaining employees.

      Alphabet already plans to cut its workforce by 6%, it said on January 20, 2023, a move that will affect staff across the company including in its enterprise cloud computing division.

      To read this article in full, please click here

    • Big banks' proposed digital wallet payment system likely to fail Wed, 25 Jan 2023 03:00:00 -0800

      A group of leading banks is partnering with payment service Zelle’s parent company to create their own “digital wallet” connected to consumer credit and debit cards to enable online or retail store payments.

      The new payment service, however, must compete with entrenched digital wallets such as Apple Pay and Google Pay that are embedded on mobile devices. It’s also not the first attempt for some in the consortium to create a digital wallet payment service.

      The consortium includes Wells Fargo & Co., Bank of America, JPMorgan Chase, and four other financial services companies, according to The Wall Street Journal (WSJ). The digital wallet, which does not yet have a name, is expected to launch in the second half of this year.

      To read this article in full, please click here

    • Google's parent company Alphabet to cut 12,000 jobs Fri, 20 Jan 2023 03:48:00 -0800

      Google’s parent company Alphabet is cutting 12,000 jobs, around 6% of its global workforce, according to an internal memo from Sundar Pichai, Alphabet's CEO.

      Pichai told employees in an email first reported by Bloomberg on Friday that he takes “full responsibility for the decisions that led us here” but the company has a “substantial opportunity in front of us” with its early investments in artificial intelligence.

      The layoffs are global and will impact US staff immediately, news outlet Reuters also reported. They will affect teams across Alphabet, including recruiting and some corporate functions, as well as some engineering and products teams.

      To read this article in full, please click here



    Pac-Man Video Game - Play Now
