
Google Cloud Blog

  • Announcing Cross-Cloud Interconnect: seamless connectivity to all your clouds Wed, 31 May 2023 16:00:00 -0000

    Enterprises continue to adopt hybrid and multicloud as they migrate and leverage the cloud to build and run their applications. According to IDC studies conducted in 2022, 64% of customers were using multiple cloud providers for public cloud IaaS services [1] and 79% indicated a need to simplify and unify how they manage and secure dedicated cloud infrastructure to improve business resiliency [2]. Enterprises need simple connectivity at the physical and service layers, and the ability to stand up distributed cloud applications and SaaS workloads within minutes. 

    Networking is often the first step, but connecting and securing distributed environments can be complex. Today, we are announcing a significant expansion to our Cloud Interconnect portfolio with Cross-Cloud Interconnect, which lets you connect any public cloud with Google Cloud through a secure, high performance network, allowing organizations to run applications on multiple clouds, simplify SaaS networking in a multicloud environment, and migrate workloads from one cloud to another. Cross-Cloud Interconnect is now generally available in many global locations for customers connected to the following cloud providers — Amazon Web Services, Microsoft Azure, Oracle Cloud Infrastructure, and Alibaba Cloud — with support planned for additional cloud providers based on customer demand. 

    We also made significant enhancements for Private Service Connect to support Cross-Cloud Interconnect, and automated service connection policies. In addition, we added new Cloud Interconnect capabilities, and released a new total cost of ownership (TCO) analysis showing lower costs of Google Cloud Networking. 

    Seamless multicloud connectivity with Cross-Cloud Interconnect

    The rise of distributed applications has been challenging for cloud infrastructure teams trying to ensure connectivity, performance, security, and reliability. Connecting clouds together often requires complex configurations, dedicated hardware, and lengthy operational processes. Cross-Cloud Interconnect simplifies the configuration, minimizes hardware, and reduces overhead.  

    Available in 10 Gbps or 100 Gbps options, Cross-Cloud Interconnect does not require new hardware, has the same features as Cloud Interconnect, and is backed by an SLA. Customers with multicloud environments can leverage Cross-Cloud Interconnect to enable the following: 

    • Private and secure connectivity across clouds 

    • Line-rate performance and high reliability with 99.99% SLA

    • Lower TCO without the complexity and cost of managing infrastructure 

    Figure: Architectural diagram of Cross-Cloud Interconnect between Google Cloud and a cloud service provider.

    Cross-Cloud Interconnect is available in major cities around the world with many of our partners, and because Cross-Cloud Interconnect is a fully managed service, setting it up is simple, as you can see in this demo. 

    “As enterprises, globally, look to adopt hybrid and multicloud network services – as part of their digital transformation journeys, they increasingly recognize the strategic value of cross-cloud interconnection capabilities as integral to their business success. Google Cloud’s Cross-Cloud Interconnect offerings address a broad range of hybrid and multicloud network connectivity use cases that span the datacenter, cloud core, and edge, while offering multi-gigabit speeds, disruptive pricing, high-availability, and ease of use.” – Vijay Bhagavath, Research VP for Cloud and Datacenter Networking, IDC 

    Early access customers like Walmart and Pexip are leveraging Cross-Cloud Interconnect to enable faster business outcomes. 

    “Walmart runs a seamless platform across multiple cloud providers to accelerate innovations. We partner with Google Cloud to leverage their global network for hybrid and multicloud networking. With Cross-Cloud Interconnect, we were able to simplify connectivity between cloud providers, shorten time to production, and reduce overall costs.” - Gerald Bothello, Senior Director of Software Engineering, Walmart 

    "Pexip provides a secure, multi-technology video conferencing experience that helps our customers protect and personalize their meetings. With Cross-Cloud Interconnect and our partnership with Google, we can seamlessly connect platforms from multiple clouds to expand our reach and bring our solution to a broader audience at scale." - Thomas Guggenbuhl, Principal Engineer Infrastructure Operations, Pexip

    We continue to work with our partner ecosystem to extend co-location facilities worldwide and jointly drive the simplification of multicloud connectivity for our customers. 

    "Equinix provides the world’s digital infrastructure enabling enterprises to connect anywhere. We are excited to collaborate with Google Cloud to expand on its multicloud networking strategy to deliver scalable, seamless, high bandwidth connectivity for enterprises." - Bill Bushman, Business Development Director, Platform Alliances, Equinix

    Hybrid connectivity to SaaS with Private Service Connect

    Configuring the network to run producer services can be time-consuming, slowing down service rollouts and delaying cloud integrations. Private Service Connect makes it easier to create private and secure connections from your VPCs to Google services, partner services, or your own services, simplifying network configuration for published services and SaaS applications. In addition to Google services such as Apigee, BeyondCorp Enterprise, Cloud Composer, and Google Kubernetes Engine, Private Service Connect supports many partner services such as DataStax, Elasticsearch, and MongoDB. 


    Recently, we announced hybrid and global access for Private Service Connect, allowing on-premises and multicloud clients to access producer services in Google Cloud privately, from anywhere. Private Service Connect global access, now generally available, allows customers to access Google and partner published services from their on-prem locations and between regions. 

    Today, we announced the following new enhancements to Private Service Connect: 

    • Private Service Connect over Cross-Cloud Interconnect is now generally available, letting customers access producer services in Google Cloud over Cross-Cloud Interconnect.  

    • Service Connection Policies, in preview, let network admins create policies that automate the connectivity of Private Service Connect for managed services. This makes it easy for service admins to deploy, update, and delete published services connected by Private Service Connect without opening networking tickets. 

    A secure and highly resilient foundation 

    Our Dedicated and Partner Interconnect offerings provide secure, highly resilient connectivity for hybrid cloud networking. Dedicated Interconnect is now available globally with 10 Gbps and 100 Gbps options in 144 locations, 14 of which we added recently. 


    We also added new enhancements to Cloud and Cross-Cloud Interconnect with new encryption options, higher performance, and greater network resiliency with faster link detection to ensure your mission-critical applications are available and performing well. The enhancements include the following: 

    Secure hybrid connectivity

    • MACsec for Cloud Interconnect, in preview in early 3Q23, provides point-to-point line-rate L2 encryption to protect “first mile/last mile” communications between the customer and Google Cloud. 

    • HA VPN over Interconnect is generally available, and provides IPSec encryption to protect communications between a customer’s on-prem VPN gateway and the HA VPN gateway in Google Cloud.

    Higher performance 

    • 9K MTU support, currently in preview, enables large packet sizes to deliver higher throughput over Interconnect offerings.

    • Bidirectional forwarding detection, now generally available, is a path-outage detection protocol that expedites detection of link failure to initiate traffic rerouting. 

    • BGP enhancements including support for custom learned routes, increased prefix scale, and non-link local addressing are currently in preview. 

    Reduce TCO 

    Google Cloud Networking delivers planet-scale, enterprise-ready networking with seamless access to partner and third-party services and a consistent management policy extending from on-prem to cloud, and it does so at a lower total cost of ownership (TCO) than other cloud service providers. 

    According to a recent paper from Enterprise Strategy Group, customers running on Google Cloud can achieve a TCO reduction of up to 28%. In The Economic Advantage of Google Cloud's Advanced Networking Services, ESG considered both the direct costs for networking services and the costs of network administration and ongoing management. ESG predicted that customers can see up to 22% lower connectivity and egress costs, and up to 28% lower cloud network administration costs, when using Google Cloud compared to other major public cloud vendors. A blog post with more details can be found here.

    These savings are a direct result of both the global scale and scope of Google Cloud’s networking services, and a rich set of native capabilities allowing customers to deploy, manage, scale, optimize, and secure hybrid or multi-cloud workloads easily. The new products and features discussed earlier in this article extend these capabilities even further. 

    Learn more about Google Cloud Networking 

    At Google Cloud Next ‘23, we’ll offer several sessions with deep dives into the network innovations that can help you drive hybrid and multicloud networking transformation initiatives. Register today!


    [1] What Are Enterprise Multicloud Adoption Trends, IDC, Andrew Smith, IDC #US48902122, March 2022
    [2] Multicloud Networking - An Essential Component of Digital Infrastructure, IDC, Brad Casemore, IDC #US48965022, March 2022

  • The economic advantages of Google Cloud Networking Wed, 31 May 2023 16:00:00 -0000

    Enterprise Strategy Group (ESG) recently published a 15-page report on the Economic Advantages of Google Cloud’s Advanced Networking Services, detailing how these advantages can help customers realize up to a 28% total cost savings for their cloud networking. In this blog, we explore the six key areas customers should consider when evaluating public cloud providers, and some of the advantages gained by building on Google’s globally scaled cloud network.

    The network

    Google has an advanced, globally scaled, fiber-optic software-defined network. This global network supports billions of users accessing various Google services like YouTube, Search, and Maps, as well as Google Cloud. You can capitalize on this global network for your Google Cloud workloads with full confidence in its scalability and robustness. You can find the current count of regions, zones, and edge locations on the Cloud Locations page.


    6 considerations when evaluating Cloud Networking

    The report goes into quantitative and qualitative detail on the areas that enterprises should examine when choosing between cloud provider networks. Let's explore these areas and how Google Cloud Networking services support each of them.

    # 1 - There are differences between cloud network architectures.
    Robust cloud networks should be software-defined, scalable, simple, and automatable. A well-documented architecture, built upon consistent innovation and transparent about network performance, can greatly reduce administrative overhead. Google Cloud’s network meets all of these requirements and more. Google has documented its Andromeda stack for software-defined networking, contributes significantly to Internet Engineering Task Force (IETF) RFC standards, and provides a public global view of inter-region network latency and throughput. Unlike those of other cloud providers, Google Cloud's network capabilities are globally scoped, which helps customers reduce architectural complexity and simplify network management.

    # 2 - A cloud network should provide simple and flexible hybrid cloud connectivity.
    Google Cloud hybrid connectivity options offer flexibility for customers’ business requirements. Cloud VPN provides easy-to-set-up, high-availability connectivity. For organizations that need more stable and higher-bandwidth connections, Cloud Interconnect options like Dedicated and Partner Interconnect provide that connectivity. Private Service Connect, meanwhile, allows secure and private access to Google Cloud, third-party, or customer-managed cloud services.

    In Google Cloud, a Virtual Private Cloud (VPC) is a global construct that is different from other cloud providers’. You can create a single VPC and provision different isolated subnets in any region within the same VPC. This allows customers to create and manage fewer VPCs, thus lowering operational overhead.

    # 3 - A cloud network should make it easy to scale and accelerate workloads.
    Google Cloud Managed Instance Groups and cluster autoscaling help you scale your resources automatically. Google Cloud also offers a robust set of load balancers that support various traffic options to meet both global and regional requirements. Load balancers distribute traffic across your backend targets, which can exist within and outside of Google Cloud.

    With Cloud CDN for static content and Media CDN for streaming content, customers benefit from the same global footprint and network performance as other Google services such as YouTube. Customers can verify CDN performance as measured by Cedexis and made publicly available here.  

    # 4 - A cloud network should ensure secure hybrid cloud operations.
    Google Cloud services like Cloud NAT, Cloud Firewall and Cloud Armor can be utilized at different points in the environment to provide a layered defense-in-depth approach. Cloud Armor was in the news last year when it helped a customer mitigate a Layer 7 DDoS attack at 46 million requests per second. In addition to native capabilities, Google Cloud also supports third-party appliances including many partner solutions that are directly available in the Google Cloud Marketplace. 

    # 5 - A cloud network should provide visibility and control.
    Google Cloud Network Intelligence Center provides real-time observability for your network. It does this through individual modules (Network Topology, Connectivity test, Performance Dashboard, Firewall Insights, Network Analyzer) which provide targeted visibility into aspects of your network infrastructure including latency, throughput, connectivity between specific resources, and resource configuration. This resource analysis highlights configuration issues that can cause network failures, lead to resource exhaustion, or otherwise result in sub-optimal performance — often proactively before an issue can be observed by the end user.  

    # 6 - Initiatives must support modernization efforts.
    Google Cloud supports application modernization with many services to help customers both migrate and build applications on the platform. Google Kubernetes Engine (GKE) provides high-scale Kubernetes deployments with up to 15,000 nodes per cluster while introducing next-generation features such as the GKE Gateway Controller, a production implementation of the new Kubernetes Gateway API. Google also continues to develop and/or contribute to many popular open standards and open-source projects such as HTTP/3, QUIC, gRPC, eBPF, Envoy, Istio, and, of course, Kubernetes itself.

    Get the report

    The full report goes into much greater detail with comparisons, percentages, charts, and diagrams to evaluate the economic benefits that customers can, and should, expect from their cloud provider. To read more, download your copy of the ESG Report: The Economic Advantage of Google Cloud’s Advanced Networking Services today. 

    For more on cloud networking visit https://cloud.google.com/products/networking.

  • Config Connector: An easy way to manage your infrastructure in Google Cloud Wed, 31 May 2023 16:00:00 -0000

    Today many companies manage their infrastructure and configure environments using multiple tools that are either stand-alone or part of a larger CI/CD pipeline solution. Tools such as Cloud Build in Google Cloud, HashiCorp’s Terraform, or AWS CloudFormation allow developers to use purpose-built languages such as HashiCorp’s HCL or the Cloud Build configuration language to define their environment's infrastructure and/or automate its provisioning.

    One of the popular tools, Terraform, is a widely used Infrastructure as Code (IaC) software tool to provision infrastructure on Google Cloud and other cloud platforms. Google is actively supporting Terraform by contributing to the Google Cloud Provider for Terraform and developing the Cloud Foundation Toolkit which includes many useful Terraform modules.

    In this post, we evaluate another solution available on Google Cloud, Config Connector (a.k.a. KCC), and show how cloud users can improve their operational processes with it compared to other available tools. Google first announced it in 2020. Config Connector is a Kubernetes operator that allows you to manage Google Cloud resources. Config Connector utilizes the Kubernetes Resource Model to enforce a contract between the configuration a developer has defined and the infrastructure. This is often referred to as Configuration as Data; you can read more about Configuration as Data in this blog post. Compared to Terraform, Config Connector applies a reconciliation strategy to keep cloud infrastructure as close to the declared configuration as possible, in real time.
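    To make the reconciliation idea concrete in general terms, here is a minimal, hypothetical Python sketch (not Config Connector's actual code): a controller compares the declared configuration with the observed state and converges the latter toward the former. Config Connector applies this pattern to Google Cloud resources, with Kubernetes objects acting as the declared configuration.

      from typing import Callable, Dict

      Spec = dict  # a desired or observed resource configuration

      def reconcile(
          desired: Dict[str, Spec],
          observe: Callable[[], Dict[str, Spec]],
          apply_change: Callable[[str, Spec], None],
          delete: Callable[[str], None],
      ) -> None:
          """One reconciliation pass: converge observed state toward desired state."""
          observed = observe()  # e.g., list resources through a cloud API
          # Create or update anything that is missing or has drifted.
          for name, spec in desired.items():
              if observed.get(name) != spec:
                  apply_change(name, spec)
          # Remove anything that exists but is no longer declared.
          for name in observed:
              if name not in desired:
                  delete(name)

    In practice, an operator runs a pass like this continuously, triggered by watches and periodic resyncs, which is what keeps the infrastructure aligned with the declared configuration in real time.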

    Config Connector can provide a developer with a number of advantages:

    • Native integration with GKE and Anthos Configuration Management simplifies provisioning of both Google Cloud resources and application workloads across multiple environments.

    • Automated reconciliation observes the infrastructure state and repairs any discrepancies between the desired and observed states without need for additional monitoring or manual intervention.

    • Centralized configuration management lets you manage workload and infrastructure configurations for all environments in one place and in one format.

    • As a managed solution, Config Connector reduces operational and maintenance overhead for DevOps teams, saving time and helping to speed up onboarding of new team members.

    You can reference the following decision tree when deciding which tool to use when provisioning Google Cloud infrastructure:

    Figure: Decision tree for choosing a provisioning tool.

    Using Config Connector also lets developers benefit from extensive observability capabilities. Leveraging integration with GKE and Cloud Operations suite, you can audit Config Connector operations and the reconciliation state of the configuration. Additionally, you can automate incident handling by defining alert policies to be triggered when there are problems with configuration, provisioning or reconciliation. For example, the following set of log filters can be used to query problems with configuration references (e.g., a resource references a Kubernetes Secret that cannot be found):

      resource.type="k8s_pod"
      resource.labels.namespace_name="cnrm-system"
      labels.k8s-pod/cnrm_cloud_google_com/component="cnrm-controller-manager"
      jsonPayload.kind="Event"
      jsonPayload.message:"DependencyInvalid"

    See the Config Connector documentation about monitoring and troubleshooting for more information.

    Getting started with Config Connector is simple. All you need is a GKE cluster; then you can enable the Config Connector add-on to have Config Connector automatically installed on the cluster. There are several options for installing Config Connector. The following paragraphs summarize the pros and cons of each option.

    • Config Controller is a great choice if you are looking to minimize maintenance cost and add support for GitOps components. To use it, you would have to enable Anthos in your projects which may introduce management and cluster fees. If you already use Anthos Config Management (ACM), Config Controller is already available for you. ACM hosts Config Connector and automatically upgrades it to the latest stable version.

    • Manual installation is useful when you need a high level of customization and control over Config Connector. Using this method you install a Kubernetes operator and additional CRDs on your GKE cluster. This also enables you to install Config Connector on other Kubernetes distributions. It comes at higher operational costs since you will own the hosting and configuration of Config Connector.

    • GKE Config Connector add-on is a good choice as a jump-start solution. It can be installed on any new or existing GKE Standard cluster (starting with version 1.15) using a single configuration setting. However, we discourage using it in production because it lags significantly behind the latest Config Connector version. It also comes with the operational costs of provisioning and maintaining the hosting GKE cluster.

    Once Config Connector is installed, you can provision Google Cloud resources like you do your Kubernetes workloads. For example, the following code snippet will create a BigQuery dataset:

      apiVersion: bigquery.cnrm.cloud.google.com/v1beta1
      kind: BigQueryDataset
      metadata:
        name: bigquerydataset-sample-for-creation
      spec:
        resourceID: bigquerydataset_sample_with_resourceid
        defaultTableExpirationMs: 3600000
        description: "BigQuery Dataset Sample"
        friendlyName: bigquerydataset-sample-with-resourceid
        location: US

    (This example uses a user-specified resource ID to identify the BigQuery dataset.)

    In many scenarios Config Connector can replace multiple other tools while minimizing the time it takes to reconcile configured and actual states. The managed nature of Config Connector, together with broad coverage of Google Cloud resources and services and integration with Anthos configuration management, makes it a Swiss Army knife for DevOps pipelines on Google Cloud. You can familiarize yourself with Config Connector by reading the documentation. Give it a try!

  • Cloud CISO Perspectives: Late May 2023 Wed, 31 May 2023 16:00:00 -0000

    Welcome to the second Cloud CISO Perspectives for May 2023. I hope you all enjoyed our previous newsletter from my Office of the CISO colleague MK Palmore, on Google’s new cybersecurity certification and how it can help prepare aspiring cybersecurity experts for their next career steps.

    Before I jump into my column today, I’d like to encourage everyone to sign up for our annual Security Summit, coming in just a few weeks on June 13-14. This year, we’ll explore the latest technologies and strategies from Google Cloud, Mandiant, and our partners to help protect your business, your customers, and your cloud transformation. You can register for the broadcast in your choice of two time zones here. We hope to see you there.

    As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.


    Integrating digital sovereignty with cloud security

    Today, I’d like to talk about one of the more complex and important topics in our current cloud discourse: digital sovereignty. Simply put, digital sovereignty is an organization’s intention to retain control over their data and how that data is stored, processed and managed when using third-party services — including cloud providers. 

    Organizations should feel that they have control over their data. When those controls have been designed well, they should encourage even more organizations to use the cloud and benefit from all that the cloud offers. 

    Digital sovereignty is a subject that we feel strongly about, and over the past few years, Google Cloud has worked extensively with customers, partners, policy makers, and governments to understand their evolving sovereignty requirements.

    We take an expansive view of sovereignty requirements encompassing data, operations, and software. We also see the control of encryption as vital to addressing these requirements and have engineered leading encryption solutions. Along with having these solutions in our cloud, we also have led the industry on establishing partnerships with trusted local partners to address concerns of working with foreign providers.


    Google Cloud has been leading this dialogue and developing digital sovereignty solutions since 2019. Our ongoing discussions in the market have taught us that designing a digital sovereignty strategy that balances control and innovation is challenging, for four main reasons:

    1. Foundational concepts are not always well-understood, including regulatory requirements, legal safeguards, and risk management.

    2. Many organizations struggle to articulate their specific requirements, particularly when it comes to how sovereign strategies enable digital transformation. 

    3. Choosing the best technologies and solutions to meet those requirements can be difficult. 

    4. A shortage of advisory capacity and expertise in the market can make these challenges even harder to overcome.

    While digital sovereignty challenges cross boundaries and oceans, we’ve focused many of our initial efforts in Europe. This has resulted in our “Cloud. On Europe’s Terms” initiative and a broad portfolio of Sovereign Solutions we have already brought to market to help support customers’ current and emerging needs as they bring more workloads to the cloud. 

    We’ve also developed the Digital Sovereignty Explorer, a tool designed to help you make progress on your understanding of key concepts and potential solutions, which we introduced in March. Initially focused on the needs of European organizations, the Explorer is an online, interactive tool that takes individuals through a guided series of questions about their organizations’ digital sovereignty requirements.

    One benefit of our early digital sovereignty investments is that they have helped strengthen other areas we’re focused on. Confidential Computing has also proven to be a helpful additional control for organizations implementing digital sovereignty strategies, providing encryption and protection for data-in-use, where encryption keys are not accessible to the cloud provider.

    Innovating to address digital sovereignty requirements is important to advance digital transformation and technological creativity, and to join in the benefits of the cloud. We’re going to continue to engage with customers, our partners, governments, and regulators to deliver novel solutions that meet local requirements. 

    In case you missed it

    Here are the latest updates, products, services, and resources from our security teams so far this month: 

    • Get ready for Google Cloud Next: Discounted early-bird registration for Google Cloud Next ‘23 is open now. This year’s Next comes at an exciting time, with the emergence of generative AI, breakthroughs in cybersecurity, and more. It’s clear that there has never been a better time to work in the cloud industry. Register now.

    • Partnering with Health-ISAC to strengthen the European healthcare system: We’re growing our relationship with Health-ISAC to include CISOs and security leaders in Europe, the Middle East, and Africa (EMEA), starting with a joint 17-city tour across the region, as part of its European Healthcare Threat Landscape Tour. Read more.

    • 4 ways to improve cybersecurity from the boardroom: Here are four ways that boards and cybersecurity teams can keep their organizations more secure and reduce risk. Read more.

    • How does Google protect the physical-to-logical space in a data center? Each Google data center is a large and diverse environment of machines, networking devices, and control systems. In these complex environments, the security of your data is our top priority. Learn how we keep it secure. Read more.

    • Introducing reCAPTCHA Enterprise Fraud Prevention: We are pleased to announce the general availability of reCAPTCHA Enterprise Fraud Prevention, a new product that uses Google's own fraud models, machine learning, and intelligence from protecting more than 6 million websites to help stop payment fraud. Read more.

    • How Apigee can help government agencies adopt Zero Trust: Securely sharing data is critical to building an effective government application ecosystem. Rather than building new applications, APIs can enable government leaders to gather data-driven insights within their existing technical environments, which Google Cloud’s Apigee can help achieve. Here's how.

    News from Mandiant

    • New OT malware possibly related to Russian emergency response exercises: Mandiant identified COSMICENERGY, a novel operational technology (OT) and industrial control system (ICS)-oriented malware possibly related to Russian emergency response exercises, which has demonstrated a cyber impact to physical systems. Read more.

    • Don't @ me: URL obfuscation through schema abuse: Mandiant has found attackers distributing multiple malware families by obfuscating the end destination of a URL by abusing the URL schema. This technique can increase the likelihood of a successful phishing attack. Read more.

    • A requirements-driven approach to cyber threat intelligence: Mandiant’s latest report on applying threat intelligence outlines what it means to be requirements-driven in practice, offering actionable advice on how intelligence functions can implement and optimize such an approach within their organizations. Read more.

    • Cloudy with a chance of bad logs: As organizations increasingly move to cloud and security teams struggle to keep up, Mandiant provides a hypothetical scenario of a cloud platform compromise with multiple components that would require investigation. Read more.

    Google Cloud Security Podcasts

    We launched a weekly podcast focusing on Cloud Security in February 2021. Hosts Anton Chuvakin and Timothy Peacock chat with cybersecurity experts about the most important and challenging topics facing the industry today. Earlier this month, they discussed:

    • The good, the bad, and the epic possibilities of threat detection at scale: Good detection is hard to build, whether defined for a rule or a piece of detection content, or for a program at a company. Reliably producing good detection content at scale is even trickier, so we chatted with Jack Naglieri, founder and CEO, Panther Labs. Listen here.

    • Firewalls in the cloud: Nevermind the difference between firewalls and firewalling (although we discuss that, too) — does the cloud even need firewalls? Our own senior cloud security advocate, Michele Chubirka, gets us grounded on all things cloud firewall. Listen here.

    To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back at the end of the month with more security-related updates.

  • Realizing cloud value for a render platform at Wayfair — Part 2 Tue, 30 May 2023 16:00:00 -0000

    In our previous post, we discussed various cost optimization strategies that we here at Wayfair identified to use on a render platform that we had recently moved to Google Cloud using a lift-and-shift migration strategy. In this post, we’ll shift our focus to how we applied those strategies to plan and execute the initiatives, and share what we learned.

    Plan and execute

    Once we identified the key cost and usage levers, our workgroup partnered with engineers, analytics, and Google Cloud to come up with a plan and execution framework. We set up high-level objectives and key results with a baseline and target to align the team and get stakeholders’ buy-in.

    Objective: Render each asset most efficiently without compromising on quality
    KR1: Reduce Render Core Cost Per Hour by 30%
    KR2: Reduce Avg Render Time Per Image Request by 35%

    Initiatives and prioritization

    We identified our key initiatives, pushed teams to come up with effort and ROI calculations for each of them, and placed them on a quadrant of savings versus effort. Using this quadrant we identified low-effort initiatives that would yield mid-to-high savings. We also identified high-effort, high-savings initiatives and saved them for later. You can find other examples of balancing effort with spend in the following diagram.

    Figure: A four-quadrant prioritization matrix for identifying cost optimization opportunities. Source: Understand the principles of cloud cost optimization.
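    As a rough illustration of this prioritization step, here is a hypothetical Python sketch (not Wayfair's actual tooling; the thresholds and figures are made up) that buckets initiatives into the four quadrants by comparing estimated savings against estimated effort:

      from dataclasses import dataclass

      @dataclass
      class Initiative:
          name: str
          est_monthly_savings_usd: float  # hypothetical ROI estimate
          est_effort_weeks: float         # hypothetical engineering effort

      def quadrant(i: Initiative, savings_cut: float = 10_000, effort_cut: float = 4) -> str:
          """Classify an initiative into an effort/savings quadrant."""
          high_savings = i.est_monthly_savings_usd >= savings_cut
          low_effort = i.est_effort_weeks <= effort_cut
          if high_savings and low_effort:
              return "do first"        # low effort, high savings
          if high_savings:
              return "plan for later"  # high effort, high savings
          if low_effort:
              return "quick win"       # low effort, modest savings
          return "deprioritize"        # high effort, low savings

      initiatives = [
          Initiative("Schedule weekend shutdowns", 25_000, 1),
          Initiative("Custom autoscaler", 40_000, 8),
          Initiative("Trim unused image angles", 8_000, 2),
      ]
      for i in initiatives:
          print(i.name, "->", quadrant(i))

    An initiative that lands in the low-effort, high-savings quadrant would be executed first, mirroring the sequencing described below.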

    Using the framework above we executed the following initiatives in the order outlined:

    Jun 2022

    • Implement cost dashboards: We created a deep-dive render cost dashboard using Cloudability to track every aspect of spend on the Render Farm, with daily, weekly, and monthly trends for each cost bucket in the Google Cloud project for rendering, giving engineers and leaders a clear view of spend on Google Cloud.

    • Schedule shutdowns: One of the first things we did was shut down a high percentage of farm capacity on the weekend; this was a no-brainer after looking at the render-hour usage data on weekends. 

    • Optimize render settings: We adjusted Global Illumination, Max Subdivision, and Radiance for scenes used in modeling to reduce the number of hours needed to produce images with similar quality.

    • Rightsize Automated farm: We also cut the Automated farm size by 30% to improve the overall farm utilization based on the usage models.

    Jul 2022

    • Deploy multiple MIGs with instance types: We had initially optimized the render nodes migrated from on-prem for the top 10th percentile workload, using the Google C2D-Standard-30 instance type. Based on recommendations from Google, we benchmarked the new Tau instances and found that T2D-Standard-16 performs better for 90% of our use cases, with savings of more than 50%.

    • Reduce images per round: We noticed that some images rendered on the farm did not add any value, and in certain cases were never utilized at all. We removed certain class-specific images from render requests to reduce wasted renders per round of work, and hence reduce workload requirements further.

    • Implement self-service render usage dashboard: We worked closely with our partners in Data Engineering to create real-time visibility into render-hour usage, along with the ability to slice the data across various dimensions, so teams could identify any waste as early as possible and address it right away.

    Aug 2022

    • Autoscaling: In close partnership with Google Cloud and the analytics team, we created a custom scaling algorithm that looks into current farm usage, submission volume and patterns to control the deployed nodes on the farm at regular intervals; this helped us achieve a target utilization of 90%+ on the render farm.
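    As a rough, hypothetical sketch of one way such a policy could look (not Wayfair's actual algorithm; the parameters and numbers are made up), the Python below sizes the farm from current utilization and queued submissions, clamped between a floor and a ceiling, and would be run at a regular interval:

      import math

      def target_node_count(
          busy_nodes: int,
          queued_jobs: int,
          jobs_per_node: float = 2.0,       # hypothetical per-node throughput
          target_utilization: float = 0.9,  # the 90%+ goal mentioned above
          min_nodes: int = 10,              # hypothetical floor
          max_nodes: int = 500,             # hypothetical ceiling
      ) -> int:
          """Pick a node count that keeps farm utilization near the target."""
          # Capacity needed for work already running plus work waiting in the queue.
          needed = busy_nodes + queued_jobs / jobs_per_node
          # Leave headroom so steady-state utilization sits near the target.
          desired = math.ceil(needed / target_utilization)
          return max(min_nodes, min(max_nodes, desired))

      # Example: 180 busy nodes and 120 queued jobs -> scale to ~267 nodes.
      print(target_node_count(busy_nodes=180, queued_jobs=120))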

    In a period of 5 months, from May 2022 to Sep 2022, our monthly costs went down by approximately 85%. We achieved all this without any impact on core business objectives around imagery quality or speed of delivery. And we’re not done yet: we plan to drive additional savings of ~25%+ by eliminating license costs over the next few months. We will also explore Spot instances and further optimize Artist pools for more savings on the cloud.

    Gaining knowledge through experience 

    Throughout the cost optimization process, we learned a lot. Here are some highlights. 

    Work collaboratively

    The speed and level of optimization we saw were possible due to a very tight collaboration between the engineering, business, infrastructure, and Google teams. The business played an instrumental role in identifying opportunities to optimize and in rightsizing the quality and throughput of the pipeline. The Google Cloud team jumped in multiple times during design phases to point us in the right direction when selecting machine types or building algorithms to autoscale within constraints, helping us save more. They even helped with cost modeling. The Google teams were tremendously insightful.

    Plan and execute

    Going in, we set clear rules for ourselves: design and pressure-test initiatives; whiteboard before keyboard to validate each initiative; and prioritize initiatives ruthlessly during deep dives. There are many ways to achieve the end goal, but sequencing them using Google’s FinOps and cost optimization framework helped us plug the leaks immediately with low-effort, high-savings initiatives. Once we identified the initiatives, we delivered them in small increments every couple of weeks, driving immediate impact on our spend.

    Measure and iterate

    Finally, we created realistic objectives and measurable key results for the team and provided complete transparency to every member of the team through weekly metric reporting. To drive accountability and ownership on an ongoing basis, we created reports and dashboards, along with proactive monitors, to provide teams with deep-dive data on render-farm usage and daily costs. Best of all, we’re just getting started: thanks to the visibility provided by these data points, we continue to identify opportunities to fine-tune both cost per hour and render-hour usage. To learn more about how to identify and implement cost savings in your environment, we highly recommend Google Cloud’s whitepaper, Understand the principles of cloud cost optimization.


    Googler Hasan Khan, Lead Principal Architect, Retail Global, contributed to this post.

  • Realizing cloud value for a render platform at Wayfair - Part 1 Tue, 30 May 2023 16:00:00 -0000

    At Wayfair, we have content creation pipelines that automate some portions of 3D model and 3D scene creation, and render images from those models/scenes. At a high level, suppliers provide us with product images and information about dimensions, materials, etc., and we use them to create photorealistic 3D models and generate proprietary imagery. But creating these 3D renders requires significant computation (rendering) capabilities. Last year, we performed a lift-and-shift migration to the cloud, but because we hadn’t optimized our workloads for the cloud, our costs bubbled up substantially. We worked closely with Google Cloud to optimize our render platform, driving an estimated ~$9M of savings on an annualized basis. 

    Let’s take a look at how we did it.

    Lift and shift to Google Cloud

    We’ve been working with the Google Cloud team to complete the transition from a hybrid cloud to a Unified Public Cloud strategy. We have two different "farms" that we use, one primarily for automation tasks and the other for rendering tasks: 

    • The Automation farm is managed using OpenCue to dispatch jobs to the nodes 

    • The Render Farm uses Deadline to dispatch jobs to the nodes; we completed migrating it from on-premises to the cloud in Q2 2022.

    Here’s our lift-and-shift deployed architecture on Google Cloud:

    Figure: Lift-and-shift deployed architecture on Google Cloud.

    During the migration, our goal was to provide as-is SLAs to our customers without compromising the quality of the pipelines. Post-migration, we recognized inefficiencies in the deployed architecture, which was not well suited for the economics of the pay-as-you-go cloud model. In particular, the architecture had:

    • Poor infrastructure optimization with fixed farm size and one-size-fits-all machines

    • Missed opportunities for automation and consolidation

    • Minimal visibility into cost and render-hour usage across the farm

    • Wasted usage on rendering due to non-optimized workflows and cost controls

    We realized that we could do better than a one-size-fits-all model. With the variety of compute options available from Google Cloud, we decided to take advantage of them for the farm. This would help us not only optimize costs but also gain better visibility into cost and render-hour usage across the rendering farm, for greater savings.

    Cost optimization strategy

    We followed the three Cloud FinOps principles — inform, optimize and operate — to create a holistic strategy to optimize our spending and drive sustained governance going forward.

    Simplified view

    To create an execution plan, the first step was to thoroughly understand what was driving our cloud spending. When on-prem, we didn’t have a lot of insight into our usage and infrastructure costs, as those were managed by a centralized Infrastructure team. During deep dives, we realized that due to this lack of visibility into usage, we had many inefficiencies in our deployed infrastructure footprint and in how the farm was used by artists and modelers. 

    We formed a focused team of engineers, business stakeholders, infrastructure experts, and Google Cloud to drive discussions. To optimize rendering costs, we needed to not only drive down the cost of the rendering platform but also optimize the workflows to reduce render-hour usage per asset. We developed a simplified formula based on the all-inclusive render cost per core hour and the time needed for each asset, making it easier for each team to drive objectives with focus and transparency. On Google Cloud, we were shifting the focus from an owned-asset model to a pay-per-usage model.
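    As a hedged illustration of that simplified formula (all numbers below are made up), the all-inclusive cost to render an asset can be decomposed into an all-inclusive rate per core hour multiplied by the core hours the asset consumes:

      def cost_per_core_hour(compute_usd, license_usd, other_usd, core_hours_used):
          """All-inclusive render cost per core hour for a billing period."""
          return (compute_usd + license_usd + other_usd) / core_hours_used

      def cost_per_asset(core_hour_rate, render_hours, cores_per_node):
          """Cost to render one asset from its render time on a single node."""
          return core_hour_rate * render_hours * cores_per_node

      # Hypothetical monthly numbers, for illustration only.
      rate = cost_per_core_hour(compute_usd=100_000, license_usd=80_000,
                                other_usd=15_000, core_hours_used=1_500_000)
      print(round(rate, 4))                                  # ~0.13 per core hour
      print(round(cost_per_asset(rate, render_hours=2.5,
                                 cores_per_node=16), 2))     # ~5.20 per asset

    Splitting the metric this way lets one team focus on lowering the core-hour rate (infrastructure, licenses) while another reduces the hours each asset needs (workflow efficiency).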


    Identifying cost levers

    One of our goals was to optimize the all-inclusive render cost per hour. We categorized the overall spend on the farm into various funnels and assigned weights to the impact each lever could drive. At a high level, we looked into the following key areas:

    • Nodes - Are we using the right machine size and configurations on the farm? The current deployment had a single pool which forced the machine size to be optimized for worst-case usage, leading to waste for 90% of our use cases. Can we use GPU acceleration to optimize render times? What about leveraging instance types like Spot?

    • Utilization - What is per-node and overall farm utilization around the clock (24x7x365)? We looked at usage and submission patterns on the farm, along with utilization, to find ways to drive efficiency. 

    • Licenses - Because of the change from an on-prem Enterprise license to a Data Center license on Google Cloud, license costs accounted for around 45% of the overall spend on the farm. What software licenses are used on the farm? What constraints do they enforce on scaling?

    • Other - We looked at storage, network transfers, and other miscellaneous costs on the farm. Together, they only accounted for 8% of the overall spend, so we deemed them insignificant to optimize initially.

    Levers for optimizing usage

    As part of our holistic strategy, we also set goals for improving workflow efficiency to optimize rendering hours. We realized during cost modeling that we could unlock large benefits by reducing the hours needed for the same unit of work. At a high level, we looked into the following areas:

    • Render quality - Can we optimize render settings like Irradiance, Noise Threshold, and Resolution to reduce the render hours needed for each request without substantially impacting the quality of the final renders?

    • Work unit - Can we reduce the number of render frames, render rounds, and angles for each request to reduce the number of renders needed per request and reduce waste in the pipeline?

    • Complexity - Can we look into optimizing specific materials or lighting settings and frames to reduce render complexity? Can we look at render requests at p90 based on render hours and create a feedback loop? 

    • Artist experience - Can we improve the artist workflow by providing them with cloud workstations with local rendering and storage management to reduce the indirect invisible costs associated with rendering?

    In our next article, we discuss how we applied the above strategies to plan and execute these initiatives, and share what we learned.


    Googler Hasan Khan, Lead Principal Architect, Retail Global, contributed to this post.

  • Partnering with RegTech companies to transform financial services Tue, 30 May 2023 16:00:00 -0000

    For years, Google Cloud has been driving business transformation across the financial services industry with native and partner solutions. These solutions help customers drive top line growth, improve efficiency, and better manage risk. Among these are a growing number of tools for improving how financial services organizations solve for increasingly complex regulatory requirements. Collectively known as RegTech, these solutions not only make it easier to manage regulatory requirements, but they also represent an opportunity for organizations to become more agile and efficient in an increasingly digitized marketplace. RegTech can also simplify the work of the regulators themselves and provide better industry-wide insights. 

    The driver behind RegTech is the same as it is for FinTech: digital transformation. Banks, insurance companies, and other financial services organizations are now leveraging Google Cloud across the board for a number of strategic use cases, including anti-financial crime and risk modeling. 

    Google Cloud’s role is to support customers where they are in their modernization journeys with our own technology fortified by a robust ecosystem of partners. As a platform provider, Google Cloud supports these efforts in three ways: 

    • Offering ways for vendors to modernize their legacy reporting applications 

    • Supporting organizations in building net-new cloud-native solutions

    • Providing our own technology with which financial services organizations can streamline and automate the regulatory reporting process

    Robust infrastructure and security for modernizing legacy reporting applications

    Google Cloud's robust, highly scalable global infrastructure provides a secure-by-design foundation with a shared-fate model for risk management supported by products, services, frameworks, best practices, controls, and capabilities to help meet digital sovereignty requirements. This infrastructure creates an opportunity for established software providers with a wide on-premises user base to modernize their tech stacks so they can leverage cloud capabilities either as a managed service or a SaaS solution. 

    For example, Regnology, a provider of regulatory reporting solutions, has partnered with Google Cloud to bolster its regulatory reporting offering with a fully fledged cloud service known as Rcloud. The platform uses Google Cloud infrastructure to enhance Regnology’s regulatory reporting offering across its complete set of cloud-native solutions and managed services, with vertical and horizontal scaling for better performance and greater efficiency. Underpinned by Google Cloud, Rcloud benefits from improved deployment and infrastructure-as-code services, run and change management automation, high scalability, and a future-proofed architecture for additional services and products. Furthermore, Rcloud’s integration with BigQuery allows organizations to build a granular and cohesive finance and risk warehouse, which can be leveraged to improve the efficiency of the end-to-end data supply chain.

    “We are excited to be partnering with Google Cloud to develop an enhanced platform for our customers, presenting a seamless delivery of service as part of a one-stop shop offering,” says Rob Mackay, Chief Executive Officer at Regnology. “Our mission is to connect regulators and the industry to drive financial stability, and as such it is important to us to build the future of regulatory reporting on energy efficient and scalable architecture.”

    Powering cloud-native RegTech solutions with data and AI

    Google Cloud’s advanced data and AI capabilities, such as BigQuery, Vertex AI, and Apigee API management, have attracted newer RegTech players as well. Google Cloud partner Quantexa, for example, offers a Decision Intelligence platform powered by Google Kubernetes Engine (GKE) and tools such as Dataproc. The solution gives customers the ability to understand their data by connecting siloed systems and visualizing complex networks. The result is a single view of data that becomes their most trusted and reusable resource across the organization. Quantexa provides an intelligent "model-based" system that probes networks and behavior patterns to uncover financial crime, allowing customers to better comply with anti-money laundering (AML) regulations. Quantexa helps customers establish a culture of confident decision making at strategic, operational, and tactical levels to mitigate risk and seize opportunities.

    Google Cloud is key to Quantexa's ability to generate real-time, AI-driven financial crime alerts, according to Quantexa CTO Jamie Hutton. In particular, GKE provides Quantexa the scaling power to effectively deploy Elasticsearch, an enterprise search engine for data querying that's central to Quantexa's entity resolution and network analytics. "We deploy Elasticsearch in Google Kubernetes Engine, which gives us the ability to scale as required on a granular process level," says Hutton. "And because Google is the architect of Kubernetes, we know that we always have the latest updates and features available with Google Kubernetes Engine, ahead of any other provider."

    Google Cloud tools to facilitate regulatory reporting 

    Google Cloud powers out-of-the-box solutions, such as Google Cloud Regulatory Reporting Platform, a scalable, on-demand serverless solution with data controls built into the architecture at each step to maximize performance and reliability while minimizing operational effort. You can store and query large, granular datasets in BigQuery as a consistent (and cost-effective) source of high-quality data to power multiple report types (e.g. across risk, financial and regulatory reporting). 

    Google Kx for CAT is an enterprise-grade application designed to accommodate the sophisticated and complex reporting transformations required by The Consolidated Audit Trail (CAT). CAT is a regulatory reporting obligation for U.S. broker-dealer firms and demands the kind of intensive compute power and storage requirements historically associated with bespoke implementations.

    Google Cloud Analytics Hub is a data exchange that allows you to efficiently and securely exchange data assets across organizations to address challenges of data reliability and cost. Analytics Hub enables data providers like Dun & Bradstreet to publish their data sets for analysis in BigQuery. Regulators, for example, can access this information to see how individual banks and financial services institutions are complying.

    Where will the RegTech journey take us?

    RegTech is a rapidly growing and maturing area with tremendous potential not only to make it easier for financial services organizations to meet their regulatory obligations, but also to deliver real business value. Google Cloud’s goal is to enable customers to maximize that value, whether the solutions are native to Google Cloud or offered by one of our many partners. As the field progresses, it’s likely to bring new capabilities and insights that go beyond compliance to manage risk, support growth and improve customer experience.

    Learn more about how Google Cloud’s RegTech capabilities are making it easier for organizations to meet their regulatory responsibilities and more.

  • Cloud Bigtable under the hood: How we improved single-row read throughput by 20-50% Tue, 30 May 2023 16:00:00 -0000

    Bigtable is a scalable, distributed, high-performance NoSQL database that processes more than 6 billion requests per second at peak and has more than 10 Exabytes of data under management. Operating at this scale, Bigtable is highly optimized for high-throughput and low-latency reads and writes. Even so, our performance engineering team continually explores new areas to optimize. In this article, we share details of recent projects that helped us push Bigtable’s performance envelope forward, improving single-row read throughput by 20-50% while maintaining the same low latency.

    Throughput improvements in point read/write benchmarks

    Here is an example of the impact we delivered to one of our customers, Snap: the compute cost for this small-point-read-heavy workload was reduced by 25% while maintaining the previous level of performance.


    Performance research

    We use a suite of benchmarks to continuously evaluate Bigtable’s performance. These represent a broad spectrum of workloads, access patterns and data volumes that we see across the fleet. Benchmark results give us a high-level view of performance opportunities, which we then analyze in depth using sampling profilers and pprof. This analysis, plus several iterations of prototyping, confirmed the feasibility of improvements in the following areas: Bloom filters, prefetching, and a new post-link-time optimization framework, Propeller.

    Bloom filters

    Bigtable stores its data in a log-structured merge tree. Data is organized into row ranges, and each row range is represented by a set of SSTables. Each SSTable is a file that contains sorted key-value pairs. During a point-read operation, Bigtable searches across the set of SSTables to find data blocks that contain values relevant to the row key. This is where the Bloom filter comes into play. A Bloom filter is a space-efficient probabilistic data structure that can tell whether an item is in a set: it allows a small number of false positives (the item may be in the set when it is not), but no false negatives (if the filter says the item is absent, it is definitely absent). In Bigtable’s case, Bloom filters reduce the search area to the subset of SSTables that may contain data for a given row key, reducing costly disk access.
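    To make the idea concrete, here is a minimal, illustrative Bloom filter in Python. This is only a sketch of the general data structure, not Bigtable's implementation (which, as described below, uses hybrid column-family/column filters and caching); the bit-array size and hashing scheme are arbitrary choices for the example.

    import hashlib

    class BloomFilter:
        """Toy Bloom filter: k hash functions setting bits in a fixed-size bit array."""

        def __init__(self, num_bits: int = 1 << 16, num_hashes: int = 4):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8)

        def _positions(self, key: str):
            # Derive k bit positions from independent hashes of the key.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.num_bits

        def add(self, key: str) -> None:
            for pos in self._positions(key):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, key: str) -> bool:
            # False positives are possible; false negatives are not.
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

    # Usage: skip any SSTable whose filter says the row key is definitely absent.
    bf = BloomFilter()
    bf.add("row-key#column-family")
    print(bf.might_contain("row-key#column-family"))    # True
    print(bf.might_contain("other-row#column-family"))  # usually False; rarely a false positive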


    We identified two major opportunities with the existing implementation: improving utilization and reducing CPU overhead.

    First, our statistics indicated that we were using Bloom filters in a lower than expected percentage of requests. This was due to our Bloom filter implementation expecting both the “column family” and the “column” in the read filter, while a high percentage of customers filter by “column family” only — which means the Bloom filter can’t be used. We increased utilization by implementing a hybrid Bloom filter that was applicable in both cases, resulting in a 4x increase in utilization. While this change made the Bloom filters larger, the overall disk footprint increased by only a fraction of a percent, as Bloom filters are typically two orders of magnitude smaller than the data they represent.

    Second, the CPU cost of accessing the Bloom filters was high, so we made enhancements to Bloom filters that optimize runtime performance: 

    • Local cache for individual reads: When queries select multiple column families and columns in a single row, it is common that the query will use the same Bloom filter. We take advantage of this by storing a local cache of the Bloom filters used for the query being executed.

    • Bloom filter index cache: Since Bloom filters are stored as data, accessing them for the first time involves fetching three blocks — two index blocks and a data block — then performing a binary search on all three. To avoid this overhead we built a custom in-memory index for just the Bloom filters. This cache tracks which Bloom filters we have in our block cache and provides direct access to them.

    Overall these changes decreased the CPU cost of accessing Bloom filters by 60-70%.

    Prefetching

    In the previous section we noted that data for a single row may be stored in multiple SSTables. Row data from these SSTables is merged into a final result set, and because blocks can either be in memory or on disk, there’s a risk of introducing additional latency from filesystem access. Bigtable’s prefetcher was designed to read ahead of the merge logic and pull in data from disk for all SSTables in parallel.
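    The sketch below illustrates the general pattern (it is not Bigtable's actual code): reads are fanned out to every SSTable in parallel so that filesystem latency overlaps, and the merge step then consumes blocks that are already in memory. The SSTables here are stand-in dictionaries.

    from concurrent.futures import ThreadPoolExecutor

    def read_blocks(sstable, row_key):
        # Placeholder for a filesystem read; in Bigtable this is where disk latency would hide.
        return sstable.get(row_key, [])

    def point_read(sstables, row_key):
        """Prefetch candidate blocks from every SSTable in parallel, then merge."""
        with ThreadPoolExecutor(max_workers=len(sstables)) as pool:
            futures = [pool.submit(read_blocks, s, row_key) for s in sstables]
            blocks = [f.result() for f in futures]  # merge logic sees already-fetched data
        return sorted(cell for block in blocks for cell in block)

    # Toy data: two "SSTables" holding cells for the same row.
    sstables = [{"row1": [("cf:a", 1)]}, {"row1": [("cf:b", 2)]}]
    print(point_read(sstables, "row1"))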


    Prefetching has an associated CPU cost due to the additional threading and synchronization overhead. We reduced these costs by optimizing the prefetch threads through improved coordination with the block cache. Overall this reduced the prefetching CPU costs by almost 50%.

    Post-link-time optimization

    Bigtable uses profile guided optimizations (PGO) and link-time optimizations (ThinLTO). Propeller is a new post-link optimization framework released by Google that improves CPU utilization by 2-6% on top of existing optimizations.

    Propeller requires additional build stages to optimize the binary. We start by building a fully optimized and annotated binary that holds additional profile-mapping metadata. Then, using this annotated binary, we collect hardware profiles by running a set of training workloads that exercise critical code paths. Finally, using these profiles as input, Propeller builds a new binary with an optimized and improved code layout, as illustrated below.

    Example of the improved code locality after Propeller optimization

    The new build process used our existing performance benchmark suite as a training workload for profile collection. The Propeller optimized binary showed promising results in our tests, showing up to 10% improvement in QPS over baseline. 

    However, when we released this binary to our pilot production clusters, the results were mixed. It turned out that there was overfitting for the benchmarks. We investigated sources of regression by quantifying profile overlap, inspecting hardware performance counter metrics and applied statistical analysis for noisy scenarios. To reduce overfitting, we extended our training workloads to cover a larger and more representative set of use cases. 

    The result was a significant improvement in CPU efficiency — reducing fleetwide utilization by 3% with an even more pronounced reduction in read-heavy workloads, where we saw up to a 10% reduction in CPU usage.

    Conclusion

    Overall, single-row read throughput increased by 20-50% whilst maintaining the same latency profile. We are excited about these performance gains, and continue to work on improving the performance of Bigtable. Click here to learn more about Bigtable performance and tips for testing and troubleshooting any performance issues you may encounter.

  • How Igloo manages multiple insurance products across channels with Google Cloud Mon, 29 May 2023 08:00:00 -0000

    Insurance management has come a long way in recent years, with new technologies and tools emerging to streamline processes and improve customer experiences. However, many insurance companies are still using legacy systems that are slow, inflexible, and difficult to integrate across different channels.

    One of the biggest problems with legacy insurance management systems is their lack of agility. These systems are often built around specific channels or products, and are not designed to adapt to new technologies or changing customer needs. When companies want to introduce new products or channels, they need to go through a new development cycle, which results in a long time to launch. 

    To help solve this issue with legacy systems, Igloo, a regional insurance technology company that provides digital solutions to players in the insurance value chain, developed its platform Turbo, which operates across multiple business lines including B2B2C, B2A (business to insurance sales intermediaries such as agents), and B2C. Through Turbo, Igloo is able to deliver the same products and services across multiple distribution channels, including e-commerce, offline retail stores, and Igloo's own digital solution for insurance sales intermediaries, the Ignite mobile app. To achieve this level of consistency and flexibility, Turbo allows insurance experts without coding knowledge to self-manage the product launch process.

    One example of this system in action is the way Igloo provides gadget insurance (covering electronics accidental damage, water damage, and extended warranty). The same product — with consistent benefits and levels of service excellence — can be distributed at scale via e-commerce platforms, sales agents from retail stores, or through direct channels. This not only ensures a consistent customer experience and, hence, customer satisfaction, it also allows Igloo and its insurer partners to reach a wider audience.

    Diagrams: Turbo architecture, and an analogy of the Turbo architecture

    A no-code platform for any user to easily and quickly launch new insurance products across channels

    Another key issue associated with managing multiple channels and product launches is that it can be a complex and time-consuming process. Past methods of launching insurance products often required coding knowledge, limiting the involvement of non-technical staff. This can lead to delays, errors, and a lack of speed and flexibility when adapting to changing market demands.

    Whether it’s launching a new product, or making changes or updates to existing insurance policies, Turbo’s no-code approach allows insurance experts to self-manage the product launch process. A user-friendly interface guides users through the process of setting up new products and launching them across multiple channels. This not only allows for faster and more efficient product launches, but also gives insurance experts more control and flexibility over the process.

    In addition to providing more control and flexibility, Turbo reduces the risk of errors and inconsistencies. By centralizing the product launch process, Igloo can ensure that all channels receive the same information and that products are launched with the same level of quality and accuracy. This helps to build trust with customers and ensures that Igloo maintains its reputation as a leading insurance provider.

    The diagram below illustrates how Turbo functions, following the insurance logic and process required for every new policy signup.

    Turbo for insurance configuration

    There are nine key benefits that Turbo provides to its users, namely: 

    • No-code - Anyone and everyone can use the platform, since no technical expertise is required

    • Re-utilization degree - Basic information is pre-filled so no reconfiguration is required, speeding up the process of filling in forms 

    • Streamlined collaboration - Anyone with access to the cloud-driven platform can use it

    • Insurance logic and process variety - Easy setup with a step-by-step guide for every insurance journey

    • Presentation flexibility - Enables sales across channels

    • Purchase journey flexibility - Automate configuration of information for insurance purchasing flexibility to accommodate a variety of needs and use cases 

    • Low usage threshold - Simple interface and highly intuitive

    • Short learning curve - User friendly platform

    • Single truth domain definition - A centrally managed platform where all business logic is maintained for consistency and reliability

    Turbo's no-code interface and insurance journey flexibility

    “By utilizing Google Cloud’s cloud-native solutions, our insurance product engine, Turbo, has effectively leveraged scalable, reliable, and cost-efficient technologies. This has led to the creation of a sturdy and high-performance platform that enables rapid digitization and seamless deployment of high-volume insurance products across various distribution channels.” - Quentin Jiang, Platform Product Director, Igloo

    Helping insurance carriers, channels, and businesses make more informed decisions

    In addition to providing a user-friendly interface for insurance experts to self-manage product launches, Igloo's Turbo system also collects and analyzes valuable data insights after getting users’ consent, without Google Cloud having any visibility into the data. This data includes user views, clicks, conversions, and feedback, which can provide important insights into customer preferences. By automating the collection and analysis of this data using BigQuery, Igloo is able to make faster and more informed business decisions for insurers and insurance agents. For example, if a particular product is underperforming on a particular channel, Igloo can substitute this with a similar product while running analysis to identify issues and make improvements to the underperforming product. This helps to ensure that Igloo is always offering the best possible products and services to its customers, while also maximizing its own business performance. 

    Overall, Igloo's Turbo platform is a powerful tool that allows Igloo to leverage data-driven insights to make faster and more informed business decisions, thereby helping to reinforce its ongoing success as a leading insurtech.

  • AI in software development: What you need to know Fri, 26 May 2023 20:00:00 -0000

    It is amazing how much technology has evolved over the years and continues to do so. AI is no exception to this trend, and it is exciting to see how it can assist us in various ways. There is no denying that many of us have, intentionally or unknowingly, benefited from AI over the past decade.

    Regardless of where you are on your AI journey, there are many myths circulating about the role AI will play in our future work lives. Still, no matter your role in the tech stack, the promising reality is that AI is likely to make our lives easier and work more efficient.

    In this blog, we are going to walk through a few of the myths floating around related to AI and our future.


    Myth: AI will take over all technical jobs.

    Reality: AI may automate certain tedious tasks in technical fields, but it cannot replace the creativity, intuition, and problem-solving abilities of human developers.

    Today, and even more so in the future, technical roles will leverage AI to assist developers and reduce developer toil. AI will help automate tedious and repetitive tasks, such as code reviews, testing, and debugging, which can minimize the time developers spend on these tasks while allowing them to focus on more meaningful and innovative work. Overall, this can lead to faster development cycles and better-quality software.

    Moreover, the development of AI itself requires human input, including data scientists, machine learning engineers, and software developers. AI is a tool that can enhance human capabilities and help people be more efficient and productive in their work.

    No doubt jobs will shift, as they always have. But these AI technologies will complement many jobs and create entirely new jobs we can’t imagine today.


    Myth: Only data science experts can use AI.

    Reality: While an understanding of data science is useful, you can use pre-trained models, or even experiences powered by AI, without understanding a lick about ML.

    The myth that one needs to have a deep understanding of data science to take advantage of AI can be intimidating for those unfamiliar with the field. While understanding the basics of data science can certainly be helpful, it's not necessary to take advantage of AI in many cases.

    One example is pre-trained models, which are models that have already been trained on large amounts of data and are ready to be used for specific tasks, such as classifying images or translating languages [1]. These pre-trained models can be accessed through APIs and used to power experiences or applications without any knowledge of data science or machine learning [2].
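    For instance, calling a pre-trained model through an API can be just a few lines of code. The sketch below uses the Cloud Translation client library as one illustrative example; it assumes the google-cloud-translate package is installed and that Application Default Credentials are configured.

    # pip install google-cloud-translate
    from google.cloud import translate_v2 as translate

    client = translate.Client()  # authenticates via Application Default Credentials

    # The pre-trained translation model does the heavy lifting; no ML knowledge required.
    result = client.translate("Where is the nearest train station?", target_language="de")
    print(result["translatedText"])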

    Another example is AI-powered experiences, such as voice assistants or chatbots, that use natural language processing to understand and respond to user input. These experiences are typically powered by pre-trained models and can be integrated into applications without requiring any knowledge of machine learning [3].

    However, it's important to note that while it's possible to use AI without understanding data science, having a basic understanding of the field can make it easier to understand the limitations and potential biases of AI-powered solutions.


    Myth: Training custom AI models is too expensive and resource intensive.

    Reality: You can customize a pre-trained foundation model

    We all know that training a machine learning model can be very resource intensive. It requires a lot of data, computing power, and time, which can be a barrier for people who want to train their own models but don't have the resources to do so.

    There are a number of ways to customize an already-trained foundation model. This can be a good option for people who want to use machine learning, but do not have the resources to train their own models. These uber-models are pre-trained on massive amounts of data and can be fine-tuned to perform specific tasks or cater to specific industries. By fine-tuning a pre-trained model, you can take advantage of the benefits of the original training while tailoring the model to your specific needs.
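    As a rough illustration of what fine-tuning a foundation model can look like, here is a hedged sketch based on the Vertex AI SDK. The project ID, Cloud Storage path, and tuning parameters are all hypothetical, and the exact tuning interface may differ depending on your SDK version (older releases expose these classes under vertexai.preview.language_models).

    # pip install google-cloud-aiplatform
    import vertexai
    from vertexai.language_models import TextGenerationModel

    vertexai.init(project="my-project", location="us-central1")  # hypothetical project

    base_model = TextGenerationModel.from_pretrained("text-bison@001")

    # Fine-tune on a JSONL file of {"input_text": ..., "output_text": ...} examples.
    base_model.tune_model(
        training_data="gs://my-bucket/tuning_examples.jsonl",  # hypothetical path
        train_steps=100,
        tuning_job_location="europe-west4",
        tuned_model_location="us-central1",
    )

    print(base_model.predict("Summarize our returns policy in one sentence.").text)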

    Another option is to use cloud-based machine learning platforms that offer scalable infrastructure and pre-built tools and frameworks for model development. These platforms can help reduce the computational burden of training your own models and provide access to pre-trained models and APIs.


    Myth: AI is just another hype-y technology trend.

    Reality: Don’t get left behind!

    Unlike the revolving door of much-hyped technologies over the past five years, AI is already proving to have an impact on many industries and will likely continue to do so in the future. AI is a disruptive technology that is already transforming businesses and industries by enabling automation, improving decision-making, and unlocking new insights from data. AI-powered solutions are being used in healthcare, finance, manufacturing, transportation, and many other fields, and the use of AI applications is only expected to grow.

    Beyond the potential benefits, there are also potential risks associated with AI, such as job displacement, bias, and privacy concerns.

    Even for those not in the technical field, AI can provide new job opportunities. Emerging roles like the prompt engineer will become increasingly important, because the ability to craft a "good prompt" that is clear, concise, and easy to understand is becoming a valuable skill. A good prompt should also be specific enough to elicit the desired output, but not so specific that it limits the creativity of the language model.

    Waiting out AI is not a practical or wise approach. Instead, individuals and businesses should stay informed about the latest developments in AI and explore potential applications in their fields. Beyond the career-related benefits of AI, it can also improve your personal life by providing commute optimization, home automation, or even personal finance recommendations to help you save money [4].


    Myth: No code/low code AI platforms are only for non technical users.

    Reality: No code/low code platforms help bridge the gap between technical and non-technical users

    One of the biggest benefits of no code/low code AI platforms is that they make it possible for anyone to build AI applications—think chatbots, or specialized search—regardless of their technical skills. These platforms can help bring technical and non-technical users closer together by empowering both groups to participate in the software development process. Non-technical users can create simple applications using visual interfaces and pre-built components, while technical users can customize these applications and integrate them with other systems.

    Additionally, no code/low code platforms can also be useful for technical users, especially those who want to focus on higher-level tasks rather than getting bogged down in the details of coding. For example, a data scientist might use a no code/low code platform to quickly prototype a machine learning model without having to write code from scratch.

    No code/low code platforms are extremely powerful and can be used for a wide range of applications, from simple forms and workflows to more complex applications that require data integration, machine learning, and other advanced features. This makes them a valuable tool for organizations of all sizes and industries to benefit from AI without hiring expensive AI developers, enabling both technical and non-technical users to contribute to the software development process, streamline business processes, and accelerate innovation.


    AI still needs a human touch

    AI is a powerful tool that can be used to make many different technical and non-technical tasks more efficient. However, it is important to remember that it’s not a replacement for human creativity and ingenuity. AI can help us generate ideas, but it is up to us to decide how to use them. 

    For example, I actually used AI to help me develop and write this blog, including brainstorming ideas on where to start and how to structure my content. This allowed me to write faster and keep my thoughts organized, but AI did not (and could not) capture my creativity or unique perspective that is needed to make this content relatable and engaging for the right audience. All in all, the reality is that it continues to be up to humans to help AI do its job better. 

    Interested in learning more about AI? Follow Google Cloud on Twitter and join us for upcoming Twitter Spaces on June 1st discussing all these AI myths and more!


    1. Vertex AI
    2. Vertex AI API
    3. Conversational AI
    4. The 10 Best Examples Of How AI Is Already Used In Our Everyday Life

  • How to integrate a Virtual Agent using Google Dialogflow ES into a Twilio Conversation Fri, 26 May 2023 16:00:00 -0000

    Twilio Flex is an omni-channel CCaaS (Contact Center as a Service) platform that makes it easy to build personalized support that’s unique to your business. In this post, we will show you how to integrate Flex’s asynchronous channels with Google Dialogflow.

    Flex uses Twilio Conversations to natively support conversational messaging use cases such as customer support and conversational commerce via SMS, MMS, WhatsApp, Chat, GBM and FBM.

    To integrate Flex with Dialogflow, we will leverage Flex’s Conversations API and SDKs to connect customers to virtual agents, so you can use state-of-the-art Agent Assist and Dialogflow CX to handle customer inquiries. If you are looking for a natural language understanding platform to power your Flex asynchronous channels, these are great options: they are easy to implement, flexible, scalable, secure, and cost-effective.

    This Dialogflow integration will enable you to easily create rich, natural language conversations using AI over your Twilio digital channels. 

    Google Dialogflow with Twilio Conversations Differentiator  

    Integrating Twilio’s Conversations API with a Google virtual agent built on Dialogflow gives you features that you can tweak and tune for each individual customer’s experience, and lets you use the power of programmability to drive deep personalization into your conversations. A virtual agent integration can help your organization improve customer satisfaction while also reducing labor costs and increasing operational efficiency.

    Virtual agents can handle a high volume of simple, repetitive tasks, such as answering frequently asked questions, freeing up human agents to focus on more complex interactions. Additionally, virtual agents can offer 24/7 availability, quick response times, and personalized experiences through the use of data and can interact with customers through multiple channels.

    Furthermore, in case a Virtual Agent is not able to resolve an issue for the customer, you can easily hand the conversation over to a human agent to continue the conversation.

    Introducing a Google open-source middleware to integrate Dialogflow with Twilio Conversations

    At Google, our mission is to build for everyone, everywhere. With this commitment in mind, the Google Cloud team has developed and open-sourced a middleware solution that is easily accessible and can be used as a foundational building block to integrate conversational messaging over text-based Twilio channels with Google virtual agents.

    How it works

    The provided open-source middleware processes the messages from a conversation by invoking the Dialogflow API and returns the response from the virtual agent back to the conversation. The solution therefore handles two layers of communication: one with Twilio and one with Dialogflow. The following diagram shows the high-level architecture.

    High-level architecture of the middleware connecting Twilio Conversations and Dialogflow

    After going through the Twilio onboarding process, the middleware can be built and deployed on a fully managed platform such as Cloud Run.

    The middleware's responsibilities are:

    1. Receiving callbacks for incoming messages from conversations connected with a virtual agent participant. 

    2. Managing virtual agent lifecycle while connected with the participant.

    3. Managing the conversation lifecycle between Twilio Conversations and Dialogflow Conversations

    4. Mapping between Twilio Conversations and Dialogflow Conversations with Memorystore.

    5. Processing conversation natural language understanding events from a participant and issuing responses, status and action callbacks to the conversations service.
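    To make these responsibilities concrete, here is a minimal, hypothetical sketch of such a middleware in Python. It is not the open-sourced implementation referenced below: the webhook route, the agent path, and the "virtual-agent" author name are illustrative assumptions, and for brevity it maps each Twilio conversation directly to a Dialogflow CX session instead of tracking the mapping in Memorystore.

    # pip install flask google-cloud-dialogflow-cx twilio
    import os
    from flask import Flask, request
    from google.cloud import dialogflowcx_v3 as cx
    from twilio.rest import Client as TwilioClient

    app = Flask(__name__)
    twilio = TwilioClient(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
    sessions = cx.SessionsClient()

    AGENT = "projects/my-project/locations/global/agents/my-agent"  # hypothetical agent path

    @app.post("/conversations-webhook")
    def on_message_added():
        """Twilio Conversations post-event webhook: forward the inbound message to
        Dialogflow CX and post the virtual agent's reply back into the conversation."""
        conversation_sid = request.form["ConversationSid"]
        body = request.form.get("Body", "")
        if request.form.get("Author") == "virtual-agent":
            return "", 204  # ignore our own replies to avoid an echo loop

        session = f"{AGENT}/sessions/{conversation_sid}"  # 1:1 conversation-to-session mapping
        response = sessions.detect_intent(
            request=cx.DetectIntentRequest(
                session=session,
                query_input=cx.QueryInput(text=cx.TextInput(text=body), language_code="en"),
            )
        )
        reply = " ".join(
            t for msg in response.query_result.response_messages for t in msg.text.text
        )
        twilio.conversations.v1.conversations(conversation_sid).messages.create(
            author="virtual-agent", body=reply
        )
        return "", 204

    Deployed on a platform such as Cloud Run, the /conversations-webhook endpoint would be registered as the conversation's post-event webhook so that every inbound message is relayed to the virtual agent.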

    You can find an example of the implementation on GitHub.

    Continue Exploring

    In this blog post we described how to integrate Twilio Conversations and Dialogflow CX to deflect customer interactions to a Virtual agent. But this is just the tip of the iceberg. Here are some more ideas you can explore:

    • Extract the intents detected by the virtual agent to build a snapshot of what the customers are asking for

    • Enable sentiment analysis and the Virtual Agent will include a score of the customer's sentiment

    • Explore handing the conversation to a Twilio Flex agent if the customer asks to talk to a human, and empower the agent with Google Agent Assist features

    • Explore using Twilio Studio SendToFlex widget

    • Check how you can integrate the Voice channel here

    Conclusion

    In this post, we described how Google has open-sourced a middleware solution that can be used to integrate text-based channel conversations from Twilio with Google virtual agents powered by Google Dialogflow. Your organization can easily and rapidly build and deploy this middleware on a fully managed platform such as Cloud Run. This can help your organization improve customer satisfaction while also reducing labor costs and increasing operational efficiency in a contact center. If you want to learn more, review our Agent Assist basics and Dialogflow Virtual Agents pages, and the Twilio Dialogflow CX Onboarding guide to get started.


    Thanks to Aymen Naim at Twilio and Ankita Sharda at Google for their support and feedback on our content. We could not have done this without you.

  • The search experience within the Google Cloud console just got a bit easier Fri, 26 May 2023 16:00:00 -0000

    The Google Cloud console is a powerful platform that lets users manage their cloud projects end-to-end with an intuitive web-based UI. With over 120 Google Cloud products across thousands of pages, it can be challenging to navigate through the console quickly, and many users prefer not to use the command-line interface. To help, the Google Cloud console team recently made several improvements to our search and navigation features, making it easier to find what you need, when you need it.

    The Cloud console’s search dropdown and result page

    Expanded resource-type search

    One of the most significant improvements to the console's search feature is the increased resource-type coverage. You can now find instances of nearly all of the over 120 products that Google Cloud offers directly from the search bar, rather than manually clicking through to the details sections nested within specific product pages. This wider coverage saves you time compared to browsing through the different product categories to find what you're looking for. This improvement also allows the console experience team to continuously add new resource types as we develop new features and capabilities across our product offerings.

    Search for more than 90 types of resources using Google Cloud console

    Improved documentation search

    We also improved the coverage and accuracy of the search experience for documentation, making it easier to find specific pages or interactive tutorials. Developers looking to get started with something they’re unfamiliar with may want to undertake tutorials in the context of their own working environment. Even experienced developers just looking for a quick answer to their question may not want to leave the console, so we expect these improvements can help users of all experience levels.

    We’ve heard from you that finding the right documentation from within the console can be tricky, so we hope this change saves you from needing to switch over to web search to be confident you’ll find what you’re looking for.

    Cloud documentation and tutorials are available for search in the console

    A refreshed search results page

    To make browsing search results easier, we also overhauled our search results page by dividing results into tabs, similar to Google web search. This means that you can refine your search by diving into one of those categories individually. In addition to maintaining an “all results” tab, there are tabs for:

    • Documentation and tutorials

    • Resource instances, and

    • Marketplace and APIs

    Each tab also has a better filtering experience that's unique to that search result category. For instance, the Resources tab lets you filter by metadata like the last time a resource was interacted with, while the Documentation tab allows you to search for interactive tutorials only. If you can’t find what you’re looking for in the autocomplete dropdown, try the search results page. It can provide additional results or context to help you find exactly what you’re looking for.

    The Resources tab lists results in a denser, metadata-rich format with filtering available

    Accurate industry-wide synonyms

    Google Cloud console search now interprets industry-wide synonyms accurately: if you're coming from AWS or Azure and know the name of a product in that ecosystem, searching for it will return the Google Cloud equivalent. A full list of synonyms can be found here.

    General usability improvements

    We also shipped several smaller quality-of-life improvements related to search, including:

    • Improvements to accessibility, including better color contrast and zoom behaviors

    • Faster latency targets and shorter load times for results

    • A search keyboard shortcut — just type “/” to begin searching without using your mouse

    • The ability to search for an API resource by its key

     We also updated the look and feel of our platform bar to a cleaner, more modern experience that’s more in line with the branding across other Google Cloud interfaces.

    Google Cloud console’s old platform bar (top) and the updated version (bottom)

    A customizable navigation menu

    Finally, we made usability improvements to the left-hand navigation menu by allowing you to pin specific products. Products can be pinned from the left nav menu or our brand new All Product Page. This is a shift away from the default ordering of products that attempted a "one size fits all" approach to navigation. This customization feature lets you tailor the console to your specific needs and work more efficiently.

    Products can be pinned to the left nav menu for easy access

    Final thoughts

    The Google Cloud console's navigation and search features have come a long way. With these recent improvements, you can find what you need quickly and efficiently, making it easier to manage your cloud resources. From expanded resource-type search to improved documentation search and refined search results, the console is more user-friendly than ever before.

  • Get more insights out of your Google Search data with BigQuery Fri, 26 May 2023 16:00:00 -0000

    Many digital marketers and analysts use BigQuery to bring marketing data sources together, like Google Analytics and Google Ads, to uncover insights about their marketing campaigns and websites. We’re excited to dive deeper into a new type of connection that adds Google Search data into this mix. 

    Earlier this year, Search Console announced bulk data exports, a new capability that allows users to export more Google Search data via BigQuery. This functionality allows you to analyze your search traffic in more detail, using BigQuery to run complex queries and create custom reports. 

    To create an export, you’ll need to perform tasks on both Cloud Console and Search Console. You can follow the step-by-step guide in the Search Console help center or in the tutorial video embedded here.

    Intro to Search performance data

    The Performance data exported to BigQuery has three metrics that show how your search traffic changes over time:

    • Clicks: Count of user clicks from Google Search results to your property.

    • Impressions: Count of times users saw your property on Google search results.

    • Position: The average position in search results for the URL, query, or for the website in general.

    Each of those metrics can be analyzed for different dimensions. You can check how each of the queries, pages, countries, devices, or search appearances driving traffic to your website is performing. 

    If you’d like to learn more about the data schema, check out the table guidelines and reference in the Search Console help center.  

    Querying the data in BigQuery

    If you need a little help to start querying the data, check the query guidelines and sample queries published in the help center; they can be handy for getting up and running. Here's one example, which pulls the USA mobile web queries from the last two weeks.

    SELECT
      query,
      device,
      sum(impressions) AS impressions,
      sum(clicks) AS clicks,
      sum(clicks) / sum(impressions) AS ctr,
      ((sum(sum_top_position) / sum(impressions)) + 1.0) AS avg_position
    FROM searchconsole.searchdata_site_impression
    WHERE search_type = 'WEB'
      AND country = 'usa'
      AND device = 'MOBILE'
      AND data_date BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 14 DAY) AND CURRENT_DATE()
    GROUP BY 1, 2
    ORDER BY clicks
    LIMIT 1000
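    If you prefer working from a script or notebook, a query like the one above can also be run with the BigQuery Python client. This is just a sketch: the project ID is hypothetical, and searchconsole is the dataset name created by the bulk export.

    # pip install google-cloud-bigquery
    from google.cloud import bigquery

    client = bigquery.Client()  # project and credentials inferred from the environment

    sql = """
        SELECT query, device, SUM(impressions) AS impressions, SUM(clicks) AS clicks
        FROM `my-project.searchconsole.searchdata_site_impression`
        WHERE search_type = 'WEB' AND country = 'usa' AND device = 'MOBILE'
          AND data_date BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 14 DAY) AND CURRENT_DATE()
        GROUP BY query, device
        ORDER BY clicks DESC
        LIMIT 10
    """

    for row in client.query(sql).result():
        print(row.query, row.device, row.impressions, row.clicks)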

    Benefits of Bulk data exports

    There are several benefits of exporting Search Console data to BigQuery:

    • Analyze Google Search traffic in more detail. If you have a large website, this solution will provide more queries and pages than the other data exporting solutions. 

    • Run complex queries and create custom reports. While the Search Console interface allows you to perform simple analyses, it’s optimized for speed and for the average user. Using BigQuery will open many possibilities in data processing and visualization.

    • Store data as long as you want. Search Console stores up to sixteen months of data; using BigQuery, you can store as much data as makes sense for your organization. Please note that by default, data is kept forever in your BigQuery dataset; if you'd like to limit your storage costs, you can update the default partition expiration time (see the sketch after this list).

    • Create and execute machine learning models. Machine learning on large datasets requires extensive programming and knowledge of frameworks; using BigQuery ML, you can increase development capabilities and speed with simple SQL.

    • Apply pre-existing data security rules. If you use BigQuery data security and governance features, you can expand them to include your search data on BigQuery. This means you don’t need separate rules for separate products.
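    As referenced above, the default partition expiration can be updated with the BigQuery Python client. This is a hedged sketch: the 16-month value is only an example, and the dataset ID assumes the default searchconsole name used by the export.

    # pip install google-cloud-bigquery
    from google.cloud import bigquery

    client = bigquery.Client()

    dataset = client.get_dataset("searchconsole")  # dataset created by the bulk export
    dataset.default_partition_expiration_ms = 16 * 30 * 24 * 60 * 60 * 1000  # roughly 16 months
    client.update_dataset(dataset, ["default_partition_expiration_ms"])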

    We hope that this solution will help you store, analyze, and visualize your Search data in a more effective and scalable way. If you want to try out the Search Console export in BigQuery, you’ll need a billing account to do so. You can sign up for a free trial and add your billing account to get started analyzing Search Console data.

  • Forecasting “relevant inventory” at “relevant nodes” for retailers’ success Fri, 26 May 2023 16:00:00 -0000

    During quarterly earnings calls, retail CxOs often talk about the impact on margins of extra shipping costs, increases in split shipments, and the need to monitor inventory levels. Across the U.S. apparel sector, for example, earnings before interest and taxes dropped by over 30% due to the accelerated demand for online business.

    Today’s retailers are focused on improving profit margins by keeping their inventory low, offering fewer end-of-season markdowns and looking to pair anticipated demand to location by better optimizing allocations around their networks. Of course, shipping costs will continue to be challenging, but they should be less of a drag on margins if the retailer’s inventory is more optimally positioned. In other words, retailers need to forecast “relevant inventory” and also place that inventory at the “relevant node” if they want to maximize profitability. 

    Awash in inventory challenges 

    In fact, inventory remains one of retailers’ biggest and most complex challenges, especially post-pandemic, with the shift from in-person to online shopping. Here’s just a sampling of the challenges retailers face:

    • Overstocking - Retailers look at average days to sell inventory, a.k.a. inventory turn, as one of their key performance indicators. Extensive analysis of retailers’ annual reports highlights the overstocking challenges retailers experience. Overstocking inventory causes cash flow problems and capacity constraints at fulfillment centers and stores, and pushes retailers to dilute the inventory through “markdowns” that impact profitability.

    • Elevated shipping costs - According to analysts, growth in a retailer’s online business can lead to a corresponding decline in profit margins. COVID-19 added a new wrinkle: the problem of inventory placement, which is not uncommon in the age of faster and more agile delivery times. If the shipper wants to avoid exorbitant inventory costs, the right inventory needs to be at the right location. Many retailers do not adequately pair anticipated demand to location and costs. 

    How cloud and data analytics can help

    Today’s retailers are not only looking to optimize inventory but also how to optimally place the inventory at the right location. Increasingly, retailers are moving away from manually forecasting demand for seasonal merchandise based on past years’ actuals and their intuition. Instead, they’re turning to data and advanced analytics to transform their transactional data into insights and gather recommendations for future seasonal demand.

    Functional reference architecture

    As enumerated above, retailers’ challenges are twofold: to improve profit margins by forecasting the right demand and keeping inventory low, and to pair anticipated demand to location, keeping costs low by better optimizing allocations around their networks. Google Cloud’s Retail consulting team has developed a functional reference architecture that suggests Google Cloud technologies, including AI/ML services, to help retailers forecast “relevant demand” and place the inventory at the “relevant node.” This approach helps retailers improve profit margins by keeping inventory low and better optimizing allocations around their networks. Let’s take a closer look.

    Demand forecasting service

    The demand forecasting service predicts the guidance quantity for a defined period of time into the future. The forecasted demand differs by retailer along two dimensions:

    • the level (i.e., at SKU level or subclass level of the product hierarchy) and at the fulfillment level, i.e., Ship to home (STH) [common, web exclusive], buy online pickup from stores [BOPUS], buy online and ship from stores [BOSS] and walk-in. 

    •  the period, ranging from 9 to 18 months into the future.

    Profitability forecasting service

    The profitability forecasting service calculates the profitability of the SKUs and aggregates at the higher level of the product hierarchy (i.e. subclass). Profitability is calculated one of two ways: 

    • From average unit retail (AUR), landed unit cost, cost of fulfillment and cost of shipping 

    • Using a weighted calculation based on the number of units sold for the SKU. Calculated data is stored for every subclass, fulfillment type (STH common or web exclusive, BOPUS and walk-in) and the node (brick & mortar, satellite stores, fulfillment and distribution centers) 

    The functional reference architecture also considers capacity constraints, which include the inbound units that can be processed in a given week, how many outbound units can be processed given the inbound flow, and overall capacity. These constraints are used to determine the maximum demand that can be allocated to a node for a given week.

    There’s also an optimization solver, whose goal is to maximize profitability by taking the weekly demand predictions and the capacity constraints as inputs. The solver’s output is the guidance quantity at the chosen product hierarchy level, for each fulfillment node (be it ecommerce fulfillment centers (EFC), retail distribution centers (RDC), satellite stores (SS), or brick-and-mortar stores), for each fulfillment type (walk-in, BOPUS, STH), and for the decided future forecasting weeks.

    Finally, merchants have the flexibility to override the guidance from the optimization solver based on business needs, priorities and other constraints.

    Figure 1: Functional reference architecture for “Forecasting and placing, relevant inventory at relevant node”
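    To illustrate the optimization solver step in isolation, here is a toy sketch using Google's OR-Tools linear solver. It is not the Retail consulting team's solution: the nodes, capacities, and per-unit profit figures are made up, and a real model would span many SKUs, weeks, fulfillment types, and inbound/outbound constraints.

    # pip install ortools
    from ortools.linear_solver import pywraplp

    demand = 1000  # forecasted weekly demand for one subclass
    nodes = {
        "EFC":   {"capacity": 600, "profit_per_unit": 4.0},
        "RDC":   {"capacity": 500, "profit_per_unit": 3.2},
        "Store": {"capacity": 300, "profit_per_unit": 2.5},
    }

    solver = pywraplp.Solver.CreateSolver("GLOP")
    alloc = {name: solver.NumVar(0, n["capacity"], name) for name, n in nodes.items()}

    # Total allocation cannot exceed the forecasted demand.
    solver.Add(sum(alloc.values()) <= demand)

    # Maximize total profit across nodes.
    solver.Maximize(sum(alloc[name] * n["profit_per_unit"] for name, n in nodes.items()))

    if solver.Solve() == pywraplp.Solver.OPTIMAL:
        for name in nodes:
            print(f"{name}: allocate {alloc[name].solution_value():.0f} units")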

    Technical reference architecture 

    Working off the above functional reference architecture, here is a technical reference architecture, providing guidelines on the Google Cloud technologies that can be leveraged across different layers of the architecture. The basic building blocks of the architecture are as follows:

    A data integration and ingestion layer includes all data management and data integration-related Google Cloud services required to ingest and process data. Potential Google Cloud services include:

    • Dataflow, a unified stream and batch processing service that’s serverless, fast, and cost-effective. Dataflow is leveraged to transfer sales, inventory, product hierarchy, promo pricing, and promo calendar event data into BigQuery.

    • Dataproc, a service for running Apache Spark and Apache Hadoop clusters. Dataproc is leveraged for all batch jobs.

    • Cloud Composer, a workflow orchestration service built upon Apache Airflow. Cloud Composer is leveraged to orchestrate any batch jobs that need to be scheduled.

    • Apigee, Google Cloud’s native API management tool that provides abstraction to the backend services by fronting services with API proxies. All services, be it data loading, aggregated sales, profitability, demand and optimization solver, are treated as services leveraging Apigee to build, manage and secure APIs.

    A data cloud, or storage layer, provides a unified, open approach to store both the input and processed data. The data cloud enables data-driven transformation built for speed, scale, and security, with AI built in. Google Cloud services for this layer include:

    • Cloud Storage, a managed service for storing unstructured data. Store any amount of data and retrieve it as often and whenever you need it. Data related to capacity constraints, fulfillment, shipping and labor costs from external systems are stored in Cloud Storage before it’s processed and stored in BigQuery.

    • BigQuery, a serverless, highly scalable and cost-effective data warehouse for business agility and insights. BigQuery is leveraged to store the input data and processed data (that includes sales aggregated data, weekly demand and product profitability data).

    • If retailers already run on Google Cloud, other database services can also be used, including Cloud Spanner (a fully managed relational database with unlimited scale and strong consistency and availability) to store product hierarchy and promo pricing data, and Cloud SQL (a fully managed database for MySQL, PostgreSQL, and SQL Server) to store sales and inventory-related transactional data.

    An advanced AI/ML services layer sits on top of the data cloud and storage layer and is leveraged to build the demand forecasting, profitability forecasting, and optimization solver services. Offerings in this layer include:

    • Vertex AI, a unified platform for training, hosting and managing ML models. Vertex AI is leveraged to build, deploy and scale ML models including demand, profitability forecasting and optimization solver faster with fully managed ML tools. 

    • A visualization layer, or  front-end platform, that retailers can build with cloud-native technologies and deploy into Google Cloud.

    Figure 2: Technical reference architecture for “Forecasting and placing, relevant inventory at relevant node”

    Considering the accelerated demand for digital, changes in customer buying patterns, and the associated costs in shipping and returns processing, it’s high time retailers started leveraging their own data and advanced data services, including AI/ML services on Google Cloud, to forecast “relevant demand” and place inventory at the “relevant node” to maximize profitability. This approach helps retailers improve profit margins by keeping inventory low, offering fewer end-of-season markdowns, and pairing anticipated demand to location and costs by better optimizing allocations around their networks. To learn more about how to implement this reference architecture, or to talk through your organization’s unique challenges, reach out to your Google Cloud sales representative.

  • Subskribe brings full insights to its quote-to-revenue platform with embedded Looker Fri, 26 May 2023 16:00:00 -0000

    Subskribe helps SaaS companies keep up with modern demands by delivering a unified system for configure, price, quote (CPQ), billing, and revenue recognition. Now that it's added comprehensive business intelligence (BI) and real-time data-exploration capabilities to its platform using Looker Embedded, Subskribe can also help SaaS providers improve decision making and drive growth with on-demand insights. By adopting Looker, Subskribe is also helping to drive its own growth. Not only has it accelerated customer onboarding by weeks and empowered business people to create customer-facing dashboards and reports, but its engineers can now quickly develop revenue-generating services such as embedded self-service BI tools for customers.

    Subskribe’s quote-to-revenue platform

    SaaS for SaaS providers

    Virtually all software today is delivered as a service. But CPQ, billing, and revenue systems that support SaaS providers' operations have traditionally been siloed, creating costly and time-consuming integration and reconciliation challenges as well as limited agility in pricing and selling. To address these challenges, Subskribe developed an adaptive quote-to-revenue system natively designed to support dynamic SaaS deals, packaging these systems in a single unified offering that delivers faster time-to-market, increased top-line growth, and operational savings to customers.

    Improving the agility and value of our SaaS business platform

    As Subskribe experienced rapid growth, it soon found that its manual SQL-based reporting processes were hindering its efficiency and innovation potential. Every new customer required weeks of engineering effort to develop custom reports for them, and engineers had to manually manage ongoing reporting changes. Subskribe needed a BI solution that made it easier to create custom reports with composable analytics, so its employees could easily create their own data experiences. And by adding embedded analytics to the Subskribe platform, including dashboards and self-service features, the company could make its platform more sticky by solving advanced BI challenges for their customers.

    After evaluating the feasibility of authoring its own custom BI solution and evaluating various third-party tools, Subskribe chose to embed Looker into its platform. Not only does Looker best meet Subskribe's product and compliance requirements, but it also provides the maturity and long-term reliability required for embedding it in the Subskribe platform.

    Looker delivers advanced, enterprise BI capabilities for multi-tenancy, security, and embedded analytics — and it's easy to use. Despite our system complexity, we got Looker up and running in about one month, with just two people. Durga Pandey, CEO, Subskribe

    Delivering customer-specific insights from one multi-tenant platform

    Subskribe connected Looker to its existing database without building any data pipelines, and integrated Looker with the company’s test and production environments, saving time for engineers. Now when product changes are made, they can be pushed to Looker using one consistent set of processes and pipelines. As a result, global product iteration is faster, collaborative, and controlled. Additionally, Subskribe implemented controls that ensure secure insights by using security technologies in Google Cloud and built-in features in Looker such as user attributes.

    We designed our semantic model so that, in just a matter of hours, anyone can use Looker to build their own dashboards and reports with the data they're authorized to see. Ugurcan Aktepe, Software Engineer, Subskribe

    Keeping resources focused on what they do best

    Looker facilitates composable analytics, so Subskribe's customer-success and product-management teams quickly learned how to develop and update accurate and sophisticated reports without having to write any code. The company held a quick Looker training session and within a few days, Subskribe’s product managers built multiple dashboards that are now used as templates, and which Looker automatically populates for customers using their data. Additionally, product managers and analysts are now using fact and dimension tables to easily create other types of custom reports that provide aggregated insights into key figures such as the momentum of accounts receivable, monthly sales bookings, and canceled subscriptions.

    With Looker, we now onboard customers weeks faster because it takes just a few hours to create their custom BI. We provide better customer experiences including real-time insights from dashboards. We respond faster to new BI requests. And we achieve all of this with fewer resources. Durga Pandey, CEO, Subskribe

    Greater insights improve experiences, control, and outcomes 

    Subskribe’s first use case — which took just six weeks to complete — vastly improved user experience. From dashboards, customers can now instantly see key metrics about their entire revenue-generating process such as quoting and billing, waterfall forecasts, and revenue recognition that includes annual and deferred insights. They can also drill down and explore the data behind their metrics to answer new questions.

    Subskribe’s advanced analytics dashboard that leverages Looker Embedded

    By offloading routine BI tasks for engineers with Looker, Subskribe has more bandwidth and opportunities to innovate. Teams are building an embedded analytics solution with Looker that will enable customers to create their own dashboards and reports. Expanded personalization options will drive product adoption and customer success by serving up trusted analytics that are tailored for user roles such as executives, finance staff, and client success teams. Subskribe also plans on using Looker to help customers streamline their business operations by providing alerts and helping to deliver data-informed recommendations such as when it's time to close a deal and when it's time to collect on payment due. 

    Subskribe says the flexibility gained with Looker is game changing. Not only can it pivot faster to meet customers' immediate needs but it's also easier for Subskribe to continually evolve its platform to stay ahead of industry demands and achieve its long-term product vision.

    All our customers have different requirements and processes, and they ask us to tailor their insights this way or that. With Looker, we have the agility to quickly build what they want. Tim Bradley, Director of Engineering, Subskribe

    To create your own custom applications with unified metrics, learn more about Looker Embedded. To learn more about Subskribe, visit www.subskribe.com.

  • What’s the future of DevOps? You tell us. Take the 2023 Accelerate State of DevOps survey Fri, 26 May 2023 16:00:00 -0000

    For the past eight years, Google Cloud's DevOps Research and Assessment (DORA) team has published the Accelerate State of DevOps Report. With input from over 33,000 professionals worldwide, the annual report provides an independent view into the practices and capabilities that organizations, irrespective of their size, industry, and region, can employ to drive better performance of their DevOps practices, and is the largest and longest running research of its kind. 

    Today, the DORA team is inviting you to participate in the 2023 Accelerate State of DevOps survey, which we’ll use to produce our next report. Like with previous reports, our goal is to perform detailed analysis to help teams benchmark their performance against the industry and provide strategies that they can use to improve their performance. And like always, we’re depending on you to take the survey, because the more respondents and vantage points are incorporated into the data, the more valid, robust, precise and useful the findings will be.

    But first, let’s quickly recap what we learned last year and some of the themes that we’re exploring this year. 

    2022 findings 

    The 2022 Accelerate State of DevOps Report took a deep dive into security. One notable finding was that the biggest predictor of an organization's software security practices is cultural, not technical. More specifically, high-trust, low-blame cultures — as defined by Westrum — focused on performance were significantly more likely to adopt emerging security practices.  

    But perhaps the most surprising finding was that organizations' overall DevOps capabilities dropped off substantially. Every year, we evaluate teams against four key software delivery metrics: deployment frequency, lead time for changes, time-to-restore, and change fail rate. The results tend to cluster into four groups: Low, Medium, High and Elite performers. But in the 2022 report, there was no “Elite” cluster in the data. Why? We plan to conduct further research that will help us better understand this shift, but for now, our hypothesis is that the pandemic may have hampered teams' ability to share knowledge, collaborate, and innovate, contributing to a decrease in the number of High performers, and an increase in the number of Low performers.


    2022 was also the first time that we produced the report in multiple languages, responding to feedback from the DORA community. The 2022 Accelerate State of DevOps Report is available for download in 11 different languages. Simply select your preferred language from the drop-down on the top right corner of the page, complete the form, and check your email for a localized version of the report.


    Looking ahead

    We are also excited to explore other questions in this year’s report:

    • Building on existing findings. DORA is committed to continuous improvement. We don't just publish our findings and move on, but rather, continually test our hypotheses and reexamine our data. We're excited to revisit some of the surprises from last year's report and continue to expand our understanding of the role of operational performance.

    • AI. This is the hot topic on nearly everyone’s mind nowadays, and given how pervasive this subject has become in our community, we want to understand how teams are leveraging AI and what impacts this has on key outcomes.

    • Expanding our outcomes. We are expanding the set of outcomes we are interested in from just software delivery performance and organizational performance, to outcomes that capture the wellbeing of individuals and the performance of a team.

    • Exploring interdependencies. Process, culture, individuals, structure, and tools all interact in complex ways to produce various outcomes. We are structuring our analyses to capture these meaningful nuances.

    Because we follow the data in a hypothesis-driven manner (i.e., we are not just reporting on ad hoc analyses), we’re always positioned for surprises and for unanticipated yet promising themes to emerge.

    The 2023 Accelerate State of DevOps survey

    Achieving high DevOps performance is a team endeavor and diverse, inclusive teams drive the best performance. The research program benefits from the participation of a variety of people. Please help us encourage more voices by taking this survey yourself, and sharing it with your network — especially with your colleagues who are underrepresented in our industry. 

    Ultimately though, this survey is for everyone, no matter where you are on your DevOps journey, the size of your organization, your organization's industry, or how you identify. There are no right or wrong answers; in fact, we often hear feedback that questions in the survey prompt ideas for improvement. The survey takes just about 15 minutes to complete and will remain open until midnight PDT on June 30, 2023.

    Let us know what you think about the survey and stay up to date with all things DORA by joining the DORA community, which provides opportunities to learn, discuss, and collaborate on software delivery and operational performance with fellow colleagues. We look forward to hearing from you and your teams!

  • What’s new with Google Cloud Thu, 25 May 2023 19:00:00 -0000

    Want to know the latest from Google Cloud? Find it here in one handy location. Check back regularly for our newest updates, announcements, resources, events, learning opportunities, and more. 


    Tip: Not sure where to find what you’re looking for on the Google Cloud blog? Start here: Google Cloud blog 101: Full list of topics, links, and resources.


    Week of May 22 - 26

    • Security Command Center (SCC) Premium pricing for project-level activation is now 25% lower for customers who use SCC to secure Compute Engine, GKE-Autopilot, App Engine and Cloud SQL. Please see our updated rate card. Also, we have expanded the number of finding types available for project-level Premium activations to help make your environment more secure. Learn more.
    • Vertex AI Embeddings for Text: Grounding LLMs made easy: Many people are now starting to think about how to bring generative AI and large language models (LLMs) to production services. You may be wondering how to integrate LLMs or AI chatbots with existing IT systems, databases, and business data; how to let an LLM accurately recall thousands of products; or how to handle hallucination so an AI chatbot can power a reliable service. Here is a quick solution: grounding with embeddings and vector search. What is grounding? What are embeddings and vector search? In this post, we walk through these crucial concepts for building reliable generative AI services for enterprise use, with live demos and source code. A minimal embedding-and-retrieval sketch also follows this list.
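
    To illustrate the grounding pattern at a small scale, here is a sketch using the Vertex AI SDK’s text embedding model: embed a handful of catalog documents and a user question, retrieve the closest document, and feed it to the LLM as context. The catalog strings, project, and model version are placeholders, and depending on your SDK version the import may live under vertexai.preview.language_models; in production you would use a managed vector search index rather than in-memory cosine similarity.

      # Minimal grounding sketch: embed documents and a question, retrieve the
      # closest document, and use it as context for the LLM prompt.
      import numpy as np
      import vertexai
      from vertexai.language_models import TextEmbeddingModel

      vertexai.init(project="my-project", location="us-central1")  # placeholders

      PRODUCT_DOCS = [
          "SKU-1001: Waterproof hiking boot, sizes 6-13, lugged outsole.",
          "SKU-2040: Ultralight trail running shoe, breathable mesh upper.",
      ]

      model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")

      def embed(texts):
          # get_embeddings() returns one embedding per input; .values is the vector.
          return np.array([e.values for e in model.get_embeddings(texts)])

      doc_vectors = embed(PRODUCT_DOCS)
      query_vector = embed(["Which shoe works best on wet mountain trails?"])[0]

      # Cosine similarity stands in for a managed vector search index here.
      scores = doc_vectors @ query_vector / (
          np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector))
      context = PRODUCT_DOCS[int(np.argmax(scores))]

      # The retrieved document is then prepended to the LLM prompt so the model
      # answers from real catalog data instead of hallucinating.
      print("Ground the prompt with:", context)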

    Week of May 15 - 19

    • Introducing the date/time selector in Log Analytics in Cloud Logging. You can now easily customize the date and time range of your queries in the Log Analytics page by using the same date/time-range selector used in Logs Explorer, Metrics Explorer and other Cloud Ops products. There are several time range options, such as preset times, custom start and end times, and relative time ranges. For more information, see Filter by time in the Log Analytics docs.

    • Cloud Workstations is now GA. We are thrilled to announce the general availability of Cloud Workstations with a list of new enhanced features, providing fully managed integrated development environments (IDEs) on Google Cloud. Cloud Workstations enables faster developer onboarding and increased developer productivity while helping support your compliance requirements with an enhanced security posture. Learn More

    Week of May 8 - 14

    • Google is partnering with regional carriers Chunghwa Telecom, Innove (subsidiary of Globe Group) and AT&T to deliver the TPU (Taiwan-Philippines-U.S.) cable system — connecting Taiwan, Philippines, Guam, and California — to support growing demand in the APAC region. We are committed to providing Google Cloud customers with a resilient, high-performing global network. NEC is the supplier, and the system is expected to be ready for service in 2025.
    • Introducing BigQuery differential privacy, SQL building blocks that analysts and data scientists can use to anonymize their data; a minimal query sketch follows this list. We are also partnering with Tumult Labs to help Google Cloud customers with their differential privacy implementations.
    • Scalable electronic trading on Google Cloud: A business case with BidFX: Working with Google Cloud, BidFX has been able to develop and deploy a new product called Liquidity Provision Analytics (“LPA”), launching to production within roughly six months, to solve the transaction cost analysis challenge in an innovative way. LPA will be offering features such as skew detection for liquidity providers, execution time optimization, pricing comparison, top of book analysis and feedback to counterparties. Read more here.
    • AWS EC2 VM discovery and assessment - mFit can discover EC2 VM inventory in your AWS region and collect guest-level information from multiple VMs to provide a technical fit assessment for modernization. See the demo video.
    • Generate assessment reports in Microsoft Excel - mFit can generate a detailed assessment report in Microsoft Excel (XLSX) format, which can handle a large number of VMs (a few thousand) in a single report, more than an HTML report can reliably handle.
    • Regulatory Reporting Platform: Regulatory reporting remains a challenge for financial services firms. We share our point of view on the main challenges and opportunities in our latest blog, accompanied by an infographic and a customer case study from ANZ Bank. We also wrote a white paper for anyone looking for a deeper dive into our Regulatory Reporting Platform.
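
    As a rough illustration of the differential privacy building blocks mentioned above, here is a sketch of a differentially private aggregation run through the BigQuery Python client. The dataset, table, and column names are hypothetical, the epsilon/delta values and contribution bounds are illustrative only, and the exact clause options may vary by release, so check the documentation before relying on it.

      # Sketch: a differentially private aggregation in GoogleSQL, submitted
      # with the BigQuery client. All names and privacy parameters are
      # placeholders to adapt to your own data and privacy budget.
      from google.cloud import bigquery

      client = bigquery.Client()

      sql = """
      SELECT WITH DIFFERENTIAL_PRIVACY
        OPTIONS(epsilon = 1.0, delta = 1e-5, privacy_unit_column = customer_id)
        item,
        AVG(quantity, contribution_bounds_per_group => (0, 100)) AS avg_quantity
      FROM mydataset.orders
      GROUP BY item
      """

      for row in client.query(sql).result():
          print(row.item, row.avg_quantity)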

    Week of May 1-5

    • Microservices observability is now generally available for C++, Go and Java. This release includes a number of new features and improvements, making it easier than ever to monitor and troubleshoot your microservices applications. Learn more in our user guide.

    • Google Cloud Deploy now supports Skaffold 2.3 as the default Skaffold version for all target types. Release Notes.

    • Cloud Build: You can now configure Cloud Build to continue executing a build even if specified steps fail. This feature is generally available. Learn more here

    Week of April 24-28

    • General Availability: Custom Modules for Security Health Analytics is now generally available. Author custom detective controls in Security Command Center using the new custom module capability.

    • Next-generation Confidential VMs are now available in Private Preview with a Confidential Computing technology called AMD Secure Encrypted Virtualization-Secure Nested Paging (AMD SEV-SNP) on general-purpose N2D machines. Confidential VMs with AMD SEV-SNP enabled build upon memory encryption and add new hardware-based security protections such as strong memory integrity, encrypted register state (thanks to AMD SEV-Encrypted State, SEV-ES), and hardware-rooted remote attestation. Sign up here!

    • Selecting Tier_1 networking for your Compute Engine VM can give you the bandwidth you need for demanding workloads. Check out this blog on Increasing bandwidth to Compute Engine VMs with TIER_1 networking.

    Week of April 17-21

    Week of April 10-14

    • Assured Open Source Software is generally available for Java and Python ecosystems. Assured OSS is offered at no charge and provides an opportunity for any organization that utilizes open source software to take advantage of Google’s expertise in securing open source dependencies.

    • BigQuery change data capture (CDC) is now in public preview. BigQuery CDC provides a fully-managed method of processing and applying streamed UPSERT and DELETE operations directly into BigQuery tables in real time through the BigQuery Storage Write API. This further enables the real-time replication of more classically transactional systems into BigQuery, which empowers cross functional analytics between OLTP and OLAP systems. Learn more here.

    Week of April 3 - 7

    • Now Available: Google Cloud Deploy now supports canary release as a deployment strategy. This feature is supported in Preview. Learn more
    • General Availability: Cloud Run services as backends to Internal HTTP(S) Load Balancers and Regional External HTTP(S) Load Balancers. Internal load balancers let you establish private connectivity between Cloud Run services and other services and clients on Google Cloud, on-premises, or on other clouds. In addition, you get custom domains, tools to migrate traffic from legacy services, Identity-Aware Proxy support, and more. The regional external load balancer, as the name suggests, is designed to reside in a single region and connect with workloads only in that region, thus helping you meet your regionalization requirements. Learn more.
    • New visualization tools for Compute Engine fleets: The Observability tab in the Compute Engine console VM List page has reached General Availability. The new Observability tab is an easy way to monitor and troubleshoot the health of your fleet of VMs.
    • Datastream for BigQuery is Generally Available: Datastream for BigQuery is generally available, offering a unique, truly seamless and easy-to-use experience that enables near-real time insights in BigQuery with just a few steps. Using BigQuery’s newly developed change data capture (CDC) and Storage Write API’s UPSERT functionality, Datastream efficiently replicates updates directly from source systems into BigQuery tables in real-time. You no longer have to waste valuable resources building and managing complex data pipelines, self-managed staging tables, tricky DML merge logic, or manual conversion from database-specific data types into BigQuery data types. Just configure your source database, connection type, and destination in BigQuery and you’re all set. Datastream for BigQuery will backfill historical data and continuously replicate new changes as they happen.
    • Now available: Build an analytics lakehouse on Google Cloud whitepaper. The analytics lakehouse combines the benefits of data lakes and data warehouses without the overhead of each. In this paper, we discuss an end-to-end architecture that enables organizations to extract data in real time regardless of which cloud or datastore the data resides in, and to use the data in aggregate for greater insight and artificial intelligence (AI), all with governance and unified access across teams. Download now.

    Week of March 27 - 31

    • Faced with strong data growth, Squarespace made the decision to move away from on-premises Hadoop to a cloud-managed solution for its data platform. Learn how they reduced the number of escalations by 87% with the analytics lakehouse on Google Cloud. Read now
    • Last chance: Register to attend Google Data Cloud & AI Summit: Join us on Wednesday, March 29, at 9 AM PDT/12 PM EDT to discover how you can use data and AI to reveal opportunities to transform your business and make your data work smarter. Find out how organizations are using Google Cloud data and AI solutions to transform customer experiences, boost revenue, and reduce costs. Register today for this no cost digital event.
    • New BigQuery editions: flexibility and predictability for your data cloud: At the Data Cloud & AI Summit, we announced BigQuery pricing editions—Standard, Enterprise and Enterprise Plus—that allow you to choose the right price-performance for individual workloads. Along with editions, we also announced autoscaling capabilities that ensure you only pay for the compute capacity you use, and a new compressed storage billing model that is designed to reduce your storage costs. Learn more about latest BigQuery innovations and register for the upcoming BigQuery roadmap session on April 5, 2023.
    • Introducing Looker Modeler: A single source of truth for BI metrics: At the Data Cloud & AI Summit, we introduced a standalone metrics layer we call Looker Modeler, available in preview in Q2. With Looker Modeler, organizations can benefit from consistent governed metrics that define data relationships and progress against business priorities, and consume them in BI tools such as Connected Sheets, Looker Studio, Looker Studio Pro, Microsoft Power BI, Tableau, and ThoughtSpot.
    • Bucket-based log-based metrics, now generally available, allow you to track, visualize, and alert on important logs in your cloud environment from many different projects or across the entire organization, based on which logs are stored in a log bucket.

    Week of March 20 - 24

    • Chronicle Security Operations Feature Roundup - Bringing a modern and unified security operations experience to our customers is and has been a top priority with the Google Chronicle team. We’re happy to show continuing innovation and even more valuable functionality. In our latest release roundup we’ll highlight a host of new capabilities focused on delivering improved context, collaboration, and speed to handle alerts faster and more effectively. Learn how our newest capabilities enable security teams to do more with less here.

    • Announcing Google’s Data Cloud & AI Summit, March 29th! Can your data work smarter? How can you use AI to unlock new opportunities? Join us on Wednesday, March 29, to gain expert insights, new solutions, and strategies to reveal opportunities hiding in your company’s data. Find out how organizations are using Google Cloud data and AI solutions to transform customer experiences, boost revenue, and reduce costs. Register today for this no cost digital event.
    • Artifact Registry Feature Preview - Artifact Registry now supports immutable tags for Docker repositories. If you enable this setting, an image tag always points to the same image digest, including the default latest tag. This feature is in Preview. Learn more

    Week of March 13 - 17

    • A new era for AI and Google Workspace - Google Workspace is using AI to become even more helpful, starting with new capabilities in Docs and Gmail to write and refine content. Learn more.
    • Building the most open and innovative AI ecosystem - In addition to the news this week on AI products, Google Cloud has also announced new partnerships, programs, and resources. This includes bringing the best of Google’s infrastructure, AI products, and foundation models to partners at every layer of the AI stack: chipmakers, companies building foundation models and AI platforms, technology partners enabling companies to develop and deploy machine learning (ML) models, app-builders solving customer use-cases with generative AI, and global services and consulting firms that help enterprise customers implement all of this technology at scale. Learn more.
    • From Microbrows to Microservices - Ulta Beauty is building its digital store of the future, but to maintain control over its newly modernized application it turned to Anthos and GKE, Google Cloud’s managed container services, to provide an eCommerce experience as beautiful as its guests. Read our blog to see how a newly minted Cloud Architect learned Kubernetes and Google Cloud to provide the best possible architecture for his developers. Learn more.
    • Now generally available, understand and trust your data with Dataplex data lineage - a fully managed Dataplex capability that helps you understand how data is sourced and transformed within the organization. Dataplex data lineage automatically tracks data movement across BigQuery, BigLake, Cloud Data Fusion (Preview), and Cloud Composer (Preview), eliminating operational hassles around manual curation of lineage metadata. Learn more here.
    • Rapidly expand the reach of Spanner databases with read-only replicas and zero-downtime moves. Configurable read-only replicas let you add read-only replicas to any Spanner instance to deliver low latency reads to clients in any geography. Alongside Spanner’s zero-downtime instance move service, you have the freedom to move your production Spanner instances from any configuration to another on the fly, with zero downtime, whether it’s regional, multi-regional, or a custom configuration with configurable read-only replicas. Learn more here.

    Week of March 6 - 10

    • Automatically blocking project SSH keys in Dataflow is now GA. This service option lets Dataflow users prevent their Dataflow worker VMs from accepting SSH keys that are stored in project metadata, improving security. Getting started is easy: enable the block-project-ssh-keys service option while submitting your Dataflow job (a minimal sketch follows this list).
    • Celebrate International Women’s Day: Learn about the leaders driving impact at Google Cloud and creating pathways for other women in their industries. Read more.
    • Google Cloud Deploy now supports Parallel Deployment to GKE and Cloud Run workloads. This feature is in Preview. Read more.
    • Sumitovant doubles medical research output in one year using Looker
      Sumitovant is a leading biopharma research company that has doubled its research output in one year alone. By leveraging modern cloud data technologies, Sumitovant supports its globally distributed workforce of scientists in developing next-generation therapies, using Google Cloud’s Looker for trusted self-service data research. To learn more about Looker, check out https://cloud.google.com/looker
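
    For the Dataflow SSH-key item above, here is a sketch of what enabling the service option could look like when launching a Beam pipeline from Python. The project, region, and bucket are placeholders, and the underscore spelling block_project_ssh_keys is an assumption based on how Dataflow service options are usually written, so confirm the exact identifier in the Dataflow documentation.

      # Sketch: pass the block-project-ssh-keys service option when submitting
      # a Dataflow job. Project, region, and bucket values are placeholders.
      import apache_beam as beam
      from apache_beam.options.pipeline_options import PipelineOptions

      options = PipelineOptions(
          runner="DataflowRunner",
          project="my-project",
          region="us-central1",
          temp_location="gs://my-bucket/tmp",
          # Service options are forwarded to the Dataflow service at job submission.
          dataflow_service_options=["block_project_ssh_keys"],
      )

      with beam.Pipeline(options=options) as p:
          (p
           | "Create" >> beam.Create(["hello", "dataflow"])
           | "Print" >> beam.Map(print))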

    Week of Feb 27 - Mar 3, 2023

    • Add geospatial intelligence to your Retail use cases by leveraging the CARTO platform on top of your data in BigQuery
      Location data will add a new dimension to your Retail use cases, like site selection, geomarketing, and logistics and supply chain optimization. Read more about the solution and various customer implementations in the CARTO for Retail Reference Guide, and see a demonstration in this blog.
    • Google Cloud Deploy support for deployment verification is now GA! Read more or Try the Demo

    Week of Feb 20 - Feb 24, 2023

    • Logs for Network Load Balancing and logs for Internal TCP/UDP Load Balancing are now GA!
      Logs are aggregated per-connection and exported in near real-time, providing useful information, such as 5-tuples of the connection, received bytes, and sent bytes, for troubleshooting and monitoring the pass-through Google Cloud Load Balancers. Further, customers can include additional optional fields, such as annotations for client-side and server-side GCE and GKE resources, to obtain richer telemetry.
    • The newly published Anthos hybrid cloud architecture reference design guide provides opinionated guidance to deploy Anthos in a hybrid environment to address some common challenges that you might encounter. Check out the architecture reference design guide here to accelerate your journey to hybrid cloud and containerization.

    Week of Feb 13- Feb 17, 2023

    • Deploy PyTorch models on Vertex AI in a few clicks with prebuilt PyTorch serving containers - which means less code, no need to write Dockerfiles, and faster time to production.
    • Confidential GKE Nodes on Compute-Optimized C2D VMs are now GA. Confidential GKE Nodes help to increase the security of your GKE clusters by leveraging hardware to ensure your data is encrypted in memory, helping to defend against accidental data leakage, malicious administrators and “curious neighbors”. Getting started is easy, as your existing GKE workloads can run confidentially with no code changes required.
    • Announcing Google’s Data Cloud & AI Summit, March 29th!
      Can your data work smarter? How can you use AI to unlock new opportunities? Register for Google Data Cloud & AI Summit, a digital event for data and IT leaders, data professionals, developers, and more to explore the latest breakthroughs. Join us on Wednesday, March 29, to gain expert insights, new solutions, and strategies to reveal opportunities hiding in your company’s data. Find out how organizations are using Google Cloud data and AI solutions to transform customer experiences, boost revenue, and reduce costs. Register today for this no cost digital event.

    • Running SAP workloads on Google Cloud? Upgrade to our newly released Agent for SAP to gain increased visibility into your infrastructure and application performance. The new agent consolidates several of our existing agents for SAP workloads, which means less time spent on installation and updates, and more time for making data-driven decisions. In addition, there is new optional functionality that powers exciting products like Workload Manager, a way to automatically scan your SAP workloads against best-practices. Learn how to install or upgrade the agent here.

    • Leverege uses BigQuery as a key component of its data and analytics pipeline to deliver innovative IoT solutions at scale. As part of the Built with BigQuery program, this blog post goes into detail about Leverege IoT Stack that runs on Google Cloud to power business-critical enterprise IoT solutions at scale. 

    • Download white paper Three Actions Enterprise IT Leaders Can Take to Improve Software Supply Chain Security to learn how and why high-profile software supply chain attacks like SolarWinds and Log4j happened, the key lessons learned from these attacks, as well as actions you can take today to prevent similar attacks from happening to your organization.

    Week of Feb 3 - Feb 10, 2023

    • Immersive Stream for XR leverages Google Cloud GPUs to host, render, and stream high-quality photorealistic experiences to millions of mobile devices around the world, and is now generally available. Read more here.

    • Reliable and consistent data presents an invaluable opportunity for organizations to innovate, make critical business decisions, and create differentiated customer experiences. But poor data quality can lead to inefficient processes and possible financial losses. Today we announce new Dataplex features: automatic data quality (AutoDQ) and data profiling, available in public preview. AutoDQ offers automated rule recommendations, built-in reporting, and serverless execution to construct high-quality data. Data profiling delivers richer insight into the data by identifying its common statistical characteristics. Learn more.

    • Cloud Workstations now supports Customer Managed Encryption Keys (CMEK), which provides user encryption control over Cloud Workstation Persistent Disks. Read more.

    • Google Cloud Deploy now supports Cloud Run targets in General Availability. Read more.

    • Learn how to use NetApp Cloud Volumes Service as datastores for Google Cloud VMware Engine for expanding storage capacity. Read more

    Week of Jan 30 - Feb 3, 2023

    • Oden Technologies uses BigQuery to provide real-time visibility, efficiency recommendations and resiliency in the face of network disruptions in manufacturing systems. As part of the Built with BigQuery program, this blog post describes the use cases, challenges, solution and solution architecture in great detail.
    • Manage table- and column-level access permissions using attribute-based policies in Dataplex. The Dataplex attribute store provides a unified place where you can create and organize a Data Class hierarchy to classify your distributed data and assign behaviors such as Table-ACLs and Column-ACLs to the classified data classes. Dataplex will propagate IAM roles to tables across multiple Google Cloud projects according to the attribute(s) assigned to them, and a single, merged policy tag to columns according to the attribute(s) attached to them. Read more.
    • Lytics is a next-generation composable CDP that enables companies to deploy a scalable CDP around their existing data warehouses and lakes. As part of the Built with BigQuery program for ISVs, Lytics leverages Analytics Hub to launch a secure data sharing and enrichment solution for media and advertisers. This blog post goes over Lytics Conductor on Google Cloud and its architecture in great detail.
    • Now available in public preview, Dataplex business glossary offers users a cloud-native way to maintain and manage business terms and definitions for data governance, establishing consistent business language, improving trust in data, and enabling self-serve use of data. Learn more here.
    • Security Command Center (SCC), Google Cloud’s native security and risk management solution, is now available via self-service to protect individual projects from cyber attacks. It’s never been easier to secure your Google Cloud resources with SCC. Read our blog to learn more. To get started today, go to Security Command Center in the Google Cloud console for your projects.
    • Global External HTTP(S) Load Balancer and Cloud CDN now support advanced traffic management using flexible pattern matching in public preview. This allows you to use wildcards anywhere in your path matcher. You can use this to customize origin routing for different types of traffic, request and response behaviors, and caching policies. In addition, you can now use results from your pattern matching to rewrite the path that is sent to the origin.
    • Run large pods on GKE Autopilot with the Balanced compute class. When you need computing resources on the larger end of the spectrum, we’re excited that the Balanced compute class, which supports Pod resource sizes up to 222vCPU and 851GiB, is now GA.

    Week of Jan 23 - Jan 27, 2023

    • Starting with Anthos version 1.14, Google supports each Anthos minor version for 12 months after the initial release of the minor version, or until the release of the third subsequent minor version, whichever is longer. We plan to have three Anthos minor releases a year, around the months of April, August, and December in 2023, with a monthly patch release (for example, z in version x.y.z) for supported minor versions. For more information, read here.
    • Anthos Policy Controller enables the enforcement of fully programmable policies for your clusters across environments. We are thrilled to announce the launch of our new built-in Policy Controller Dashboard, a powerful tool that makes it easy to manage and monitor the policy guardrails applied to your fleet of clusters. New policy bundles are available to help audit your cluster resources against Kubernetes standards, industry standards, or Google recommended best practices. The easiest way to get started with Anthos Policy Controller is to install Policy Controller and try applying a policy bundle to audit your fleet of clusters against a standard such as the CIS benchmark.
    • Dataproc is an important service in any data lake modernization effort. Many customers begin their journey to the cloud by migrating their Hadoop workloads to Dataproc and continue to modernize their solutions by incorporating the full suite of Google Cloud’s data offerings. Check out this guide that demonstrates how you can optimize Dataproc job stability, performance, and cost-effectiveness.
    • Eventarc adds support for 85+ new direct events from the following Google services in Preview: API Gateway, Apigee Registry, BeyondCorp, Certificate Manager, Cloud Data Fusion, Cloud Functions, Cloud Memorystore for Memcached, Database Migration, Datastream, Eventarc, Workflows. This brings the total pre-integrated events offered in Eventarc to over 4000 events from 140+ Google services and third-party SaaS vendors.
    •  mFit 1.14.0 release adds support for JBoss and Apache workloads by including fit analysis and framework analytics for these workload types in the assessment report. See the release notes for important bug fixes and enhancements.
    • Google Cloud Deploy - Google Cloud Deploy now supports Skaffold version 2.0. Release notes
    • Cloud Workstations - Labels can now be applied to Cloud Workstations resources. Release notes 
    • Cloud Build - Cloud Build repositories (2nd gen) lets you easily create and manage repository connections, not only through the Cloud Console but also through gcloud and the Cloud Build API. Release notes

    Week of Jan 17 - Jan 20, 2023

    • Cloud CDN now supports private origin authentication for Amazon Simple Storage Service (Amazon S3) buckets and compatible object stores in Preview. This capability improves security by allowing only trusted connections to access the content on your private origins and preventing users from directly accessing it.

    Week of Jan 9 - Jan 13, 2023

    • Revionics partnered with Google Cloud to build a data-driven pricing platform for speed, scale and automation with BigQuery, Looker and more. As part of the Built with BigQuery program, this blog post describes the use cases, problems solved, solution architecture and key outcomes of hosting Revionics product, Platform Built for Change on Google Cloud.
    • Comprehensive guide for designing reliable infrastructure for your workloads in Google Cloud. The guide combines industry-leading reliability best practices with the knowledge and deep expertise of reliability engineers across Google. Understand the platform-level reliability capabilities of Google Cloud, the building blocks of reliability in Google Cloud and how these building blocks affect the availability of your cloud resources. Review guidelines for assessing the reliability requirements of your cloud workloads. Compare architectural options for deploying distributed and redundant resources across Google Cloud locations, and learn how to manage traffic and load for distributed deployments. Read the full blog here.
    • GPU Pods on GKE Autopilot are now generally available. Customers can now run ML training, inference, video encoding and all other workloads that need a GPU, with the convenience of GKE Autopilot’s fully-managed Kubernetes environment.
    • Kubernetes v1.26 is now generally available on GKE. GKE customers can now take advantage of the many new features in this exciting release. This release continues Google Cloud’s goal of making Kubernetes releases available to Google customers within 30 days of the Kubernetes OSS release.
    • Event-driven transfer for Cloud Storage: Customers have told us they need an asynchronous, scalable service to replicate data between Cloud Storage buckets for a variety of use cases, including aggregating data in a single bucket for data processing and analysis, keeping buckets across projects/regions/continents in sync, etc. Google Cloud now offers Preview support for event-driven transfer - a serverless, real-time replication capability to move data from AWS S3 to Cloud Storage and copy data between multiple Cloud Storage buckets. Read the full blog here.
    • Pub/Sub Lite now offers export subscriptions to Pub/Sub. This new subscription type writes Lite messages directly to Pub/Sub - no code development or Dataflow jobs needed. Great for connecting disparate data pipelines and migration from Lite to Pub/Sub. See here for documentation.

    • Transform your unstructured data with AI using BigQuery object tables, now GA Thu, 25 May 2023 16:00:00 -0000

      Today, the vast majority of data that gets generated in the world is unstructured (text, audio, images), but only a fraction of it ever gets analyzed. The AI pipelines required to unlock the value of this data are siloed from mainstream analytic systems, requiring engineers to build custom data infrastructure to integrate structured and unstructured data insights. 

      Our goal is to help you realize the potential of all your data, whatever its type and format. To make this easier, we launched the preview of BigQuery object tables at Google Cloud Next 2022. Powered by BigLake, object tables provide BigQuery users a structured record interface for unstructured data stored in Cloud Storage. With it, you can use existing BigQuery frameworks to process and manage this data using object tables in a secure and governed manner. 

      Since we launched the preview, we have seen customers use object tables for many use cases and are excited to announce that object tables are now generally available.

      Analyzing unstructured data with BigQuery object tables


      Object tables let you leverage the simplicity of SQL to run a wide range of AI models on your unstructured data. There are three key mechanisms for using AI models, all enabled through the BigQuery Inference engine.

      First, you can import your models and run queries on the object table to process the data within BigQuery. This approach works well for customers looking for an integrated BigQuery solution that allows them to utilize their existing BigQuery resources. Since the preview, we’ve expanded support beyond TensorFlow models with TF-Lite and ONNX models and introduced new scalar functions to pre-process images. We also added support for saving pre-processed tensors to allow for efficient multi-model use of tensors to help you reduce slot usage. 

      Second, you can choose from various pre-trained Google models such as the Cloud Vision API, Cloud Natural Language API, and Cloud Translation API, for which we have added pre-defined SQL table-valued functions that are invoked when querying an object table. The results of the inference are stored as a BigQuery table.

      Third, you can integrate customer-hosted AI models or custom models built through Vertex AI using remote functions. You can call these remote functions from BigQuery SQL to serve objects to models, and the results are returned as BigQuery tables. This option is well suited if you run your own model infrastructure such as GPUs, or have externally maintained models. 
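
      To make the first two mechanisms concrete, here is a sketch using the BigQuery Python client: it creates an object table over a Cloud Storage prefix and then scores it with a previously imported vision model via ML.PREDICT. The connection, dataset, bucket, and model names are placeholders, the input alias must match your model’s input tensor, and the exact preprocessing functions you need depend on the model, so treat this as an outline rather than a recipe.

        # Sketch: create an object table over images in Cloud Storage, then run
        # an imported vision model against it. All resource names are
        # illustrative placeholders.
        from google.cloud import bigquery

        client = bigquery.Client()

        client.query("""
        CREATE OR REPLACE EXTERNAL TABLE mydataset.product_images
        WITH CONNECTION `us.my-gcs-connection`
        OPTIONS (
          object_metadata = 'SIMPLE',
          uris = ['gs://my-bucket/images/*']
        )
        """).result()

        # Decode the image bytes and feed them to a model imported into
        # BigQuery ML; the 'input' alias should match the model's input name.
        rows = client.query("""
        SELECT *
        FROM ML.PREDICT(
          MODEL mydataset.imported_vision_model,
          (SELECT uri, ML.DECODE_IMAGE(data) AS input
           FROM mydataset.product_images)
        )
        """).result()

        for row in rows:
            print(dict(row))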

      During the preview, customers used a mix of these integration mechanisms to unify their AI workloads with data already present in BigQuery. For example, Semios, an agro-tech company, uses imported and remote image processing models to serve precision agriculture use cases. 

      “With the new imported model capability with object table, we are able to import state-of-the-art Pytorch vision models to process image data and improve in-orchard temperature prediction using BigQuery. And with the new remote model capability, we can greatly simplify our pipelines and improve maintainability.” - Semios

      Storage insights, fine-grained security, sharing and more 

      Beyond processing with AI models, customers are extending existing data management frameworks to unstructured data, resulting in several novel use cases, such as:

      • Cloud Storage insights - Object tables provide a SQL interface to Cloud Storage metadata (e.g., storage class), making it easy to build analytics on Cloud Storage usage, understand growth, optimize costs, and inform decisions to better manage data.

      • Fine-grained access control at scale - Object tables are built on BigLake’s unified lakehouse infrastructure and support row- and column-level access controls. You can use it to secure specific objects with governed signed URLs. Fine-grained access control has broad applicability for augmenting unstructured data use cases, for example securing specific documents or images based on PII inferences returned by the AI model.  

      • Sharing with Analytics Hub - You can share object tables, similar to BigLake tables, via Analytics Hub, expanding the set of sharing use cases for unstructured data. Instead of sharing buckets, you now get finer control over the objects you wish to share with partners, customers, or suppliers.  

      Run generative AI workloads using object tables (Preview)

      Members of Google Cloud AI’s trusted tester program can run a wide range of generative AI models available in Model Garden against object tables. You can use Generative AI Studio to choose a foundation model, or fine-tune one, and deploy it behind a custom API endpoint. You can then call this API from BigQuery using the remote function integration to pass prompts and inputs, and the text results from the large language model (LLM) are returned in a BigQuery table. In the coming months, we will enable SQL functions through the BigQuery Inference engine to call LLMs directly, further simplifying these workloads.
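
      As a rough sketch of the remote-function pattern described above, the snippet below registers a BigQuery remote function that points at an externally hosted model endpoint (for example, a Cloud Run proxy in front of a Model Garden deployment) and then calls it from SQL. The connection name, dataset, and endpoint URL are placeholders, and the proxy contract (a STRING in, a STRING out) is an assumption for illustration.

        # Sketch: wire an externally hosted LLM endpoint into BigQuery as a
        # remote function, then call it from SQL. All names are placeholders.
        from google.cloud import bigquery

        client = bigquery.Client()

        client.query("""
        CREATE OR REPLACE FUNCTION mydataset.summarize(prompt STRING)
        RETURNS STRING
        REMOTE WITH CONNECTION `us.my-remote-connection`
        OPTIONS (endpoint = 'https://my-llm-proxy-abc123-uc.a.run.app')
        """).result()

        rows = client.query("""
        SELECT prompt, mydataset.summarize(prompt) AS completion
        FROM UNNEST(['Summarize our Q1 sales notes.',
                     'Draft a short product blurb.']) AS prompt
        """).result()

        for row in rows:
            print(row.prompt, "->", row.completion)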

      Getting started

      To get started, follow along with a guided lab or tutorials to run your first unstructured data analysis in BigQuery. Learn more by referring to our documentation.


      Acknowledgments: Abhinav Khushraj, Amir Hormati, Anoop Johnson, Bo Yang, Eric Hao, Gaurangi Saxena, Jeff Nelson, Jian Guo, Jiashang Liu, Justin Levandoski, Mingge Deng, Mujie Zhang, Oliver Zhuang, Yuri Volobuev, and the rest of the BigQuery engineering team who contributed to this launch.

    • Scaling reaction-based enumeration for next-gen drug discovery using Google Cloud Thu, 25 May 2023 16:00:00 -0000

      Discovering new drugs is at the heart of modern medicine, yet finding a “needle in the haystack” is immensely challenging due to the enormous number of possible drug-like compounds (estimated at 10^60 or more). To increase our chances of finding breakthrough medicines for patients with unmet medical needs, we need to explore the vast universe of chemical compounds and use predictive in silico methods to select the best compounds for lab-based experiments. Enter reaction-based enumeration, a powerful technique that generates novel, synthetically accessible molecules. Our team at Psivant has been pushing the boundaries of this process to an unprecedented scale, implementing reaction-based enumeration on Google Cloud. By tapping into Google Cloud’s robust infrastructure and scalability, we're unlocking the potential of this technique to uncover new chemical entities, leading to groundbreaking advancements and life-altering therapeutics.

      Our journey began with a Python-based prototype, leveraging RDKit for chemistry and Ray for distributed computing. Despite initial progress, we encountered a roadblock: our on-premises computing resources were limited, holding back our prototype's potential. While we could explore millions of compounds, our ambition was to explore billions and beyond. To address this limitation, we sought a solution that offered greater flexibility and scalability, leading us to the powerful ecosystem provided by Google Cloud.

      Leveraging Google Cloud infrastructure

      Google Cloud's technologies allowed us to supercharge our pipelines and conduct chemical compound exploration at scale. By integrating Dataflow, Google Workflows, and Compute Engine, we built a sophisticated, high-performance system that is both flexible and resilient. 

      Dataflow is a managed batch and streaming system that provides real-time, fault-tolerant, and parallel processing capabilities to manage and manipulate massive datasets effectively. Google Workflows orchestrates the complex, multi-stage processes involved in enumeration, ensuring smooth transitions and error handling across various tasks. Finally, Compute Engine provides us with scalable, customizable infrastructure to run our demanding computational workloads, ensuring optimal performance and cost-effectiveness. Together, these technologies laid the foundation for our cutting-edge solution to explore the endless possibilities of reaction-based enumeration.

      We built a cloud-native solution to achieve the scalability we sought, taking advantage of Dataflow, which relies on Apache Beam, a versatile programming model with its own data structures, such as the PCollection, a distributed, immutable collection of elements designed to be processed efficiently in parallel.
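
      To give a flavor of the pattern (not Psivant's actual pipeline), here is a toy Beam sketch that pairs building blocks, applies a reaction SMARTS with RDKit, and emits product SMILES. The reaction and building blocks are deliberately simple examples; at scale the same shape of pipeline runs on Dataflow with the appropriate pipeline options.

        # Toy sketch of reaction-based enumeration as an Apache Beam pipeline.
        import apache_beam as beam
        from rdkit import Chem
        from rdkit.Chem import AllChem

        # Amide coupling: carboxylic acid + primary amine -> amide (toy example).
        REACTION_SMARTS = "[C:1](=[O:2])[OH].[N;H2:3]>>[C:1](=[O:2])[N:3]"
        ACIDS = ["CC(=O)O", "OC(=O)c1ccccc1"]
        AMINES = ["NCCO", "NCc1ccccc1"]

        def enumerate_products(pair):
            acid_smiles, amine_smiles = pair
            rxn = AllChem.ReactionFromSmarts(REACTION_SMARTS)
            reactants = (Chem.MolFromSmiles(acid_smiles),
                         Chem.MolFromSmiles(amine_smiles))
            for products in rxn.RunReactants(reactants):
                for mol in products:
                    Chem.SanitizeMol(mol)
                    yield Chem.MolToSmiles(mol)

        # The default DirectRunner works locally; swap in DataflowRunner options
        # to fan the same pipeline out across many workers.
        with beam.Pipeline() as p:
            (p
             | "Pairs" >> beam.Create([(a, b) for a in ACIDS for b in AMINES])
             | "Enumerate" >> beam.FlatMap(enumerate_products)
             | "Print" >> beam.Map(print))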

      Enter Dataflow 

      Balancing performance and cost-efficiency was crucial during pipeline development. That is where Dataflow came in, allowing us to optimize resource utilization without compromising performance, paving the way for optimal resource allocation and cost control.

      Our pipeline required a deep understanding of the chemistry libraries and Google Cloud ecosystem. We built a simple, highly distributed enumeration pipeline, then added various chemistry operations while ensuring scalability and performance at every step. Google Cloud's team played a pivotal role in our success, providing expert guidance and troubleshooting support.

      To 100 billion and beyond

      Our journey implementing reaction-based enumeration at scale on Google Cloud has been an inspiring testament to the collaborative spirit, relentless innovation, and unwavering pursuit of excellence. With smart cloud-native engineering and cutting-edge technologies, our workflow achieves rapid scalability, capable of deploying thousands of workers within minutes, enabling us to explore an astounding 100 billion compounds in under a day. Looking ahead, we're excited to integrate Vertex AI into our workflow as our go-to MLOps solution, and to supercharge our high-throughput virtual screening experiments with the robust capabilities of Batch, further enhancing our capacity to innovate.

      We'd like to extend our heartfelt thanks to Javier Tordable for his guidance in distributed computing, enriching our understanding of building a massively scalable pipeline.

      As we persistently push the boundaries of computational chemistry and drug discovery, we are continuously motivated by the immense potential of reaction-based enumeration. This potential is driven by the powerful and flexible infrastructure of Google Cloud, combined with the comprehensive capabilities of Psivant's QUAISAR platform. Together, they empower us to design the next generation of groundbreaking medicines to combat the most challenging diseases.

    • Introducing partitioning and clustering recommendations for optimizing BigQuery usage Thu, 25 May 2023 16:00:00 -0000

      Do you have a lot of BigQuery tables? Do you find it hard to keep track of which ones are partitioned and clustered, and which ones could be? If so, we have good news. We're launching a partitioning and clustering recommender that will do the work for you! The recommender analyzes all your organization's workloads and tables and identifies potential cost optimization opportunities. And the best part is, it's completely free!

      "The BigQuery partitioning and clustering recommendations are awesome! They have helped our customers identify areas where they can reduce costs, improve performance, and optimize our BigQuery usage." Sky, one of Europe leading media and communications companies

      How does the recommender work?

      Partitioning divides a table into segments, while clustering sorts the table based on user-defined columns. Both methods can improve the performance of certain types of queries, such as queries that use filter clauses and queries that aggregate data.
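
      To make those concepts concrete, here is a small sketch that creates a table partitioned by day and clustered on a key, using the BigQuery Python client; the dataset, table, and column names are hypothetical. Queries that filter on the partitioning and clustering columns can then prune most of the table's data.

        # Sketch: a table partitioned by event date and clustered by customer_id.
        # Names are illustrative placeholders.
        from google.cloud import bigquery

        client = bigquery.Client()

        client.query("""
        CREATE TABLE IF NOT EXISTS mydataset.events
        PARTITION BY DATE(event_ts)
        CLUSTER BY customer_id AS
        SELECT
          TIMESTAMP '2023-05-01 12:00:00' AS event_ts,
          'cust-001' AS customer_id,
          42 AS value
        """).result()

        # Filters on the partitioning/clustering columns let BigQuery skip data.
        sql = """
        SELECT SUM(value) AS total
        FROM mydataset.events
        WHERE DATE(event_ts) = '2023-05-01' AND customer_id = 'cust-001'
        """
        print(list(client.query(sql).result()))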

      BigQuery’s partitioning and clustering recommender analyzes each project’s workload execution over the past 30 days to look for suboptimal scans of the table data. The recommender then uses machine learning to estimate the potential savings and generate final recommendations. The process has four key steps: Candidate Generation, Read Pattern Analyzer, Write Pattern Analyzer, and Generate Recommendations.


      Candidate Generation is the first step in the process, where tables and columns are selected based on specific criteria. For partitioning, tables larger than 100 GB are chosen, and for clustering, tables larger than 10 GB are chosen. The reason for filtering out smaller tables is that the optimization benefit is smaller and less predictable. Then we identify columns that meet BigQuery's partitioning and clustering requirements.

      In the Read Pattern Analyzer step, the recommender analyzes the logs of queries that filter on the selected columns to determine their potential for cost savings through partitioning or clustering. Several metrics, such as filter selectivity, potential file pruning, and runtime, are considered, and machine learning is used to estimate the potential slot time saved if partitioning or clustering is applied.

      The Write Pattern Analyzer step is then used to estimate the cost that partitioning or clustering may introduce during write time. Write patterns and table schema are analyzed to determine the net savings from partitioning or clustering for each column.

      Finally, in Generate Recommendations, the output from both the Read Pattern Analyzer and Write Pattern Analyzer is used to determine the net savings from partitioning or clustering for each column. If the net savings are positive and meaningful, the recommendations are uploaded to the Recommender API with proper IAM permissions.

      Discovering BigQuery partitioning and clustering recommendations

      You can access these recommendations through a few different channels, including the Recommender API.

      You can also export the recommendations to BigQuery using BigQuery Export.
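
      For programmatic access, here is a sketch that lists these recommendations with the Recommender client library. The project and location are placeholders, and the recommender ID shown is an assumption, so confirm the exact identifier in the Recommender documentation.

        # Sketch: list partitioning/clustering recommendations via the
        # Recommender API. The recommender ID below is an assumed value.
        from google.cloud import recommender_v1

        client = recommender_v1.RecommenderClient()

        parent = client.recommender_path(
            "my-project", "us", "google.bigquery.table.PartitionClusterRecommender")

        for rec in client.list_recommendations(parent=parent):
            print(rec.name)
            print("  ", rec.description)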


      To learn more about the recommender, please see the public documentation.

      We hope you use BigQuery partitioning and clustering recommendations to optimize your BigQuery tables, and can’t wait to hear your feedback and thoughts about this feature. Please feel free to reach us at active-assist-feedback@google.com.



    Google has many products and the following is a list of its products: Android Auto, Android OS, Android TV, Calendar, Cardboard, Chrome, Chrome Enterprise, Chromebook, Chromecast, Connected Home, Contacts, Digital Wellbeing, Docs, Drive, Earth, Finance, Forms, Gboard, Gmail, Google Alerts, Google Analytics, Google Arts & Culture, Google Assistant, Google Authenticator, Google Chat, Google Classroom, Google Duo, Google Expeditions, Google Family Link, Google Fi, Google Files, Google Find My Device, Google Fit, Google Flights, Google Fonts, Google Groups, Google Home App, Google Input Tools, Google Lens, Google Meet, Google One, Google Pay, Google Photos, Google Play, Google Play Books, Google Play Games, Google Play Pass, Google Play Protect, Google Podcasts, Google Shopping, Google Street View, Google TV, Google Tasks, Hangouts, Keep, Maps, Measure, Messages, News, PhotoScan, Pixel, Pixel Buds, Pixelbook, Scholar, Search, Sheets, Sites, Slides, Snapseed, Stadia, Tilt Brush, Translate, Travel, Trusted Contacts, Voice, Waze, Wear OS by Google, YouTube, YouTube Kids, YouTube Music, YouTube TV, YouTube VR


    Google News



    Think with Google

    Google AI Blog | Android Developers Blog | Google Developers Blog




    GoogBlogs.com



    ZDNet » Google



    9to5Google » Google



    Computerworld » Google

    • 20 seconds to smarter Chromebook multitasking Wed, 31 May 2023 03:00:00 -0700

      If you love finding hidden software treasures as much as I do, Google's ChromeOS operating system is a productivity playground like no other.

      ChromeOS is in a constant state of evolution, y'see, with new releases landing every four weeks and fresh 'n' fancy features more or less always under development and begging to be discovered. The best part of that setup is that unlike most other platforms — including even Android — Google's latest and greatest ChromeOS features are typically tucked away behind a special switch and available on any Chromebook long before they're launched to the masses.


    • How to enable Google's clever new Chrome Reading Mode right now Thu, 25 May 2023 03:00:00 -0700

      Look, this is slightly awkward, but I'm not gonna sugarcoat it: Reading stuff on the web these days can be a pretty painful experience.

      You know what I'm talkin' about, right? (Insert awkward pause here.) On most modern websites, you're bombarded with an army of over-the-top pop-ups, promos, and other assorted ads every time you try to open up an enticing article. (Insert awkward eye-darting here.) Media is a business, of course, but still: As a mere mammal trying to ingest interesting info, it can sometimes get to be a bit much. (Insert forced awkward smile here.)

      The business part is what makes the whole thing especially tricky — 'cause for better or for worse, online publications and the lowly internet scribes who power 'em rely on revenue from all that advertising in order to exist. That means if you use some sort of aggressive ad-blocker, you're hurting that company's odds of survival and potentially also jeopardizing the content creators' ability to earn a paycheck.


    • Google killer, killed: Neeva and the limits of privacy as a philosophy Tue, 23 May 2023 03:00:00 -0700

      Well, that was fast.

      Just under two years after splashing into the world with all sorts of provocative promises, a search startup that was set on convincing people to pay for a privacy-centric Google alternative is shutting its doors.

      Neeva, founded by a pair of former Google executives and the subject of intense fascination within the tech universe, quietly announced over the weekend that its service will be winding down next week. From the announcement:


    • Google I/O and the curious case of the missing Android version Tue, 16 May 2023 03:00:00 -0700

      With Google's I/O announcement expo now firmly in the rearview mirror, it's time for us to enter the inevitable next phase of any tech-tinted revelation — and that's the careful contemplation of everything we've just experienced.

      It's my favorite phase of all, personally, as it lets us really dive in and analyze everything with a fine-toothed comb to uncover all the subtle significance that isn't always apparent on the surface.

      And this year, my goodness, is there some splendid stuff to pore over.

      Specific to the realm of Android, the sharp-eyed gumshoes over at 9to5Google noticed that this year's under-development new Android version, Android 14, was mentioned by name only one time during the entire 2,000-hour Google I/O keynote.


    • The most significant Google Pixel news no one's noticing Fri, 12 May 2023 03:00:00 -0700
    • A smart new way to stack Android widgets Tue, 09 May 2023 03:00:00 -0700

      Happy Google I/O week, my fellow Android-appreciating animal! While we wait for El Googlenheim to drop all sorts of interesting new goodies on our suspiciously greasy domes, I've got a trio of important truths for you to chew over:

      1. One of the best parts about Android is how easy the operating system makes it to take total control of your experience and make your phone work any way you want.
      2. One of the best ways to harness that power is by taking advantage of a custom Android launcher — an app that replaces your standard home screen setup with a more efficient and customizable environment.
      3. And one of the best Android launchers out there just got an inspired new update that adds even more oomph into that opportunity.

      It's my personal favorite Android launcher of the moment, in fact, and an app that's one of my top secrets to a more productivity-minded Android interface arrangement. And Goog almighty, lemme tell ya: This latest addition within its virtual walls will really send your efficiency soaring.


    • How to use Google passkeys for stronger security on Android Thu, 04 May 2023 03:00:00 -0700

      Still signing into your Google account by tapping out an actual password? That's, like, so 2022.

      Now, don't get me wrong: The tried-and-true password is perfectly fine, especially if you're using it in conjunction with two-factor authentication. But particularly for something as important as your Google account, you want to have the most effective security imaginable to keep all your personal and/or company info safe.

      And starting this week, you've got a much better way to go about that.


    • Apple, Google team up to tackle Bluetooth tracker-stalking terror Tue, 02 May 2023 08:42:00 -0700

      The days when people can be abusively tracked using devices such as Apple's AirTags may be numbered; both Apple and Google today jointly announced work on a new standard that will prevent this from happening and hinted that Android users will soon be able to tell whether they’re being tracked by an AirTag.

      Got to stop tracker abuse

      The two companies say they have been working on a new industry specification to help prevent Bluetooth location-tracking devices from being used to track people without permission. They also seem to have the industry behind them, as Samsung, Tile, Chipolo, eufy Security, and Pebblebee have all expressed support for the draft specification, which has been filed with the Internet Engineering Task Force (IETF).


    • Google’s Bard AI to take on GitHub Copilot and Amazon CodeWhisperer Mon, 24 Apr 2023 02:23:00 -0700
      Google’s generative AI offering is now capable of helping developers write and debug code in 20 programming languages including Python and Typescript, the company said.
    • How to use Gmail labels to tame your inbox Fri, 21 Apr 2023 03:00:00 -0700

      So, you've got email, you say? Lots of it? More than you can possibly manage without losing the few metaphorical marbles still sloshing around in that soggy ol' brain of yours?

      I hear ya. In fact, I think we all can relate (even those of us whose brains are, erm, slightly less soggy). And I'm here to tell you: It doesn't have to be so difficult.

      Gmail has a variety of built-in tools for making your messages more manageable. Some of 'em are a little bit different from what you might be accustomed to using in more traditional email clients (here's lookin' at you, Outlook) — but if you take the time to figure out how they work, you might just be surprised at how effective they can be.


    • 10 powerful tricks for your Google Pixel Calculator app Fri, 21 Apr 2023 02:45:00 -0700

      As any self-respecting geek knows, calculators have long sported all sorts of spiffy hidden tricks.

      From the .7734 "HELLO" hack that delighted us all in grade school to the more advanced graphic calculator goodness of our teenage years, these fancy-schmancy adding machines of ours have always had their fair share of fun secrets for those of us in the know.

      And you might not realize it yet, but that simple-seeming Calculator app on your Google Pixel phone continues that tradition.

      Google's self-made Calculator app — which comes preinstalled by default on the company's own Pixel devices but can be manually downloaded onto any other Android phone or tablet — doesn't look like much more than a run-of-the-mill number cruncher when you first open it up.

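      On that .7734 gag: here's a tiny, purely illustrative Python sketch of the conventional upside-down seven-segment mapping, just to show why the digits spell what they spell (nothing Google- or Pixel-specific about it).

          # The classic upside-down calculator trick: flip the display 180 degrees
          # and certain digits read as letters (0=O, 1=I, 3=E, 4=h, 5=S, 7=L, 8=B).
          FLIP = {"0": "O", "1": "I", "3": "E", "4": "h", "5": "S", "7": "L", "8": "B"}

          def upside_down(display: str) -> str:
              # Drop the decimal point, reverse the digits, and map each one.
              return "".join(FLIP.get(d, "?") for d in reversed(display.replace(".", "")))

          print(upside_down("0.7734"))  # -> "hELLO"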

    • Google adds data loss prevention, security features to Chrome Thu, 20 Apr 2023 14:02:00 -0700

      Google today rolled out several new features for enterprise users of its Chrome browser, including data loss prevention (DLP), protections against malware and phishing, and the ability to enable zero-trust access to the search engine.

      In all, Google highlighted six new features for Chrome – three of them specific to the browser's existing DLP capabilities.

      A new “context-aware” feature allows enterprise administrators to customize DLP rules based on the security posture of the device being used. For example, admins can allow users to download sensitive documents if they're accessing them from a corporate device that’s up to date on security fixes or is confirmed to have endpoint protection software installed.

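      To make the "context-aware" idea concrete, here is a small, hypothetical Python sketch of the decision logic described above. It is not Chrome's actual policy syntax or admin API; it simply restates the example rule (corporate device, and either fully patched or running endpoint protection) as code.

          from dataclasses import dataclass

          @dataclass
          class DevicePosture:
              # Hypothetical posture signals an admin might key a DLP rule off of.
              corporate_managed: bool
              os_up_to_date: bool
              endpoint_protection: bool

          def allow_sensitive_download(p: DevicePosture) -> bool:
              # Mirrors the example rule: corporate device AND (patched OR protected).
              return p.corporate_managed and (p.os_up_to_date or p.endpoint_protection)

          print(allow_sensitive_download(DevicePosture(True, True, False)))   # True
          print(allow_sensitive_download(DevicePosture(False, True, True)))   # False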

    • Is Samsung dumping Google Search? Thu, 20 Apr 2023 09:47:00 -0700

      Samsung may be looking to swap out Google Search as the default search engine on its smartphones for Microsoft’s AI-based Bing search engine. The move could cost Google up to $3 billion in annual revenue from its partnership with the South Korean electronics giant, according to a Yahoo report.

      Samsung is second only to Apple in smartphone sales; it shipped 261 million smartphones worldwide last year. Samsung did not respond to a request for comment.


    • Think Android 14 seems boring? Think about this. Wed, 19 Apr 2023 03:00:00 -0700

      Look, I'll be honest: When I first got my grubby gorilla paws on Google's latest and greatest Android version — the hot-off-the-griddle first beta of this year's Android 14 update — two terse words kept ringing in my head:

      That's it?

      Also: Mmm, pancakes. But that's just because of the griddle reference I'd already cooked up in my deeply demented man-brain.

      All flapjackery aside, there's no way around it: On the surface, the inaugural beta of Android 14 isn't exactly exciting. In fact, in most day-to-day use at the moment, you'd be hard-pressed to notice much difference between it and the Android 13 update that came before it.


    • Android, ChromeOS, and Google's cloudy vision for a connected future Wed, 12 Apr 2023 03:00:00 -0700

      Riddle me this, my fellow tech-contemplating primate: What exactly is ChromeOS?

      It's a simple-seeming question that's deceptively difficult to answer. If there's been one near-constant with Google's ChromeOS operating system since its launch nearly 12 years ago, it's the platform's perpetual state of foundational change.

      In its earliest days, ChromeOS represented an ahead-of-its-time vision for an exceptionally focused kind of cloud-centric computing — one in which you quite literally just had a full-screen browser window and nothing else around it. The philosophy was admirable, but the concept was arguably a touch too ambitious and something that went overboard in its drive to strip back our digital domains.


    • A forgotten Android personalization power Thu, 06 Apr 2023 03:00:00 -0700

      Well, gang, it happened. Again.

      I, an alleged human who lives and breathes all things Android, just realized I'd gone months without activating one of the platform's most powerful and pleasant personalization options.

      This time, though, I spotted a pattern.

      Late last year, y'see, I moved into a new Android device — the perky and praiseworthy Pixel 7 Pro. And as part of that migration into a different digital home, I inadvertently left this bit of advanced customization behind.

      It isn't the first time, I now realize. Virtually every time I've moved into a new phone over the past few years, this same maddening slip has transpired. And virtually every time, it takes me anywhere from several weeks to the better part of a year to notice and then gently plop me palm against me noggin in frustration.


    • Got a Chromebook? Get early access to an awesome new interface enhancement Wed, 05 Apr 2023 03:00:00 -0700

      It's easy to overlook, but Google's ChromeOS platform is in a near-constant state of evolution. The software inside Chromebooks gets updates and improvements multiple times a month — more than any other operating system out there — and if you aren't watching closely, you might just miss something significant.

      That, my Chrome-carrying companion, is absolutely the case right now.

      As we speak, Google's in the midst of massaging a near-finished feature that completely changes how notifications are presented on the platform. It's a subtle-seeming shift on the surface, but once you get it up and running on your Chromebook, you'll wonder how you ever managed without it — 'cause it really will shake up the way you work and bring a welcome boost to your big-screen productivity.


    • 2 advanced tools that'll change how you interact with Android apps Fri, 31 Mar 2023 02:45:00 -0700

      Android apps are essentially tools for your phone — right?

      From productivity to privacy and everything in between, having the right set of apps on your phone can make all the difference in the world when it comes to getting stuff done.

      It seems almost silly, then, to talk about tools to manage those tools — Android apps designed specifically to enhance your ability to find and effectively use other Android apps.


    • Tech bigwigs: Hit the brakes on AI rollouts Wed, 29 Mar 2023 10:37:00 -0700

      More than 1,100 technology luminaries, leaders, and scientists have issued a warning against labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a grave threat to humanity.

      In an open letter published by Future of Life Institute, a nonprofit organization with the mission to reduce global catastrophic and existential risks to humanity, Apple co-founder Steve Wozniak and SpaceX and Tesla CEO Elon Musk joined other signatories in agreeing AI poses “profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”


    • 7 Google Play Store secrets for smarter Android app management Wed, 29 Mar 2023 03:00:00 -0700

      When you think about hidden tricks and little-known features on Android, the Google Play Store probably isn't the first place that comes to mind.

      And why would it? The Play Store seems like a simple utility — a place where you go when you've got something you want to download or an app update you're especially eager to seek out.

      But like so many areas of Android, the Google Play Store holds plenty of secrets for faster, smarter, and generally more effective phone maneuvering. Some of 'em are time-saving shortcuts, while others are out-of-sight bits of advanced insight or control over your Android app arsenal.

      All of 'em, though, are things you'll wonder how you remained woefully unaware of all this time — and things that'll make your Android experience meaningfully easier in small but significant measures.



