
Google Fi SIM Card Kit. Choose between the Simply Unlimited, Unlimited Plus and Flexible plans based on your data usage. 4G LTE and nationwide 5G coverage included for compatible phones.

Google LLC is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, a search engine, cloud computing, software, and hardware. Google was launched in September 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California. Some of Google’s products are Google Docs, Google Sheets, Google Slides, Gmail, Google Search, Google Duo, Google Maps, Google Translate, Google Earth, and Google Photos. Play our Pac-Man videogame.

Google began in January 1996 as a research project by Larry Page and Sergey Brin when they were both PhD students at Stanford University in California. The project initially involved an unofficial "third founder", Scott Hassan, the original lead programmer who wrote much of the code for the original Google Search engine, but he left before Google was officially founded as a company. Read the full story...


Google Blog



Google Ads
Many books were created to help people understand how Google works, its corporate culture and how to use its services and products. The following books are available:

• Ultimate Guide to Google Ads
• The Ridiculously Simple Guide to Google Docs: A Practical Guide to Cloud-Based Word Processing
• Mastering Google Adwords: Step-by-Step Instructions for Advertising Your Business (Including Google Analytics)
• Google Classroom: Definitive Guide for Teachers to Learn Everything About Google Classroom and Its Teaching Apps. Tips and Tricks to Improve Lessons’ Quality.
• 3 Months to No.1: The "No-Nonsense" SEO Playbook for Getting Your Website Found on Google
• Google AdSense Made Easy: Monetize Your Website and Blogs Instantly With These Proven Google Adsense Techniques
• Ultimate Guide to Google AdWords: How to Access 100 Million People in 10 Minutes (Ultimate Series)


Google Cloud Blog

  • How Iron Mountain uses Assured Workloads to serve our customers’ compliance needs Fri, 27 Jan 2023 17:00:00 -0000

    Editor’s note: Data storage experts Iron Mountain turned to Google Cloud when they wanted to scale their digital business. David Williams, cloud manager at Iron Mountain, explains in this post how Assured Workloads helped Iron Mountain’s InSight product achieve and maintain compliance with government standards and better protect customer data.  


    Businesses need the right information to make decisions that lead to successful outcomes. With so much new data being generated every day, organizations can benefit greatly from information management services with significant data storage and classification capabilities to secure their data in a way that both optimizes value and maximizes compliance.

    Iron Mountain has been storing data in one way or another since it was founded in 1951. We are trusted guardians for our customers’ information and assets, working with them to manage the complexity and risks of today and tomorrow by understanding, protecting and transforming what matters most. We now store digital and physical assets for more than 225,000 customers around the world.

    Iron Mountain InSight is a content services platform that provides actionable business insights and predictive analytics through machine learning (ML)-based classification of a company’s physical and digital information. InSight adds structure, context and meta-data to information to make it more usable. The resulting enriched content can then enable enhanced automated governance and workflows throughout an organization.

    Regulatory requirements are top of mind for our customers

    Part of our job as a provider of information management solutions is to stay on top of every requirement for storing, protecting, and managing the data of customers across a range of industries in 58 countries. As policies shift, we need to make sure that we are up to date with support for regulatory requirements. New technologies and other innovations are constantly being added to our business environment and we need to ensure that the regulatory compliance process is keeping up with the pace of innovation.


    Google Cloud works with us to help securely manage workloads and meet the requirements of our regulators and customers globally. We chose to deploy with Google Cloud Assured Workloads because it provides us with the security controls we need and helps address a wide range of compliance requirements. Our ability to meet compliance requirements around the globe enables us to grow our business while reducing the overhead and complexity of the multinational compliance process.

    Data residency is a key requirement for us. Assured Workloads allows us to customize and restrict data storage to certain regions, so we know for sure that the data is where it should be. Building on Google Cloud’s default encryption in transit and at rest, it also gives us a robust set of tools to manage our own and our customers’ encryption keys.

    Google Cloud’s global footprint combined with Assured Workloads controls enables us to address compliance at scale. By making use of dedicated folders with specific controls for particular compliance types, there’s a regulated boundary and restricted access where we need it. Assured Workloads allowed for easy repeat deployments while implementing and maintaining tight security controls. It allows us to use the same code base across the entire Google Cloud global platform, including the same services and the same machine learning APIs, so that we can use the latest technology for our customers — without adding more developer or operational teams. 

    Our journey to FedRAMP certification using Assured Workloads

    Security is one of the core things that Iron Mountain is known for. When we started our journey with Iron Mountain InSight, a key to our success was to get our FedRAMP Authority to Operate (ATO) so we could serve U.S. public sector customers with similarly high security requirements.

    We embarked on a 12-month process working through the ISO, SOC, and NIST controls and with each step, we got closer to FedRAMP certification. Partnering with Google Cloud allowed us to scale up faster and enabled us to more quickly achieve FedRAMP certification.

    Google Cloud’s shared fate operating model allowed us to reduce the number of controls we were responsible for to help achieve FedRAMP compliance. We were able to inherit key compliance and security controls that were configured by default, so we could focus on implementing additional controls we needed to support our specific business requirements. With Google Cloud doing the heavy lifting, we could confidently move our federal government business forward while simultaneously strengthening the security posture of our InSight platform.

    The best part is Assured Workloads locks configurations down and eliminates any unwarranted changes to configurations or resources. There’s no room for mistakes in a deployment file, or a manually created resource. Access to support engineers in each geographical region gives peace of mind to us and our customers — and it helped us achieve our FedRAMP ATO in record time.

    Securing the future with Google Cloud

    We are expanding our InSight service to multiple regions around the world. As Google Cloud has offered Assured Workloads in more global regions, including Europe and Canada, we are able to expand along with it. Plans to expand into APAC regions will help us to scale even further globally.

    Compliance can enable your company to grow its business across the globe. Assured Workloads was a starting point for us to enter new regions and scale without the complexities associated with entering a regulated market. It means we can meet multinational compliance needs using a single cloud.

  • Snap maintains uptime and growth with Mission Critical Services Fri, 27 Jan 2023 17:00:00 -0000

    The Snap Inc. visual messaging app, Snapchat, is one of the world's most popular social media apps. In 2011, Snap built the Snapchat platform on Google Cloud, an architecture that let the company focus on its core strengths: advancing its social platform and ecosystem to attract users and business partners at scale. As a technology company, Snap needs its apps to be accessible to users 24x7 around the world. With high reliability and low latency such clear business priorities, resolution must be lightning fast when technical issues arise.

    As Snap continues to grow daily users at a rate of 18% year over year, Google Cloud continues to support Snap's business objectives: ensuring the application infrastructure can scale efficiently to reach global daily active users, and driving the performance and reliability needed to deliver the experiences Snap users have come to expect.

    The challenge: beyond just faster response times

    Any unplanned downtime is difficult, but for a social media company the impact is immediate and devastating. Snap recognized that delivering optimal reliability for its users required a modernized engagement model: proactive monitoring and the fastest possible response time, delivered via a war-room-style ‘live response’ staffed with designated technical experts who are well versed in Snap’s operating environment and empowered to quickly bring in additional engineering resources.

    Snap scales with Mission Critical Services from Google Cloud

    In this two-minute video, Kamran Tirdad, Head of Application Infrastructure at Snap, and Min Lee, TAM Manager at Google Cloud, demonstrate how Google Cloud Mission Critical Services (MCS) effectively addresses Snap’s business objectives.

    The solution: Customer Care for Google Cloud

    Snap began collaborating with the Google Customer Care team to develop a more tailored support approach to meet its prioritized business objectives. Together, Google Cloud and Snap developed a process and assets to enable engineers at both companies to better communicate so they could more quickly and easily resolve issues. Here’s what they did: 

    • Customer Care led the identification of Snap’s critical environment, performed architectural reviews with designated project owners, set up monitoring and alerting, and refined post-onboarding alerts. 

    • Through various reviews and assessments, Customer Care created internal playbooks to ensure engagement teams’ awareness of additional context for Snap’s environment, focused on reducing time to resolution.

    • A regular cadence was established with Customer Care to review business critical alerts (i.e., P0) and expert responses to recommend and make improvements to the environments. 

    The results: Snap meets business objectives

    Snap chose Mission Critical Services (MCS), a value-added service of Premium Support, to achieve proactive system monitoring, which provided a designated team of technical experts ready to engage in response to mission-critical issues within 5 minutes. Due to formalized and established MCS processes, the technical experts have complete access to the Snap environment, architecture and operation details to drive an optimal time to resolution and ongoing system improvements for issue prevention.

    MCS is available for purchase by our Premium Support customers as a consultative offering in which we partner with you on a journey toward enterprise readiness. And what makes MCS truly unique is that it’s built on the same methodologies we use in support of our own Google Cloud infrastructure, including a set of core concepts and methods that our Site Reliability Engineering (SRE) teams have developed over the past two decades. To learn more about MCS, please contact sales.

    Snap has more business-altering details to share about its strategic business improvements and transformation leveraging Google Cloud and Cloud Customer Experience Services. In our next blog, we will cover details about how Snap transformed its business leveraging Cloud Support and Cloud Learning Services.

  • These retailers have one thing in common Thu, 26 Jan 2023 17:00:00 -0000

    They run containers on Google Cloud. Why is that important and why should you care?

    Containers are at the heart of modern applications. According to the 2022 State of DevOps report, containers have been the number one deployment target for primary services and applications for the past two years, and adoption continues to accelerate rapidly. 

    Nowhere is this more evident than in the retail industry. In a world where improving inventory margins and operational efficiency is a top priority, containers allow retail organizations’ IT teams to do just that. 

    Google pioneered container technologies and Google Cloud is optimized for containers. The flexibility and velocity gained by running containers in Google Cloud have been essential for retailers in the application modernization journey. Here are three retailers who run containerized workloads on Google Cloud, and how it helps them stay ahead of the competition: 

    Carrefour
    The Taiwan operation of global retail giant Carrefour chose a combination of Cloud Run and GKE to build on a more flexible and scalable infrastructure, accelerate growth and reduce its operational costs. The retailer now deploys its enterprise resource planning, human resource and supply chain management systems with SAP on Google Cloud, while running its ecommerce platform with containers on Cloud Run and its mobile application servers on Google Kubernetes Engine (GKE).

    L’Oréal
    A legendary leader in beauty, L’Oréal chose to manage the ingestion of diverse datasets into BigQuery with Cloud Run’s managed containers, to meet its system’s scalability, flexibility and portability demands. For the retailer, being able to measure and understand the environmental footprint of its public cloud usage is also an important step to support its sustainability efforts. With Google Cloud Carbon Footprint, L’Oréal can easily see the impact of its sustainable infrastructure approach and architecture principles.

    “Cloud Run gives you infinite possibilities. You can use the library you want, you can use the language you want, and you can have portability.” (L’Oréal)

    Rent the Runway
    Online fashion retailer Rent the Runway chose GKE to improve its reliability and minimize latency for its customers. After a decade of running its operations on hosted data centers and virtual machines, Rent the Runway migrated to GKE to simplify its underlying infrastructure, reduce the amount of time it took IT teams to manage it, and enhance its ability to quickly scale compute and storage up or down depending on the needs of its customers.

    These retailers all have something in common: they chose containers because of the flexibility and portability they unlock. But containers are just a packaging format. These retailers arrived at Google Cloud specifically because we run containers so well — on a variety of platforms.

    First, we designed Cloud Run and GKE to make running containers in the cloud as simple and as fast as possible. Developers want to write code using their preferred language and safely deploy code into production without worrying if their favorite open-source libraries are supported. Second, Google Cloud is optimized for containers. For decades we’ve oriented our infrastructure design patterns and operations practices around high-performance container orchestration. At Google Cloud, developers reap the benefits of this optimization while also being able to easily integrate with the extensive Google Cloud ecosystem (including cloud-native databases, AI/ML APIs and toolkits, and data analytics platform offerings), and with the rich open-source and third-party ecosystem built around containers. Finally, because priorities change, flexibility matters: the same container image can run on GKE or Cloud Run without much effort.
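
    To make that portability concrete, here is a minimal sketch of a container-ready service. It is illustrative only: the handler, the greeting text, and the port handling are assumptions rather than code from any of the retailers above, but an image built from a file like this could be deployed to Cloud Run or to a GKE cluster without code changes.

    # app.py, illustrative only: a minimal container-ready HTTP service.
    # Both Cloud Run and GKE simply run the container and route traffic to a port,
    # so the same image can be deployed to either platform unchanged.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello from a portable container\n")

    if __name__ == "__main__":
        # Cloud Run injects PORT at runtime; on GKE the port comes from the Pod spec.
        port = int(os.environ.get("PORT", "8080"))
        HTTPServer(("0.0.0.0", port), Handler).serve_forever()

    Packaged with a standard Dockerfile and pushed to a registry, the resulting image can be deployed with Cloud Run or referenced from a GKE Deployment manifest.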

    For the retail industry, change happens fast. The industry first experienced the initial shock of e-commerce, and continues to adapt as it navigates new challenges such as COVID-19 and disrupted supply chains. Improving flexibility across all business areas, including how they design and run their applications, is key for retailers to thrive going forward. 

    Get started using Cloud Run and GKE on Google Cloud today.

  • Better together: Looker connector for Looker Studio now generally available Thu, 26 Jan 2023 17:00:00 -0000

    Today’s leading organizations want to ensure their business users get fast access to data with real-time governed metrics, so they can make better business decisions. Last April, we announced our unified BI experience, bringing together both self-serve and governed BI. Now, we are making our Looker connector to Looker Studio generally available, enabling you to access your Looker modeled data in your preferred environment.

    Connecting people to answers quickly and accurately to empower informed decisions is a primary goal for any successful business, and more than ten million users turn to Looker each month to easily explore and visualize their data from hundreds of different data sources. Now you can join the many Google Cloud customers who have benefited from early access to this connector by connecting your data in a few steps.*

    How do I turn on the integration between Looker and Looker Studio?

    You can connect to any Google Cloud-hosted Looker instance immediately after your Looker admin turns on its Looker Studio integration.


    Once the integration is turned on, you create a new Data Source, select the Looker connector, connect to your Looker instance, choose an Explore in your connected Looker instance, and start analyzing your modeled data.


    You can explore your company’s modeled data in the Looker Studio report editor and share results with other users in your organization.

    When can I access the Looker connector?

    The Looker connector is now available for Looker Studio and Looker Studio Pro, which includes expanded enterprise support and compliance.

    Learn more about the connector here.


    * A Google Cloud-hosted Looker instance with Looker 23.0 or higher is required to use the Looker connector. A Looker admin must enable the Looker Studio BI connector before users can access modeled data in Looker Studio.

  • How The Home Depot gets a single pane of glass for metrics across 2,200 stores Thu, 26 Jan 2023 17:00:00 -0000

    “Are we making it easier for customers to buy hammers or are we stuck on toil?” At The Home Depot, part of SRE’s responsibility is to keep our developers focused on building the technologies that make it easier for our customers to buy home improvement goods and services. So we like to use the hammer question as a barometer of whether a process or technology needs to be automated/outsourced, or whether it is something that deserves our attention as engineers.

    We run a highly distributed, hybrid- and multi-cloud IT environment at The Home Depot which connects our stores to our cloud and datacenters. You can read about the transformation that occurred when we switched to BigQuery, making sales forecasting, inventory management, and performance scorecards more effective. However, to collect that data for advanced analytics, our systems need to be up and running. Monitoring the infrastructure and applications that run across all of our environments used to be a complex process. Google Cloud Managed Service for Prometheus helped us pull together metrics, a key component of our observability stack, so we now have a single pane of glass view for our developers, operators, SRE, and security teams.

    Monitoring more than 2,200 stores running bare metal Kubernetes 

    We run our applications in on-prem data centers, the cloud, and at the edge in our stores with a mix of managed and self-managed Kubernetes. In fact, we have bare-metal Kubernetes running at each of our store locations — over 2,200 of them. You can imagine the large number of metrics that we’re dealing with; to give you some sense, if we don’t compress data, egress from each of our stores can run in the 20-30 Mbps range. Managing these metrics quickly became a huge operational burden. In particular, we struggled with:

    • Storage federation: Open-source Prometheus is not built for scale. By default it runs on one machine and stores metrics locally on that machine. As your applications grow, you will quickly exceed the ability of that single machine to scrape and store the metrics. To deal with this, you can start federating your Prometheus metrics, which means aggregating metrics from multiple machines and storing them centrally. We initially tried using Thanos, an open-source solution for aggregating and storing metrics, but it took a lot of engineering resources to maintain.

    • Uptime: As your federation becomes more complex, you need to maintain an ever-increasing infrastructure footprint and be ready to deal with changes to metrics that break the federation structure. Eventually, you have a team that is really good at running a metrics scraping, storage, and querying service. Going back to the question above: as an SRE manager, is running this metrics operation making it easier for customers to buy hammers, or is this operational toil that we need to consider outsourcing?

    Diagram of the IT footprint served by Managed Service for Prometheus for The Home Depot

    For us, the right answer was to simply use a service for all of this and we chose Google Cloud Managed Service for Prometheus. It allows us to keep everything we love about Prometheus including the ecosystem and the flexibility — we can monitor applications, infrastructure, and literally anything else that emits Prometheus-format metrics — while offloading the heavy operational burden of scaling it. 
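
    As a rough illustration of what "emitting Prometheus-format metrics" means in practice, the sketch below uses the open-source prometheus_client library; the metric names and the simulated workload are invented for this example and are not The Home Depot's actual metrics.

    # Illustrative only: a Python service emitting Prometheus-format metrics.
    # Managed Service for Prometheus scrapes the same /metrics endpoint that
    # self-managed Prometheus would, so the application code does not change.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("checkout_requests_total", "Checkout requests handled")
    LATENCY = Histogram("checkout_latency_seconds", "Checkout request latency")

    if __name__ == "__main__":
        start_http_server(9090)  # exposes /metrics for a collector to scrape
        while True:
            with LATENCY.time():
                time.sleep(random.uniform(0.01, 0.2))  # simulate request work
            REQUESTS.inc()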

    Creating a “single pane of glass” for observability at The Home Depot

    Part of what I do as an SRE director is make the developers and operators on my team more effective by providing them processes and tools they can use to make better applications. Our observability stack provides a comprehensive view of logs, metrics, and traces that are connected in such a way that gives us visibility across our IT footprint and the data we need for root cause analysis. 

    A view of the aggregated metrics dashboard for over 2,200 stores used at The Home Depot
    • Logs: We generate a huge amount of logs across our applications and infrastructure and we use BigQuery to store and query them. The powerful search capability of BigQuery makes it easy to pull up stack traces whenever we encounter an exception in our code workflows. 

    • Metrics: We can keep an eye on what is happening in real time across our applications and infrastructure. In addition to the metrics we are all used to, I want to call out exemplars as a particularly useful element of our observability strategy. Exemplars add data, such as a trace ID, to the metrics an application is producing. Without exemplars, you have to investigate issues such as latency through guesswork across different UIs: reviewing a particular timeframe in the metrics, then reviewing the same timeframe in your traces, and inferring that some event happened is inefficient and less precise. (A minimal sketch of the idea follows this list.)

    • Traces: We use OpenTelemetry and OpenTracing to provide visibility into traces and spans so we can create service and application dependency graphs.
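
    Here is the minimal exemplar sketch referenced in the metrics item above. It assumes a recent version of the open-source prometheus_client library, which accepts an optional exemplar argument on observations; the metric name and trace ID are invented for illustration, not taken from The Home Depot's systems.

    # Illustrative sketch: attach a trace ID exemplar to a latency observation,
    # so a spike on a metrics dashboard can link straight to a representative trace.
    # Exemplars are only rendered in the OpenMetrics exposition format.
    from prometheus_client import Histogram

    REQUEST_LATENCY = Histogram("request_latency_seconds", "Request latency")

    def record_request(duration_seconds, trace_id):
        # The exemplar is a small set of labels stored alongside the sample.
        REQUEST_LATENCY.observe(duration_seconds, exemplar={"trace_id": trace_id})

    record_request(0.042, "4bf92f3577b34da6a3ce929d0e0e4736")

    Because the exemplar travels with the sample, a latency spike on a dashboard can link directly to a representative trace instead of requiring a manual timeframe match.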

    What’s next for The Home Depot

    We are working closely with the Google Cloud team to get even more features incorporated into Managed Service for Prometheus to help us round out our single-pane-of-glass goals. Support for exemplars in the managed collector is about to be added by the Google Cloud team and we will incorporate that as soon as it’s ready. Further, they are working to expand PromQL support throughout their Cloud operations suite so that their built-in alerting can use PromQL.

    I am always looking for people who are passionate about Site Reliability Engineering and DevOps, so please take a look at our Home Depot jobs board. And if you want to get more in-depth on this topic, check out the podcast I did with the Google Cloud team.

  • Introducing Spresso: A new solution to balance profit and conversion Thu, 26 Jan 2023 17:00:00 -0000

    When it comes to pricing strategy, balancing profit and conversion has long been a challenge for retailers, and one compounded by today’s rapidly changing commerce landscape. 

    Macroeconomic factors such as rising costs of goods, fluctuating costs of shipping, changing consumer demand, and inflation make keeping up with pricing decisions nearly impossible. These factors, combined with the phasing out of cookies and third-party data tools, leave retailers in a lurch and seeking solutions to combat their real business problems, of which pricing and profitability remain paramount. 

    Introducing Spresso Insights
    Spresso is a set of modular SaaS solutions that leverage machine learning, advanced analytics, and AI to deliver actionable, data-driven insights and better business outcomes. 

    The solutions were developed with retailers at the forefront of the design. In proud partnership with Google Cloud, and available in the GCP Marketplace, Spresso aims to empower retailers to take meaningful action on their most powerful asset, their data. 

    Price optimization, built by retailers for real-world problems
    Spresso’s price optimization solution was created in direct response to retailers’ challenges with pricing.

    The solution offers retailers compelling benefits including:

    • Increased SKU-level profitability without compromising conversion.

    • Use of first-party data, empowering retailers to make proactive pricing decisions based on their consumers’ behavior, not competitors’ pricing.

    • Real-time dynamic pricing, automatically routing site traffic to the top-performing price point, effectively balancing profit with consumers’ willingness to pay, meaning you’re always hedged. 

    In addition to Price Optimization, Spresso’s inaugural marketplace offering includes a Customer Quality and Retention module, designed to help businesses predict lifetime value (LTV) and combat customer churn. Using data and advanced analytics supports the shift to a proactive retention strategy. 

    Whether it be pricing or retention, empowering businesses to adopt proactive strategies is a centralized theme for Spresso. With a stated mission of democratizing advanced analytics through actionable insights, Spresso’s solutions are designed to empower proactive strategies for all facets of business - from marketing to finance, merchandising to data science.

    It is also precisely why Google is excited about the partnership. 

    Insights, ready for action in GCP Marketplace
    The decision to use Google Cloud for Spresso was a simple one. The product suite, security, performance, and scale Google Cloud brings to Spresso are unmatched. 

    For Spresso, being powered by GCP means increased developer velocity, scalability, and unparalleled security. With Google Kubernetes Engine, the Spresso team has been able to focus on the solution instead of cloud management. This has allowed the product to move at the speed of innovation, iterating quickly and launching new products and features as needed. 

    Plus, since Google Kubernetes Engine has automatic cluster management, autoscaling, and built-in security best practices, Spresso is able to provide a robust scalable solution. 

    In addition to Google Kubernetes Engine, the suite of other products that GCP offers, including Pub/Sub, Memorystore, Cloud SQL, Cloud DNS, and Cloud Storage, provides Spresso with a central platform from which to build and scale the product.

    For retailers, Spresso being powered by GCP means lightning-fast optimization and price calls without slowing down their website. Spresso can scale with a retailer’s business so they can focus on acquisition and profitability. Lastly, it means peace of mind, as retailers know their customers’ data is secure.

    As a trusted partner of Google Cloud, Spresso is able to bring advanced analytics to a wide array of retailers who stand to benefit from the solutions. Through Spresso’s listing in the GCP Marketplace, retailers have easy access to the price optimization module, a proven solution to combat pricing woes and drive revenue growth, at their fingertips. 

    Multi-armed bandit algorithm
    The foundation of the price optimization module is a multi-armed bandit (MAB) algorithm. A favorite of innovative, data-driven cultures, multi-armed bandits use machine learning algorithms to dynamically allocate traffic to the best-performing variations and away from underperforming ones. 

    One of the primary reasons MABs are a favorite of data-driven cultures is their ability to optimize for multiple objectives. In the case of Spresso, the Price Optimization solution balances for conversion and profit, according to the goals set within the console.

    Price Optimization Module Explainer

    How it works
    While MABs are beloved by data scientists, one needn’t be a data scientist to reap the benefits of Spresso’s price optimization solution. The Spresso app is designed for all functions of a retail business to navigate with ease and is an especially valuable tool for the merchandising team. They simply set a minimum and maximum price for the product based on their knowledge of the business and specify the desired balance, or goal, between conversion and profit.

    The bandit determines five price points, equally distributed within the specified range, and immediately starts to present those price points equally to consumers on the retailer’s website.

    Then, in real-time, the bandit identifies which price point is performing the best, either converting at the highest rate or generating the most profit, according to the goals set within the app, and begins to drive more site traffic to that price point.

    The bandit is always on, meaning the exploration never stops, keeping a small percentage of traffic at other price points. 

    So, if something happens and consumers are suddenly willing to pay more, the bandit picks up on that change and dynamically shifts, adjusting to the new best-performing price point. 
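
    As a rough sketch of the mechanics described above, the following epsilon-greedy bandit is a generic illustration, not Spresso's actual algorithm; the price range, reward signal, and exploration rate are all assumptions made for the example.

    # Illustrative epsilon-greedy price bandit: five equally spaced price points,
    # most traffic routed to the best performer, a small share kept exploring.
    import random

    def make_arms(min_price, max_price, n=5):
        # Five price points, equally distributed within the specified range.
        step = (max_price - min_price) / (n - 1)
        return [round(min_price + i * step, 2) for i in range(n)]

    class PriceBandit:
        def __init__(self, arms, explore_rate=0.1):
            self.arms = arms
            self.explore_rate = explore_rate        # share of traffic kept exploring
            self.trials = {p: 0 for p in arms}
            self.reward = {p: 0.0 for p in arms}    # e.g. profit per impression

        def choose_price(self):
            # Keep a small percentage of traffic on other price points ("always on").
            if random.random() < self.explore_rate or not any(self.trials.values()):
                return random.choice(self.arms)
            # Otherwise route traffic to the best-performing price point so far.
            return max(self.arms, key=lambda p: self.reward[p] / max(self.trials[p], 1))

        def record(self, price, profit):
            self.trials[price] += 1
            self.reward[price] += profit

    bandit = PriceBandit(make_arms(9.99, 14.99))
    price = bandit.choose_price()       # price shown to the next visitor
    bandit.record(price, profit=2.50)   # observed outcome feeds back into the bandit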

    Furthermore, implementation is straightforward with only three inputs needed - event data, catalog data, and a call to Spresso for pricing.

    Early Spresso customer success stories
    Despite a turbulent macroeconomic climate and decreased site traffic, e-commerce retailer Boxed was able to leverage Spresso insights to net more than $3,000,000 in additional annualized incremental profit by improving margin percentage while maintaining conversion.

    In a recent case study, Boxed used Spresso’s price optimization module for their 550 lowest-margin SKUs. Based on seasonality, they knew traffic would be decreasing on these items and wanted to make up the forecasted impact to their bottom line. The retailer set a goal of increasing profit while holding conversion in the Spresso console. 

    In under two months, the price optimization solution increased the margin on the 550 SKUs from 0.8% to 5.1% while maintaining the same conversion, resulting in millions of dollars in additional annualized revenue for Boxed. 

    “Last quarter, we were able to report an increase of 88% in gross profit, and the Spresso Price Optimization bandit was an integral part of that. Better yet, that profit came without sacrificing conversion or customer experience. With what feels like a simple flip of a switch, we were able to find literally millions of dollars hiding in the business. The ROI of this solution was realized in just a few days - something I haven’t seen with any other solution we’ve invested in,” said Chieh Huang, CEO of Boxed. 

    Alison Weick, President of eCommerce at Boxed agreed, “Spresso’s price optimization solution is my merchandising team’s secret weapon. We’ve been able to break free of manual scraping and use our data to make pricing decisions that have proven to resonate with our customers.”

    Striking the right price
    As the retail industry continues to evolve and grapple with the complexities of pricing strategy, it’s clear that retailers need innovative solutions to help them stay competitive. 

    With Spresso’s price optimization module, retailers can take control of their pricing strategy and drive profitability at the SKU level. Leveraging first-party data and real-time optimization allows retailers to strike the right balance between profit and conversion. 

    And with the added benefits of being a trusted partner of Google Cloud, retailers can trust in the security, performance, and scalability of the Spresso platform. 

    Retailers are invited to try the solution for themselves. To learn more, check out the Spresso GCP Marketplace listing or visit the Spresso website here.

  • How innovative startups are growing their businesses on Google’s open data cloud Wed, 25 Jan 2023 17:00:00 -0000

    Data is one of the most valuable assets for organizations today. It can empower businesses to do incredible things like create better views of health for hospitals, enable people to share timely insights with their colleagues, and — increasingly — be a foundational building block for startups who build their products and businesses in a data cloud.

    Last year, we shared that more than 800 software companies are building their products and businesses with Google’s data cloud. Many of these are fast-growing startups. These companies are creating entirely new products with technologies like AI, ML and data analytics that help their customers turn data into real-time value. In turn, Google’s data cloud and services like Google BigQuery, Cloud Storage, and Vertex AI are helping startups build their own thriving businesses. 

    We’re committed to supporting these innovative, fast-growing startups and helping them grow within our open data cloud ecosystem. That’s why today, I’m excited to share how three innovative data companies - Ocient, SingleStore, and Glean - are now building on Google’s data cloud as they grow in the market and deliver scalable data solutions to more customers around the world.

    Founded in 2016, Ocient is a hyperscale data warehousing and analytics startup that is helping enterprises analyze and gain real-time value from trillions of data records by enabling massively parallelized processing in a matter of seconds. By designing its data warehouse architecture with compute adjacent to storage on NVMe solid state drives, continuous ingest on high-volume data sets, and intra-database ELT and machine learning, Ocient’s technology enables users to transform, load, and analyze otherwise infeasible data queries at 10x-100x the price performance of other cloud data warehouse providers. To help more enterprises scale their data intelligence to drive business growth, Ocient chose to bring its platform to Google Cloud’s flexible and scalable infrastructure earlier this year via Google Cloud Marketplace. In addition to bringing its solution to Google Cloud Marketplace, Ocient is using Google Cloud technologies including Google Cloud Storage for file loading, Google Compute Engine (GCE) for running its managed hyperscale data analytics solutions, and Google Cloud networking tools for scalability, increased security, and for analyzing hyperscale data sets with greater speed. In just three months, Ocient more than doubled its Google Cloud usage in order to support the transformation workloads of enterprises on Google Cloud.

    Another fast-growing company that recently brought its solution to Google Cloud Marketplace to reach more customers on Google Cloud’s scalable, secure, and global infrastructure is SingleStore. Built with developers and database architects in mind, SingleStore helps companies provide low-latency access to large datasets and simplify the development of enterprise applications by bringing transactions and analytics together in a single, unified data engine (SingleStoreDB). SingleStore integrates with Google Cloud services to enable a scalable and highly available implementation. In addition to growing its business by reaching more customers on Google Cloud Marketplace, SingleStore is today announcing the establishment of its go-to-market strategy with Google Cloud, which will further enable it to deliver its database solution to customers around the world.

    I’m also excited to share how Glean is leveraging our solutions to scale its business and support more customers. Founded in 2019, Glean is a powerful, unified search tool built to search across all deployed applications at an organization. Glean’s platform understands context, language, behavior, and relationships, which in turn enables users to find personalized answers to questions, instantly. To achieve this, the Glean team built its enterprise search and knowledge discovery product with Google managed services, including Cloud SQL and Kubernetes, along with Google Cloud solutions like Vertex AI, Dataflow, and BigQuery. By creating its product with technologies from Google Cloud, Glean has the capabilities needed to be agile and iterate quickly. This also gives Glean’s developer team more time to focus on developing the core application aspects of its product, like relevance, performance, ease of use, and delivering a magical search experience to users. To support the growing needs of enterprises and bring its product to more customers at scale, Glean is today announcing its formal partnership with Google Cloud and the availability of its product on Google Cloud Marketplace. 

    We’re proud to support innovative startups with the data cloud capabilities they need to help their customers thrive and to build and grow their own businesses, and we’re committed to providing them with an open and extensible data ecosystem so they can continue helping their customers realize the full value of their data.

  • Manage Kubernetes configuration at scale using the new GitOps observability dashboard Wed, 25 Jan 2023 17:00:00 -0000

    As a Platform Administrator or Operator, you've already been using Config Sync to sync your configurations — deployments, policy definitions, Helm charts, ConfigMaps, and more — consistently across many Kubernetes clusters. But with the excitement of solving one problem comes a new one: real-time visibility into configuration syncs and failures across clusters. When operating at scale, your list of concerns grows longer: Has my configuration synced? Are my resources reconciling? Which of my configuration changes in the cluster is impacting end-user behavior? Today, we're introducing the Config Management Dashboard, which can help you more easily find the answers to these questions and more.

    The new Config Management Dashboard not only helps you quickly identify the configurations and differences between the desired and the actual state, but also keeps track of the clusters that run Config Sync. In this article, you'll learn more about the key components for this new dashboard and how it can help you solve your operational problems. 

    Key components 

    Dashboard: focuses on the overall status of all configurations and resources across one or multiple clusters. The dashboard also provides a quick view of the top concerns within a particular cluster or package.


    Packages: are Git repositories, Helm charts, and OCI registries that contain cluster configurations and resources that are synced across clusters. You can view synced resources and configurations, either by package or cluster. You can also filter your view by sync status, reconcile status, location, and clusters.


    Common operations 

    The new Config Management Dashboard has been designed with common operations in mind that previously could only be done via CLI. Now you can: 

    Easily install Config Sync on one or multiple clusters from the dashboard and track install status on the Settings tab.


    Quickly check the sync status and reconcile status of a particular configuration in a package across one or many clusters from the Packages tab with the help of quick filters. Sync status is the status of the latest sync from the packages such as Git repositories, Helm charts, or OCI registries. Reconcile status is the status of the configuration when deployed to the Kubernetes API by Config Sync.


    Filter issues and identify errors on any resource across clusters by viewing the error messages directly from the Packages tab. Now you can quickly see common errors such as configuration errors and synchronization errors right from the dashboard.


    Glance over a real-time snapshot of the sync status and reconcile status of all the packages, along with overall Config Sync health across clusters right from the dashboard.


    Summary

    Config Management Dashboard will help you and your application teams with some of their critical daily tasks:

    • Review progress of configurations and resources across clusters confidently and ensure consistent cluster behavior. 

    • Quickly identify issues and act accordingly so that value to end-users and service-level objectives (SLOs) are maintained.

    Making the Config Management Dashboard easy to use and responsive to your needs is important to us, and we plan to add more features in the future, including package deployment, rollout management, and notifications. So stay tuned, and we welcome your feedback on the new Config Management Dashboard. You can reach us by clicking the question icon in the top right corner of the Cloud Console and choosing Send feedback.

  • How to get started with the Political Ads Transparency Report dataset Wed, 25 Jan 2023 14:00:00 -0000

    In 2022 we saw several major elections across the globe. And as many countries prepare for elections in 2023 and 2024, we’d like to re-introduce a set of transparency tools that Google provides in the election advertising space. 

    We know that election advertising is an important component of the democratic process – candidates use ads to raise awareness about their campaigns, share information and engage potential voters. And we want people to have confidence in election ads they see on our platforms. That’s why, in 2018, Google launched its Election Advertising Verification Policy, requiring all advertisers who wish to run election ads to complete a verification process. This includes checking and confirming an advertiser’s identity, and ensuring they meet certain eligibility requirements to run elections ads. We currently support election advertising verification in over 35 countries. 

    Using the information collected via this verification process, Google also released its Political Ads Transparency Report, which provides important insights and data about the ads published under our elections ads policy through Google Ads, YouTube and Google Display & Video 360.

    Political Transparency dataset user-friendly interface.

    In this blog post, we will explore the data available in the dataset, how to access it, and provide some examples of how it can be used. By the end of this post, you will have a better understanding of the valuable insights this information can provide.  We encourage all readers to partner with us by using this data to drive greater accountability in the election advertising ecosystem.

    Deeper dive into the dataset

    Let’s dive deeper into what qualifies ads to be included in the Political Ads Transparency Report, and the data that is made available in the dataset. 

    The dataset contains information on election ads run by verified advertisers across Google Ads, YouTube and Google Display & Video 360. The definition of an election ad on Google varies based on the country or region in question. For example, in the United States, advertisers who wish to run ads that mention a US state-level or federal candidate or office holder, political party, or ballot measure must complete our verification process, and the ads will subsequently appear in the Transparency Report. 

    The dataset, which begins in 2018 and retains ads for 7 years, includes the ad creatives themselves, as well as meaningful data on each ad, such as:

    • the audience that was targeted (advertisers can only target election ads on four broad categories: age, gender, contextual and general location)

    • how many users saw the ad, and 

    • the money spent on the ad.

    These features are a prime example of Google's commitment to transparency and accountability in the industry. Not only have journalists and research groups found the Transparency Report’s dataset incredibly useful for analyzing trends in election advertising spending, but this data and the disclosures that accompany it also help empower voters with reliable information about the election ads they may see on Google. 

    Accessing the election ads dataset

    You can access this dataset in two ways: through a user-friendly interface and with SQL using the public dataset available in BigQuery.

    The interface, which was updated in 2022, includes an interactive drop-down tool that allows users to select and filter specific advertisers, candidates, locations, ad formats, and timeframes. It then displays the resulting ads and automatically produces insights in the form of easy-to-understand data visualizations, such as tables and charts. You can also create reports on an advertiser’s  spend over time and spend by location. This can provide valuable insights into who is spending money on election ads and where investments are happening. 

    Using the UI to show top advertisers for ads shown in the US from 05/31/2018 to 12/13/2022

    The BigQuery public dataset allows you to analyze the data using SQL, including the ability to join the data with other public datasets or your own private data loaded in BigQuery. It also opens up the possibility to access the data via the BigQuery API, visualize the data in Looker Studio or Connected Sheets, or integrate the data for any number of purposes across Google Cloud services. 
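
    For example, a minimal sketch of querying the dataset programmatically with the BigQuery Python client could look like the following; the project ID is a placeholder, and the simple advertiser count is only an illustration (any of the SQL shown later in this post could be substituted).

    # Illustrative only: run a query against the public Political Ads dataset
    # using the BigQuery Python client (pip install google-cloud-bigquery).
    from google.cloud import bigquery

    client = bigquery.Client(project="your-project-id")  # placeholder project ID

    sql = """
        SELECT advertiser_name, COUNT(1) AS num_ads
        FROM `bigquery-public-data.google_political_ads.creative_stats`
        GROUP BY advertiser_name
        ORDER BY num_ads DESC
        LIMIT 10
    """

    for row in client.query(sql).result():
        print(row.advertiser_name, row.num_ads)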

    We’ll delve into a few example ways you can search using the UI and access and analyze the data in BigQuery.

    Use cases for the election ads dataset

    For research looking at ad dollars spent by location 

    With the user interface, you can search and analyze ad spending over time in a specific location as broad as an entire country or as granular as a specific state or congressional district. For instance, the UI allows you to search for spending in Alabama from 2018 to 2022. You can see top advertisers and district-by-district spending amounts in Alabama.

    Alabama Political Spending From 2018-2022

    For research looking into targeted political advertisement

    The dataset can also be used to identify political advertisers for a targeted demographic. For example, we can use BigQuery to write a query to identify political advertisers that targeted women in Florida over the 6 weeks leading up to the November 8, 2022 US midterm elections:

    SELECT
      CS.advertiser_name AS Advertiser,
      CONCAT("https://adstransparency.google.com/advertiser/", CS.advertiser_id, "?region=&political=") AS Advertiser_Page,
      COUNT(1) AS Number_of_Political_Ads,
      CS.gender_targeting,
      CS.geo_targeting_included
    FROM
      `bigquery-public-data.google_political_ads.creative_stats` AS CS
    WHERE
      REGEXP_CONTAINS(CS.gender_targeting, r"Female")
      AND REGEXP_CONTAINS(CS.geo_targeting_included, r"Florida")
      AND (CS.date_range_end BETWEEN DATE_SUB(DATE('2022-11-08'), INTERVAL 42 DAY) AND DATE('2022-11-08')
        OR CS.date_range_start BETWEEN DATE_SUB(DATE('2022-11-08'), INTERVAL 42 DAY) AND DATE('2022-11-08'))
    GROUP BY
      1, 2, 4, 5
    ORDER BY
      3 DESC

    After running this query in the BigQuery console, the results show 149 ads, with details including the advertiser’s name, advertiser page, gender and general location targeting of the ad, and the total number of election ads that advertiser ran for this demographic. 

    Query results in the BigQuery Console

    For researching election ads with high impressions

    Another use case is analyzing election ads with large numbers of impressions, for example those that have reached over 1 million impressions. To do this, we can use the following SQL query in BigQuery:

    SELECT
      CS.ad_id,
      CS.impressions AS Impressions,
      CS.advertiser_name AS Advertiser,
      CONCAT("https://adstransparency.google.com/advertiser/", CS.advertiser_id, "/creative/", CS.ad_id, "?political=&region=") AS Creative_Page
    FROM
      `bigquery-public-data.google_political_ads.creative_stats` AS CS
    WHERE
      CS.impressions = "1000000-1250000"
      OR CS.impressions = "1250000-1500000"
      OR CS.impressions = "1500000-1750000"
      OR CS.impressions = "1750000-2000000"
      OR CS.impressions = "2000000-2250000"
      OR CS.impressions = "2250000-2500000"
      OR CS.impressions = "2500000-3000000"
      OR CS.impressions = "3000000-3500000"
      OR CS.impressions = "3500000-4000000"
      OR CS.impressions = "4000000-4500000"
      OR CS.impressions = "4500000-5000000"
      OR CS.impressions = "5000000-6000000"
      OR CS.impressions = "6000000-7000000"
      OR CS.impressions = "7000000-8000000"
      OR CS.impressions = "8000000-9000000"
      OR CS.impressions = "9000000-10000000"
      OR CS.impressions = "≥10000000";

    The query returned over 24,790 ads, and each row contains additional information like the election ad ID (which can be useful for finding the ad in the UI), impressions, advertiser name, and a link to the election ad details. If you click on the link for the election ad details, you’ll see even more information about that specific ad. 

    Results for the query showing ads that reached over 1M impressions, alongside the additional election ad details

    Next steps

    These are just a few of the many ways to use the Political Ads Transparency Report dataset.

    A great way to get familiar with the dataset is through its visual UI; for example, you can check out all the election ads run by a specific advertiser. Then you can start running SQL queries on the BigQuery dataset at no cost by creating a BigQuery sandbox. The BigQuery sandbox enables you to query data within the limits of the free tier without needing a credit card. If you decide to enable billing and go above the free tier threshold, you are subject to regular BigQuery pricing.

    To learn more about the Political Ads Transparency Report dataset, head to the Marketplace listing.


    We would like to thank Juan Uribe (Software Engineer) and Michael Yu (Software Engineer) for their help in creating this blog post.

  • What’s new with Google Cloud Tue, 24 Jan 2023 20:00:00 -0000

    Want to know the latest from Google Cloud? Find it here in one handy location. Check back regularly for our newest updates, announcements, resources, events, learning opportunities, and more. 


    Tip: Not sure where to find what you’re looking for on the Google Cloud blog? Start here: Google Cloud blog 101: Full list of topics, links, and resources.


    Week of Jan 23 - Jan 27, 2023

    • Starting with Anthos version 1.14, Google supports each Anthos minor version for 12 months after the initial release of the minor version, or until the release of the third subsequent minor version, whichever is longer. We plan to have three Anthos minor releases a year, around the months of April, August, and December in 2023, with a monthly patch release (for example, z in version x.y.z) for supported minor versions. For more information, read here.
    • Anthos Policy Controller enables the enforcement of fully programmable policies for your clusters across environments. We are thrilled to announce the launch of our new built-in Policy Controller Dashboard, a powerful tool that makes it easy to manage and monitor the policy guardrails applied to your fleet of clusters. New policy bundles are available to help audit your cluster resources against Kubernetes standards, industry standards, or Google-recommended best practices. The easiest way to get started with Anthos Policy Controller is to install Policy Controller and try applying a policy bundle to audit your fleet of clusters against a standard such as the CIS benchmark.
    • Dataproc is an important service in any data lake modernization effort. Many customers begin their journey to the cloud by migrating their Hadoop workloads to Dataproc and continue to modernize their solutions by incorporating the full suite of Google Cloud’s data offerings. Check out this guide that demonstrates how you can optimize Dataproc job stability, performance, and cost-effectiveness.
    • Eventarc adds support for 85+ new direct events from the following Google services in Preview: API Gateway, Apigee Registry, BeyondCorp, Certificate Manager, Cloud Data Fusion, Cloud Functions, Cloud Memorystore for Memcached, Database Migration, Datastream, Eventarc, Workflows. This brings the total pre-integrated events offered in Eventarc to over 4000 events from 140+ Google services and third-party SaaS vendors.
    •  mFit 1.14.0 release adds support for JBoss and Apache workloads by including fit analysis and framework analytics for these workload types in the assessment report. See the release notes for important bug fixes and enhancements.

    Week of Jan 17 - Jan 20, 2023

    • Cloud CDN now supports private origin authentication for Amazon Simple Storage Service (Amazon S3) buckets and compatible object stores in Preview. This capability improves security by allowing only trusted connections to access the content on your private origins and preventing users from directly accessing it.

    Week of Jan 9 - Jan 13, 2023

    • Revionics partnered with Google Cloud to build a data-driven pricing platform for speed, scale and automation with BigQuery, Looker and more. As part of the Built with BigQuery program, this blog post describes the use cases, problems solved, solution architecture and key outcomes of hosting Revionics product, Platform Built for Change on Google Cloud.
    • A comprehensive guide for designing reliable infrastructure for your workloads in Google Cloud. The guide combines industry-leading reliability best practices with the knowledge and deep expertise of reliability engineers across Google. Understand the platform-level reliability capabilities of Google Cloud, the building blocks of reliability in Google Cloud, and how these building blocks affect the availability of your cloud resources. Review guidelines for assessing the reliability requirements of your cloud workloads. Compare architectural options for deploying distributed and redundant resources across Google Cloud locations, and learn how to manage traffic and load for distributed deployments. Read the full blog here.
    • GPU Pods on GKE Autopilot are now generally available. Customers can now run ML training, inference, video encoding and all other workloads that need a GPU, with the convenience of GKE Autopilot’s fully-managed Kubernetes environment.
    • Kubernetes v1.26 is now generally available on GKE. GKE customers can now take advantage of the many new features in this exciting release. This release continues Google Cloud’s goal of making Kubernetes releases available to Google customers within 30 days of the Kubernetes OSS release.
    • Event-driven transfer for Cloud Storage: Customers have told us they need an asynchronous, scalable service to replicate data between Cloud Storage buckets for a variety of use cases, including aggregating data in a single bucket for data processing and analysis, keeping buckets across projects/regions/continents in sync, etc. Google Cloud now offers Preview support for event-driven transfer - a serverless, real-time replication capability to move data from AWS S3 to Cloud Storage and copy data between multiple Cloud Storage buckets. Read the full blog here.
    • Pub/Sub Lite now offers export subscriptions to Pub/Sub. This new subscription type writes Lite messages directly to Pub/Sub - no code development or Dataflow jobs needed. Great for connecting disparate data pipelines and migration from Lite to Pub/Sub. See here for documentation.

    • Looking back at Retail’s Big Show: Google at NRF 2023 Tue, 24 Jan 2023 17:00:00 -0000

      Enabling retailers and brands to use technology to discover new opportunities for growth, innovation and productivity is one of our greatest passions here at Google Cloud. Every year, we look forward to the National Retail Federation (NRF) annual conference and expo, and Retail’s Big Show did not disappoint.

      To celebrate NRF 2023 being back in full swing, we published a number of posts sharing product announcements, customer stories, and best practices - all with a focus on shining a spotlight on technology that is helping shape the future of retail. Whether you are looking to transform the retail shopping experience by making it more personalized and frictionless or secure your omnichannel footprint, we have you covered!

      To keep things simple (just the way we like it), we’ll be highlighting each post here in one place so you can bookmark it for later. We’ll continue to update this post as more content is published, so check back for our latest insights, lessons, and stories from the retail event of the year.

      • A big moment for retail: At NRF, marrying physical and digital shopping like never before (https://cloud.google.com/blog/transform/nrf-2023-google-cloud-big-show-big-moment-hybrid-retail)
      • Google Cloud and Deloitte Boost Grocery Associate Productivity and Improve the Customer Experience (https://www.googlecloudpresscorner.com/2023-01-20-Google-Cloud-and-Deloitte-Boost-Grocery-Associate-Productivity-and-Improve-the-Customer-Experience)
      • The search bar got a workout on Black Friday 2022 (https://cloud.google.com/blog/topics/retail/lucidworks-reveals-cyber-five-shopping-trends-for-2023)
      • How a multinational CPG brand clocked a 90-day time-to-value with Lytics and Google Cloud (https://cloud.google.com/blog/topics/retail/leading-cpg-brand-breaks-down-data-silos)
      • Automating the retail customer journey through technology (https://cloud.google.com/blog/topics/retail/how-to-create-value-throughout-the-retail-customer-journey)
      • Solving the biggest global retail challenges with Google Cloud and Teamwork Commerce (https://cloud.google.com/blog/topics/retail/teamwork-commerce-delivers-seamless-shopping-with-google-cloud)
      • Drive operational impact in retail cloud applications using Address Validation (https://cloud.google.com/blog/products/application-modernization/address-validation-using-google-maps-api-in-ecommerce)
      • How Palo Alto Networks and Google Cloud help secure the future of omnichannel retail (https://cloud.google.com/blog/topics/retail/keeping-omnichannel-retail-secure-with-panw-and-google-cloud)
      • L’Oreal enables global developer workforce with secure cloud development environments (https://cloud.google.com/blog/topics/retail/loreal-increased-developer-productivity-with-cloud-workstations)
      • Why retailers choose to build on Kubernetes (https://cloud.google.com/blog/topics/retail/how-kubernetes-is-enabling-digital-transformation-for-retailers)
      • These retailers have one thing in common (https://cloud.google.com/blog/topics/retail/three-retailers-that-run-containers-on-google-cloud)
      • How The Home Depot gets a single pane of glass for metrics across 2,200 stores (https://cloud.google.com/blog/products/devops-sre/how-the-home-depot-uses-a-managed-service-for-prometheus)
      • Introducing Spresso: A new solution to balance profit and conversion (https://cloud.google.com/blog/topics/retail/introducing-spresso-a-new-solution-to-balance-profit-and-conversion)

      We hope you’re as inspired as we are by these stories! Stay tuned for more stories, best practices, and tips and tricks from the partners, customers, and communities across our retail and CPG ecosystem.

    • L’Oreal enables a global developer workforce with secure cloud development environments Tue, 24 Jan 2023 17:00:00 -0000

      With many dramatic shifts in the world of retail over the past two decades — from the explosion of e-commerce to the COVID pandemic — digitalization is no longer just a strategic choice for companies in the retail and consumer packaged goods (CPG) industry, but a necessity for survival. In this blog, we discuss how the world’s leading cosmetic company, L'Oréal, uses Google Cloud solutions like Cloud Workstations to accelerate digitalization, empowering global developers with increased productivity and security.

      Retail digitalization calls for increased developer productivity

      Developers are a key element in enabling a digital-first strategy and shortening time-to-market. However, developers often have heavy workloads and are under a lot of pressure to deliver. A recent study shows that 94% of ecommerce developers took on additional work during the pandemic, which is likely to remain or even increase post-pandemic. This makes increasing developer productivity a key agenda item for many companies. Instead of developers spending time on peripheral tasks, you want to give them the right resources so they can focus on writing core business logic and driving the bottom line. According to a McKinsey study, retailers with higher developer productivity can increase revenue up to four times faster than their peers. 

      However, increasing developer productivity is neither easy nor straightforward. There are many factors that can hold developers back, for example, prolonged onboarding, unnecessary friction to accessing the right tools, inconsistency between environments, and security-related disruptions. Moreover, the rise of the remote workforce, and the increasing number of software supply chain security attacks early at the development stage, add to the complexity. 

      The developer productivity challenge

      As the world’s largest cosmetics company, L’Oréal manufactures and sells beauty and hair products across 150 countries through e-commerce, travel retail establishments and physical stores, and has 35 global beauty brands under management and more than 85,000 employees. 

      As a global leader in the beauty business, L’Oreal operates at the forefront of digitalization, using advanced IT technologies. Partnering with Google Cloud, we built our Beauty Tech Data Platform, a next-generation data platform that delivers data products “as a service” to empower decision-making with instant, sophisticated analysis using big data and serverless technologies.

      We have hundreds of developers working on this platform, across many different countries in many different teams, collaborating on different projects, while trying to share the same way of working.


      Early on, many problems surfaced with the existing development environment set-up, and our data team realized that we needed a better solution to make work more efficient for developers. 

      First of all, developers coded on local physical laptops where all the files were stored, which made setting up new environments very time-consuming and error-prone. Developers had to install many things, such as code editors, libraries and utilities on their laptops before they could start coding. The physical laptop became a single point of failure, and could lead to potential security risks like code exfiltration — unauthorized code transfer from the computer. 

      Also, developer teams worked in different ways at different speeds with different levels of maturity, which made the code rationalization super time-consuming at the final stage. In addition, cost management was a big headache since teams all used their own development solutions that were licensed differently. On top of all this, it was almost impossible to maintain a consistent security posture across the board: teams used disparate tools with different security features, making it very difficult to configure security to the same level.

      We started to look for a solution that could help us break these silos and increase developer productivity and security as a whole. 

      Google Cloud’s solution

      While searching for a solution, our goal was to enable developers to work anywhere, anytime on any device in a consistent, efficient and secure manner. With such a bold vision, we partnered with Google Cloud again, this time, with Cloud Workstations.

      Cloud Workstations is a key part of Google Cloud’s Software Delivery Shield, and is focused on accelerating developer onboarding and increasing developer productivity in a secure manner. It provides fully-managed, cloud-based development environments with advanced security features, support for multiple integrated development environments (IDEs), customizable development environments, and many popular developer tools, addressing the needs of enterprise developer teams like L’Oreal’s data team.

      With Cloud Workstations, developer onboarding is now measured in days instead of weeks or even months. Deploying a new development environment is as simple as clicking a button. Within just a few minutes, a brand new development environment is ready to go in the cloud. With a cloud-based solution, there is no longer the need to store code on developers’ local laptops. They can access fast development environments anytime via a browser or from their preferred local IDE, no matter where they are located. 

      In addition, development environments can be pre-configured consistently across global teams with commonly used development tooling and the same level of security configuration. Compliance is no longer a goal that is hard to achieve. Cloud Workstations enables us to enforce security configurations and policy controls consistently across various teams with features such as VPC Service Controls, IAM, and private ingress/egress. Updating or patching hundreds of developer environments is also made simple. The platform team centrally updates the workstation image, and the Cloud Workstations service handles all the updates on the individual workstations in a fast and scalable manner.

      While providing a single solution for global developer teams to work in a consistent and efficient manner, Cloud Workstations also offers flexibility and customization to accommodate teams’ different needs. It supports specific environment profiles such that, for example, frontend and backend developers can request workstations with different CPU, RAM or storage settings according to their specific needs. It also supports multiple popular IDEs such as IntelliJ IDEA, PyCharm, Rider, Code-OSS, and CLion, as well as popular developer tools, so developers can choose familiar tools for faster coding.

      “Cloud Workstations removes the technical barriers by providing a powerful and scalable solution for all the developers we have across the world.” — Sebastien Morand, Head of Data Engineering, L’Oréal

      Summary

      Developer productivity is key to a successful digital transformation. The traditional model of development on physical machines not only negatively impacts developer productivity, but also poses security risks. Cloud-based development environment solutions like Cloud Workstations enable our bold vision for our developers, allowing them to work anywhere, anytime on any device in a consistent, efficient and secure manner. 

      Learn more about Cloud Workstations and try it today.

    • How Confidential Space and multi-party computation can help manage digital assets more securely and efficiently Tue, 24 Jan 2023 17:00:00 -0000

      Managing digital asset transactions and their often-competing requirements to be secure and timely can be daunting. Human errors can lead to millions in assets being instantly lost, especially when managing your own encryption keys. This is where multi-party computation (MPC) can help reduce risk stemming from single points of compromise and facilitate instant, policy-compliant transactions. MPC has proven valuable to help secure digital asset transactions because it can simplify the user experience, and it can create operational efficiencies, while users retain control over their private keys. 

      Google Cloud customers can implement MPC solutions with our new Confidential Space, which we introduced at Google Cloud Next in October. MPC enabled by Confidential Space can offer many benefits to safely manage and instantly transact digital assets:

      • Digital assets can be held online without requiring cold storage.

      • You can use an institutional-grade custody solution without having to give up control of your private keys.  

      • Distributed parties can participate in a signing process that is both auditable and policy-compliant.

      • All parties can produce their signatures while not exposing secret material to other parties, including the MPC platform operator.

      An individual private key represents a single point of failure in the digital asset custody and signing process. In an MPC-compliant model, an individual private key is replaced with distributed key shares. Each key shareholder collaborates to sign a transaction, and all actions performed by all parties are logged for offline auditing. No key holder exposes their key share to another key holder or to the platform operator. Unlike with multi-signature schemes, a single private key is never assembled or stored anywhere.
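
      To build intuition for why distributed key shares remove the single point of failure, here is a toy Python sketch of additive secret sharing: a key is split into random shares that individually reveal nothing and only reconstruct the key when all of them are combined. This is an illustrative simplification only, not the threshold-signature protocol used by Confidential Space or any particular MPC platform, and the field modulus and share count below are assumptions.

      import secrets

      # Toy illustration of additive secret sharing; real MPC custody systems use
      # threshold signature schemes (for example, threshold ECDSA), not this construction.
      PRIME = 2**255 - 19  # assumed field modulus for this sketch

      def split_key(secret_key, num_shares=3):
          """Split a secret into shares that sum to the secret mod PRIME."""
          shares = [secrets.randbelow(PRIME) for _ in range(num_shares - 1)]
          shares.append((secret_key - sum(shares)) % PRIME)
          return shares

      def reconstruct(shares):
          """Only the combination of all shares recovers the secret."""
          return sum(shares) % PRIME

      key = secrets.randbelow(PRIME)          # stand-in for a signing key
      shares = split_key(key, num_shares=3)   # one share per distributed party
      assert reconstruct(shares) == key       # all parties must cooperate to "sign"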

      Figure 1 - Multi-Party Computation for transacting digital assets.

      An attacker coming from outside the organization would need to compromise multiple parties across multiple distributed operating environments in order to get access to a key that can sign a transaction. MPC is resistant to insider attacks against the platform operator or key holders because no single key can sign a transaction and the operator cannot access the key. Since multiple parties must come together to approve and sign each transaction, MPC-based digital asset custody solutions can better facilitate governance. The solutions provide the ability to create and enforce policies that control who must approve transactions. This prevents any single malicious insider from stealing assets, including the party that owns the workload or a workload operator.

      Because Confidential Space is built on our Confidential Computing platform, it leverages remote attestation and AMD’s Secure Encrypted Virtualization (SEV). This allows us to offer a more secure environment, fast performance, and seamless workload portability. This foundation can enable the MPC operator and co-signer workloads to run in a Trusted Execution Environment (TEE). Co-signers can have control over how their keys are used and which workloads are authorized to act on them. Finally, with the hardened version of Container-Optimized OS (COS), Confidential Space blocks the workload operator from influencing the signing workload.

      Deploying MPC on Confidential Space provides the following differentiated benefits:

      • Isolation: Ensures that external parties cannot interfere with the execution of the transaction signing process.

      • Confidentiality: Ensures that the MPC platform operator has no ability to access the key material.

      • Verifiable attestations: Allows co-signers to verify the identity and integrity of the MPC operator’s workload before providing a signature.

      “MPC solutions will become increasingly essential as blockchains continue to support more critical infrastructure within the global financial system,” said Jack Zampolin, CEO of Strangelove Labs. “As a core developer building and hosting critical infrastructure in the rapidly growing Cosmos ecosystem, MPC-compliant systems are an important focus area for Strangelove. We are excited to expand our relationship with Google Cloud by building out key management integrations with our highly available threshold signer, Horcrux.”

      In 2022 the Web3 community celebrated the Ethereum merge, one of several engineering advancements that can encourage applications of MPC. For example, MPC could be used for the efficient management of Ethereum validator keys. To learn more about MPC and Web3 with Google Cloud, please reach out to your account team. If you’d like to try Confidential Space, you can take it for a spin today.


      We’d like to thank Atul Luykx and Ross Nicoll, software engineers, and Nelly Porter and Rene Kolga, product managers, for their contributions to this post.

    • Transforming customer experiences with modern cloud database capabilities Tue, 24 Jan 2023 17:00:00 -0000

      Editor’s note: Six customers, across a range of industries, share their success stories with Google Cloud databases.


      From professional sports leagues to kidney care and digital commerce, Google Cloud databases enable organizations to develop radically transformative experiences for their users. The stories of how Google Cloud databases have helped Box, Credit Karma, DaVita, Forbes, MLB, and PLAID build data-driven applications are truly remarkable - from unifying data lifecycles for intelligent applications to reducing, and even eliminating, operational burden. Here are some of the key stories that customers shared at Google Cloud Next.

      Box modernizes its NoSQL databases with zero downtime with Bigtable   

      A content cloud, Box enables users to securely create, share, co-edit, and retain their content online. While moving its core infrastructure from on-premises data centers to the cloud, Box chose to migrate its NoSQL infrastructure to Cloud Bigtable. To fulfill the company’s user request needs, the NoSQL infrastructure has latency requirements measured in tens of milliseconds. "File metadata like location, size, and more, are stored in a NoSQL table and accessed at every download. This table is about 150 terabytes in size and spans over 600 billion rows. Hosting this on Bigtable removes the operational burden of infrastructure management. Using Bigtable, Box gains automatic replication with eventual consistency, an HBase-compliant library, and managed backup and restore features to support critical data." Axatha Jayadev Jalimarada, Staff Software Engineer at Box, was enthusiastic about these Bigtable benefits, “We no longer need manual interventions by SREs to scale our clusters, and that's been a huge operational relief. We see around 80 millisecond latencies to Bigtable from our on-prem services. We see sub-20 millisecond latencies from our Google Cloud resident services, especially when the Bigtable cluster is in the same region. Finally, most of our big NoSQL use cases have been migrated to Bigtable and I'm happy to report that some have been successfully running for over a year now.”

      Axatha Jayadev Jalimarada walks through “how Box modernized their NoSQL databases with minimal effort and downtime” with Jordan Hambleton, Bigtable Solutions Architect at Google Cloud.
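
      As a rough illustration of the kind of low-latency point lookups described above, the sketch below reads a single row with the Cloud Bigtable Python client. The project, instance, table, and row-key names are placeholders for illustration only and do not reflect Box's actual schema.

      from google.cloud import bigtable

      # Hypothetical resource names, for illustration only.
      client = bigtable.Client(project="my-project")
      instance = client.instance("metadata-instance")
      table = instance.table("file-metadata")

      # A point read of one row key: the access pattern used for per-download metadata lookups.
      row = table.read_row(b"file#1234567890")
      if row is not None:
          for family, columns in row.cells.items():
              for qualifier, cells in columns.items():
                  print(family, qualifier.decode(), cells[0].value)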

      Credit Karma deploys models faster with Cloud Bigtable and BigQuery

      Credit Karma, a consumer technology platform helping consumers in the US, UK and Canada make financial progress, is reliant on its data models and systems to deliver a personalized experience for its nearly 130 million members. Given its scale, Credit Karma recognized the need to cater to the growing volume, complexity, and speed of data, and began moving its technology stack to Google Cloud in 2016. 

      Using Cloud Bigtable and BigQuery, Credit Karma registered a 7x increase in the number of pre-migration experiments, and began deploying 700 models per week compared to 10 per quarter. Additionally, Credit Karma was able to push recommendations through its model scoring service built on a reverse extract, transform, load (ETL) process on BigQuery, Cloud Bigtable, and Google Kubernetes Engine. Powering Credit Karma’s recommendations are machine learning models at scale — the team runs about 58 billion model predictions each day.

      Looking to learn “what’s next for engineers”? Check out the conversation between Scott Wong and Andi Gutmans, General Manager and Vice President of Engineering for Databases at Google.
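
      For readers curious what the BigQuery side of such a reverse ETL flow might look like, here is a minimal, generic sketch using the BigQuery Python client. The dataset, table, and column names are hypothetical and do not describe Credit Karma's actual pipeline.

      from google.cloud import bigquery

      client = bigquery.Client()  # project and credentials come from the environment

      # Hypothetical query: pull freshly scored recommendations for downstream serving.
      query = """
          SELECT member_id, model_id, score
          FROM `my_project.recommendations.daily_scores`
          WHERE score_date = CURRENT_DATE()
      """

      for row in client.query(query).result():
          # Each row can be accessed like a mapping of column names to values.
          print(row["member_id"], row["model_id"], row["score"])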

      DaVita leverages Spanner and BigQuery to centralize health data and analytics for clinician enablement

      As a leading global kidney care company, DaVita spans the gamut of kidney care from chronic kidney disease to transplants. As part of its digital transformation strategy, DaVita was looking to centralize all electronic health records (EHRs) and related care activities into a single system that would not only embed workflows, but also save clinicians time and enable them to focus on their core competencies. Jay Richardson, VP, Application Development at DaVita, spoke to the magnitude of the task: “Creating a seamless, real-time data flow across 600,000 treatments on 200,000 patients and 45,000 clinicians was a tall engineering order.” The architecture was set up with Cloud Spanner housing all the EHRs and related care activities, and BigQuery handling the analytics. Spanner change streams replicated data changes to BigQuery with a 75 percent reduction in replication time (from 60 seconds to 15), enabling both simplification of the integration process and a highly scalable solution. DaVita also gained deep, relevant insights (about 200,000 a day) and full aggregation for key patient meds and labs data. This helps equip physicians with additional tools to care for their patients, without inundating them with numbers.

      Jerene Yang, Senior Software Engineering Manager at Google Cloud, helps to “see the whole picture by unifying operational data with analytics” with Jay Richardson.

      Forbes fires up digital transformation with Firestore

      A leading media and information company, Forbes is plugged into an ecosystem of about 140 million employees, contributors, and readers across the globe. It recently underwent a successful digital transformation effort to support its rapidly scaling business. This included a swift, six-month migration to Google Cloud, and integration with the full suite of Google Cloud products from BigQuery to Firestore, a NoSQL document database. Speaking of Firestore, Vadim Supitskiy, Chief Digital & Information Officer at Forbes, explained, “We love that it's a managed service, we do not want to be in the business of managing databases. It has a flexible document model, which makes it very easy for developers to use and it integrates really, really, well with the products that GCP has to offer.” Firestore powers the Forbes insights and analytics platform to give its journalists and contributors comprehensive, real-time suggestions that help content creators author relevant content, and analytics to assess the performance of published articles. At the backend, Firestore seamlessly integrates with Firebase Auth, Google Kubernetes Engine, Cloud Functions, BigQuery, and Google Analytics, while reducing maintenance overhead. As a cloud-native database that requires no configuration or management, it is inexpensive for storing data and executes low-latency queries.
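
      To make the flexible document model concrete, here is a small, generic sketch using the Firestore Python client. The collection and field names are hypothetical and not Forbes' actual data model.

      from google.cloud import firestore

      db = firestore.Client()  # Firestore in Native mode; project from the environment

      # Documents are schemaless maps, so fields can evolve without migrations.
      article = db.collection("articles").document("example-article-id")
      article.set({
          "title": "Example headline",
          "author": "Jane Doe",
          "published": True,
          "pageviews": 0,
      })

      # Atomically increment a counter as analytics events arrive.
      article.update({"pageviews": firestore.Increment(1)})

      # Query a subset of documents by field value.
      for doc in db.collection("articles").where("published", "==", True).limit(10).stream():
          print(doc.id, doc.to_dict().get("title"))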

      Minh Nguyen, Senior Product Manager at Google Cloud, discusses “serverless application development with a document database” with Vadim Supitskiy here.

      MLB hits a home run by moving to Cloud SQL

      When you think of Major League Baseball (MLB), you think of star players and home runs. But as Joseph Zirilli, senior software engineer at MLB, explained, behind-the-scenes technology is critical to the game, whether it is the TV streaming service or the on-field technology that captures statistics data. And that’s a heavy lift, especially when MLB was running its player scouting and management system for player transactions on a legacy, on-premises database. This, in combination with the limitations of conventional licensing, was adversely impacting the business. The lack of in-house expertise in the legacy database, coupled with its small team size, made routine tasks challenging.

      Having initiated the move to Google Cloud a few years ago, MLB was already using Cloud SQL for some of its newer products. It was also looking to standardize its relational database management system on PostgreSQL so it could build in-house expertise around a single database. The team selected Cloud SQL, which supported their needs and also offered high availability and automation.

      Today, with drastically improved database performance and automatic rightsizing of database instances, MLB is looking forward to keeping its operational costs low and hitting it out of the park for fan experience.

      Sujatha Mandava, Director, Product Management, SQL Databases at Google Cloud, and Joseph Zirilli discuss “why now is the time to migrate your apps to managed databases”.
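
      For teams standardizing on PostgreSQL in Cloud SQL, an application-side connection typically looks something like the sketch below, using the open-source Cloud SQL Python Connector with SQLAlchemy. The instance connection name and credentials are placeholders; this is a generic example, not MLB's implementation.

      import sqlalchemy
      from google.cloud.sql.connector import Connector

      connector = Connector()

      def getconn():
          # "project:region:instance" below is a placeholder instance connection name.
          return connector.connect(
              "my-project:us-central1:my-postgres-instance",
              "pg8000",
              user="app_user",
              password="change-me",
              db="appdb",
          )

      # A SQLAlchemy engine that obtains connections through the connector.
      pool = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)

      with pool.connect() as conn:
          print(conn.execute(sqlalchemy.text("SELECT version()")).scalar())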

      Major League Baseball trademarks and copyrights are used with permission of Major League Baseball. Visit MLB.com.

      PLAID allies with AlloyDB to enhance the KARTE website and native app experience for customer engagement

      PLAID, a Tokyo-based startup, hosts KARTE, an engagement platform focused on customer experience that tracks the customer in real time, supports flexible interactions, and provides wide analytics functionality. To support hybrid transactional and analytical processing (HTAP) at scale, KARTE was using a combination of BigQuery, Bigtable, and Spanner in the backend. This enabled KARTE to process over 100,000 transactions per second and store over 10 petabytes of data. Adding AlloyDB for PostgreSQL to the mix has given KARTE the ability to answer flexible analytical queries. In addition to the range of queries that KARTE can now handle, AlloyDB has brought expanded capacity with low-latency analysis in a simplified system. As Yuki Makino, CTO at PLAID, pointed out, "With the current (columnar) engine and AlloyDB, performance is about 100 times faster than earlier."

      Yuki Makino, in conversation with Sandy Ghai, Product Manager at Google Cloud, says "goodbye, expensive legacy database, hello next-gen PostgreSQL database" here.

      Implement a modern database strategy

      Transformation hinges on new cloud database capabilities. Whether you want to increase your agility and pace of innovation, better manage your costs, or entirely shut down data centers, we can help you accelerate your move to cloud. From integration into a connected environment, to disruption-free migration, and automation to free up developers for creative work, Google Cloud databases offer unified, open, and intelligent building blocks to enable a modern database strategy.

      Download the complimentary 2022 Gartner Magic Quadrant for Cloud Database Management Systems report. 

      Learn more about Google Cloud databases.

      Learn why customers choose Google Cloud databases in this e-book.

    • How to plan your SQL Server migration to Cloud SQL Tue, 24 Jan 2023 17:00:00 -0000

      A SQL Server DBA has many options for transferring data from one SQL Server instance to a new environment, and those options can be overwhelming. This blog aims to help you decide which option to choose for your specific migration scenario. We’ll walk through several steps for deciding on a migration plan:

      1. Evaluating your application and database migration requirements

      2. Migration approaches - continuous vs one-time

      3. Deep-dive on different migration paths

      Evaluating your application and database needs for migration

      We first need to define the different application-specific factors that will help us with choosing a migration approach. Several common questions customers consider:

      • How long will migration take?

      • Will my apps continue to work during the migration?

      • How complex is the support of such migration and how easy is the rollback process?

      To answer these questions, you need to evaluate your application and database to gain a better understanding of the migration process before making a decision:

      • What is your Downtime Tolerance?
        Some applications have well defined Change Request schedules, which can be used for the migration, while others are developed to run 24/7 with high uptime. Knowing the acceptable downtime will allow you to weigh the complexity of the continuous migration options with the simplicity of one-time approaches.

      • How big is your database?
        Migrating large databases may pose additional challenges, such as prolonged, increased resource utilization on on-premises servers to support the migration, or deciding how to deliver database snapshots in Transactional Replication. Transfer rates that look simple on the surface become less simple once you account for the challenges of uploading multi-terabyte backups to the cloud.

      • What is the daily size of updates to your database?
        The size of the daily updates and the net change of those updates can both have a major impact on the decision between one-time and continuous migration approaches. For example, if the net database size is smaller than the log of all changes (because your workload updates a significant part of the database with multiple changes to the same set of rows, or follows a wipe-and-load data refresh strategy), you can schedule a series of one-time migrations instead of a continuous migration. On the other hand, if changes are limited and arrive over an extended time, you may want to consider an online migration approach.

      Migration approaches

      Migration approaches fall into two buckets: one-time migrations or continuous migrations.  In one-time migrations, you are taking a copy of your source database, transferring it to your destination instance, and then switching over your application to point to the new instance.  In continuous migrations, data is copied from your source instance to your destination instance on an ongoing basis - starting with an initial data load - and the application(s) may gradually switch over days, weeks, or months later.

      Depending on your downtime tolerance, or if you have an infrequently updated database, you may choose to go with a one-time migration. This approach offers several options with different levels of process complexity, from the least complex (importing a database backup) to moderately complex (Snapshot Replication).

      If there is a significant number of daily changes in your database and a requirement for minimal downtime, continuous migration options are likely better for your application. Continuous migration scenarios are usually based on data replication technologies supported by SQL Server, which include Transactional, Merge, peer-to-peer, and bidirectional replication, and are built on the SQL Server Agent, Snapshot, Log Reader, and Distribution Agents. Other replication technologies may leverage Change Data Capture (CDC), Change Tracking, or even custom triggers to capture and store incremental data changes, combining this with their own delivery mechanisms.

      Cloud SQL for SQL Server supports Push Transactional Replication, which we will explore in more detail in Part 2, along with CDC-based migration tools offered by Google Cloud partners, which you can find in the Google Cloud Marketplace. 

      One-time migration

      [Diagram: Import DB backup]

      One of the simplest ways to migrate your database to Cloud SQL is to import it from a backup. This approach is suitable for any database size, and if you are focused on a one-time migration, importing from a backup becomes increasingly appealing as the instance size grows, mostly because of how this option performs compared to the options described below when working with huge amounts of data. Striped backups should be used for databases larger than 5 TB due to file size limitations.

      [Diagram: BCP migration]

      Another option, while slower, may benefit teams that already have table extracts and use BCP tools to load their on-premises databases; the same approach works with your cloud-hosted instances.

      BCP can also be used as a standalone process; you would need to follow the steps below (a minimal sketch follows the list):

      1. Generate and apply database schema - for example using SQL Server Management Studio (SSMS) generate scripts wizard.

      2. Extract table data using the BCP tool to a folder on a machine that is accessible to the BCP tool (for example, the machine the BCP tool is installed on) and can connect to your Cloud SQL instance. If filtering is required, you can use the “QUERYOUT” option to supply your own query criteria.

      3. Import table data from a folder to Cloud SQL.
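
      Here is a minimal sketch of steps 2 and 3 driven from Python. It assumes the bcp utility is installed locally; the server names, credentials, table, and file names are placeholders.

      import subprocess

      # Placeholders: replace with your source server, Cloud SQL instance address, and credentials.
      SOURCE = {"server": "onprem-sql.example.com", "db": "SalesDB", "user": "sa", "pwd": "***"}
      TARGET = {"server": "10.0.0.5", "db": "SalesDB", "user": "sqlserver", "pwd": "***"}
      TABLE = "dbo.Orders"
      DATA_FILE = "orders.dat"

      def bcp(direction, endpoint):
          # -c uses character format; -S/-d/-U/-P identify the server, database, and login.
          subprocess.run(
              ["bcp", TABLE, direction, DATA_FILE, "-c",
               "-S", endpoint["server"], "-d", endpoint["db"],
               "-U", endpoint["user"], "-P", endpoint["pwd"]],
              check=True,
          )

      bcp("out", SOURCE)   # step 2: extract table data from the source instance
      bcp("in", TARGET)    # step 3: import the extracted data into Cloud SQL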

      [Diagram: Snapshot Replication]

      If you want to move specific objects, or don’t want to transfer files manually, you can use Snapshot Replication.

      While this is a step up in complexity compared to backup import, this article describes the steps in detail. Snapshot Replication introduces additional resource load on your on-premises server, such as extra space for storing the snapshot, as well as IO and CPU resources for generating and transferring it. Some types of workloads may not be supported and may block or reset the snapshot generation process. Depending on the database schema and article configuration used, there are also limitations on the objects supported by this type of replication, and some potential that additional steps will be required to cut over to the replica, so we recommend consulting the SQL Server documentation, for example starting with this article. Despite the additional work and caveats, Snapshot Replication has several advantages over the simple import/export approach: granularity of the objects being replicated or migrated, one-click re-initialization to apply an updated snapshot to the target server, and established reporting and monitoring tooling.

      Snapshot generation will keep a lock on the source tables until the process is complete. This may pose an issue for larger databases, as the run time can extend to hours. Consider importing from a backup, or Push Transactional Replication with Snapshot Agent initialization, if this lock would affect your workloads. In contrast to Snapshot Replication, Transactional Replication keeps locks for a fraction of the time and incorporates updates into the transaction log to be sent with incremental changes.

      Continuous Migration

      When your workloads can’t be stopped for the downtime required to take a backup and import it into Cloud SQL, you can use one of the following continuous migration approaches: one of the Push Transactional Replication setups, or CDC-based custom replication.

      Push Transactional Replication

      Transactional Replication comes in many shapes and forms, allowing for very flexible replication setups. As of the time of writing this article, among all types of replication, Cloud SQL supports Push Transactional Replication as both publisher and subscriber, which allows you to set up continuous replication to a Cloud SQL instance from an external source, create additional replicas in Cloud SQL, or replicate from Cloud SQL to an external destination (for example, for a multi-environment DR scenario).

      Continuous migration using Push Transactional Replication can be viewed as a set of 3 steps:

      1. Initial Seed: Before sending incremental updates to Cloud SQL, you need to copy over the initial data snapshot. There are a number of ways to do this - backup, Snapshot Agent job, BCP etc., each with its own benefits and features.
      2. Incremental updates: Incremental updates are sent to the replica instance. Depending on the replication settings, the replica can not only be available but also queried (read-only in most cases).
      3. Cut over to Cloud SQL: Due to certain limitations required for transactional replication to work, final changes to the database schema are required to fully cut over workloads to Cloud SQL instances. This may include changes like adding/enabling triggers, updating identity field ranges, synchronizing logins, converting views back from tables, etc.

      Replication with initialization from a backup:

      This is a one-stop shop to set up your schema and the initial seed data transfer for all server-supported objects. Additionally, this works for any database size, and provides optimal performance for larger instances where other methods of initialization, like the Snapshot Agent, BCP, etc., have disadvantages. While this option requires a custom-prepared backup file (your usual backups will not work until publications are created and marked to allow initialization from a backup), you can still use non-prepared backups with the manual initialization discussed below.

      Replication with a Snapshot Agent:

      An initial seed with a Snapshot Agent works well with compatible, moderately-sized databases, on instances with enough spare resources to finish the phase. As with Snapshot Replication, this approach allows for granularity in migration and added flexibility in restarting the process at any time with just a few clicks. Another advantage is an integrated transactional replication monitoring feature that shows the status and progress of both the Snapshot Agent and the Distribution Agent replication jobs.

      Replication with manual initialization:

      This option has the same benefits and limitations as “initialization from a backup”, with a small but significant difference: developers can choose a synchronization point for the start of the Transactional Replication. This allows the initial seed to be done with any previously discussed option or custom tooling available; the Transactional Replication takes care of the rest.

      A key consideration when choosing among the transactional replication initial seed methods is your database size: 1 TB+ databases are more reliably initiated from backups, while smaller ones can benefit from the ease of reinitialization with a Snapshot Agent. If your database has static tables without primary keys, or otherwise unsupported objects, we recommend using the backup or manual initialization options.

      Incremental updates:

      [Diagram: Push replication incremental updates]

      The Log Reader Agent running at the Distributor location (which can be the same as the source or a separate SQL Server instance when migrating to Cloud SQL, and which runs on the Cloud SQL instance when Cloud SQL is the source) gathers the incremental updates for all published database objects from the source instance by reading data modification statements and storing them in a distribution database. The agent can run as a periodic sweep of transactions or as a continuous setup, decreasing potential replication lag between the source and target instances.

      The Distribution Agent reads the distribution database and applies undistributed commands at the target instance. As incremental changes no longer benefit from the BCP performance that we might have seen with an initial seed, high data turnover rates in combination with a large database size may require additional tuning of the transactional replication setup for efficiency. It is important to validate your migration setup on test instances before attempting an actual migration, to avoid unintended delays and timeouts.

      Migration complexity

      We’ve explored many different migration technologies, and each has potential sources of complexity that can vary depending on the specific database being migrated. Four major steps can be sources of complexity:

      1. File transfer to Cloud SQL: Smaller databases should have few issues with backup upload or download, while larger databases in TB+ territory may have additional needs, such as the use of striped backups, compression techniques, or filtering.
      2. Setting up a database with an initial seed: Approaches that include a database backup restore step also restore the database schema, while BCP or custom tooling approaches may need the schema to already be in place. During a manual setup of the database schema, you might need to update the identity ranges, set trigger execution orders, etc., increasing the complexity and the need for DBA and/or developer involvement.
      3. Fine-tuning replication settings: Snapshot Replication and Transactional Replication approaches may need test runs to validate the schema and workload compatibility with replication and to find the correct replication settings. DBA involvement is highly recommended at all steps of the process. Setting up monitoring, reporting, and alert systems is recommended in case replication is set to run for an extended time. We can estimate replication approaches as moderate to high complexity.
      4. Finalizing the database setup for application cut-over: Snapshot Replication, Transactional Replication, BCP, and some 3rd party tooling do not finish their part of the migration with a complete database replica migrated, possibly having it in a half-ready state. Inconsistencies may include incorrect identity ranges, turned off (or missing) triggers, missing users, and so on. What is migrated and in what capacity depends on the database schema and compatibility level with the migration approach chosen. We highly recommend doing migration test runs with schema comparison to identify possible deficiencies, preparing “pull up” scripts for database promotion, and application cutover to the Cloud SQL instance.

      Not all of the complexities affect all of the discussed approaches. As examples, file transfers are irrelevant for Snapshot Replication, and the fine-tuning of replication settings is irrelevant to importing a database backup.

      Copying data out of Cloud SQL for SQL Server

      Some customers want to copy data from Cloud SQL to another destination, often as part of a multi-cloud strategy. In a nutshell, every approach we discussed above has an opportunity to be run in reverse:

      • One-time migration

        • Database backup export 

        • BCP or similar tools usually work both ways, for import and export

        • Snapshot Replication using predefined stored procedures; this guide can help you with the process for both Snapshot Replication and Transactional Replication setups

      • Continuous migration

        • Push Transactional Replication with a Snapshot Agent and with manual initialization can use predefined stored procedures

        • Push Transactional Replication with a backup can be achieved by using the predefined stored procedures to create publications and then creating a backup. During the restore, note the LSN and use the “initialize from LSN” option to add a subscriber. Another option is to use a “replication support only” subscription.

      Let’s review our options

      One-time migrations are the most convenient and simple approach to migrate a snapshot of your database to Cloud SQL, contrasting with continuous migration approaches that allow you to keep your target and source instances in sync with each other while your workloads continue to run.

      A universal recommendation is to go with a database backup import wherever possible. If that does not work, Snapshot Replication might help with a one-time migration of smaller databases that have simple schemas and a need for periodic refreshes. If table file extracts are already part of your workflows, or you already use ETL jobs and have spare development time, BCP import and custom tooling, respectively, are possibilities with real benefits.

      Push transactional replication allows you to keep your source and target servers in sync. The main difference in approaches is the initialization, otherwise called the initial seed. There are no universal recommendations though. You will have to work through the options and choose the best-fitting one. Is it Snapshot Agent or backup initialization, manual initialization or custom ETL jobs? Testing is required from the beginning to the full cut-over to ensure no surprises during the initial seed, replication process, or final promotion.
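
      One simple sanity check worth automating during those test runs is comparing row counts between the source and target instances. The sketch below uses pyodbc with placeholder connection strings and table names; a real validation should go further (checksums, schema comparison, and so on).

      import pyodbc

      # Placeholder connection strings for the source instance and the Cloud SQL target.
      SOURCE_DSN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=onprem-sql;DATABASE=SalesDB;UID=sa;PWD=***"
      TARGET_DSN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=10.0.0.5;DATABASE=SalesDB;UID=sqlserver;PWD=***"

      TABLES = ["dbo.Orders", "dbo.Customers"]  # hypothetical tables to validate

      def row_counts(dsn):
          counts = {}
          with pyodbc.connect(dsn) as conn:
              cursor = conn.cursor()
              for table in TABLES:
                  cursor.execute(f"SELECT COUNT(*) FROM {table}")
                  counts[table] = cursor.fetchone()[0]
          return counts

      source, target = row_counts(SOURCE_DSN), row_counts(TARGET_DSN)
      for table in TABLES:
          status = "OK" if source[table] == target[table] else "MISMATCH"
          print(f"{table}: source={source[table]} target={target[table]} {status}")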

      I hope you were able to find the migration approach that allows you to sit on a chair and sip coffee trouble-free, so what are you waiting for? Start migrating to Cloud SQL for SQL Server now.

      Related Article

      Migrating your Oracle and SQL Server databases to Google Cloud

      Learn how to migrate your Oracle and SQL Server databases to Google Cloud with these five new videos by Google product experts.

      Read Article
    • Scaling machine learning inference with NVIDIA TensorRT and Google Dataflow Tue, 24 Jan 2023 17:00:00 -0000

      A collaboration between Google Cloud and NVIDIA has enabled Apache Beam users to maximize the performance of ML models within their data processing pipelines, using NVIDIA TensorRT and NVIDIA GPUs alongside the new Apache Beam TensorRTEngineHandler.

      The NVIDIA TensorRT SDK provides high-performance, neural network inference that lets developers optimize and deploy trained ML models on NVIDIA GPUs with the highest throughput and lowest latency, while preserving model prediction accuracy. TensorRT was specifically designed to support multiple classes of deep learning models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and Transformer-based models. 

      Deploying and managing end-to-end ML inference pipelines while maximizing infrastructure utilization and minimizing total costs is a hard problem. Integrating ML models in a production data processing pipeline to extract insights requires addressing challenges associated with the three main workflow segments: 

      1. Preprocess large volumes of raw data from multiple data sources to use as inputs to train ML models to “infer / predict” results, and then leverage the ML model outputs downstream for incorporation into business processes. 

      2. Call ML models within data processing pipelines while supporting different inference use-cases: batch, streaming, ensemble models, remote inference, or local inference. Pipelines are not limited to a single model and often require an ensemble of models to produce the desired business outcomes.

      3. Optimize the performance of the ML models to deliver results within the application’s accuracy, throughput, and latency constraints. For pipelines that use complex, compute-intensive models for use cases like NLP, or that require multiple ML models together, the response time of these models often becomes a performance bottleneck. This can cause poor hardware utilization and requires more compute resources to deploy your pipelines in production, leading to potentially higher costs of operations.

      Google Cloud Dataflow is a fully managed runner for stream or batch processing pipelines written with Apache Beam. To enable developers to easily incorporate ML models in data processing pipelines, Dataflow recently announced support for Apache Beam's generic machine learning prediction and inference transform, RunInference. The RunInference transform simplifies the ML pipeline creation process by allowing developers to use models in production pipelines without needing lots of boilerplate code. 

      You can see an example of its usage with Apache Beam in the following code sample. Note that the engine_handler is passed as a configuration to the RunInference transform, which abstracts the user from the implementation details of running the model.

      # Imports for the sample (assumed module paths from the Apache Beam SDK).
      import apache_beam as beam
      from apache_beam.ml.inference.base import RunInference
      from apache_beam.ml.inference.tensorrt_inference import TensorRTEngineHandlerNumPy

      engine_handler = TensorRTEngineHandlerNumPy(
          min_batch_size=4,
          max_batch_size=4,
          engine_path='gs://gcp_bucket/single_tensor_features_engine.trt')

      pcoll = pipeline | beam.Create(SINGLE_FEATURE_EXAMPLES)
      predictions = pcoll | RunInference(engine_handler)

      Along with the Dataflow runner and the TensorRT engine, Apache Beam enables users to address the three main challenges. The Dataflow runner takes care of pre-processing data at scale, preparing the data for use as model input. Apache Beam's single API for batch and streaming pipelines means that RunInference is automatically available for both use cases. Apache Beam’s ability to define complex multi-path pipelines also makes it easier to create pipelines that have multiple models. With TensorRT support, Dataflow now also has the ability to optimize the inference performance of models on NVIDIA GPUs. 
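
      If you want to run such a pipeline on Dataflow with GPU workers, the pipeline options look roughly like the sketch below. The project, region, bucket, and accelerator settings are illustrative assumptions; check the Dataflow GPU documentation for the exact service options supported in your setup.

      from apache_beam.options.pipeline_options import PipelineOptions

      # Illustrative settings only; adjust the project, region, bucket, and accelerator type.
      options = PipelineOptions(
          runner="DataflowRunner",
          project="my-project",
          region="us-central1",
          temp_location="gs://my-bucket/tmp",
          dataflow_service_options=[
              "worker_accelerator=type:nvidia-tesla-t4;count:1;install-nvidia-driver"
          ],
      )
      # beam.Pipeline(options=options) would then run the RunInference pipeline shown above on Dataflow.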

      For more details and samples to start using this feature today, please have a look at the NVIDIA Technical Blog, “Simplifying and Accelerating Machine Learning Predictions in Apache Beam with NVIDIA TensorRT.” Documentation for RunInference can be found on the Apache Beam documentation site and in the Dataflow docs.

    • Native image compilation - what’s new, and what’s next? Tue, 24 Jan 2023 15:49:00 -0000

      Native image compilation has been an emerging technology in the Java community for a number of years. In a sentence, it offers a smaller memory footprint and dramatically faster startup times which makes it especially well suited for Serverless use cases. Earlier in 2022, at I/O, we discussed the technology in more depth and how to get started with the project during its experimental phase. Since then, the project has gained first class support, as promised at SpringOne 2021. This post will discuss where the project has gone this year, what has changed regarding the getting started experience, and provide updated materials to help you leverage the technology on Google Cloud.

      What’s new?

      In the past year the Spring Native project made a number of notable improvements in compatibility, bug fixes, and documentation before being superseded by Spring Boot 3’s official native image support. The result of this first-class support is that native image support is now available as a developer tooling option in the Spring Initializr and requires far less configuration to get started. It’s also gotten easier to download GraalVM JDKs since the Graal team released the one-line JDK download method.

      What else can trip us up?

      There are a number of topics that remain somewhat challenging when adopting Native Image compilation. 

      Reachability

      Ensuring that all libraries your project makes use of are reachable can sometimes require specific library metadata to be provided. It’s difficult to provide general advice for these types of issues, but the Community Reachability project can save you time and effort when troubleshooting them.

      Native testing

      Testing native images can result in discovering unexpected differences when compared to apps running on the JVM. This is especially common when tests involve backing services like database access or file storage. It can also take a significant amount of time to build artifacts for these tests. Fortunately this process is steadily improving, with relevant bug fixes coming out periodically and open-source testcontainers being added to the shared metadata repository.

      Monitoring

      Monitoring a Native Image application is another non-obvious topic, since you’re no longer shipping your application with a JVM. While Native binaries can be monitored just like any other binary, there are also some specific solutions being provided for Native image compilation, such as JFR support. This is a useful option to have in mind, since it provides a similar experience to monitoring a traditional Java workload. The Graal team has provided a useful guide on the topic, which is a good starting point.

      What’s next?

      Stay tuned for more materials intended to help you make use of Native image compilation on Google Cloud.
      For an example of a complete project that makes effective use of native image compilation, check out our Pic-A-Daily codelab. It stores pictures in a Storage bucket, handles file creation events, and processes the images with a native app image and the native Vision client libraries. For reference documentation on native image support for other client libraries, refer to our Client Libraries guide.

    • Apply policy bundles and monitor policy compliance at scale for Kubernetes clusters Mon, 23 Jan 2023 17:00:00 -0000

      As more enterprise customers adopt a hybrid and multicloud strategy, centralized security and governance become increasingly important as workloads are distributed across environments. Anthos is our cloud-centric container platform for running modern applications anywhere, consistently and at scale. Anthos Config Management (ACM) automates policy and security for Kubernetes clusters and comprises Config Sync, Config Controller, and Policy Controller. Config Sync reconciles the state of clusters with one or more Git repositories. Config Controller is a hosted service that allows administrators to manage Google Cloud Platform (GCP) resources in a declarative fashion. This blog covers the enhancements we have brought to the Policy Controller component.

      As a key component of ACM, Policy Controller enables the enforcement of fully programmable policies for your clusters. These policies act as "guardrails" and prevent any changes from violating security, operational, or compliance controls. Policy Controller can help accelerate your application modernization efforts by helping developers release code quickly and safely. 

      We are thrilled to announce the launch of our new built-in Policy Controller dashboard, a powerful tool that makes it easy to manage and monitor the policy guardrails applied to your fleet of clusters.

      With the Policy Controller dashboard, platform and security admins can:

      • Get an at-a-glance view of the state of all the policies applied to a fleet of clusters, including enforcement status (dryrun or enforced)

      • Easily troubleshoot and resolve policy violations by referring to opinionated recommendations for each violation

      • Get visibility into the compliance status of cluster resources

      The Policy Controller dashboard is designed to be user-friendly and intuitive, making it easy for users of all skill levels to manage and monitor violations across their fleet of clusters. It gives you a centralized view of policy violations and lets you take action if necessary.

      The Anthos Policy Controller dashboard

      The dashboard can also show you which of your resources are affected by a specific policy, and can make opinionated suggestions on how to fix the problem.

      Identifying resources affected by vulnerabilities

      Introducing Policy Bundles

      A policy bundle is an out-of-the-box set of constraints created and maintained by Google. The bundles help audit your cluster resources against Kubernetes standards, industry standards, or Google-recommended best practices.

      Policy bundles are available now and can be used as-is by new or existing users, i.e. without writing a single line of code. Users can view the status of policy bundle coverage for the fleet from the Policy Controller dashboard; for example, if you have four clusters in your fleet and have applied the PCI DSS 3.2.1 bundle to all four clusters, the dashboard will show 100% coverage for your fleet. In addition to coverage, the dashboard also shows the overall state of compliance with each bundle across the entire fleet of clusters.

      The following policy bundles are available now with Anthos:

      Get started today

      The easiest way to get started with Anthos Policy Controller is to install Policy Controller and apply a policy bundle to audit your fleet of clusters against a standard such as the CIS benchmark.

      You can also try Policy Controller to audit your cluster against the Policy Essentials bundle.
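
      As a minimal sketch of that flow, assuming a registered fleet membership named MEMBERSHIP_NAME and a recent gcloud release (the policycontroller command group has moved between alpha, beta, and GA, so verify the exact command for your version):

      # Enable Policy Controller on a fleet member
      gcloud container fleet policycontroller enable --memberships=MEMBERSHIP_NAME

      # After constraints (for example, from the CIS benchmark bundle) have been applied to a cluster,
      # list them and review the violations they report
      kubectl get constrainttemplates
      kubectl get constraints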

    • Accessing Cloud SQL using Private Service Connect Mon, 23 Jan 2023 17:00:00 -0000

      Private Service Connect (PSC) allows private consumption of services across VPC networks that belong to different groups, teams, projects, or organizations. In some cases it can be a much better alternative to VPC Peering, Shared VPC, or other approaches to private connectivity. In this blog post we share a workaround that uses PSC to access Cloud SQL. The solution is also applicable to other managed services that do not natively support PSC, such as Memorystore, AlloyDB, and several other services that depend on Private Service Access (PSA) for connectivity.

      Background

      Many customers have requirements that lead them to adopt architectures where the resources that consume a Cloud SQL instance are in a different VPC network or GCP project. VPC Peering does not work out of the box because Cloud SQL does not support transitive peering, nor is it desirable, since it requires a lot of IP range planning.

      PSC Solution Design

      Figure 1

      In an enterprise environment it is common to isolate responsibilities among different teams with a decoupled architecture. Assume the application team(s) own their respective application GCP project (left side of the diagram), while the database team(s) own all the database resources for multiple applications in the database GCP project (right side of the diagram). Each of these teams deploys resources in its own VPC network, giving them a high degree of autonomy and flexibility. In such an architecture, the database team needs to expose the database as a service to the various application(s).

      The above diagram only shows a 1:1 combination for simplicity; in practice the relationship can be many consumers to many producers. This means each service attachment can have multiple PSC endpoints in the same or different GCP projects.

      Let us imagine a scenario where the client application is running on GCE/GKE and the persistence store is a Cloud SQL for MySQL database instance. Looking at Figure 1, the client application’s database requests connect to 172.168.32.2:3306 (the PSC endpoint). This IP is from the client VPC’s address space. A request originating from inside the GKE cluster traverses subnetwork routes and lands at the PSC endpoint. The PSC endpoint is essentially a forwarding rule pointing to the PSC service attachment that lives in the producer project.

      The service attachment connects to an Internal Load Balancer (ILB). The ILB connects to a virtual machine (via an instance group) which has Private Service Access (PSA) connectivity to Cloud SQL. To forward the communication from the VM to Cloud SQL, the VM needs to be further configured with iptables rules such as the ones below.

      iptables -t nat -A PREROUTING -p tcp --dport 3306 -j DNAT --to-destination ${CLOUD_SQL_PROD_IP}:3306
      iptables -t nat -A POSTROUTING -j MASQUERADE
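
      For orientation, the producer- and consumer-side wiring described above can be created with gcloud roughly as follows. All resource names, the region, the PSC NAT subnet, and the 172.168.32.2 address are illustrative placeholders; verify the exact flags against the Private Service Connect documentation for your environment.

      # Producer project: publish the ILB forwarding rule as a PSC service attachment
      gcloud compute service-attachments create cloudsql-attachment \
          --region=us-central1 \
          --producer-forwarding-rule=ilb-forwarding-rule \
          --connection-preference=ACCEPT_AUTOMATIC \
          --nat-subnets=psc-nat-subnet

      # Consumer project: reserve the endpoint IP and point a forwarding rule at the service attachment
      gcloud compute addresses create psc-endpoint-ip \
          --region=us-central1 --subnet=app-subnet --addresses=172.168.32.2

      gcloud compute forwarding-rules create psc-endpoint \
          --region=us-central1 --network=app-vpc --address=psc-endpoint-ip \
          --target-service-attachment=projects/DB_PROJECT/regions/us-central1/serviceAttachments/cloudsql-attachment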

      Note: The application could also be any other client platform that allows private connectivity to Cloud SQL. Also, the database engine could be Cloud SQL running PostgreSQL, Microsoft SQL Server, or MySQL.

      Overlaying DNS

      It is common for enterprises to give a friendly domain name to database IP addresses. In order to keep both the producer and consumer networks decoupled, it is best to create a separate private Cloud DNS zone in each VPC network and then, as a convention, assign a similar DNS name to the same logical resource (the target database) in both networks. Using similar names helps both teams communicate more efficiently.

      For example, the application VPC (consumer) has a DNS entry db-inst1.app1.acemy.com resolving to IP address 172.168.32.2. The application therefore connects using the URI db-inst1.app1.acemy.com:3306. Similarly, the database VPC (producer) has an entry db-inst1.dbs.acemy.com resolving to the IP address of the Cloud SQL instance. Note the subtle difference in DNS subdomains (app1 vs. dbs). The db-inst1.dbs.acemy.com DNS name can be used in the iptables configuration (instead of the Cloud SQL IP).
      Although it is possible to use the exact same DNS name in both networks, doing so can lead to debugging and communication issues.
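
      As a concrete sketch of the consumer side of this convention (the zone and network names are illustrative), a private Cloud DNS zone scoped to the application VPC can map the friendly name to the PSC endpoint address:

      # Private zone visible only to the application (consumer) VPC
      gcloud dns managed-zones create app1-db-zone \
          --dns-name=app1.acemy.com. --visibility=private \
          --networks=app-vpc --description="Database names for app1"

      # Point the friendly name at the PSC endpoint IP
      gcloud dns record-sets create db-inst1.app1.acemy.com. \
          --zone=app1-db-zone --type=A --ttl=300 --rrdatas=172.168.32.2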

      Managing connectivity to multiple database instances

      The database team could be providing its services to multiple different applications, hosting several different database engines. Providing connectivity to each database instance requires PSC, ILB, and VM resources. This can be handled by using either of the following architectures, or a combination of both.

      1. Simple deployment

      Figure 2

      This architecture provisions separate GCP resources for connectivity per Cloud SQL instance. It can be more suitable for multi-tenant applications or where there is a risk of noisy-neighbor problems. Therefore we recommend using this architecture.

      2. Deployment with shared resources

      Figure 3

      In this architecture, client applications will need to use a different port per Cloud SQL instance, as shown in blue (port 3316) and orange (port 3317). The PSC endpoint and service attachment do not have any port binding, so the same pass-through ILB can be used for the allowed ports. The VM should be configured with an iptables rule for each port, routing to the respective Cloud SQL instance (see the sketch after the list below).
      There are some cost benefits due to sharing resources; however, there are a few factors you should consider before implementing it:

      • All application(s) will have a network path to all Cloud SQL instances, which may be a concern.

      • Complexity of updating iptables rules as new Cloud SQL instances come online.

      • Noisy-neighbor problem: if one database instance has higher traffic, it may choke the common instance group.
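
      As a sketch of the port-based routing described above (the instance IP variables are placeholders), the shared VM's iptables configuration maps each listening port to its own Cloud SQL instance:

      # Port 3316 routes to Cloud SQL instance A, port 3317 to Cloud SQL instance B
      iptables -t nat -A PREROUTING -p tcp --dport 3316 -j DNAT --to-destination ${CLOUD_SQL_A_IP}:3306
      iptables -t nat -A PREROUTING -p tcp --dport 3317 -j DNAT --to-destination ${CLOUD_SQL_B_IP}:3306
      iptables -t nat -A POSTROUTING -j MASQUERADE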

      Recommendations and best practices

      • The PSC-based solution has only one hop (in the path from the client application to the Cloud SQL instance), which happens on the instance group’s VM, so it adds minimal latency overhead. This is because PSC and the ILB are part of GCP’s software-defined VPC network constructs.

      • The network performance of the instance group’s VMs depends on the machine type, so factor in your bandwidth requirements and choose the VM size accordingly.

      • Prefer a VM operating system with the smallest footprint (like Ubuntu Minimal LTS) to reduce the attack surface and, with it, the frequency of OS patching.

      • Use a managed instance group for high availability and to automatically heal from zonal failures.

      • If database connections are long-running and stateful (like a cached connection pool), avoid frequent restarts of the VM that runs the iptables rules. Similarly, avoid configurations that cause frequent autoscaling (and shrinking).

    • A new Google Cloud region is coming to Kuwait Mon, 23 Jan 2023 17:00:00 -0000

      To meet growing demand for cloud services in the Middle East, we are excited to announce plans to bring a new Google Cloud region to Kuwait to support our growing customer base. 

      When it opens, the Kuwait region will deliver high-performance services that make it easier for organizations to serve their own users faster, more reliably and securely. Local customers like the Government of Kuwait and Alshaya Group will benefit from key controls that enable them to maintain low latency and the highest security and compliance standards. 

      "Through our strategic partnership with Google Cloud, the State of Kuwait will continue to make great strides towards digital transformation, a main pillar of our New Kuwait vision (Kuwait 2035). Our alliance with Google Cloud will have significant benefits for Kuwait and will provide a major boost to achieving the country's socio-economic priorities, including promoting efficiencies in government, enhancing healthcare and education, and diversifying the economy", said H.E. Mr. Mazin Saad Alnahedh, Minister of Commerce and Industry and Minister of State for Communications and Information Technology Affairs.

      "Alshaya is a pioneer and leader in our industry, and the scale and expansion of Google Cloud's platform will further enable us to deliver safe and reliable services to customers across the Middle East and Africa," said Chady Younan, Director of Data, Analytics, BI & Data Science at Alshaya Group. 

      With 35 regions and 106 zones currently in operation around the world, Google Cloud’s global network of cloud regions is the foundation of the infrastructure it is building to support customers of all sizes and across industries. From retail and media & entertainment to financial services, healthcare and the public sector, leading organizations come to Google Cloud as their trusted innovation partner to address five key areas: 

      • Understanding and using data: Google Cloud helps customers become smarter and make better decisions with a unified data platform. We help customers reduce complexity and combine unstructured and structured data — wherever it resides — to quickly and easily produce valuable insights. 

      • Establishing an open foundation for growth: When customers move to Google Cloud, they get a flexible, secure and open platform that evolves with their organization. Our commitment to multicloud, hybrid cloud, and open source offers organizations the freedom of choice, allowing their developers to build faster and more intuitively.

      • Securing systems and users: As every company rethinks its security posture, we help customers protect their data using the same infrastructure and security services that Google uses for its own operations. 

      • Creating a collaborative environment: In today’s hybrid work environment, Google Cloud provides the tools needed to transform how people connect, create, and collaborate. 

      • Building a cleaner, more sustainable future: Google has been carbon-neutral since 2007, and we are working toward an ambitious goal to operate entirely on carbon-free energy by 2030. Today, when customers run on Google Cloud their workloads are matched with 100% renewable energy. 

      The forthcoming Kuwait cloud region represents our ongoing commitment to supporting digital transformation across the Middle East, and follows previous announcements of our plans to bring cloud regions to Doha and Dammam.

      Learn more about our global cloud infrastructure, including new and upcoming regions.

      Related Article

      New cloud regions coming to a country near you

      Google Cloud regions are coming to Austria, Greece, Norway, South Africa, and Sweden.

      Read Article



    Google has many products and the following is a list of its products: Android Auto, Android OS, Android TV, Calendar, Cardboard, Chrome, Chrome Enterprise, Chromebook, Chromecast, Connected Home, Contacts, Digital Wellbeing, Docs, Drive, Earth, Finance, Forms, Gboard, Gmail, Google Alerts, Google Analytics, Google Arts & Culture, Google Assistant, Google Authenticator, Google Chat, Google Classroom, Google Duo, Google Expeditions, Google Family Link, Google Fi, Google Files, Google Find My Device, Google Fit, Google Flights, Google Fonts, Google Groups, Google Home App, Google Input Tools, Google Lens, Google Meet, Google One, Google Pay, Google Photos, Google Play, Google Play Books, Google Play Games, Google Play Pass, Google Play Protect, Google Podcasts, Google Shopping, Google Street View, Google TV, Google Tasks, Hangouts, Keep, Maps, Measure, Messages, News, PhotoScan, Pixel, Pixel Buds, Pixelbook, Scholar, Search, Sheets, Sites, Slides, Snapseed, Stadia, Tilt Brush, Translate, Travel, Trusted Contacts, Voice, Waze, Wear OS by Google, YouTube, YouTube Kids, YouTube Music, YouTube TV, YouTube VR


    Google News
    TwitterFacebookInstagramYouTube



    Think with Google
    TwitterFacebookInstagramYouTube

    Google AI BlogAndroid Developers BlogGoogle Developers Blog
    AI is Artificial Intelligence


    Nightmare Scenario: Inside the Trump Administration’s Response to the Pandemic That Changed. From the Washington Post journalists Yasmeen Abutaleb and Damian Paletta - the definitive account of the Trump administration’s tragic mismanagement of the COVID-19 pandemic, and the chaos, incompetence, and craven politicization that has led to more than a half million American deaths and counting.

    Since the day Donald Trump was elected, his critics warned that an unexpected crisis would test the former reality-television host - and they predicted that the president would prove unable to meet the moment. In 2020, that crisis came to pass, with the outcomes more devastating and consequential than anyone dared to imagine. Nightmare Scenario is the complete story of Donald Trump’s handling - and mishandling - of the COVID-19 catastrophe, during the period of January 2020 up to Election Day that year. Yasmeen Abutaleb and Damian Paletta take us deep inside the White House, from the Situation Room to the Oval Office, to show how the members of the administration launched an all-out war against the health agencies, doctors, and scientific communities, all in their futile attempts to wish away the worst global pandemic in a century...


    GoogBlogs.com
    TwitterFacebookInstagramYouTube



    ZDNet » Google
    TwitterFacebookInstagramYouTube



    9to5Google » Google
    TwitterFacebookInstagramYouTube



    Computerworld » Google
    TwitterFacebookInstagramYouTube

    • Google Forms cheat sheet: How to get started Fri, 27 Jan 2023 03:00:00 -0800

      Need to make a quiz, survey, registration form, order form, or other web page that gathers feedback from co-workers, customers, or others? You can design and deploy one right from your web browser with Google Forms. It’s integrated with Google Drive to store your forms in the cloud.

      Anyone with a Google account can use Forms for free. It’s also part of Google Workspace, Google's subscription-based collection of online office apps for business and enterprise customers that includes Google Docs, Sheets, Slides, Gmail, and more. Forms is lesser known than these other productivity apps, but it's a useful tool to know how to use. This guide takes you through designing a form, deploying it online, and viewing the responses it gathers.

      To read this article in full, please click here

    • 9 handy hidden features in Google Docs on Android Fri, 27 Jan 2023 02:45:00 -0800

      Few apps are as essential to mobile productivity as the humble word processor. I think I've probably spent a solid seven years of my life staring at Google Docs on one device or another at this point, and those minutes only keep ticking up with practically every passing day.

      While we can't do much about the need to gaze at that word-filled white screen, what we can do is learn how to make every moment spent within Docs count — and in the Docs Android app, specifically, there are some pretty spectacular tucked-away time-savers just waiting to be discovered.

      Make a mental note of these advanced shortcuts and options, and put 'em to good use the next time you find yourself staring at Docs on your own device.

      To read this article in full, please click here

    • How layoffs at Google could affect enterprise cloud services Wed, 25 Jan 2023 08:52:00 -0800

      An investor with a $6 billion stake in Google parent Alphabet is calling for more layoffs at the company, although it has already cut 12,000 jobs.

      The managing partner of London-based TCI Capital Fund Management wrote to Alphabet’s chief executive, Sundar Pichai, asking him to cut thousands more jobs and to reduce the compensation of its remaining employees.

      Alphabet already plans to cut its workforce by 6%, it said on January 20, 2023, a move that will affect staff across the company including in its enterprise cloud computing division.

      To read this article in full, please click here

    • Big banks' proposed digital wallet payment system likely to fail Wed, 25 Jan 2023 03:00:00 -0800

      A group of leading banks is partnering with payment service Zelle’s parent company to create their own “digital wallet” connected to consumer credit and debit cards to enable online or retail store payments.

      The new payment service, however, must compete with entrenched digital wallets such as Apple Pay and Google Pay that are embedded on mobile devices. It’s also not the first attempt for some in the consortium to create a digital wallet payment service.

      The consortium includes Wells Fargo & Co., Bank of America, JPMorgan Chase, and four other financial services companies, according to The Wall Street Journal (WSJ). The digital wallet, which does not yet have a name, is expected to launch in the second half of this year.

      To read this article in full, please click here

    • Google's parent company Alphabet to cut 12,000 jobs Fri, 20 Jan 2023 03:48:00 -0800

      Google’s parent company Alphabet is cutting 12,000 jobs, around 6% of its global workforce, according to an internal memo from Sundar Pichai, Alphabet's CEO.

      Pichai told employees in an email first reported by Bloomberg on Friday that he takes “full responsibility for the decisions that led us here” but the company has a “substantial opportunity in front of us” with its early investments in artificial intelligence.

      The layoffs are global and will impact US staff immediately, news outlet Reuters also reported. They will affect teams across Alphabet, including recruiting and some corporate functions, as well as some engineering and products teams.

      To read this article in full, please click here

    • 8 advanced Android clipboard tricks Fri, 20 Jan 2023 02:45:00 -0800

      You'd never know it, but one of the most potential-packed parts of your favorite Android phone is a feature you rarely actually see.

      It's mostly invisible by design, in fact — and yet, if you teach yourself how to tap into it, you'll save time, increase your efficiency, and feel like a total smartphone sorcerer.

      The feature of which we speak is the humble-seeming Android clipboard — the simple virtual space where anything you copy stays tucked away out of sight 'til you're ready to use it.

      If you haven't spent much time thinking about the Android clipboard, believe me: You aren't alone. But my goodness, are you ever missing out.

      So allow me to introduce you to some of the most advanced and easily overlooked productivity boosters hiding away in your phone's invisible holding space. Copy these tricks into your noggin, and before you know it, you'll be slashing all sorts of wasted seconds out of your day.

      To read this article in full, please click here

    • A colossal Wear OS calendar upgrade — Google Pixel Watch and beyond Wed, 18 Jan 2023 03:00:00 -0800

      Google's Pixel Watch has plenty of good things goin' for it. But one part of the Wear OS software that remains decidedly meh is its system calendar integration.

      It's a frustrating limitation not just for the Pixel Watch but for Wear OS on the whole and virtually any associated gadget. Somewhat shockingly, for a company that claims Google Calendar as one of its most popular and important productivity products, Google has yet to grace its wearable operating system with any meaningful agenda-interacting interfaces.

      Now, sure, you can always check in on upcoming events or make appointments on your watch via voice command — but try to add an actual calendar tile into your watch's swipeable mix of at-a-glance info panels, and your only real option is something like this:

      To read this article in full, please click here

    • Alphabet robotics subsidiary Intrinsic lays off 20% staff Fri, 13 Jan 2023 00:13:00 -0800

      Layoffs at Alphabet’s “Other Bets” division have widened to include its robotics subsidiary Intrinsic AI, which is eliminating about 20% of its workforce, or roughly 40 employees, according to reports.

      Intrinsic AI came out of Alphabet's X research unit, after incubating there for close to five years. It is a robotics firm that is working on developing artificial intelligence-based software to bolster the use of robots in industries and commercial environments.

      The news about the company’s layoffs was first reported by The Information. An email sent to Intrinsic AI didn’t immediately receive a response.

      To read this article in full, please click here

    • The most promising Google Pixel product of 2023 Wed, 11 Jan 2023 03:00:00 -0800

      We may be only mere days into this shiny new year of ours, but man alive, lemme tell ya: I'm feeling pretty darn excited about what 2023's got cookin'.

      Here in the land o' Googley matters, y'see, this odd-numbered eon is rapidly shaping up to be a significant one when it comes to Pixel-flavored produce. Google's riding the long-building momentum of its homemade Android products and gearin' up for a monumental year of potentially shapeshifting launches.

      The Pixel prize we're hearing the most buzz about right now, without a doubt, is the on-again off-again (and still completely unofficial) folding Pixel phone — believed to be known as either the Pixel Fold or the Pixel Notepad, depending on which rumor du jour you've read most recently.

      To read this article in full, please click here

    • How to switch from Android to iPhone Wed, 11 Jan 2023 03:00:00 -0800

      The days when migrating from an Android device to an Apple iPhone was characterized by frustration, complexity, and lost data are for the most part behind us. Which means when you’re ready to make the leap to the Apple ecosystem, as many are, the process should be straightforward — and even easier if you follow this guide.

      There’s an app for that: Move to iOS

      Apple has created Move to iOS, an app to soothe the pain when moving to an iPhone. The app is available in the Google Play Store and automates much of the migration process for you.

      To read this article in full, please click here

    • 7 advanced Android adjustments for your new year Fri, 06 Jan 2023 02:45:00 -0800

      Ah, January. It's the perfect time to step back, take stock of your digital life, and set yourself up for a year packed with pleasure-producing productivity.

      Oh, and you also might make a few overly ambitious resolutions we all know you won't keep.

      But meaningless promises aside, that "stepping back" stuff can actually make a meaningful difference — and here in the land o' Android, whether you're palming a shiny new device or trying to make the most of a trusty old companion, a handful of simple steps can go an impressively long way in improving your experience.

      Here, specifically, are seven advanced adjustments worth revisiting on whatever phone you're using — adjustments that are all too easy to forget about and fail to keep up with over time.

      To read this article in full, please click here

    • 2022's top Google Assistant advice for Android Thu, 22 Dec 2022 03:00:00 -0800

      One of Android's most underappreciated advantages is its tight integration with the excellent Google Assistant voice command genie.

      It's easy to take Assistant for granted, being a person who carries around an Android phone every day. It's just always there, usually quietly waiting. And it's certainly not perfect.

      But Assistant can do some spectacularly useful stuff. And all it takes is 10 minutes with an iPhone to remind yourself just how good we've got it.

      These bits of advanced Google Assistant knowledge from Android Intelligence over the past year will help you tap into some of the service's most advanced and out-of-sight possibilities. Check 'em out, store 'em deep in your brain's internal storage, and be sure to come sign up for my Android Intelligence newsletter when you're done to get even more off-the-beaten-path knowledge in your inbox every Friday — direct from me to you.

      To read this article in full, please click here

    • The top Google Pixel tips of 2022 Tue, 20 Dec 2022 03:00:00 -0800

      More and more, there are Android tips — and then there are Pixel tips.

      Owning a Google Pixel phone has become a ticket of sorts to a uniquely top-tier type of Android experience. With Google's pure vision for the way the operating system itself should work (and none of the experience-harming and often even privacy-compromising layers other device-makers love to lard into the software) — not to mention all the extra bits of exceptionally helpful Googley goodness that are available only in the Pixel environment — the Google Pixel increasingly represents Android at its best. And as anyone who's spent any amount of time living with a Pixel can tell you, nothing else comes close to comparing.

      To read this article in full, please click here

    • 7 hidden tricks for your Chromebook trackpad Wed, 14 Dec 2022 03:00:00 -0800

      When we talk about gestures, we tend to focus on the fingie-on-the-screen variety — whether we're talkin' Android gestures and all the possibilities on that front or chewing over the similar set of on-screen gestures Chromebooks have had for a while now.

      But there's a whole other category of time-saving swipers sneakin' around in your greatest Googley gizmos. These swipes are relevant to ChromeOS, specifically, and they'll have you flying around your favorite Chromebook in record time — once you remember to actually start using 'em.

      To read this article in full, please click here

    • The Great Resignation isn’t over yet Wed, 14 Dec 2022 03:00:00 -0800
    • An Android shortcut secret Fri, 09 Dec 2022 02:45:00 -0800

      Have I ever mentioned how much I love shortcuts?

      All right, maybe I have. (Maybe, erm, somewhere in the neighborhood of 7,942 times, come to think of it.) But oh, it be true. There's just something so impossibly satisfying about knowing you're increasing your efficiency and slicing soul-sucking seconds out of your day.

      And on Android, whoo boy, have we got some awesome opportunities for cutting out steps and achieving Mega-Nerd™ levels of efficiency obsession.

      Today, as your friendly neighborhood Mega-Nerd™, I want to remind you about an easily overlooked option for adding some extra step-shaving shortcuts directly onto your home screen. These shortcuts are buried deep within some of Android's most productivity-oriented apps. And you'd have to be — well, an efficiency-obsessed Mega-Nerd™ to even realize they're there.

      To read this article in full, please click here

    • Google Chrome gets memory- and power-saving modes Thu, 08 Dec 2022 12:59:00 -0800

      Google’s Chrome browser has long been plagued by memory- and system-sucking issues — especially when multiple tabs are open — but the world’s most popular browser today got an upgrade to optimize both device battery power and memory use.

      With the latest release of Chrome on desktop, Google will be introducing two new performance settings: Memory Saver and Energy Saver. When they're used, Google said Chrome will consume up to 30% less memory and extend a device’s battery when it’s running low.

      To read this article in full, please click here

    • A new Chromebook productivity feature worth finding Wed, 07 Dec 2022 03:00:00 -0800

      Good news, my fellow Chromebook-carrying citizens: Google's ChromeOS platform is in the midst of getting a great new feature that's all about productivity — and odds are, you can find and enable it on your own favorite Chromebook this very second.

      The feature ties into the ChromeOS Virtual Desks system. Remember that thing? It's the setup that snuck into our Chrome-colored lives a few years back. In short, the Virtual Desks option empowers you to spread your work across multiple environments within a single Chromebook computer. So, for instance:

      To read this article in full, please click here

    • Hey, Google: It's time to step up your Pixel upgrade promise Fri, 02 Dec 2022 02:45:00 -0800

      Look, it's no big secret that I'm a fan of Google's Pixel program.

      I've personally owned Pixel phones since the first-gen model graced our gunk-filled pockets way back in 2016. And Pixels have been the only Android devices I've wholeheartedly recommended for most folks ever since.

      There's a reason. And more than anything, it comes down to the software and the overall experience Google's Pixel approach provides.

      • Part of that is the Pixel's interface and the lack of any unnecessary meddling and complication — including the absence of confusing (and often privacy-compromising) duplicative apps and services larded onto the phone for the manufacturer's business benefit and at the expense of your user experience.
      • Part of it is the unmatched integration of exceptional Google services and exclusive Google intelligence that puts genuinely useful stuff you'll actually benefit from front and center and makes it an integrated part of the Pixel package.
      • And, yes, part of it is the Pixel upgrade promise and the fact that Pixel phones are still the only Android devices where both timely and reliable software updates are a built-in feature and guarantee.

      [Psst: Got a Pixel? Any Pixel? Check out my free Pixel Academy e-course to uncover all sorts of advanced intelligence lurking within your phone!]

      To read this article in full, please click here

    • The best Android apps for business in 2023 Thu, 01 Dec 2022 03:00:00 -0800

      Trying to find the right app for any given area on Android is a lot like trying to order dinner at a restaurant with way too many options on the menu. How can you possibly find the right choice in such a crowded lineup? With the Google Play Store now boasting somewhere in the neighborhood of 70 gazillion titles (last I checked), it's no simple task to figure out which apps rise above the rest and provide the best possible experiences.

      That's why I decided to step in and help. I've been covering Android from the start and have seen more than my fair share of incredible and not so incredible apps. From interface design to practical value, I know what to look for and how to separate the ordinary from the extraordinary. And taking the time to truly explore the full menu of options and find the cream of the crop is quite literally my job.

      To read this article in full, please click here



    Pac-Man Video Game - Play Now
