
Google LLC is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, a search engine, cloud computing, software, and hardware. Google was launched in September 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California. Some of Google’s products are Google Docs, Google Sheets, Google Slides, Gmail, Google Search, Google Duo, Google Maps, Google Translate, Google Earth, and Google Photos.

Google began in January 1996 as a research project by Larry Page and Sergey Brin when they were both PhD students at Stanford University in California. The project initially involved an unofficial "third founder", Scott Hassan, the original lead programmer who wrote much of the code for the original Google Search engine, but he left before Google was officially founded as a company.

Google Blog

  • Helping people understand AI Thu, 18 Aug 2022 15:00:00 +0000

    If you’re like me, you may have noticed that AI has become a part of daily life. I wake up each morning and ask my smart assistant about the weather. I recently applied for a new credit card and the credit limit was likely determined by a machine learning model. And while typing the previous sentence, I got a word choice suggestion that “probably” might flow better than “likely,” a suggestion powered by AI.

    As a member of Google’s Responsible Innovation team, I think a lot about how AI works and how to develop it responsibly. Recently, I spoke with Patrick Gage Kelley, Head of Research Strategy on Google’s Trust & Safety team, to learn more about developing products that help people recognize and understand AI in their daily lives.

    How do you help people navigate a world with so much AI?

    My goal is to ensure that people, at a basic level, know how AI works and how it impacts their lives. AI systems can be really complicated, but the goal of explaining AI isn’t to get everyone to become programmers and understand all of the technical details — it’s to make sure people understand the parts that matter to them.

    When AI makes a decision that affects people (whether it’s recommending a video or qualifying for a loan), we want to explain how that decision was made. And we don’t want to just provide a complicated technical explanation, but rather, information that is meaningful, helpful, and equips people to act if needed.

    We also want to find the best times to explain AI. Our goal is to help people develop AI literacy early, including in primary and secondary education. And when people use products that rely on AI (everything from online services to medical devices), we want to include a lot of chances for people to learn about the role AI plays, as well as its benefits and limitations. For example, if people are told early on what kinds of mistakes AI-powered products are likely to make, then they are better prepared to understand and remedy situations that might arise.

    Do I need to be a mathematician or programmer to have a meaningful understanding of AI?

    No! A good metaphor here is financial literacy. While we may not need to know every detail of what goes into interest rate hikes or the intricacies of financial markets, it’s important to know how they impact us — from paying off credit cards, to buying a home, or paying for student loans. In the same way, AI explainability isn’t about understanding every technical aspect of a machine learning algorithm – it’s about knowing how to interact with it and how it impacts our daily lives.

    How should AI practitioners — developers, designers, researchers, students, and others — think about AI explainability?

    Lots of practitioners are doing important work on explainability. Some focus on interpretability, making it easier to identify specific factors that influence a decision. Others focus on providing “in-the-moment explanations” right when AI makes a decision. These can be helpful, especially when carefully designed. However, AI systems are often so complex that we can’t rely on in-the-moment explanations entirely. It’s just too much information to pack into a single moment. Instead, AI education and literacy should be incorporated into the entire user journey and built continuously throughout a person’s life.

    More generally, AI practitioners should think about AI explainability as fundamental to the design and development of the entire product experience. At Google, we use our AI Principles to guide responsible technology development. In accordance with AI Principle #4: “Be accountable to people,” we encourage AI practitioners to think about all the moments and ways they can help people understand how AI operates and makes decisions.

    How are you and your collaborators working to improve explanations of AI?

    We develop resources that help AI practitioners learn creative ways to incorporate AI explainability in product design. For example, in the PAIR Guidebook we launched a series of ethical case studies to help AI practitioners think through tricky issues and hone their skills for explaining AI. We also do fundamental research like this paper to learn more about how people perceive AI as a decision-maker, and what values they would like AI-powered products to embody.

    We’ve learned that many AI practitioners want concrete examples of good explanations of AI that they can build on, so we’re currently developing a story-driven visual design toolkit for explanations of a fictional AI app. The toolkit will be publicly available, so teams in startups and tech companies everywhere can prioritize explainability in their work.

    An illustration of a sailboat navigating the coast of Maine

    The visual design toolkit provides story-driven examples of good explanations of AI.

    I want to learn more about AI explainability. Where should I start?

    This February, we released an Applied Digital Skills lesson, “Discover AI in Daily Life.” It’s a great place to start for anyone who wants to learn more about how we interact with AI every day.

    We also hope to speak about AI explainability at the upcoming South by Southwest Conference. Our proposed session would dive deeper into these topics, including our visual design toolkit for product designers. If you’re interested in learning more about AI explainability and our work, you can vote for our proposal through the SXSW PanelPicker® here.

  • More connected TV ad buying options in Display & Video 360 Thu, 18 Aug 2022 10:00:00 +0000

    As viewers continue to shift from traditional TV to connected TV (CTV), marketers are looking for effective ways to connect with streamers and measure the reach of their campaign across a variety of CTV apps. So we’re introducing new CTV solutions in Display & Video 360 to give you the option to pick the inventory and measurement solutions that work best for you.

    First, we’re adding audience guarantees based on Nielsen Digital Ad Ratings (DAR) to Display & Video 360 and expanding advanced Programmatic Guaranteed features to more exchanges. Programmatic Guaranteed lets you access top CTV ad placements, combining the best of the direct deal buying experience with the automation and personalization of programmatic. Leading brands like Uber already use this buying technique to secure coveted CTV inventory around high-visibility events while still enjoying the efficiency of programmatic advertising.

    Second, we’re making it simpler to buy YouTube CTV and other CTV apps in a consolidated workflow. This will give you a chance to improve your media performance by managing your campaign goals seamlessly across any CTV inventory.

    Audience guarantees backed by Nielsen

    CTV and video buyers often use Nielsen Digital Ad Ratings as the system of record to understand how many unique viewers they reached within their core audiences and prove campaign impact across digital media platforms. That’s why we’re launching Nielsen audience guarantees across streaming TV and video in Display & Video 360. This will make it easier to plan, buy and measure an entire connected TV upfront in Display & Video 360 in a way that’s comparable to linear TV. Being able to reach your key audiences has been central to effective traditional TV advertising — the same goes for CTV.

    When setting up your guaranteed deal, you can now choose a specific age and gender demographic, like adults ages 18 to 49, and pay only for the ad impressions that reach your target audience as measured by Nielsen DAR. This feature works for all types of video campaigns — including for connected TV ads — and comes at no additional cost for advertisers.
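    The billing mechanics described above can be sketched in a few lines. This is a hypothetical illustration, not Display & Video 360’s actual billing logic; the function name and inputs are assumptions:

    ```python
    # Hypothetical sketch of audience-guaranteed billing: the advertiser
    # pays only for impressions the measurement provider attributes to
    # the chosen demographic (e.g. adults 18-49).

    def billable_impressions(served, in_demo_rate):
        """Impressions billed under an audience guarantee.

        served: total ad impressions delivered
        in_demo_rate: fraction of impressions the measurement panel
            attributes to the target demographic
        """
        return round(served * in_demo_rate)

    # 1,000,000 served impressions with 62% measured in-demo:
    # only 620,000 count toward the guaranteed buy.
    ```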

    Nielsen-based audience guarantees enable Display & Video 360 users to buy inventory programmatically and pay only for impressions that reached their target audience as reported in Digital Ad Ratings.

    Kim Gilberti, Sr. VP, Product Management

    For now, audience guarantees are available for Programmatic Guaranteed ads running with a set of publishers on Google Ad Manager in the U.S. We look forward to onboarding more publishers and exchanges.

    Advanced Programmatic Guaranteed features available for more exchanges

    Speaking of expansion, we’re making Google audiences for Programmatic Guaranteed available across a variety of exchanges, including Google Ad Manager, Xandr and Magnite, with more to come. We've already expanded capabilities for you to reach Google audiences on CTV campaigns when bidding on open auction inventory. Google audiences can help drive a higher return on investment by reaching the groups of consumers who are most likely to respond to your message based on Google’s understanding of intent. Now, you can also use Google affinity, in-market and demo segments while buying Programmatic Guaranteed deals across a variety of participating publishers, giving you even more flexibility in your audience strategies for CTV.

    For these exchanges, we’re also improving how ad frequency management works for Programmatic Guaranteed deals, helping to enhance the viewing experience for your audiences. Once your campaign frequency goal is reached for certain users, whether via open auction, Programmatic Guaranteed, or a combination of the two, Display & Video 360 now stops showing ads to these users while still prioritizing and delivering the agreed number of impressions from your guaranteed deals.
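    The frequency logic described above can be sketched as a simple decision rule. This is an illustrative sketch with hypothetical names, not Display & Video 360’s actual serving code:

    ```python
    # Illustrative sketch of cross-deal frequency management
    # (hypothetical names, not Display & Video 360's serving code).

    def choose_ad(user_impressions, frequency_cap, guaranteed_remaining):
        """Decide what, if anything, to serve to a user.

        user_impressions counts ads shown to this user across both open
        auction and Programmatic Guaranteed, so reaching the campaign
        frequency goal stops all serving; guaranteed deals are prioritized
        so their contracted volume is still delivered to uncapped users.
        """
        if user_impressions >= frequency_cap:
            return None              # frequency goal reached: show nothing
        if guaranteed_remaining > 0:
            return "guaranteed"      # prioritize the guaranteed deal
        return "open_auction"
    ```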

    We're doubling down on programmatic reservation, particularly in the growing CTV landscape. So managing frequency for Programmatic Guaranteed deals with more exchanges is critical to help us further reduce waste associated with ad overexposure.

    Charles Cebuhar, Sr. Director, Digital Activation, OMG Center of Excellence

    Consolidated CTV workflow across YouTube and other CTV apps

    For many marketers, simplifying campaign execution for a variety of CTV apps is fundamental to effectively reaching streamers. And Display & Video 360’s capacity to plan, manage ad frequency and measure performance across YouTube and other CTV inventory sources saves them time and money. To help CTV buyers deliver more coordinated ad campaigns, YouTube ads can now be purchased within Display & Video 360’s insertion order dedicated to connected TV ad buying. This simplified workflow features parameters designed specifically for CTV campaigns to help minimize technical blockers that typically limit reach on CTV devices. Because it puts YouTube side-by-side with other top CTV inventory, it also makes it easier to optimize for common goals or control ad frequency across your entire CTV media mix.

    Test this integrated workflow and advanced Programmatic Guaranteed capabilities today and combine them with new CTV frequency management solutions in Display & Video 360 to get the most efficient reach out of your CTV deals.

  • Google Workspace Individual is now available in Europe Thu, 18 Aug 2022 07:00:00 +0000

    We launched Google Workspace Individual over a year ago to help solo business owners reach new customers and bring their big ideas to life. Since then, customers in the U.S., Canada, Mexico, Brazil, Japan and Australia have all used it to grow, run and protect their businesses.

    They’ve used Google Meet for consultations, Calendar for scheduling and customized layouts in Gmail to easily communicate in ways that reflect their brands. We’ve loved seeing how businesses of all stripes have brought their passions to the world — from caterers, creative consultants and real estate agents to pet sitters, hair stylists and life coaches.

    Now, we’re launching Google Workspace Individual in Europe. This means that customers in France, Italy, Spain, Germany, the U.K., and Switzerland can benefit from new and upcoming capabilities designed to help individual business owners stay focused on what they love, and spend less time on tasks like scheduling appointments and sending emails.

    We’ve recently added powerful new capabilities to Workspace Individual, including:

    • The ability to live stream from Google Meet to YouTube so customers can reach a bigger audience
    • Professional layouts and multi-send for email newsletters, campaigns and announcements
    • Better appointment booking with customizable reminders, durations and exceptions
    • Improvements to secure video conferencing over Google Meet, with immersive backgrounds, improved sound and lighting, and the ability to bring meetings directly into the flow of work by integrating Meet with Docs, Sheets, and Slides.

    And soon, customers will have the ability to add e-signatures directly to Google Docs. They’ll be able to quickly execute agreements from the familiar interface of Docs, without having to switch tabs or apps. See the full list of new and coming-soon features.

    With this expansion of Google Workspace Individual, solo business owners in Europe can run, grow and protect their businesses with apps they’re already familiar with. This means more time helping customers and less time scheduling, emailing and updating calendars. In the months ahead, we’ll continue to bring Workspace Individual to an expanding list of countries.

    Sign up today with a 14-day trial

    Sign up for Google Workspace Individual today with a 14-day trial, or learn more about Google Workspace Individual on our website. If you’re not a business owner but still want premium capabilities for personal use, Google Workspace premium is also now available with Google One 2TB+ plans, which include expanded cloud storage, advanced Photos editing features and more.

  • 5 apps making their mark in Asia Pacific and beyond Thu, 18 Aug 2022 01:00:00 +0000

    Google Play turned 10 this year, and we’ve been keeping the celebrations going with local developer communities around the world. It’s an extra special occasion in Asia Pacific, which is home to one of the largest app developer populations (nearly a third of the 26.9 million app developers worldwide) and one of the most engaged audiences. In fact, people in Asia Pacific download and use mobile apps more than any other region.

    Developers in Asia Pacific are reaching global audiences, with hundreds of millions of downloads outside the region. Some of these apps have become global names and inspired new trends on Play, like multiplayer gaming (Mobile Legends: Bang Bang), super apps (Grab), rapid delivery e-commerce (Coupang) and fintech solutions for the unbanked (Paytm).

    Let’s take a closer look at some other emerging themes on Play — like mental health, news and music — where developers in Asia Pacific are making their mark globally.


    Forest

    Developer: Seekrtech, Taiwan

    Listed on Play: August 2014

    “The main goal of Forest is to encourage users to put down their phones and focus on the more important things around them,” says Shaokan Pi, CEO of Forest. Here’s how it works — you set a focus time period, whether you’re working at the office or at dinner with friends. Once you put down your phone, a virtual tree starts growing. If you stay focused (and don’t look at your phone), the sapling grows into a big tree. And you can earn virtual coins to grow more trees, and eventually a whole forest. There’s a real-world benefit, too — thanks to a partnership between Forest and Trees for the Future, you can spend your coins to plant real trees on Earth.
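    The session mechanic described above can be sketched as a tiny function. This is purely illustrative; the names and coin rate are assumptions, not Forest’s actual code:

    ```python
    # Purely illustrative sketch of a Forest-style focus session.

    def finish_session(minutes_planned, phone_picked_up, coins_per_minute=1):
        """Resolve a focus session: stay off the phone and the tree grows
        (earning coins); pick the phone up and the sapling withers."""
        if phone_picked_up:
            return {"tree": "withered", "coins": 0}
        return {"tree": "grown", "coins": minutes_planned * coins_per_minute}
    ```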

    A group of seven people standing outside and holding a banner that says “Forest.”

    The Forest team planting a tree in Kenya


    SmartNews

    Developer: SmartNews, Japan

    Listed on Play: March 2013

    SmartNews, which is also celebrating its 10th anniversary this year, uses artificial intelligence to collect and deliver a curated view of news from all over the world. But it’s not just an echo chamber — its News From All Sides feature shows people articles across a wide spectrum of political perspectives. SmartNews has also developed timely products like a COVID-19 dashboard and trackers for wildfires and hurricanes.


    Evolve

    Developer: Evolve, India

    Listed on Play: July 2020

    Evolve, a health-tech startup supporting the wellbeing of the LGBTQ+ community, landed on Google Play’s Best of 2021 list in India. The app offers educational content for members of the LGBTQ+ community, covering topics like embracing your sexuality and coming out to loved ones. “There is a need for more customized solutions for this community,” says Anshul Kamath, co-founder of Evolve. “We hope to provide a virtual safe space where members can work on themselves and specific challenges that impact their daily mental health.”

    Four people smiling at the camera and holding a trophy

    The Evolve team with their “Best of Play” trophy in 2021

    Magic Tiles 3

    Developer: Amanotes, Vietnam

    Listed on Play: February 2017

    This musical game app quickly found fans in the U.S., Japan, Brazil and Russia. Magic Tiles 3 is designed to let anyone — even those without a musical background — play instruments like the piano, guitar and drums on their smartphone. You can choose from over 5,000 songs across genres like pop, rap, jazz and electronic dance music, and compete in an interactive game with others around the world.

    Mom Sitter

    Developer: Mom Sitter, Korea

    Listed on Play: September 2021

    Mom Sitter, a platform connecting parents with babysitters, topped the Play Store’s childcare category in Korea last year. But it didn’t actually start as a mobile app. It was founded as a website to help parents find babysitters while they were at work or when daycare centers were too full. After attending the ChangGoo program, Google’s training program for developers and startups in Korea, the Mom Sitter team learned they could reach more people if they went mobile. Today, caretakers all over the world use their services. “Childcare issues concern not only working women but everyone who raises children, and it’s important that they can find support,” says Jeeyea Chung, founder of Mom Sitter.

  • Watch With Me on Google TV: Kerry Washington’s watchlist Wed, 17 Aug 2022 17:00:00 +0000

    Movies and TV can make us laugh, cry and even shape who we are. Our watchlists can be surprisingly revealing. We’re teaming up with entertainers, artists and cultural icons on our Watch With Me series on Google TV to share their top picks and give you a behind-the-scenes look at the TV and movies that inspired them.

    Actress, producer and director Kerry Washington believes movies and TV have the power to influence us as individuals, and ultimately change the world. “When you step into a story and see yourself immersed in a part of the world that you’ve never been exposed to,” Kerry says, “that’s a magical experience.”

    Kerry’s love for movies started when she was a young “latchkey kid” in the Bronx. Movies and TV helped connect her with others and feel a sense of belonging. “When you can connect to a story, you can connect more deeply to your humanity,” Kerry says. “That’s why we watch, to be human and to connect to ourselves and each other.”

    Google TV showing the Watch With Me page with Kerry Washington’s watchlist.

    We sat down with Kerry to dig in more on her favorite movie and TV picks.

    What do you think your watchlist says about you?

    Kerry Washington: A lot of the films I love are about compassion, belonging and finding your way.

    Are there specific moments where you used movies or TV to escape?

    Washington: I am an only child and spent a lot of time alone. The people in shows? They were my friends.

    What inspired you to create your YouTube series Street You Grew Up On?

    Washington: I was so excited when we started our series, Street You Grew Up On. It was really fun because the whole point is that we are each the center of our own story. It’s really fun to talk to people we admire about the “once upon a time” in their life.

    What new frontiers and environments interest you in movies and TV?

    Washington: There’s all this attention on sci-fi and space travel, but [I think] there’s more real estate in the ocean that’s unexplored than in space. I think there’s magic to discover down below.

    Your watchlist features many movies with mermaids. Do you believe in mermaids?

    Washington: As a child, I really did think that maybe I was part mermaid. To this day, I keep trying to convince my kids that my DNA breakdown includes a percentage of mermaid.

    Can movies or TV shows make any change in the world?

    Washington: One of the things that I love about being a storyteller is that you really do see that hearts and minds are transformed by the best narratives.

    Is it a crime to text during a movie?

    Washington: I try to silence notifications so that when I’m watching, I can have as sacred an experience as possible.

    Do you watch credits all the way through?

    Washington: In my house, we watch the credits all the way through out of respect for the people I work with. The credits that matter the most are not just ones at the beginning of the show. It’s the folks at the end of the show that also make it happen.

    Check out Kerry’s watchlist to see the incredible movies and shows that inspired this aspiring mermaid turned storyteller on Google TV, rolling out over the next few days. Share your favorites as well using #WatchWithMe.

  • A climate and clean energy renaissance in the U.S. Wed, 17 Aug 2022 17:00:00 +0000

    The climate and energy provisions in the Inflation Reduction Act of 2022 represent the most comprehensive investments to combat climate change in U.S. history. These investments offer the opportunity to bring about a renaissance of American-made clean energy and renewed energy security, putting the country on a path to historic emissions reductions by the end of this decade.

    At Google, we’ve set a goal to achieve net zero emissions across all of our operations and value chain by 2030. Our net zero goal also includes a moonshot to operate on 24/7 carbon-free energy for all of our data centers and campuses. The climate and energy provisions in the Inflation Reduction Act of 2022 will provide the glide path to the clean electricity resources needed to decarbonize U.S. grids and reach these goals. We’re founding members of the 24/7 Carbon-Free Energy Compact, a coalition of over 70 companies united in this pursuit, and I’m confident that with the tailwinds on climate and energy provided by these policy measures, that number will grow.

    We’ve also integrated sustainability into our core products, like helping drivers and air passengers find fuel-efficient routes in Google Maps and Google Flights or giving homeowners the tools to efficiently heat and cool their house with a Nest Thermostat. It’s our goal to make the sustainable choice the easier choice. The clean energy and climate provisions in this bill will help amplify those small daily choices by making it easier for citizens to adopt clean electric vehicles and upgrade their homes to be more energy efficient.

    Climate change is the most urgent challenge of our time. This historic climate legislation will help the country tackle that challenge, build energy resilience and power the industries of tomorrow.

  • Community in times of need: DevFest for Ukraine Wed, 17 Aug 2022 12:30:00 +0000

    Each year, Google Developer Groups (GDGs) come together for DevFest conferences around the world – not only to exchange knowledge and share experiences, but also to get inspired, celebrate the community and simply be together. It’s a cheerful gathering, focused both on technology and the people behind it.

    GDGs in Ukraine organized the first DevFest in 2012. After 10 years of building a thriving community, 2022 turned out to be different for thousands of Ukrainian developers. Ever since the anti-aircraft sirens woke them up for the first time on February 24, many in the tech industry have been working non-stop for the sake of their country – helping refugees, providing medical assistance to those in need, and trying to work from bomb shelters. Luckily, they’re not alone.

    Help from all sides

    The developer community in Ukraine and abroad decided to use the DevFest conference to raise awareness and funds for those in need. "This time, because of the war in my country, DevFest Ukraine is happening for Ukraine," says Vitaliy Zasadnyy, co-founder of GDG Lviv. "It's a brilliant way to celebrate the future of technology, learn new things, connect with other tech experts and raise funds for a good cause."

    Three people sitting at a table, speaking at a conference.

    Fireside chat with Android team members in the London studio.

    On July 14-15, DevFest for Ukraine gathered more than 20 industry-leading speakers over two days, featuring live streams from London and Lviv. From tech sessions and inspirational keynotes to networking and overviews of the latest developer tools, the event brought together people who shape the future of Android, Web and AI technologies.

    Funds were raised for those in need by participants donating a sum of their choice to access the live stream and recordings after the event. Topics ranged from API design based on AndroidX libraries, to applied ML for Healthcare, to next-generation apps powered by machine learning with TensorFlow.js, and more. Check out the highlights video.

    A woman at a laptop, sitting in a studio next to a large microphone.

    Preparing the AI Stream livestream from the studio in Lviv, Ukraine.

    Support the cause

    All the funds raised during DevFest for Ukraine go to three NGOs supporting the country at this turbulent time by providing humanitarian aid and direct assistance to affected families. The GDG Ukraine team carefully selected them to ensure efficient use of funds and transparent reporting.

    And here’s the best part: DevFest for Ukraine has raised over $130,000 for the cause so far, and counting! You can still access the recorded sessions to learn about the future of tech.

  • Meet the Korean startup founders building apps for pets and K-pop fans Wed, 17 Aug 2022 05:30:00 +0000

    At our annual Google for Korea event today, we showcased some of the most inspiring Korean creators and entrepreneurs. I also had the chance to sit down with the founders of two standout startups: AI FOR PET and Blip. Since their start, both have won over not just the people of Korea, but people all over the world.

    Side-by-side images of two Korean founders, one woman and one man

    Huh Eun-A, founder of AI FOR PET, and Kim Hong-ki, founder of Blip

    AI FOR PET, founded by Huh Eun-A, has developed a smartphone app called TTcare that uses artificial intelligence (AI) to assess pets’ health. When someone takes a picture of their pet’s eyes or skin, the app assesses the image and alerts the owner if their pet is showing any concerning symptoms of eye or skin related disease. AI FOR PET was a part of this year’s Google for Startups Cloud Academy and the ChangGoo program.

    Blip, founded by Kim Hong-ki, is a content platform for Korean pop (K-pop) fans to keep track of their favorite idols, including their latest updates and tour schedules. Blip, which participated in the 2021 ChangGoo program, has amassed over a million downloads on Google Play to date; 60% of those downloads are from users outside of Korea.

    So, what inspired the beginnings of your startup?

    Huh Eun-A: I’m a pet owner, and know very well that like all living things, pets will fall sick at some point in their lives. I want to help fellow pet owners quickly diagnose any illnesses their pets may have, simply by using the TTcare mobile app. My hope is for animal lovers around the world to never be in the dark about their pet’s health, and for no pet to ever be without healthcare.

    Kim Hong-ki: K-pop artists bring so much positive influence to fans all over the globe. I built Blip to help fans feel closer to their favorite K-Pop artists, and to let them experience the world of K-pop in a new way. My aspiration is for Blip to one day become a verb, and for people to ask the question “Who do you Blip?” instead of “Who do you love?”

    What challenges did you face while growing your startup, and how did the ChangGoo program help you?

    Kim: Blip’s key challenge was growing our user base of K-pop fans, and we wanted to understand how we could do that in a global and sustainable way. Google’s ChangGoo program seemed like a good place to start because it’s a well-known, highly sought-after accelerator program among Korean startups. And so in 2020, we applied to join the program with a beta version of our app but failed to get selected. That motivated us to work hard to improve the product. The next year, we tried again and were accepted. The entire Blip team was thrilled!

    To me, ChangGoo feels like a program created by people who truly want to help startups. The mentors deeply cared about Blip’s team and needs, much like supportive K-pop fans. They provided insights and advice that helped us whenever we weren’t sure of our next steps.

    What made you join the Google for Startups Cloud Academy, and how did the program help you?

    Huh: Building out our AI was core to our product. We developed our model by first exposing it to more than a million photos of eyes so it could differentiate between canine and non-canine eyes, and then exposing it to canine eyes with and without diseases. It only took us a year to develop the model by using TensorFlow, Google’s open-source AI tool that’s accessible to all developers.

    But as important as the technology was, we wanted to make sure the app experience itself was high quality, too. So we reached out to the Google for Startups Cloud Academy to help us improve the app performance - and we even got support from the very team who initially developed TensorFlow! Now, we're able to detect canine ocular disease with just a single photo at 90 percent accuracy.
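    The two-stage screening pipeline Huh describes (first filter out photos that aren’t canine eyes, then score the eye for disease) can be sketched with stubs standing in for the trained TensorFlow classifiers. Every name and the threshold below are hypothetical, not AI FOR PET’s actual code:

    ```python
    # Sketch of the two-stage screening pipeline described above. The
    # lambda stubs stand in for trained TensorFlow image models.

    def screen_photo(photo, is_canine_eye, disease_probability, threshold=0.5):
        """Stage 1: reject photos that aren't a usable canine eye.
        Stage 2: score the eye for disease and flag high scores."""
        if not is_canine_eye(photo):
            return "retake"
        if disease_probability(photo) >= threshold:
            return "see a vet"
        return "looks healthy"

    # Stub "models" for illustration:
    result = screen_photo(
        {"kind": "dog_eye", "score": 0.9},
        is_canine_eye=lambda ph: ph["kind"] == "dog_eye",
        disease_probability=lambda ph: ph["score"],
    )  # -> "see a vet"
    ```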

    The growth you’ve seen so far is really amazing! Can you share any upcoming plans for your startup?

    Huh: We’re training our AI model with cat data, so that cat owners - in addition to dog owners - can use our app. We're also exploring adding capabilities to detect skin and joint conditions in pets. We’ve recently expanded to the US, and hope that with our technology and reach, we can help demonstrate that Korean startups can build great products for the whole world.

    Kim: I’ll be focusing on my employees’ wellbeing. My aspiration is for Blip employees to love their job as much as our fans love their favorite artists on Blip. After all, the slogan of Blip is “Love what you love more”. I want Blip to be a workplace where every employee can do what they love and really enjoy themselves.

  • Making robots more helpful with language Tue, 16 Aug 2022 14:00:00 +0000

    Even the simplest human tasks are unbelievably complex. The way we perceive and interact with the world requires a lifetime of accumulated experience and context. For example, if a person tells you, “I am running out of time,” you don’t immediately worry they are jogging on a street where the space-time continuum ceases to exist. You understand that they’re probably coming up against a deadline. And if they hurriedly walk toward a closed door, you don’t brace for a collision, because you trust this person can open the door, whether by turning a knob or pulling a handle.

    A robot doesn’t innately have that understanding. And that’s the inherent challenge of programming helpful robots that can interact with humans. We know it as “Moravec's paradox” — the idea that in robotics, it’s the easiest things that are the most difficult to program a robot to do. This is because we’ve had all of human evolution to master our basic motor skills, but relatively speaking, humans have only just learned algebra.

    In other words, there’s a genius to human beings — from understanding idioms to manipulating our physical environments — where it seems like we just “get it.” The same can’t be said for robots.

    Today, robots by and large exist in industrial environments, and are painstakingly coded for narrow tasks. This makes it impossible for them to adapt to the unpredictability of the real world. That’s why Google Research and Everyday Robots are working together to combine the best of language models with robot learning.

    Called PaLM-SayCan, this joint research uses PaLM — or Pathways Language Model — in a robot learning model running on an Everyday Robots helper robot. This effort is the first implementation that uses a large-scale language model to plan for a real robot. It not only makes it possible for people to communicate with helper robots via text or speech, but also improves the robot’s overall performance and ability to execute more complex and abstract tasks by tapping into the world knowledge encoded in the language model.

    Using language to improve robots

    PaLM-SayCan enables the robot to understand the way we communicate, facilitating more natural interaction. Language is a reflection of the human mind’s ability to assemble tasks, put them in context and even reason through problems. Language models also contain enormous amounts of information about the world, and it turns out that can be pretty helpful to the robot. PaLM can help the robotic system process more complex, open-ended prompts and respond to them in ways that are reasonable and sensible.

    PaLM-SayCan shows that a robot’s performance can be improved simply by enhancing the underlying language model. When the system was integrated with PaLM, we saw a 14% improvement over a less powerful baseline model in the planning success rate (the ability to map out a viable approach to a task) and a 13% improvement in the execution success rate (the ability to successfully carry out a task); the PaLM-based system made half as many planning mistakes as the baseline method. The biggest improvement, at 26%, was in planning long-horizon tasks, those involving eight or more steps. Here’s an example: “I left out a soda, an apple and water. Can you throw them away and then bring me a sponge to wipe the table?” Pretty demanding, if you ask me.

    Making sense of the world through language

    With PaLM, we’re seeing new capabilities emerge in the language domain such as reasoning via chain of thought prompting. This allows us to see and improve how the model interprets the task. For example, if you show the model a handful of examples with the thought process behind how to respond to a query, it learns to reason through those prompts. This is similar to how we learn by showing our work on our algebra homework.

    PaLM-SayCan uses chain of thought prompting, which interprets the instruction in order to score the likelihood of completing the task

    So if you ask PaLM-SayCan, “Bring me a snack and something to wash it down with,” it uses chain of thought prompting to recognize that a bag of chips may be a good snack, and that “wash it down” means bring a drink. Then PaLM-SayCan can respond with a series of steps to accomplish this. While we’re early in our research, this is promising for a future where robots can handle complex requests.
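
    The mechanics of chain-of-thought prompting are simple: worked examples that spell out the reasoning are prepended to the new instruction, so the model continues in the same “think out loud” style. The example task, reasoning text and `build_prompt` helper below are invented for illustration; PaLM-SayCan’s actual prompts are not published in this post:

    ```python
    # Few-shot chain-of-thought prompt assembly (illustrative, not PaLM-SayCan's real format).
    COT_EXAMPLES = [
        (
            "Bring me a snack and something to wash it down with.",
            "A bag of chips is a snack. 'Wash it down' means a drink, so also bring a soda.",
            ["find chips", "pick up chips", "bring chips",
             "find soda", "pick up soda", "bring soda"],
        ),
    ]

    def build_prompt(instruction: str) -> str:
        """Assemble a few-shot chain-of-thought prompt for a new instruction."""
        parts = []
        for task, reasoning, steps in COT_EXAMPLES:
            parts.append(f"Human: {task}")
            parts.append(f"Reasoning: {reasoning}")
            parts.append("Plan: " + ", ".join(steps))
        parts.append(f"Human: {instruction}")
        parts.append("Reasoning:")  # the model continues from here
        return "\n".join(parts)

    print(build_prompt("I spilled my drink, can you help?"))
    ```

    Because the prompt ends mid-pattern at “Reasoning:”, the model’s most likely continuation is an explanation followed by a plan, mimicking the worked example.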

    Grounding language through experience

    Complexity exists in both language and the environments around us. That’s why grounding artificial intelligence in the real world is a critical part of what we do in Google Research. A language model may suggest something that appears reasonable and helpful, but may not be safe or realistic in a given setting. Robots, on the other hand, have been trained to know what is possible given the environment. By fusing language and robotic knowledge, we’re able to improve the overall performance of a robotic system.

    Here’s how this works in PaLM-SayCan: PaLM suggests possible approaches to the task based on language understanding, and the robot models do the same based on the feasible skill set. The combined system then cross-references the two to help identify more helpful and achievable approaches for the robot.
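
    That cross-referencing step can be sketched as scoring each candidate skill twice and picking the skill that wins on the product of the two scores. The skill names and probabilities below are made up for illustration, and a simple product is just one way to combine the two signals:

    ```python
    # Toy SayCan-style skill selection: combine a language-model relevance score
    # with a robot affordance (feasibility) score. All values are illustrative.

    llm_score = {          # pretend P(skill is useful | "I spilled my drink, can you help?")
        "bring a sponge": 0.60,
        "bring a vacuum": 0.30,
        "bring an apple": 0.01,
    }
    affordance = {         # pretend P(robot can execute skill | current scene)
        "bring a sponge": 0.90,
        "bring a vacuum": 0.05,  # no vacuum the robot can grasp or operate
        "bring an apple": 0.80,
    }

    def best_skill(llm, can):
        # Cross-reference the two models: a skill must be both useful
        # (language score) and feasible (affordance score) to win.
        return max(llm, key=lambda skill: llm[skill] * can[skill])

    print(best_skill(llm_score, affordance))
    ```

    Here the vacuum loses despite its decent language score, because its affordance score collapses the product, which matches the sponge-over-vacuum example that follows.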

    By combining language and robotic affordances, PaLM-SayCan breaks down the requested task to perform it successfully

    For example, if you ask the language model, “I spilled my drink, can you help?” it may suggest you try using a vacuum. This seems like a perfectly reasonable way to clean up a mess, but generally, it’s probably not a good idea to use a vacuum on a liquid spill. And if the robot can’t pick up a vacuum or operate it, it’s not a particularly helpful way to approach the task. Together, the two may instead be able to realize “bring a sponge” is both possible and more helpful.

    Experimenting responsibly

    We take a responsible approach to this research and follow Google’s AI Principles in the development of our robots. Safety is our number-one priority and especially important for a learning robot: It may act clumsily while exploring, but it should always be safe. We follow all the tried-and-true principles of robot safety, including risk assessments, physical controls, safety protocols and emergency stops. We also always implement multiple levels of safety such as force limitations and algorithmic protections to mitigate risky scenarios. PaLM-SayCan is constrained to commands that are safe for a robot to perform and was also developed to be highly interpretable, so we can clearly examine and learn from every decision the system makes.

    Making sense of our worlds

    Whether it’s moving about busy offices — or understanding common sayings — we still have many mechanical and intelligence challenges to solve in robotics. So, for now, these robots are just getting better at grabbing snacks for Googlers in our micro-kitchens.

    But as we continue to uncover ways for robots to interact with our ever-changing world, we’ve found that language and robotics show enormous potential for the helpful, human-centered robots of tomorrow.

  • Get to know Sophie, the 2022 Doodle for Google contest winner Tue, 16 Aug 2022 14:00:00 +0000

    For this year’s Doodle for Google contest, we asked students across the country to illustrate a Doodle around the prompt, “I care for myself by…” In July, we announced the national finalists, and the thoughtfulness, heart and artistry of one artist stood out in particular. Today, we’re announcing Sophie Araque-Liu of Florida is our 2022 contest winner!

    Sophie’s Doodle, titled “Not Alone,” speaks to the importance of leaning on your support system and asking for help in tough times. I chatted with Sophie to learn more about her and the meaning behind her Doodle, which is on the Google.com homepage today.

    How did you start making art?

    I started making art by doodling in my notebooks in class. Soon it shifted from something I did to pass the time when I was bored to something I looked forward to and loved to do.

    Why did you enter the Doodle for Google contest?

    I entered the Doodle for Google contest this year, because I really wanted to give back to my parents. I feel like it’s very hard for me to show them just how much I appreciate them, so I’m grateful for the chance to be able to show them just how much I love them and give back to them in any way I can.

    I want other people to know that you are also valuable, and you are worth something too, just like anyone else.

    Can you share why you chose to focus on the theme of asking for help?

    I chose to focus on the theme of asking for help based on my own experiences. A couple years ago, I was struggling a lot mentally and I was honestly pretty embarrassed and scared to tell my friends and family. But when I did open up to them, I was met with so much love and support. So I really wanted to encourage others to not be afraid to look for help if they need it!

    Why is self-care important to you?

    Self care is important to me because I believe that mental health is just as important as physical health. For me and for so many other people, it can be easy to sacrifice too much of yourself and to push yourself too hard. I want other people to know that you are also valuable, and you are worth something too, just like anyone else.

    How does it feel to be the winner of this year’s Doodle for Google contest?

    It feels incredible! I truly did not think that I would win, so I am so surprised and happy! I’m really, really proud of myself for making it so far, and I know the competition was not easy at all. I think I’m honestly in shock and I still haven’t processed it yet. It’s just so amazing and every time I think about it I can’t help but smile hard!

    Congratulations, Sophie! Be sure to bookmark the Doodle for Google website for updates around the 2023 contest, set to open submissions again this winter.

  • Bringing readers even more local news Tue, 16 Aug 2022 13:00:00 +0000

    Local news is local knowledge. It’s shared understanding. It’s a chronicle of the places we live and the culture that defines them. Local news is essential to people and their communities. But at the same time, we also recognize the job of gathering and monetizing news is increasingly challenging for local news publishers.

    Today, we’re hosting more than 100 American and Canadian local news leaders at our annual Community News Summit in Chicago. Journalists and business leaders are sharing their successes and challenges in running small, community-oriented news organizations. The program features hands-on workshops on specific Google products and tools, best practices on topics such as search and sustainability, and discussion about local news consumer behavior.

    Through our products, partnerships and programs, like the Google News Initiative, Google has long worked to help people cut through the noise and connect to the stories that matter most in their local communities. In June, we announced a redesigned, more customizable Google News experience for desktop to help people dive deeper into important stories and more easily find local news from around the world.

    The newly redesigned Google News on desktop, with local news now easier to find.

    We’ve also improved our systems so authoritative local news sources appear more often alongside national publications, when relevant, in our general news features such as Top Stories. This improvement ensures people will see authoritative local stories when they’re searching for news, helping both the brand and the content of news publishers reach more people.

    We also recently introduced a new way to help people identify stories that have been frequently cited by other news organizations, giving them a simple way to find the most helpful or relevant information for a news story. This label appears on Top Stories, and you can find it on anything from an investigative article, to an interview, an announcement, a press release or a local news story, as long as other publishers indicate its relevance by linking to it. The highly cited label is currently available in English in the U.S. with plans to expand globally over the coming weeks.

    An example of new information literacy tips on notices for rapidly evolving situations.

    We work closely with publishers and news industry associations to build a sustainable digital future for local news media. Having a digital news revenue strategy through subscribers and advertising is a key component for local news publishers to be sustainable. That’s why we're partnering with six different news associations in the U.S., each serving a unique constituency of publishers, to develop custom programs that support their members’ digital capabilities.

    In addition to publishers, we’re also working with local broadcasters. The National Association of Broadcasters’ PILOT innovation division recently launched a Google News Initiative-supported program designed to improve online audience engagement and monetization for local broadcasters. The program helps stations implement first-party data strategies and direct-to-consumer business models.

    We’ve also launched a $15 million digital and print ad campaign placed exclusively with U.S. local news media. The campaign directly supports publishers through the purchase of ad space in their papers and on their websites, and highlights our work with local publishers across the country. We’re encouraging readers everywhere to support their local news publishers, and are showcasing publishers who have made significant contributions to their communities through innovative reporting.

    An example of a local ad campaign that says 'we're supporting the local news our communities need.'

    Local news publishers are the heart of the communities they serve. They are one of our most trusted sources of information that impacts our daily lives. Their stories connect us to our neighbors, hold power to account, drive civic engagement and more. We hope you’ll join us and support local publishers in your area by subscribing, donating or advertising today. Together, we can help ensure a sustainable future for local news and all who depend on it.

  • Lucky number Android 13: The latest features and updates Mon, 15 Aug 2022 15:00:00 +0000

    Android 13 helps ensure your devices feel unique to you – on your terms. It comes jam-packed with new capabilities for your phone and tablet, like extending app color theming to even more apps, language settings that can be set on an app level, improved privacy controls and even the ability to copy text and media from one Android device and paste it to another with just a click.

    There are many reasons to love Android 13, but here are our top 13:

    Personalized to you

    1. Android 13 comes with an evolved look and style that builds on Material You. You can customize non-Google apps to match your phone’s wallpaper theme and colors, making your home screen more cohesive and unique to your style.

    2. For the many Android users who speak more than one language, we’ve added a top feature request. You can assign specific languages to individual apps so you can keep your phone’s system in one language, and each of your apps in a different language.

    3. Android 13 features an updated media player that tailors its look and feel based on the music or podcast you’re listening to. For example, when you’re listening to music, the media player spotlights album artwork and has a playback bar that dances as you progress through a song. It even works for media played through Chrome.

    4. Your wellbeing has been an important theme for Android – and getting enough sleep is key! Android 13 allows you to further customize Bedtime mode with wallpaper dimming and dark theme. These screen options help your eyes adjust to the dark when you're about to go to bed – and get back to sleep if you wake up and check your phone in the middle of the night.

    Keeping you protected and secure

    5. Gone are the days when you had to share your entire media library with your apps. In Android 13, you can select only the specific photos and videos each app needs to access.

    6. Prevent any unwanted access to your clipboard. If you copy sensitive data like your email address, phone number or login credentials on your device, Android will automatically clear your clipboard history after a period of time.

    7. Android 13 helps keep your notifications under control and makes sure you only get the alerts you ask for. The apps you download will now need your explicit permission to send notifications, rather than being allowed to send notifications by default.

    Helping your devices work better together

    8. Feel like you’re in the middle of the action with Spatial Audio. On supported headphones that enable head tracking, Spatial Audio shifts the source of the sound to adapt with how you turn your head, giving you a more immersive listening experience on your Android phone or tablet.

    9. When you’re on your laptop, you don’t want to break your workflow to respond to a chat from your phone. Soon, you'll be able to stream your messaging apps directly to your Chromebook so you can send and receive messages from your laptop.

    10. Android 13 adopts Bluetooth Low Energy (LE) Audio, a new Bluetooth audio standard that results in lower latency than classic audio. This allows you to hear audio that’s in better sync with the sound’s source, reducing delay. With Bluetooth Low Energy (LE) Audio, you can also enjoy enhanced audio quality and broadcast audio to multiple devices at the same time.

    11. You’ll soon be able to copy content — like a URL, picture, text or video — from your Android phone and paste it on your tablet. Or you can start on your tablet and paste to your phone.

    12. Multitasking on your tablet is even easier with Android 13. With the newly updated taskbar on tablets, you can see all your apps at a glance and easily drag and drop any app in your library into split-screen mode.

    13. Android tablets will register your palm and stylus pen as separate touches. So whether you’re writing or drawing on your tablet, you’ll experience fewer accidental stray marks that come from simply resting your hand on the screen.

    Android 13 is packed with these and many other features, like HDR video support on third-party camera apps, an updated media output switcher, braille displays for Talkback and more. And it goes beyond the phone to give you a connected set of experiences across your other devices like your tablets and laptops.

    Android 13 is rolling out to Pixel devices starting today. Later this year, Android 13 will also roll out to your favorite devices from Samsung Galaxy, Asus, HMD (Nokia phones), iQOO, Motorola, OnePlus, Oppo, Realme, Sharp, Sony, Tecno, vivo, Xiaomi and more.


  • Helping members of the military community find meaningful civilian careers Mon, 15 Aug 2022 13:00:00 +0000

    Every year, Google’s Veterans Network (VetNet) employee resource group hosts its VetNet Career Week to offer veterans, transitioning service members and their spouses or partners the tools, support and advice needed to help translate their experience and skills into civilian careers. This year’s event partnered with over 30 companies and welcomed more than 3,000 attendees for panel discussions, free skill-building sessions and 1-on-1 resume reviews with Google representatives. Also unique for this year, Google partnered with Welcome.US to extend Career Week to those seeking refuge in the U.S.

    Our team sat down with Chris House and Tony Mendez, who attended last year’s event as participants and are now Googlers, and Jenna Clark, a Googler and veteran who volunteered at last year’s event.

    There is a ton of opportunity out there, and veterans have the skills.

    Can you share a little about your military background?

    Tony: I enlisted in the U.S. Army in 2009 as an infantry soldier and was commissioned as one of the Army’s first cyber operations officers in 2014. I led an incident response team that investigated breaches in control systems networks for a few years, and eventually transitioned to conducting proactive security assessments.

    Chris: I was in the U.S. Navy for eight years, working on a submarine and on naval nuclear reactor design and operation.

    Jenna: I enlisted into the Air Force in 2002 and spent just shy of 10 years working as an all-source intelligence analyst. In the Air Force, I spent my first six years attached to an aircrew, working to keep them informed of threats in the area, and later I was transferred to an intelligence squadron.

    What drew you to participate in VetNet career week?

    Tony: I learned about VetNet Career Week through a friend who was considering leaving the military. I’ve always had a hobby interest in Android security and have loved Google products since the Nexus 5 phone, but never thought I was “ready” to apply. I signed up for the resume review to help me articulate how my experience was relevant to a company like Google.

    Jenna: When I left the military, I struggled to find an opportunity. It was after attending networking and resume workshops that I was able to get my foot in the door at a startup in Boston. Within six months, I was promoted. This is what draws me to volunteer at Career Week. Veterans have diverse skill sets that are easily transferable to corporate — we just need a chance.

    Lisa Gevelber, VP of Grow with Google, Google for Startups, and Americas Marketing, hosted a fireside chat during the virtual VetNet Career Week event last year.

    Fast forward one year, how does it feel to be a Googler?

    Chris: It feels great! It’s an incredible place to be, and I think the aspect that I’m most enthused about is how supportive, transparent and energizing the company culture has been. I’ve enjoyed the support VetNet has offered, whether it’s through events like Career Week to guide the post-military transition process, or simple social hours where we’ve all just bonded over shared experiences in the military and at Google.

    Tony: Admittedly, I didn’t match with the first team that interviewed me, but it was a blessing in disguise. My current team in Android security is a perfect fit for my skill set and managerial style. I couldn’t be happier!

    Why do you think events like this are so important for the military communities and their families?

    Jenna: I think it’s important because it shows support towards veterans in a very real and helpful way. There is a ton of opportunity out there, and veterans have the skills — it’s just that those skills need to be translated, and that requires commitment on both sides.

    Tony: It’s hard to leave an organization that so thoroughly affects all aspects of your life. VetNet Career Week helps really demonstrate caring and support for the military community that’s uncommon outside of the military.

    Chris: Probably the most important aspect, for me, was just seeing how many people had made similar transitions and how many well-regarded companies valued a veteran’s experience. I’m grateful for the time that the Googler I chatted with invested in my resume review and in supporting my transition from the military.

  • Finding community and customers through Growth Academy: Women Founders Thu, 11 Aug 2022 12:00:00 +0000

    With thousands of highly-valued tech companies, a global-first market approach, and a strong economy dominated by entrepreneurship, it’s clear why Israel’s nickname is 'The Startup Nation.'

    However, this thriving startup ecosystem isn’t equally supportive of all aspiring founders. According to the latest Israeli Tech Gender Distribution Report, spearheaded by Google for Startups and IVC Data and Insights, only 2% of startups with a woman founder raised above $50 million between 2018 and 2021. While the number of entirely women-led companies has doubled in the past decade, they still only comprise 6.3% of Israeli startups — and only 13.9% of startups had at least one woman co-founder in a mixed-gender founding team.

    I fall into the latter category. My cofounder Gal Benbeniste and I met during college, where we bonded over how outdated the investment world is. What started with trying to figure out a simple way to automate became FinityX, a deep-tech startup that helps investors implement AI tools as part of their investment process to save time and resources, and improve quality.

    While I have been humbled by FinityX’s rapid growth and recognition, as one of the very few women in the deep-tech space I’ve always wanted to be able to access the same capital, business networks, and mentorship readily available to my male cofounder.

    So I was thrilled when Google for Startups launched a Growth Academy program tailored specifically for the needs of early-stage women founders. Based on the successful Startup Growth Lab curriculum, the program includes leadership workshops with Israeli VCs such as Entree Capital, Ibex and Viola, leadership sessions with top industry lecturers, and one-on-one Google product mentorship. “Ever since Google for Startups opened Campus Tel Aviv in 2012, diversity and inclusion has been an essential focus to our work," said Marta Mozes, marketing manager of Google for Startups in Israel. "When we discovered this data about female founders in Israel, we knew we had to be part of the change."

    Meet the other Israeli entrepreneurs, representing industries from family vacation-planning to finance, who joined me at Google for Startups Growth Academy: Women Founders:

    • Miri Berger, Cofounder & CEO of 6Degrees
    • Kerri Kariti, Cofounder & CPO of Claritee
    • Vardit Legali, Cofounder & CEO of Clawdia
    • Ronny Schwartz Dgani, Cofounder & CMO of Expecting.ai
    • Inbal Glantser and Naama Yacobson, Cofounders of Homaze
    • Tamar Liberman, Tal Provizor Narkiss, and Lee Winfield, Cofounders of It’s July
    • Mika Kayt, Founder & CEO of Outgage
    • Danielle Shpigel and Yarden Kaufmann, Cofounders of Unika

    The Growth Academy: Women Founders cohort celebrating their accomplishments with the Google for Startups team and Israeli Minister of Innovation, Science and Technology, Orit Farkash HaCohen (center)

    Google for Startups Growth Academy: Women Founders connected me to knowledge, tools and fields I needed to open my mind and increase my skill set. Googlers like Marta helped us with actionable skills such as how to nail our one-line pitch, how to navigate the cooling market, and the importance of a customer journey (hint: it starts at the very first meeting, and never ends). With help from Google mentors, I established FinityX’s go-to-market strategy and secured our first paying customers, including large hedge funds and brokers.

    Most importantly, Growth Academy: Women Founders created a tight-knit support system of women entrepreneurs (and male allies) to help push each other forward. We can be real with one another and talk about the shiny expectations versus reality of being a founder. And now that four members of my cohort are moving their offices to Campus Tel Aviv, I’ll be able to visit any time!

    Last week, three months of hard work wrapped up with a graduation ceremony, hosted by Israel’s Minister of Innovation, Science and Technology, Orit Farkash HaCohen. In the spirit of the program, women were represented in every aspect of the event — from the caterers, to the speakers and the DJ.

    I think Minister HaCohen summarized my feelings about Growth Academy: Women Founders best when she said, "I am invited to many events, but this is the kind that warms the heart. This is a prime example of business leadership that engages in social change.”

  • Survey shows how people decide what to trust online Thu, 11 Aug 2022 04:01:00 +0000

    Alex Mahadevan is director of MediaWise at the Poynter Institute. He has taught digital media literacy to thousands of middle and high schoolers, and has trained hundreds of journalists from around the world in verification and digital investigative tools. We caught up with Alex to find out about a recent information literacy survey his organization conducted in partnership with YouGov, with support from Google. Learn more about how Google is working on information literacy and helping you spot misinformation online.

    Why was this survey conducted?

    Misinformation isn’t a new problem, but it’s becoming increasingly difficult to separate fact from fiction, especially on the internet. We wanted to learn more about how people across generational lines verify information and decide what to trust and share online. And we knew this research would help us expand on the educational resources MediaWise has to offer.

    What were the parameters for the survey?

    We surveyed more than 8,500 respondents of various ages in the United States, Brazil, the United Kingdom, Germany, Nigeria, India and Japan. We asked a wide range of questions aimed at assessing information literacy skills and verification habits. Those include queries about everything from the tools and techniques someone uses to investigate a post they see online, to the reasons why they may have shared misleading information in the past.

    What are some of the biggest takeaways?

    The survey found that 62% of respondents think they see false or misleading information on at least a weekly basis – that’s a staggering number. And people are aware that it’s a serious issue. Roughly 50% of all Gen X, Millennial and Gen Z respondents (these are people ages 18 to 57) said they’re concerned about their family being exposed to it.

    Sixty-two percent of respondents think they see false or misleading information daily or weekly. Asked “How often do you see what you think is false or misleading information online?”, responses broke down as 35.7% daily, 26.5% weekly, 12.7% monthly, 18.1% less than monthly and 7.0% never.

    What did the survey tell you about how people cross-check information they find online?

    Gen Zers are two times more likely than the Silent Generation (people 68 or older) to use a search engine to verify information, and also two times more likely than Baby Boomers to check social media comments to verify something they’ve seen online. They’re also more likely to use advanced search techniques, like reverse image search, or to engage in lateral reading – that’s when you open multiple tabs and perform multiple searches at once — an effective technique studied by the Stanford History Education Group.

    We also learned that, when deciding if something they’ve heard or read about is true, respondents across all generations agree that the most important thing is whether conclusions are supported by sources or facts. That was important for us to see: Facts matter.

    Image shows text reading: Gen Z, Millennials and Gen X feel slightly more confident in identifying false or misleading information than boomers and the Silent Generation. The image also shows the question respondents were asked in the survey: How confident are you that you can identify that an image, video or post online is false or misleading.


    Any final thoughts?

    Our findings underscore how important it is to be able to trust the information you find online, and how taking the time to check multiple sources to verify what you see or to use resources like Google Search can be helpful in making sense of a complicated digital landscape. That’s why we’re working together to educate people about information literacy. We have a long-running partnership with the Google News Initiative, which has provided support for projects like Find Facts Fast – our free microlearning course, which anyone can take via text message or WhatsApp to improve their digital media literacy skills – and the Spanish version, MediaWise en Español.

    Today we are also announcing a new partnership with Google, Poynter Institute for Media Studies, MediaWise and PBS NewsHour’s Student Reporting Labs to develop weekly information literacy lesson plans for teachers of middle and high school students across the U.S. The lesson plans will be available for free to teachers using PBS LearningMedia and for download on Poynter’s website. We’re excited to build on our partnership to give people the skills they need to recognize misinformation when they see it and help stop its spread.

  • New ways we're helping you find high-quality information Thu, 11 Aug 2022 04:01:00 +0000

    People turn to Google every day for information in the moments that matter most. Sometimes that’s to look for the best recipe for dinner, other times it’s to check the facts about a claim they heard about from a friend.

    No matter what you’re searching for, we aim to connect you with high-quality information, and help you understand and evaluate that information. We have deeply invested in both information quality and information literacy on Google Search and News, and today we have a few new developments about this important work.

    Our latest quality improvements to featured snippets

    We design our ranking systems to surface relevant information from the most reliable sources available – sources that demonstrate expertise, authoritativeness and trustworthiness. We train our systems to identify and prioritize these signals of reliability. And we’re constantly refining these systems — we make thousands of improvements every year to help people get high-quality information quickly.

    Today we’re announcing one such improvement: a significant innovation to improve the quality of featured snippets. A featured snippet is the descriptive box at the top of the results page that prominently highlights a piece of information from a result, along with its source, in response to your query. Featured snippets are helpful both for people searching on Google and for web publishers, since they drive traffic to sites.

    By using our latest AI model, Multitask Unified Model (MUM), our systems can now understand the notion of consensus, which is when multiple high-quality sources on the web all agree on the same fact. Our systems can check snippet callouts (the word or words called out above the featured snippet in a larger font) against other high-quality sources on the web, to see if there’s a general consensus for that callout, even if sources use different words or concepts to describe the same thing. We've found that this consensus-based technique has meaningfully improved the quality and helpfulness of featured snippet callouts.
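    Google hasn’t published the details of how MUM checks for consensus, but the general idea (comparing a callout against other sources after normalizing their wording, then requiring a quorum of agreement) can be sketched in a few lines. Everything below, including the function names, the similarity measure and the thresholds, is an illustrative assumption, not Google’s implementation:

```python
from difflib import SequenceMatcher

def normalize(answer):
    # Lowercase and drop punctuation so "8.3 minutes" and "8.3 Minutes." compare equal.
    return "".join(ch for ch in answer.lower() if ch.isalnum() or ch.isspace()).strip()

def similar(a, b, threshold=0.7):
    # Crude fuzzy match between two short answers; a stand-in for MUM's
    # far richer language understanding.
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

def has_consensus(callout, other_sources, quorum=0.6):
    # Keep the callout only if enough independent sources agree with it.
    if not other_sources:
        return False
    agreeing = sum(similar(callout, s) for s in other_sources)
    return agreeing / len(other_sources) >= quorum
```

    For example, `has_consensus("8.3 minutes", ["8.3 minutes", "about 8.3 minutes", "500 seconds"])` returns `True` because two of the three sources agree in wording. Note that a real system would also recognize "500 seconds" as the same fact in different units; this string-level sketch does not.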

    A screenshot shows a query for “how long does it take for light from the sun to reach earth,” with a featured snippet highlighting a helpful article about the question and a bolded callout saying “8 and ⅓ minutes.”

    With a consensus-based technique, we’re improving featured snippets.

    AI models are also helping our systems understand when a featured snippet might not be the most helpful way to present information. This is particularly helpful for questions where there is no answer: for example, a recent search for “when did snoopy assassinate Abraham Lincoln” provided a snippet highlighting an accurate date and information about Lincoln’s assassination, but this clearly isn’t the most helpful way to display this result.

    We’ve trained our systems to get better at detecting these sorts of false premises, which are not very common, but are cases where it’s not helpful to show a featured snippet. We’ve reduced the triggering of featured snippets in these cases by 40% with this update.

    Information literacy

    Beyond designing our systems to return high-quality information, we also build information literacy features in Google Search that help people evaluate information, whether they found it on social media or in conversations with family or friends. In fact, in a study this year, researchers found that people regularly use Google as a tool to validate information encountered on other platforms. We’ve invested in building a growing range of information literacy features — including Fact Check Explorer, Reverse image search, and About this result — and today, we’re announcing several updates to make these features even more helpful.

    Expanding About this result to more places

    About this result helps you see more context about any Search result before you ever visit a web page, just by tapping the three dots next to the result. Since launching last year, people have used About this result more than 2.4 billion times, and we’re bringing it to even more people and places, with eight more languages – Portuguese (PT), French (FR), Italian (IT), German (DE), Dutch (NL), Spanish (ES), Japanese (JP) and Indonesian (ID) – coming later this year.

    This week, we’re adding more context to About this result, such as how widely a source is circulated, online reviews about a source or company, whether a company is owned by another entity, or even when our systems can’t find much info about a source – all pieces of information that can provide important context.

    And we’ve now launched About this page in the Google app, so you can get helpful context about websites as you’re browsing the web. Just swipe up from the navigation bar on any page to get more information about the source – helping you explore with confidence, no matter where you are online.

    A gif shows the About this page feature, where someone swipes up on the navigation bar in the Google app while browsing the website for the Rainforest Alliance, and sees a panel with information about the source from across the web.

    With About this page in the Google app, you can get helpful context on websites as you’re browsing.

    Expanding content advisories for information gaps

    Sometimes interest in a breaking news topic travels faster than facts, or there isn’t enough reliable information online about a given subject. Information literacy experts often refer to these situations as data voids. To address these, we show content advisories in situations when a topic is rapidly evolving, indicating that it might be best to check back later when more sources are available.

    Now we’re expanding content advisories to searches where our systems don’t have high confidence in the overall quality of the results available for the search. This doesn’t mean that no helpful information is available, or that a particular result is low-quality. These notices provide context about the whole set of results on the page, and you can always see the results for your query, even when the advisory is present.

    A gif shows a content advisory that says “It looks like there aren’t many great results for this search” along with tips like checking the source and trying new search terms.

    New content advisories on searches where our systems don’t have high confidence in the overall quality of the results.

    Educating people about misinformation

    Beyond our products, we’re making investments into programs and partnerships to help educate people about misinformation. Since 2018, the Google News Initiative (GNI) has invested nearly $75 million in projects and partnerships working to strengthen media literacy and combat misinformation around the world.

    Today, we’re announcing that Google is partnering with MediaWise at the Poynter Institute for Media Studies and PBS NewsHour Student Reporting Labs to develop information literacy lesson plans for teachers of middle and high school students. The lesson plans will be available for free to teachers using PBS LearningMedia and for download on Poynter’s website. We’ve partnered with MediaWise since it was founded, and today’s announcement builds on the GNI’s support of its microlearning course delivered through text and WhatsApp, Find Facts Fast.

    We also announced today the results of a survey conducted by the Poynter Institute and YouGov, with support from Google, on the ways people across generational lines verify information. You can read more in our blog post.

    Helping people everywhere find the information they need

    Google was built on the premise that information can be a powerful thing for people around the world. We’re determined to keep doing our part to help people everywhere find what they’re looking for and give them the context they need to make informed decisions about what they see online.

  • Duo, meet Meet: One upgraded app for video calling and meetings Wed, 10 Aug 2022 15:00:00 +0000

    As we announced in June, we’re upgrading the Google Duo experience to include all Google Meet features and bringing our two video calling services together into a single solution. This upgrade, which started rolling out last month, gives everyone access to new features like scheduling and joining meetings, virtual backgrounds, in-meeting chat and more, in addition to your current video calling features.

    Smartphone screen showing home screen of Meet app, leading to a video chat

    Additional meeting features let you start an instant video call with your entire study group or connect with your colleagues at a recurring scheduled time. Before you join a meeting, you’ll be able to change your background or apply visual effects. During the meeting, you’ll also be able to use in-meeting chat and captions for more ways to participate.

    Animation showing different background options on Google Meet

    We’re also launching live sharing for Google Meet. Live sharing allows all meeting participants to interact with the content that’s being shared. So whether you’re co-watching videos on YouTube, curating a playlist on Spotify, taking turns while playing games like Heads Up!, UNO! Mobile or Kahoot! during an ice breaker, everyone will be able to join in on the action.

    Animation of Google Meet live sharing with Spotify on a smartphone screen

    What to expect

    Over the past few weeks, we’ve started rolling out these new features to your Duo app, and now, users are beginning to see their app name and icon update to Google Meet. This upgrade will take place throughout the month across mobile and tablet devices, and will come later for other devices. To ensure a smooth transition, keep your app updated to the latest version.

    Animation of Google Duo icon changing to a Google Meet icon

    If you’re using the existing Google Meet app, there will be no change to your experience. Your existing Meet app and icon will change to Google Meet (original). You can continue using this app to join and schedule meetings, but we recommend using the updated Google Meet app to get combined video meeting and calling features all in one place. We will continue to invest in bringing more features to Google Meet to help people to connect, collaborate and share experiences on any device, at home, at school and at work.
    We're committed to making the transition as smooth as possible. For more information, please see our Help Center.

  • Help kids learn to read with Read Along, now available on the web Tue, 09 Aug 2022 16:00:00 +0000

    Over the past three years, more than 30 million kids have read more than 120 million stories on Read Along. The app, which was first released as Bolo in India in 2019 and released globally as Read Along the following year, helps kids learn to read independently with the help of a reading assistant, Diya.

    As kids read stories aloud, Diya listens and gives both corrective and encouraging feedback to help kids develop their reading skills. Read Along has been an Android app so far, and to make it accessible to more users, we have launched a public beta of the website version. The website contains the same magic: Diya’s help and hundreds of well-illustrated stories across several languages.

    With the web version, parents can let their children use Read Along on bigger screens by simply logging into a browser from laptops or PCs at readalong.google.com. Just like the Android app, all the speech recognition happens in the browser so children’s voice data remains private and we do not send it to any servers. You can learn more about data processing on the website version by reading our privacy policy.

    The website also opens up new opportunities for teachers and education leaders around the world, who can use Read Along as a reading practice tool for students in schools. The product supports multiple popular browsers like Chrome, Firefox and Edge, with support for iOS and more browsers such as Safari coming soon. With the sign-in option, each child can log in with their own account on the same device. We recommend using Google Workspace for Education accounts in schools and Google accounts with Family Link at home.

    In addition to the website launch, we are also adding some brand-new stories. We have partnered with two well-known YouTube content creators, ChuChu TV and USP Studios, to adapt some of their popular videos into a storybook format. Our partnership with Kutuki continues as we adapt their excellent collection of English and Hindi alphabet books and phonics books for early readers; those titles will be available later this year.

    Reading is a critical skill to develop at a young age, and with Read Along Web, we are taking another step towards ensuring each kid has that option. Join us by visiting readalong.google.com and help kids learn to read with the power of their voice.

  • Ask a Techspert: What’s breaking my text conversations? Tue, 09 Aug 2022 15:00:00 +0000

    Not to brag, but I have a pretty excellent group chat with my friends. We use it to plan trips, to send happy birthdays and, obviously, to share lots and lots of GIFs. It’s the best — until it’s not. We don’t all have the same kind of phones; we’ve got both Android phones and iPhones in the mix. And sometimes, they don’t play well together. Enter “green bubble issues” — things like missing read receipts and typing indicators, low-res photos and videos, and broken group chats. I could go on describing the various potential communication breakdowns, but you probably know what I’m talking about. Instead, I decided to ask Google’s Elmar Weber: What’s the problem with messaging between different phone platforms?

    First, can you tell me what you do at Google?

    I lead several engineering organizations including the team that builds Google’s Messages app, which is available on most Android phones today.

    OK, then you’re the perfect person to talk to! So my first question: When did this start being a problem? I remember way back when I had my first Android phone, I would text iPhone friends…and it was fine.

    Texting has been around for a long time. Basic SMS texting — which is what you’re talking about here — is 30 years old. SMS, which stands for Short Message Service, was originally limited to 160 characters per message. Back then you couldn’t do things like send photos or reactions or read receipts. In fact, mobile phones weren’t made for messaging; they were designed for making phone calls. To send a message, you had to press the number buttons repeatedly to cycle through the letters and spell out each word. But people started using it a ton, and it sort of exploded. So this global messaging industry took off. MMS (Multimedia Messaging Service) was then introduced in the early 2000s, which let people send photos and videos for the first time. But that came with a lot of limitations too.
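    To get a feel for how laborious that was, here is a small sketch of the classic multi-tap scheme on a standard phone keypad (2=abc through 9=wxyz). The code is purely illustrative, not any handset’s actual behavior:

```python
# Multi-tap text entry: each letter requires pressing its key one or more times.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz", "0": " ",
}

def multitap(text):
    """Return the key presses needed to type `text`, e.g. 'hi' -> '44-444'."""
    presses = []
    for ch in text.lower():
        for key, letters in KEYPAD.items():
            if ch in letters:
                # A letter's position on its key = number of presses needed.
                presses.append(key * (letters.index(ch) + 1))
                break
    return "-".join(presses)
```

    Typing “hello” this way takes 13 presses (44-33-555-555-666), which helps explain why the 160-character limit was often less of a constraint than the keypad itself.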

    Got it. Then the messaging apps all started building their own systems to support modern messaging features like emoji reactions and typing indicators, because SMS/MMS were created long before those things were even dreamed of?

    Yes, exactly.

    I guess…we need a new SMS?

    Well the new SMS is RCS, which stands for Rich Communication Services. It enables things like high-resolution photo and video sharing, read receipts, emoji reactions, better security and privacy with end-to-end encryption and more. Most major carriers support RCS, and Android users have been using it for years.

    How long has RCS been around?

    Version one of RCS was released December 15, 2008.

    Who made it?

    RCS isn’t a messaging app like Messages or WhatsApp — it’s an industry-wide standard. Similar to other technical standards (USB, 5G, email), it was developed by a group of different companies. In the case of RCS, it was coordinated by an association of global wireless operators, hardware chip makers and other industry players.

    RCS makes messaging better, so if Android phones use this, then why are texts from iPhones still breaking? RCS sounds like an upgrade — so shouldn’t that fix everything?

    There’s the hitch! So Android phones use RCS, and iPhones still don’t. iPhones still rely on SMS and MMS for conversations with Android users, which is why your group chats feel so outdated. Think of it like this: If you have two groups of people who use different spoken languages, they can communicate effectively in their respective languages to other people who speak their language, but they can’t talk to each other. And when they try to talk to one another, they have to act out what they're saying, as though they're playing charades. Now think of RCS as a magic translator that helps multiple groups speak fluently — but every group has to use the translator, and if one doesn’t, they’re each going to need to use motions again.

    Do you think iPhones will start using RCS too?

    I hope so! It’s not just about things like the typing indicators, read receipts or emoji reactions — everyone should be able to pick up their phone and have a secure, modern messaging experience. Anyone who has a phone number should get that, and that’s been lost a little bit because we’re still finding ourselves using outdated messaging systems. But the good news is that RCS could bring that back and connect all smartphone users, and because so many different companies and carriers are working together on it, the future is bright.

    Check out Android.com/GetTheMessage to learn why now is the time for Apple to fix texting.

  • How Google Assistant helped me spend more time outside Mon, 08 Aug 2022 15:00:00 +0000

    Summer is my favorite season, and whenever it comes around, I always try to soak up as much sunshine as I can. But with my schedule, it can be tough to carve out quality outdoor time. So as we hit the end of summer in the U.S., I set a challenge for myself — to get outside every day during the week. And as a member of the Google Assistant team, I knew Assistant could help give me that extra nudge out the door. Here’s how it went.


    After a full day of meetings at the office, I needed to clear my head. Instead of just heading home like I normally would, I asked my Assistant, “Hey Google, what parks are nearby?” It showed a handful of options near me. I ended up heading to Murphey Candler Park, one of my favorites in Atlanta, for a long walk to help me recharge my batteries.

    Trees and rocks surround a large, glistening lake.

    Murphey Candler Park in Atlanta was one of the nearby park options Assistant shared with me.


    I typically work out on Tuesdays, so in the spirit of my outdoor challenge, I decided to go for a swim. To help keep me accountable and on schedule, I told my Assistant, "Hey Google, remind me to go for a swim at 5 p.m." When I got that 5 p.m. nudge, I packed up for the day and headed to the pool at my apartment complex.

    A pool with white lounge chairs underneath a brick apartment building.

    I took my Tuesday workout to the pool, thanks to a helpful reminder from my Assistant.


    During a walk around my neighborhood, I started thinking about my weekend plans. The weather forecast showed that Saturday was going to be particularly beautiful, so I texted my friends to see if they’d be up for a picnic. After we agreed on a place and time, I said to my Assistant “Hey Google, add ‘picnic with friends’ to my calendar for Saturday at 4 p.m.” to make sure it was blocked on my schedule.


    One of the things I love most about working at Google is celebrating work anniversaries, or what we call “Googleversaries.” My friend Akilah hit her third Googleversary on Thursday, so we headed to the pool after work to celebrate. For an extra treat (and to cool off), we decided to get some ice cream — but we didn’t want to lose our poolside spot. This was the perfect opportunity to try out our new Assistant feature with Uber Eats. With a quick, “Hey Google, order ice cream on Uber Eats,” Assistant opened my Uber Eats app to show us nearby delivery options and let us customize our order. Soon enough, our ice cream was on its way.

    A hand holding a pint of cookies and cream ice cream. A flower bush is in the background.

    Enjoying our ice cream order from Uber Eats.


    I wanted to start my weekend on the right foot, and my friend Jessica immediately came to mind. She’s an avid hiker and is always looking for someone to explore new trails with. So as I was packing up at the office, I told my Assistant “Hey Google, text Jessica, ‘Let’s go hiking.’” We did a three-mile, scenic hike on the East Palisades Trail — a great way to wrap up the week and my outdoor challenge.

    Assistant can help you easily send a text, especially when you have your hands full.

    These Google Assistant features made it easy to stick to my goal of getting outside every day, and they’re continuing to help me soak up the rest of the summer. I hope they do the same for you!

Google Ads
Many books were created to help people understand how Google works, its corporate culture and how to use its services and products. The following books are available:

  • Ultimate Guide to Google Ads
  • The Ridiculously Simple Guide to Google Docs: A Practical Guide to Cloud-Based Word Processing
  • Mastering Google Adwords: Step-by-Step Instructions for Advertising Your Business (Including Google Analytics)
  • Google Classroom: Definitive Guide for Teachers to Learn Everything About Google Classroom and Its Teaching Apps. Tips and Tricks to Improve Lessons’ Quality.
  • 3 Months to No.1: The "No-Nonsense" SEO Playbook for Getting Your Website Found on Google
  • Google AdSense Made Easy: Monetize Your Website and Blogs Instantly With These Proven Google Adsense Techniques
  • Ultimate Guide to Google AdWords: How to Access 100 Million People in 10 Minutes (Ultimate Series)

Google Cloud Blog

  • Helping European education providers navigate privacy assessments Thu, 18 Aug 2022 11:29:00 -0000

    Every student and educator deserves access to learning tools that are private and secure. Google Workspace for Education and Chromebooks have positively transformed teaching and learning, while creating safe learning environments for more than 170 million students and educators around the world. Our education products are built with data protection at their core, enabling school administrators to demonstrate their privacy compliance when using our services. 

    Before using the products and services of technology providers like Google, schools in Europe may be required by the EU’s General Data Protection Regulation (GDPR) or similar laws to conduct Data Protection Impact Assessments (DPIAs). A school using Google Workspace for Education is considered a controller of the personal data that it and its students submit, store, send or receive via those Core Services. Under the GDPR, a controller is responsible for assessing whether a DPIA is required, and completing one as appropriate.

    Navigating the complex DPIA requirements under the GDPR can be challenging for many of our customers, and while only customers, as controllers, can complete DPIAs, we are here to help them meet these compliance obligations. Our Cloud DPIA Resource Center outlines the obligations related to DPIAs that customers may have under the GDPR, and provides information about Google Workspace for Education that our customers (and their lawyers) can use as a starting point for assessing and meeting these legal obligations. 

    What every parent and teacher should know about Google Workspace for Education

    For Google Workspace for Education core services like Gmail, Classroom, Calendar, Groups, Drive, Docs, and similar products, Google only processes data (including personal data) provided by customers and their end users in accordance with each customer’s documented instructions. Data in these core services is never used for advertising purposes, and no ads are shown in core services. 

    Below are a few examples of how those core services can benefit students and educators: 

    • Google Calendar and Groups help schools streamline their administration by managing personal and team calendars and creating groups;
    • Google Docs and Drive enable classmates to collaborate in real time; 
    • Google Classroom allows educators to securely and privately provide feedback to students, saving time for both;
    • Google Classroom also allows educators to factor in grading trends when planning future lessons.

    When using Google Workspace for Education core services, schools are in control of their content from start to finish, and the school’s domain administrator can directly manage this data using our privacy and security settings. Domain administrators have the flexibility and autonomy to change default settings and enable advanced security options to meet their data protection requirements. There are equivalent controls in Chrome that keep Google from accessing the data: administrators can encrypt Chrome Sync data with a custom passphrase, to which Google doesn’t have access, or turn off Chrome Sync entirely so that no sync data is sent to Google.

    We share the same goals as the schools that use our products: keeping educators and students safe, while supporting learning. As schools evaluate their technology needs and undertake risk assessments, we’ll work with them to help answer any questions they may have along the way. Google has cooperated with numerous customers across Europe who conduct DPIAs, and we regularly engage with customers, regulators, policymakers, and other stakeholders to provide transparency into our operations, policies, and practices. This is core to who we are and encapsulates our ongoing commitment to privacy compliance.

    In one recent example of this type of collaboration, the Dutch government conducted a DPIA into Google Workspace for Education to facilitate cloud adoption by schools in the Netherlands. As a result of that engagement, Google announced our intention to offer new contractual privacy commitments for service data that align with the commitments we offer for customer data. Once those new commitments become generally available, we will process service data as a processor under customers’ instructions, except for limited processing that we will continue to undertake as a controller. We are confident that these changes will address the requirements of our customers and regulators in Europe. 

    And you may be aware of the Danish DPA’s recent decision on the use of Google Workspace for Education and Chromebooks by the local municipality of Helsingør. Although this decision affects only Helsingør municipality and does not impose a country-wide ban on the use of Google Workspace for Education in Denmark, we know this issue raised questions in many European countries, and has led to some misconceptions about the privacy and security of Google Workspace for Education and Chromebooks in schools. The Danish DPA has clearly communicated, including in the national media, that the underlying reason for its decision is not a deficiency of privacy, security or GDPR compliance in Google Workspace for Education.

    Google has worked with Helsingør Municipality to answer questions, review technical settings in their Workspace for Education Admin Console, and share best practices from other European customers who have undertaken a data protection impact assessment. Our sophisticated encryption technology, which is not currently matched by any other cloud provider, can also guarantee that Google personnel cannot decrypt customer data without their permission.

    At Google, we recognise the utmost importance of schools assessing the risks that apply to their data when using any technology platform or online service to process student data. We hope that our Cloud DPIA Resource Center helps our customers complete these assessments for Google Workspace for Education, in compliance with the GDPR, and we’ll continue to provide the tools, resources, and support our customers need to ensure appropriate protection of student data.

    For more information about what data we collect and how it is used, please see the Google Workspace for Education Privacy Notice.

    1. According to the GDPR, the controller determines the purposes and means of processing of personal data.
    2. For example, billing and account management, capacity planning and forecast modeling, and detecting, preventing and responding to security risks and technical issues.

  • Kaluza: Powering greener, smarter energy usage with Google Cloud Thu, 18 Aug 2022 08:00:00 -0000

    Editor’s note: Kaluza is a UK-based technology company that provides energy retailers with real-time billing, smart grid services, and seamless customer experiences. In this blog, Tom Mallett, Sustainability Manager, Kaluza, explains how Kaluza leverages Google Cloud to improve energy visibility throughout the company. He also explores how better emissions data informs sustainability solutions that make the world’s energy greener, smarter, and more reliable.

    Consumers are facing tough times right now, with energy bills a very real and rising cost. But meanwhile, the climate crisis hasn’t gone away, and sustainability is very much still front of mind for both consumers and businesses, as it rightly should be. In the UK, 40% of emissions come from households, including electricity, heating, and transport. But people often do not have the time or resources to investigate and test out the myriad ways to save energy while grappling with lots of other demands at once. That’s why at Kaluza, we’ve made it our mission to help people save money and reduce their household emissions.

    Born out of OVO Energy back in 2019, Kaluza is a software-as-a-service company that helps to accelerate the shift to a zero carbon world. With our Kaluza Energy Retail product, energy companies can put their customers at the heart of this transition, by providing them with real-time insights to help reduce their bills. And with Kaluza Flex, advanced algorithms charge millions of smart devices at the cheapest and greenest possible times. Kaluza works with some of the biggest energy and auto OEM businesses around the world including AGL in Australia, Fiat and Nissan in the UK, and Mitsubishi Corporation and Chubu in Japan.

    Leveraging Google Cloud data to support 2030 carbon negative goal

    By 2030, we want to avoid the production of 10 million tons of CO2, by reaching 100 million energy users and reducing our energy retail clients’ cost to serve by 50%. And that’s only the half of it. As we’re accelerating the energy transition for our customers, we also want to drastically reduce our own emissions. While the world is rushing towards net zero, we’re going one step further, committing to be carbon negative by 2030.

    But we can only reduce what we can measure. That’s why we’ve developed an internal carbon footprint tool to track the impact of our cloud usage. Our technology stack spans a multi-cloud estate, and it’s especially easy to get emissions data from Google Cloud applications, thanks to the Carbon Footprint solution.

    For every process we run on Google Cloud, we get half-hourly electricity usage information, enabling us to point to the exact carbon emissions of every workload. These insights have helped us shape Kaluza’s own carbon footprint tool, which pulls together information from all of the providers in our multi-cloud setup and creates much more effective dashboards. That has been invaluable for our data teams.

    Cutting emissions by 97% with Green Development

    Today, our teams can use our carbon emissions tool to really dig down into the granularity of the data. This enables them to understand what drives their carbon footprint and how to address it. And this is where things get interesting, because better data translates into actual sustainability projects. So far, we’ve launched two large-scale initiatives.

    First, there’s Green Software Development. We’ve created a Green Development handbook, which contains a list of guides and best practices our software developers and engineers can use to make their software greener. With information from our carbon footprint tool, for example, we’ve been able to consolidate a number of large BigQuery queries into a single query run at a greener time of day and location, resulting in a 97% reduction in emissions. That means we’ve reduced the CO2 emitted from 200kg to 6kg every time we run this query. And that’s just one way we’re making a difference.

    Increasing the efficiency of cloud infrastructure

    Our second big initiative relates to our cloud infrastructure. Choosing a cleaner cloud and a cleaner cloud region to run workloads is one of the simplest and most effective ways we can reduce our carbon emissions. Fortunately, Google Cloud publishes carbon data for all cloud regions. This includes the average percentage of carbon free energy consumed in that particular location on an hourly basis and the grid carbon intensity of the local electricity grid.

    By digging into the data, we can identify cloud waste and take action. For example, while many of our workloads have to run throughout the day, others are not tied to a particular time, which creates potential for optimization. We’re using data from Google Cloud to understand the state of our workloads. By combining this information with carbon intensity data from the grid, we can identify and reschedule workloads to lower-intensity times, and have a positive impact on Kaluza’s emissions.
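A minimal sketch of this kind of carbon-aware rescheduling, with illustrative intensity figures (the helper function and the numbers are ours, not Kaluza's actual implementation):

```python
def greenest_start_hour(intensity_by_hour: dict, allowed_hours: set) -> int:
    """Pick the allowed start hour with the lowest grid carbon
    intensity (gCO2/kWh) for a deferrable workload."""
    return min(allowed_hours, key=lambda h: intensity_by_hour[h])

# Hypothetical half-hourly grid readings, sampled every three hours:
intensity = {0: 180, 3: 120, 6: 150, 9: 210, 12: 90, 15: 140}

# A batch job that may start at any of these hours is scheduled for
# 12:00, when the grid is greenest in this sample.
best = greenest_start_hour(intensity, allowed_hours={0, 3, 6, 9, 12, 15})
```

The same idea extends naturally to multi-region setups: with per-region intensity data, the minimum can be taken over (hour, region) pairs instead of hours alone.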

    Using data to help people make an impact

    Many of our sustainability projects have one important thing in common: they’re bottom-up initiatives, developed by and with our team. With emissions data at our fingertips, we’re constantly organizing hackathons or Green Development days to inspire action and test new ideas.

    Making sustainability actionable and accessible for everyone is part of our core mission, and we’re bringing that same idea to our own teams. The feedback has been encouraging too. At a recent Green Development day, one of our employees said he now really understands how his role can impact the sustainability of Kaluza and the world. We’re putting sustainability at the heart of our organization, by empowering our employees to take direct climate action in their roles. And by showing employees the direct impact of their work, we can encourage them to build even stronger solutions that will result in more carbon savings for our customers.

    Driving change by turning electric vehicles into green power stations

    There are many ways to make a difference at Kaluza. Our internal pledge to reduce carbon emissions, and pass these savings on to our energy retail clients and their customers, is just one of our sustainability pillars. We’re also using Google Cloud solutions for many other exciting projects, for example the world’s first and largest domestic vehicle-to-grid (V2G) technology deployment we are leading with OVO Energy and Nissan. 

    With V2G, drivers can charge their electric vehicles when renewable energy is in abundance, and sell it back to the grid when it’s short of supply. By analyzing grid and vehicle data in real time with Google Cloud, we’re essentially turning millions of cars into dynamic batteries, to build a greener, more resilient energy system while helping drivers earn hundreds of pounds a year. In a market such as California, this could reduce the stress on the grid at peak times by 40%.

    Powering the future of energy, together

    From houses to vehicles and beyond, at Kaluza, we’re using technology to make the energy transition a simple and affordable option for our clients and their customers. We’re excited to keep working with Google Cloud to scale our business and bring new energy solutions to life. We’re striving to be a market leader in sustainability, and with Google Cloud, we’ve found a cloud vendor whose sustainability goals really align with ours. Together, we’re building a world where net zero is in everyone’s reach.

    Related Article

    Google Cloud announces new products, partners and programs to accelerate sustainable transformations

    In advance of the Google Cloud Sustainability Summit, we announced new programs and tools to help drive sustainable digital transformation.

    Read Article
  • Insights on the future of work and collaboration Wed, 17 Aug 2022 17:30:00 -0000

    Business leaders and IT professionals come to Google Workspace to build secure, cloud-first collaboration solutions that transform how people work together. Here’s the latest from Google Workspace leaders and partners about the evolving future of work and collaboration, all in one place.

    Empowering everyday innovation to build a more adaptive business

    How organizations can rethink their approach to time management coaching

    Google Workspace and Google Cloud help build the future of work at Airbus

    • See how Airbus lived up to its mantra of “Any device, anytime, anywhere,” with help from Google Workspace and Google Cloud. Read the article.

    Shaping the future of work for frontline workers in Asia Pacific

    Boosting collaboration and participation in the hybrid work world

    The future of work requires a more human approach to security

    • Security is no longer just about protecting information or restricting how that information is accessed—it’s about building safe, efficient, and effective ways to facilitate seamless collaboration and information-sharing. Read more.

    Insights from our global hybrid work survey

    • Google Workspace commissioned Economist Impact to conduct a global hybrid work survey. Here are the resulting insights about employee wellbeing, productivity, and the need for better technology.

  • Understanding basic networking in GKE - Networking basics Wed, 17 Aug 2022 17:00:00 -0000

    In this article we'll explore the networking components of Google Kubernetes Engine (GKE) and the various options that exist. Kubernetes is an open source platform for managing containerized workloads and services, and GKE is a fully managed environment for running Kubernetes on Google Cloud infrastructure.

    IP addressing

    Various network components in Kubernetes utilize IP addresses and ports to communicate. IP addresses are unique addresses that identify various components in the network.


    • Containers - These are the smallest components for executing application processes. One or more containers run in a pod.

    • Pods - A collection of containers that are physically grouped together. Pods are assigned to nodes.

    • Nodes - Nodes are worker machines in a cluster (a collection of nodes). A node runs zero or more pods. 


    • ClusterIP - An internal IP address assigned to a Service.

    • Load balancer - Load balances internal or external traffic to nodes in the cluster.

    • Ingress - A special type of load balancer that handles HTTP(S) traffic.

    IP addresses are assigned from various subnets to the components and services. Variable length subnet masks (VLSM) are used to create CIDR blocks. The number of available hosts on a subnet depends on the subnet mask used.

    The formula for calculating available hosts per subnet in Google Cloud is 2^n - 4 (where n is the number of host bits), not the 2^n - 2 normally used in on-premises networks, because Google Cloud reserves two extra addresses in each subnet.
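A quick sketch of this host calculation (the `usable_hosts` helper is ours for illustration, not a Google Cloud API):

```python
def usable_hosts(prefix_len: int, reserved: int = 4) -> int:
    """Usable host addresses in an IPv4 subnet of the given prefix
    length. Google Cloud reserves 4 addresses per subnet, versus the
    usual 2 (network and broadcast) on-premises."""
    host_bits = 32 - prefix_len
    return 2 ** host_bits - reserved

# A /24 yields 252 usable hosts on Google Cloud...
print(usable_hosts(24))               # 252
# ...versus 254 with the traditional on-premises formula.
print(usable_hosts(24, reserved=2))   # 254
```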

    The flow of IP address assignment looks like this:

    • Nodes are assigned IP addresses from the cluster's VPC network

    • Internal Load balancer IP addresses by default are automatically assigned from the Node IPv4 block. If necessary, you can create a specified range for your Load balancers and use the loadBalancerIP option to specify the address from that range.

    • Pods are assigned addresses from a range of addresses issued to pods running on that node. The default maximum is 110 pods per node. To allocate addresses for this number, it is multiplied by 2 (110 * 2 = 220) and the nearest subnet that fits is used, which is a /24. This doubling leaves a buffer for scheduling of the pods. The limit is customizable at cluster creation time.

    • Containers share the IP address of the Pods they run on.

    • Service (Cluster IP) addresses are assigned from an address pool reserved for services.
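The per-node pod range sizing described above can be sketched numerically (the helper is illustrative, not part of the GKE API; it reproduces the documented default):

```python
import math

def pod_range_prefix(max_pods_per_node: int) -> int:
    """Smallest subnet prefix whose address count covers twice the
    pod limit; the doubling is the scheduling buffer GKE allows."""
    needed = 2 * max_pods_per_node            # e.g. 110 * 2 = 220
    host_bits = math.ceil(math.log2(needed))  # 220 fits in 8 bits (256)
    return 32 - host_bits

# The default limit of 110 pods per node yields a /24 per-node range;
# a small node capped at 8 pods needs only a /28.
print(pod_range_prefix(110))  # 24
print(pod_range_prefix(8))    # 28
```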

    The IP address ranges for VPC-native clusters section of the VPC-native clusters document gives you an example of planning and scoping address ranges.

    Domain Naming System (DNS)

    DNS provides name-to-IP-address resolution, and Kubernetes automatically creates name entries for services. There are a few options in GKE.

    • kube-dns - The Kubernetes-native add-on service. Kube-dns runs as a Deployment that is exposed via a ClusterIP. By default, pods in a cluster use this service for DNS queries. The “Using kube-dns” document describes how it works.

    • Cloud DNS - Google Cloud's managed DNS service, which can be used to manage your cluster DNS. A few benefits of Cloud DNS over kube-dns are:

      • Reduces the management of a cluster-hosted DNS server.

      • Supports local resolution of DNS on GKE nodes. This is done by caching responses locally, which provides both speed and scalability.

      • Integrates with the Google Cloud operations suite for monitoring.

    Service Directory is another service from Google Cloud that can be integrated with GKE and Cloud DNS to manage services via namespaces.

    The gke-networking-recipes GitHub repo has some Service Directory examples you can try out for internal load balancers, ClusterIP, Headless & NodePort services.

    For a deeper understanding of DNS options in GKE please check out the article DNS on GKE: Everything you need to know.
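Whichever DNS provider you pick, in-cluster resolution follows the standard Kubernetes naming convention for services; a small sketch makes it concrete (the helper function is ours for illustration):

```python
def service_fqdn(service: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """In-cluster DNS name for a Service, following the Kubernetes
    convention <service>.<namespace>.svc.<cluster-domain>."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

# Pods can reach a Service named "web" in the "default" namespace at:
print(service_fqdn("web", "default"))  # web.default.svc.cluster.local
```

Pods in the same namespace can use the short name (`web`) because the resolver's search path fills in the rest.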

    Load Balancers

    These control access and distribute traffic across cluster resources. Some options in GKE are:


    HTTP(S) load balancers handle HTTP(S) traffic destined for services in your cluster. They use an Ingress resource type; when an Ingress is created, GKE provisions an HTTP(S) load balancer for it. When configuring the load balancer, you can assign it a static IP address to ensure that the address remains the same.

    In GKE you can provision both external and internal Ingress; the GKE documentation includes guides showing how to configure each.

    GKE also lets you take advantage of container-native load balancing, which directs traffic directly to the pod IP using Network Endpoint Groups (NEGs).

    Service routing

    There are three main points to understand in this topic:

    • Frontend - Exposes your service to clients, accepting traffic based on various rules. This could be a DNS name or a static IP address.

    • Load balancing - Once traffic is accepted, the load balancer distributes it to the available resources that serve the request, based on rules.

    • Backend - The various endpoint types that can be used in GKE.

    GKE networking basics


    In GKE you have several ways you can design your cluster's networking:

    • Standard - This mode gives the admin the ability to configure the cluster's underlying infrastructure. It is beneficial if you need a deeper level of control, with the added responsibility that comes with it.

    • Autopilot - GKE provisions and manages the cluster's underlying infrastructure. This mode is pre-configured for usage and gives you largely hands-off management.

    • Private cluster - Allows only internal IP connections. If a client needs access to the internet (e.g. for updates), you can use Cloud NAT.

    • Private Service Access - Lets your VPC communicate with service producers' services via private IP addresses.

    • Private Service Connect - Allows private consumption of services across VPC networks.

    Bringing it all together

    Below is a short high-level recap.

    • IP addresses are assigned to various resources in your cluster

      • Nodes

      • Pods 

      • Containers

      • Services

    • These IP address ranges are reserved for the various resource types. You have the ability to adjust the range size to meet your requirements by subnetting. Restricting unnecessary external access to your cluster is recommended.

    • By default pods have the ability to communicate across the cluster. 

    • To expose applications running on pods you need a service.

    • Cluster IPs are assigned to services.

    • For DNS resolution you can rely on the native option like kube-dns or you can utilize Google Cloud DNS within your GKE cluster.

    • Load balancers can be used internally and externally with your cluster to expose applications and distribute traffic.

    • Ingress handles HTTP(S) traffic, utilizing the HTTP(S) load balancing service from Google Cloud. Ingress can be used for internal and external configurations.

    To learn more about GKE networking, check out the GKE networking documentation.

    Want to ask a question, find out more or share a thought? Please connect with me on Linkedin or Twitter: @ammettw.

  • Announcing curated detections in Chronicle SecOps Suite Wed, 17 Aug 2022 16:00:00 -0000

    A critical component of any security operations team’s job is to deliver high-fidelity detections of potential threats across the breadth of adversary tactics. But increasingly sophisticated threat actors, an expanding attack surface, and an ever-present cybersecurity talent shortage make this task more challenging than ever. 

    Google keeps more people safe online than anyone else. Individuals, businesses and governments globally depend on our products that are secure-by-design and secure-by-default. Part of the “magic” behind Google’s security is the sheer scale of threat intelligence we are able to derive from our billions of users, browsers, and devices. 

    Today, we are putting the power of Google’s intelligence in the hands of security operations teams. We are thrilled to announce the general availability of curated detections as part of our Chronicle SecOps Suite. These detections are built by our Google Cloud Threat Intelligence (GCTI) team, and are actively maintained to reduce manual toil for your team.

    Our detections provide security teams with high-quality, actionable, out-of-the-box threat detection content curated, built, and maintained by Google Cloud Threat Intelligence (GCTI) researchers. Our scale and depth of intelligence, gained by securing billions of users every day, gives us a unique vantage point to craft effective and targeted detections. These native detection sets cover a wide variety of threats for cloud and beyond, including Windows-based attacks like ransomware, remote-access tools (RATs), infostealers, data exfiltration, suspicious activity, and weakened configurations.

    With this launch, security teams can smoothly leverage Google’s expertise and unique visibility into the threat landscape. This release helps understaffed and overstressed security teams keep up with an ever evolving threat landscape, quickly identify threats, and drive effective investigation and response. With this new release, security teams can: 

    • Enable high quality curated detections with a single click from within the Chronicle console. 

    • Operationalize data with high-fidelity threat detections, stitched with context available from authoritative sources (such as IAM and CMDB). 

    • Accelerate investigation and response by finding anomalous assets and domains with prevalence visualization for the detections triggered.

    • Map detection coverage to the MITRE ATT&CK framework to better understand adversary tactics and techniques and uncover potential gaps in defenses.

    Detections are constantly updated and refined by GCTI researchers based on the evolving threat landscape. The first release of curated detections includes two categories that cover a broad range of threats, including:

    • Windows-based threats: Coverage for several classes of threats including infostealers, ransomware, RATs, misused software, and crypto activity.

    • Cloud attacks and cloud misconfigurations: Secure cloud workloads with additional coverage around exfiltration of data, suspicious behavior, and additional vectors. 

    Let’s look at an example of how you can put curated detections to work within the Chronicle dashboard, monitor coverage, and map to MITRE ATT&CK®.


    An analyst can learn more details around specific detections and understand how they map to the MITRE ATT&CK framework. There are customized settings to configure deployment and alerting, and specify exceptions via reference lists. 


    You can see each rule which has generated a detection against your log data in the Chronicle rules dashboard. You can observe detections associated with the rule and pivot to investigative views. For example, here is the detection view from the timeline of an Empire Powershell Stager launch triggered by the Windows RAT rule set. You can also easily pivot to associated information and investigate the asset on which it was triggered.


    By surfacing impactful, high-efficacy detections, Chronicle can enable analysts to spend time responding to actual threats and reduce alert fatigue. Our customers who used curated detections during our public preview were able to detect malicious activity and take actions to prevent threats earlier in their lifecycle. And there’s more to come. We will be delivering a steady release of new detection categories covering a wide variety of threats, community-driven content, and other out-of-the-box analytics.

    Ready to put Google’s intelligence to work in your Security Operations Center? Contact Google Cloud sales or your customer success manager (CSM). You can also learn more about all these new capabilities in Google Chronicle in our product documentation.

    Thank you to Mike Hom (Product Architect, Chronicle) and Ben Walter (Engineering Manager, Google Cloud Threat Intelligence), who helped with this launch.

    Related Article

    Introducing Cloud Analytics by MITRE Engenuity Center in collaboration with Google Cloud

    To better analyze the growing volumes of heterogeneous security data, Google has partnered with MITRE to create the Cloud Analytics proje...

    Read Article
  • How a Vulnerability Exploitability eXchange can help healthcare prioritize cybersecurity risk Wed, 17 Aug 2022 16:00:00 -0000

    Diagnosing and treating chronic pain can be complex, difficult, and full of uncertainties for a patient and their treating physician. Depending on the condition of the patient and the knowledge of the physician, making the correct diagnosis takes time, and experimenting with different treatments might be required. 

    This trial-and-error process can leave the patient in a world of pain and confusion until the best remedies can be prescribed. It’s a situation similar to the daily struggle that many of today’s security operations teams face. 

    Screaming from the mountain tops “just patch it!” isn’t very helpful when security teams aren't sure if applying a patch might create even worse issues like crashes, incompatibility, or downtime. Like a patient with chronic pain, they may not know the source of the pain in their system. Determining which vulnerabilities to prioritize patching, and ensuring those fixes actually leave you with a more secure system, is one of the hardest tasks a security team can face. This is where a Vulnerability Exploitability eXchange (VEX) comes in.

    The point of VEX

    In previous blogs, we’ve discussed how establishing visibility and awareness into patient safety and technology is vital to creating a resilient healthcare system. We’ve also looked at how combining software bills of materials (SBOM) with Google’s Supply chain Levels for Software Artifacts (SLSA) framework can help build more secure technology that enables resilience. 

    The SBOM provides visibility into the software you’re using and where it comes from, while SLSA provides guidelines that help increase the integrity and security of software you then build. Rapid diagnostic assessments can be added to that equation with VEX, which the National Telecommunications and Information Administration describes as a “companion” document that lives side-by-side with SBOM. 

    To go back to our medical metaphor, VEX is a mechanism for software providers to tell security teams where to look for the source of the pain. VEX data can help with software audits when inventory and vulnerability data need to be captured at a specific point in time. That data also can be embedded into automated security tools to make it easier to prioritize vulnerability patching.  

    You can then think of SBOM as the prescription label on a bottle of medication, SLSA as the child-proof lid and tamper-proof seal guaranteeing the safety of the medication, and VEX as the bottle’s safety warnings. As a diagnostic aide, a VEX can help security teams make accurate diagnoses of “what could hurt” and system weaknesses before the bad guys do. 

    Yet making an accurate assessment of that threat model can be challenging, especially when looking at the software we use to run systems. The ability to quickly and accurately evaluate an organization's weaknesses and pain points can be vital to hastening the response to a vulnerability and stopping cyberattacks before they become destructive. We believe that VEX is an important part of the equation to help secure the software supply chain.

    As an example, look no further than the Apache Log4j vulnerabilities revealed in December 2021. Global industries including healthcare were dealt another blow when Apache’s Log4j 2 logging system was found to be so vulnerable that relatively unsophisticated threat actors could quickly infiltrate and take over systems. Through research conducted by Google and information contributed by CISA, we learned of examples where vulnerabilities in Log4j 2, a single software component, could potentially impact thousands of companies using software that depends on it, because of its near-ubiquitous use.

    While a VEX would not capture zero-day vulnerabilities, it would be able to inform security teams of other known vulnerabilities in Log4j 2. Once vulnerabilities have been published, security teams could use SBOM to find them, and use VEX to understand if remediation is a priority or not.

    How does VEX contribute to visibility?

    A key reason we focus on visibility mechanisms like SBOM and SLSA is because they give us the ability to understand our risks. Without the ability to see into what we must protect, it can be difficult to determine how to quickly reduce risk.

    Visibility is a crucial first step to stopping malicious hackers. Yet without context, visibility leaves security teams overwhelmed with data. Why? Well, where would you start when trying to mitigate the 30,000 known vulnerabilities affecting just open source software, according to the Open Source Vulnerabilities database (OSV)? NIST’s National Vulnerability Database (NVD) is tracking close to 181,000 vulnerabilities. We’ll be patching into the next millennium if we adopt a “patch everything” approach.

    It’s impossible to address every vulnerability individually. To make progress, security teams need to be able to prioritize findings and go after the ones that will have the greatest impact first. The goal of a VEX artifact is to make prioritization a little easier.

    While SBOMs are created or changed when the material included in a build is updated, VEX documents are intended to be updated and distributed whenever vulnerability or threat information changes. This means that VEX and SBOM should be maintained separately. Since security researchers and organizations are constantly discovering new cybersecurity vulnerabilities and threats, a more dynamic mechanism like VEX can help ensure builders and operators have the ability to quickly ascertain the risks of the software they are using.

    Let’s dig into this VEX example from CycloneDX. You can see the list of vulnerabilities found, third parties who track and report those vulnerabilities, vulnerability ratings per CVSS, and most importantly, a statement from the developer that guides the operator reading the VEX to those vulnerabilities that are exploitable and need to be protected. At the bottom, you’ll see the VEX “affects” an SBOM. 

    This information allows the user of the VEX document to refer to its companion SBOM. By necessity, the VEX is intentionally decoupled from the SBOM because they need to be updated at different times. A VEX document will need to be updated when new vulnerabilities emerge. An SBOM will need to be updated when changes to the software are made by a manufacturer. Although they can and need to be updated separately, the contents of each document can stay aligned because they are linked. 
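To make the SBOM-to-VEX linkage concrete, here is a minimal sketch of how a security team might cross-reference the two documents to prioritize patching. The field names are a deliberate simplification loosely inspired by CycloneDX, not the exact schema:

```python
def exploitable_findings(sbom_components, vex_statements):
    """Keep only vulnerabilities that the supplier's VEX marks as
    exploitable AND that affect a component present in the SBOM."""
    refs = {c["ref"] for c in sbom_components}
    return [v for v in vex_statements
            if v["analysis"] == "exploitable" and v["affects"] in refs]

# Simplified documents for the Log4j 2 example above:
sbom = [{"ref": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.0"}]
vex = [
    {"id": "CVE-2021-44228", "analysis": "exploitable",
     "affects": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.0"},
    {"id": "CVE-EXAMPLE-0001", "analysis": "not_affected",  # hypothetical id
     "affects": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.0"},
]
todo = exploitable_findings(sbom, vex)  # only the exploitable CVE survives
```

Because the two inputs are linked only by component references, each document can be regenerated on its own schedule, exactly as described above.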

    Increasing resilience powered by visibility—SBOM+VEX+SLSA 

    VEX could dramatically improve how security vulnerabilities are handled. It’s not uncommon to find operators buried in vulnerabilities, best-guessing the ones that need fixing, and trying to make sense of tens (and sometimes hundreds) of pages of documentation to determine the best, lowest impact fix.

    With SBOM+SLSA+VEX, operators are using software-driven mechanisms to conduct analyses and evaluate risk instead of relying on intuition and best guesses. The tripartite SBOM+SLSA+VEX approach provides an up-to-date list of issues and perspective on what needs attention. This is a transformative development in security—enabling teams to get a better handle on doing vulnerability mitigation, starting where it could hurt the most.

    Driven by repeated cyberattacks on critical infrastructure such as healthcare, government regulators have taken a more interested stance in software security and supply chains. Strengthening the effectiveness of SBOMs in the United States is a big part of the newly proposed Protecting and Transforming Cyber Health Care (PATCH) Act. The law would require medical device manufacturers to adhere to minimum cybersecurity standards in their products, including the creation of SBOMs for their devices, and to plan to monitor and patch any cybersecurity vulnerabilities discovered during the device’s lifetime.

    Meanwhile, new draft medical device cybersecurity guidance from the FDA continues that agency’s involvement in aggressively encouraging medical device manufacturers to improve the cybersecurity resilience of their products. The White House has weighed in on SBOMs as well: an Executive Order from May 2021 lays out requirements for secure software development, including the production and distribution of SBOMs for software used by the federal government.

    Regardless of how these initiatives pan out, Google believes controls like those provided by SBOM+SLSA+VEX are critical to protect software and build a resilient healthcare ecosystem. This approach provides detailed, critical risk exposure data to security teams so they can take necessary steps to reduce immediate and long-term risks. 

    What do we suggest you do?

    At Google, we are working with the Open Source Security Foundation on supporting SBOM development. Our Know, Prevent, Fix report on secure software development creates a broader outline of how Google thinks about securing open source software from preventable vulnerabilities. You can read more about these efforts for securing workloads on Google Cloud from our Cloud Architecture Center. Take a look at Cloud Build, a Google Cloud service that can be used to generate up to SLSA Level 2 build artifacts.

    Customers often have difficulty getting full visibility and control over vulnerabilities because of their dependence on open source software (OSS). Assured Open Source Software (Assured OSS) is the Google Cloud service that helps teams both secure the external OSS packages they use and overcome avoidable vulnerabilities by simply eliminating them from the code base. Finally, ask us about Google's Cybersecurity Action Team, the world’s premier security advisory team and its singular mission supporting the security and digital transformation of governments, critical infrastructure, enterprises, and small businesses.

    If you’re a software supplier, please consider our suggestions above. Whether you are or not, you should begin:

    • Contractually mandating SBOM+VEX+SLSA (or their equivalent) artifacts to be provided for all new software.

    • Training procurement teams to ask for and use SBOM+VEX+SLSA to make purchasing decisions. There should be no reason an organization procures software or hardware with known, preventable issues. Even if it does, the information these mechanisms provide should help security teams decide whether they can live with the risks before equipment enters their networks.

    • Establishing a governance program that ensures those who control procurement decisions are aware of and owning the risks associated with software they are buying.

    • Enabling security teams to build pipelines to ingest SBOM+VEX+SLSA artifacts into their security operations and use it to strategically advise and drive mitigation activities.

    At Google, we believe the path to resilience begins with building visibility and structural awareness into the software, hardware, and equipment it rides on as a critical first step. Time will tell if VEX becomes widely adopted, but the point behind it won’t change—we can’t know how we are vulnerable without visibility. VEX is an important concept in this regard.

    Next month, we’ll be shifting gears slightly to focus on building resilience by establishing a security culture that obsesses over its patients and products.

    Related Article

    How SLSA and SBOM can help healthcare's cybersecurity resiliency

    There’s more to securing healthcare technology than just data privacy. Here’s why resilient healthcare security needs SBOM and SLSA.

    Read Article
  • What’s new with Google Cloud Tue, 16 Aug 2022 21:00:00 -0000

    Want to know the latest from Google Cloud? Find it here in one handy location. Check back regularly for our newest updates, announcements, resources, events, learning opportunities, and more. 

    Tip: Not sure where to find what you’re looking for on the Google Cloud blog? Start here: Google Cloud blog 101: Full list of topics, links, and resources.

    Week of Aug 15 - Aug 19, 2022

    • Cloud SQL now supports deletion protection for MySQL, Postgres, and SQL Server instances. With the deletion protection flag, you can protect an instance from unintended deletion. The flag is enabled by default in the Cloud Console; while it is enabled, deletion is blocked, and the flag must be disabled before the instance can be deleted. Disabling the flag requires at least the Cloud SQL Editor role. This added protection helps prevent accidental or malicious deletion of databases, which can cause expensive application outages. To learn more about deletion protection, refer to the Cloud SQL documentation.
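A minimal sketch of toggling the flag from the gcloud CLI (the instance name is hypothetical):

```shell
# Enable deletion protection on an existing instance.
gcloud sql instances patch my-instance --deletion-protection

# While the flag is set, deletion is blocked. To delete the
# instance, clear the flag first, then delete.
gcloud sql instances patch my-instance --no-deletion-protection
gcloud sql instances delete my-instance
```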

    Week of Aug 8 - Aug 12, 2022

    • Artifact Registry now supports organization policies that can require Customer-Managed Encryption Key (CMEK) protection and can limit which Cloud Key Management Service CryptoKeys can be used for CMEK protection. Learn More
    • Google Cloud Deploy documentation has been reformatted to make information easier to find. Docs
    • A new Google Cloud Deploy blog post describes the many features and benefits added over the first half of the year. Blog
    • A Google Cloud Deploy GUI update surfaces information about a target’s execution environment. Developers can now easily find and confirm where Google Cloud Deploy render and deploy operations take place, in addition to the worker pool type, execution environment, service account, and artifact storage location. Learn More

    Week of Aug 1 - Aug 5, 2022

    • Bigtable-BigQuery federation is now generally available. Query Bigtable directly from BigQuery and combine it with other data sources for real-time analytical insights. No ETL required. Learn more
    • Join us August 30th for the “Power your business with modern cloud apps” webinar. We will be sharing best practices and strategies for how to simplify, streamline, and secure your application development using Google Cloud services like GKE, Apigee API, Anthos, and Cloud Run. Register today.

    Week of July 25 - July 29, 2022

    • Cloud Pub/Sub is introducing a new type of subscription called a “BigQuery subscription” that writes directly from Cloud Pub/Sub to BigQuery. You no longer have to write or run your own pipelines for data ingestion from Pub/Sub into BigQuery. This new extract, load, and transform (ELT) path simplifies your event-driven architecture. Learn more.
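A minimal sketch of creating a BigQuery subscription with the gcloud CLI (topic, subscription, and table names are hypothetical; the table’s schema must be compatible with the messages):

```shell
# Create a subscription that writes messages from the "orders" topic
# directly into a BigQuery table, with no intermediate pipeline.
gcloud pubsub subscriptions create orders-to-bq \
  --topic=orders \
  --bigquery-table=my-project:my_dataset.orders
```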
    • BigLake enables you to maximize the true potential of your data spread across clouds, storage formats, data lakes, and warehouses. It is now generally available, and you can use it to build multi-cloud data lakes that work across GCP and OSS query engines in a secure and governed manner. Learn more.
    • Cloud Healthcare API is now available in 4 additional regions allowing customers to serve their own users faster, more reliably, and securely. The Cloud Healthcare API provides a managed solution for storing and accessing healthcare data in Google Cloud, providing a critical bridge between existing care systems and applications hosted on Google Cloud. Learn More.
        • asia-southeast2 (Jakarta)

        • us-east1 (South Carolina) 

        • us-west1 (Oregon)

        • us-west3 (Salt Lake City)

    • Cloud Deploy - You can now view and compare Kubernetes and Skaffold configuration files for releases, using Google Cloud Console. Learn More.
    • Cloud Deploy now offers an Easy Mode option that creates a skaffold.yaml file automatically from a Kubernetes manifest. The feature is accessed from the command line by adding --from-k8s-manifest=FROM_K8S_MANIFEST to the gcloud deploy releases create command. The generated skaffold.yaml is suitable for onboarding, learning, and demonstrating Google Cloud Deploy. Learn More
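A hedged sketch of the Easy Mode flow (release name, pipeline name, region, and manifest path are all hypothetical):

```shell
# Create a release directly from a Kubernetes manifest; Cloud Deploy
# generates a skaffold.yaml for you instead of requiring one up front.
gcloud deploy releases create my-release-001 \
  --delivery-pipeline=my-pipeline \
  --region=us-central1 \
  --from-k8s-manifest=k8s/manifest.yaml
```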

    Week of July 18 - July 22, 2022

    • Launched three major Dataflow features to general availability: Dataflow Go, Dataflow Prime, and Dataflow ML.
    • The Data Engineer Spotlight is THIS WEEK! Register today to experience four technical sessions, expert speakers, a Q&A session, and tons of on-demand content.
    • Speed up your workflow executions by running steps concurrently! Workflows now supports parallel steps, which can reduce the overall execution time for workflows that include long-running operations like HTTP requests and callbacks. Our latest codelab shows you how to more quickly process a dataset by parallelizing multiple BigQuery jobs within a workflow. Read more in our blog post.
    • Google Cloud introduces Batch, a fully managed service that helps you run batch jobs easily, reliably, and at scale. Without additional software, Batch dynamically and efficiently manages resource provisioning, scheduling, queuing, and execution, freeing up time for you to focus on analyzing results. The service itself is free; you pay only for the resources used, and you can further reduce costs with Spot VMs and custom machine types. Read more in the launch blog.
    • Run your Arm workloads on Google Kubernetes Engine (GKE) with Tau T2A VMs in preview. Arm nodes come packed with key GKE features, including the ability to run using GKE Autopilot. We’ve also updated many popular Google Cloud developer tools and partnered with leading CI/CD, observability, and security ISVs to simplify running Arm workloads on GKE.

    Week of July 11 - July 15, 2022

    • Cloud Deploy users can now suspend a delivery pipeline. Suspending a pipeline is useful when there’s a problem with a release and you want to make sure no further actions occur. Suspended pipelines also let teams pause releases for a defined period, such as holidays or busy seasons.
    • Cloud Deploy users can now permanently abandon a release. An abandoned release cannot be promoted, rolled back, or unabandoned. Reasons to abandon a release include serious bugs, a major security issue, or a deprecated feature in the release.

    Week of July 4 - July 8, 2022

    • The blue-green upgrade mechanism for GKE node pools is now generally available. With blue-green upgrades, you have more control over the upgrade process for highly available production workloads. GKE creates a new set of nodes, moves your workloads, and gives you “soak” time before committing the upgrade. You can also quickly roll back if your workloads cannot tolerate the upgrade.
    • Get a deep dive into managing traffic fluctuations with Google Cloud. European travel group REWE explores the value of Cloud Spanner in mitigating and supporting traffic surges and optimizing the consumer experience during peak travel seasons.
    • Differentiation brings great customer experiences. Differentiation achievements help customers select a partner with confidence, knowing that Google Cloud has verified their skills and customer success across our products, horizontal solutions and key industries.

    Week of June 27 - July 1, 2022

    • Time-sharing GPUs on GKE are generally available. Time-sharing allows multiple containers to share a single physical GPU attached to a node. This helps achieve greater cost effectiveness by improving GPU Utilization and workload throughput.
    • Dual-stack networking is now available (preview) for GKE. With this feature, you can allocate dual-stack IPv4 and IPv6 addresses for Pods and nodes. For Services, you can allocate single-stack (IPv4 only or IPv6 only) or dual-stack addresses.
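As a sketch, a dual-stack cluster can be requested at creation time (cluster name and region are hypothetical; dual-stack assumes GKE Dataplane V2 and a compatible VPC subnet):

```shell
# Create a GKE cluster whose Pods and nodes get both IPv4 and IPv6
# addresses (preview at the time of this post).
gcloud container clusters create dual-stack-demo \
  --region=us-central1 \
  --enable-dataplane-v2 \
  --stack-type=ipv4-ipv6 \
  --ipv6-access-type=external
```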
    • View your GKE costs directly in Cloud Billing. Now in preview, you can view a detailed breakdown of cluster costs directly in the Google Cloud console or the Cloud Billing export to BigQuery.  With this detailed information, you can more easily allocate the costs of your GKE clusters and workloads across different teams.
    • Cloud Deploy is now available in 5 additional regions improving performance and flexibility. Learn More.
        • asia-east2 (Hong Kong)

        • europe-west2 (London)

        • europe-west3 (Frankfurt)

        • us-east4 (N. Virginia)

        • us-west2 (Los Angeles)

    • Cloud Deploy deployment of containers to Anthos user clusters using Connect gateway is now generally available. Learn more.
    • Launched Query Insights for Cloud Spanner, a new tool for visualizing query performance metrics and debugging query performance issues in the Cloud Console!
    • Now in preview, BigQuery BI Engine Preferred Tables. Preferred tables enable BigQuery customers to prioritize specific tables for acceleration by BI Engine to ensure predictable performance and optimized use of resources. Read our blog to learn more.
    • Launched MITRE ATT&CK® mappings for Google Cloud security capabilities through our research partnership with the MITRE Engenuity Center for Threat-Informed Defense. Learn more.
    • Launched a new way of accessing billing information — from the Cloud Console mobile app. Now, with your Android or iOS mobile device, you can access not only your resources (App Engine, Compute, Databases, Storage, or IAM), logs, incidents, and errors, but also your billing information. With these enhanced billing features, we are making it easier for you to understand your cloud spend.
    • Eventarc adds support for Firebase Realtime Database. Now you can create Eventarc triggers to send Firebase Realtime Database events to your favorite destinations that Eventarc supports. 
    • The PostgreSQL interface for Cloud Spanner is generally available. It combines the scalability and reliability of Spanner that enterprises trust with the familiarity and portability of PostgreSQL that development teams love. DevOps teams that have scaled their databases with brittle sharding or complex replication can now simplify their architecture with Spanner, using the tools and skills they already have. Get started today, for as low as $65 USD/month. Learn more.

    Week of June 20 - June 24, 2022

    • Read the latest Cloud Data Hero Story. This edition focuses on Francisco, the founder of Direcly, a Google Cloud partner. Francisco immigrated from Quito, Ecuador and founded his company from the ground up, without any external funding. Now, he’s finding innovative ways to leverage Google Cloud’s products for companies like Royal Caribbean International.

    Week of June 13 - June 17, 2022

    • Launched higher reservation limits for BigQuery BI Engine. BigQuery BI Engine now supports a default maximum reservation of 250GB per project for all customers, up from 100GB. You can still request additional BI Engine reservations for your projects here. This is being rolled out in the Google Cloud Console over the next few days to all customers. Alternatively, all customers can already use a DDL statement as follows:

      • ALTER BI_CAPACITY `<PROJECT_ID>.region-<REGION>.default` SET OPTIONS(size_gb = 250);

    • Don’t miss our first ever Google Cloud Sustainability Summit on June 28, 2022. Learn how business and technology leaders are building for the future, and get insights to help you enact sustainable change within your organization. At this digital event, you’ll have a chance to explore the latest tools and best practices that can help you solve your most complex challenges. And you’ll be among the first to find out about product updates across Google Cloud, Earth Engine, and Google Workspace. Register today for this no-cost, solution-packed event.
    • On June 14, 2022, we are unveiling the winners of this year’s Google Cloud Customer Awards. We received an unprecedented number of entries, and every participant can be proud of what their organization is achieving in the cloud today. The second annual Google Cloud Customer Awards celebrates organizations around the world that have continued to flex and adapt to new demands while turning new ideas into interesting realities. Read our blog to check out the results.
    • The Cloud Digital Leader track is now part of the Google Cloud career readiness program, available for eligible faculty preparing their students for a cloud-first workforce. Students will build cloud literacy and learn the value of Google Cloud in driving digital transformation while also preparing for the Cloud Digital Leader certification exam. Learn more.

    Week of June 6 - June 10, 2022

    • Artifact Registry - Audit logs for Maven, npm, and Python repositories are now available in Cloud Logging. Documentation
    • Cloud Deploy New Region - Cloud Deploy is now available in the australia-southeast1 (Sydney) region. Release Notes
    • Cloud Deploy Terraform provider support. Cloud Deploy declarative resources, Delivery Pipeline and Target, are now available via the Google Cloud Deploy Terraform Provider. Documentation
    • Anthos on VMware user cluster lifecycle management from the Google Cloud Console is now GA. You can now create, delete, update, and view Anthos on VMware user clusters from the Google Cloud Console. To learn more about the feature, check out the Anthos documentation.
    • Granular instance sizing for Cloud Spanner is now generally available. Get started for as low as $40 per month and take advantage of 99.999% availability, scaling as needed without downtime. With granular instance sizing, you still get all of the Spanner benefits at a much lower cost, including transparent replication across zones and regions, high availability, resilience to different types of failures, and the ability to scale up and down without downtime. Learn more.

    Week of May 30 - June 3, 2022

    • Google Cloud Deploy’s supported Skaffold version has been updated from 1.37.1 to 1.37.2, which is now the default Skaffold version. (Skaffold Docs)
    • Google Cloud just made it easier to compare the cost of modernization options. Want to look at Lift & Shift vs. Containerization options? The latest version of our fit assessment now includes cost guidance. See the release notes for more details.
    • Did you notice the new “Protect” tab in Google Kubernetes Engine? Protect for GKE automatically scans, identifies and suggests fixes for workload configuration risks by comparing your running workload config against industry best practices like the Kubernetes Pod Security Standards. Check out the documentation to learn more.
    • Google Cloud makes data warehouse migrations even easier with automated SQL translation as part of the BigQuery Migration Service. Learn more.
    • Google Cloud simplifies customer verification and benefits processing with Document AI for Identity cards now generally available. Automate identity verification and fraud detection workflows by extracting information from identity cards with a high degree of accuracy. Learn more.

    Week of May 23 - May 27, 2022

    • Artifact Registry is now available in more regions: europe-west9 (Paris, France), europe-southwest1 (Madrid, Spain), and us-east5 (Columbus, United States). Release Notes
    • Change streams for Cloud Spanner is now generally available. With change streams, Spanner users can track and stream out changes (inserts, updates, and deletes) from their Cloud Spanner database in near real time. Learn more.
    • Artifact Registry now supports new repository types. Apt and Yum repositories are now generally available. Release Notes
    • Business Messages announces the expansion of its partner ecosystem to include Twilio, Genesys, and Avaya, each a widely recognized global platform for customer care and communications. Read how they help businesses implement both AI bot and live agent chat solutions to stay open for conversations and advance customers through the purchase funnel. And be sure to check out the new Business Messages partner directory!
    • Learn how to set up metrics and alerts to monitor errors in the Cloud SQL for SQL Server error log using Google Cloud’s Operations Suite with this blog post.

    Week of May 16 - May 20, 2022

    • Machine learning is among the most exciting, fastest-moving technology disciplines. Join us June 9th for Google Cloud Applied ML Summit, a digital event that brings together some of the world’s leading ML and data science professionals to explore the latest cutting-edge AI tools for developing, deploying, and managing ML models at scale.
    • Join us virtually on June 2nd at the Google Cloud Startup Summit where you’ll hear the latest announcements about how we’re investing in and supporting the startup ecosystem. You'll also learn from technology experts about streamlining your app development and creating better user experiences, and get insights from innovative venture capitalists and founders to help your startup grow. This event is headlined by our keynote with Google Cloud CEO Thomas Kurian and Dapper Labs Co-Founder and CEO Roham Gharegozlou as they discuss the paradigm changes being brought by web3 and how startups can prepare for this shift.
    • Google Cloud Managed Service for Prometheus introduced a new high-usage pricing tier to bring more value for Kubernetes users who want to move all of their metrics operations to the service, and dropped the pricing for existing tiers by 25 percent.
    • Hear the SRE team at Maisons du Monde detail their journey from building with open source Prometheus to deciding that Managed Service for Prometheus was the best fit for their organization.
    • Google Cloud has launched Autonomic Security Operations (ASO) for the U.S. public sector, a solution to modernize threat management in line with the objectives of White House Executive Order 14028 and Office of Management and Budget M-21-31. ASO is a transformational approach to security operations, powered by Chronicle and Siemplify, that comprehensively detects and responds to cyber telemetry across an agency while meeting the Event Logging Tier requirements of the EO.

    Week of May 9 - May 13, 2022

    • We just published a blog post announcing Google Cloud’s latest STAC-M3™ benchmark results. Following up on our 2018 STAC-M3 benchmark audit, a redesigned Google Cloud architecture achieved significant improvements: up to 18x faster, up to 9x higher throughput, and a new record in STAC-M3.β1.1T.YRHIBID-2.TIME. We also published a whitepaper on how we designed and optimized the cluster using API-driven cloud resources.
    • Security Command Center (SCC) released new finding types that alert customers when SCC is either misconfigured or configured in a way that prevents it from operating as expected. These findings provide remediation steps to return SCC to an operational state. Learn more and see examples.

    Week of May 2 - May 6, 2022

    • As part of Anthos release 1.11, Anthos Clusters on Azure and Anthos Clusters on AWS now support Kubernetes versions 1.22.8-gke.200 and 1.21.11-gke.100. As a preview feature, you can now choose Windows as your node pool image type when you create node pools with Kubernetes version 1.22.8. For more information, check out the Anthos multi-cloud website.
    • The Google Cloud Future of Data whitepaper explores why the future of data will involve three key themes: unified, flexible, and accessible.
    • Learn about BigQuery BI Engine and how to analyze large and complex datasets interactively with sub-second query response time and high concurrency. Now generally available.
    • Announcing the launch of the second series of the Google Cloud Technical Guides for Startups, a video series for technical enablement aimed at helping startups to start, build and grow their businesses.
    • Solving for food waste with data analytics in Google Cloud. Explore why retailers need to bring their data to the cloud and apply analytics to minimize food waste.
    • Mosquitoes get the swat with new Mosquito Forecast built by OFF! Insect Repellents and Google Cloud. Read how SC Johnson built an app that predicts mosquito outbreaks in your area.

    Week of April 25 - April 29, 2022

    Week of April 18 - April 22, 2022 

    Week of April 11 - April 15, 2022 

    • Machine learning company Moloco uses Cloud Bigtable to process 5+ million ad bid requests per second. Learn how Moloco uses Bigtable to keep up in a speedy market and process ad requests at unmatched speed and scale.
    • The Broad Institute of MIT and Harvard speeds scientific research with Cloud SQL. One of our customers, the Broad Institute, shares how they used Cloud SQL to accelerate scientific research. In this customer story, you will learn how the Broad Institute was able to get Google’s database services up and running quickly and lower their operational burden by using Cloud SQL.
    • Data Cloud Summit ‘22 recap blog on April 12: Didn’t get a chance to watch the Google Data Cloud Summit this year? Check out our recap for the top five takeaways, covering product announcements, customer speakers, partners, and product demos, plus more resources on your favorite topics.
    • The new Professional Cloud Database Engineer certification in beta is here. By participating in this beta, you will directly influence and enhance the learning and career path for Cloud Database Engineers globally. Learn more and sign up today.
    • Learn how to use Kubernetes Jobs and cost-optimized Spot VMs to run and manage fault-tolerant AI/ML batch workloads on Google Kubernetes Engine.
    • Expanding Eventarc presence to 4 new regions—asia-south2, australia-southeast2, northamerica-northeast2, southamerica-west1. You can now create Eventarc resources in 30 regions.

    Week of April 4 - April 8, 2022 

    • Join us at the Google Data Cloud Summit on Wednesday, April 6, at 9 AM PDT.  Learn how Google Cloud technologies across AI, machine learning, analytics, and databases have helped organizations such as Exabeam, Deutsche Bank, and PayPal to break down silos, increase agility, derive more value from data, and innovate faster. Register today for this no cost digital event.
    • Announcing the first Data Partner Spotlight, on May 11th.
      We saved you a seat at the table to learn about the Data Cloud Partners in the Google Cloud ecosystem. We will spotlight technology partners and deep dive into their solutions so business leaders can make smarter decisions and solve complex data challenges with Google Cloud. Register today for this digital event.
    • Introducing Vertex AI Model Registry, a central repository to manage and govern the lifecycle of your ML models. Designed to work with any type of model and deployment target, including BigQuery ML, Vertex AI Model Registry makes it easy to manage and deploy models. Learn more about Google’s unified data and AI offering.
    • Vertex AI Workbench is now GA, bringing together Google Cloud’s data and ML systems into a single interface so that teams have a common toolset across data analytics, data science, and machine learning. With native integrations across BigQuery, Spark, Dataproc, and Dataplex, data scientists can build, train, and deploy ML models 5X faster than with traditional notebooks. Don’t miss this ‘How to’ session from the Data Cloud Summit.

    Week of Mar 28 - April 1, 2022

    • Learn how Google Cloud’s network and Network Connectivity Center can transform the private wires used for voice trading.
    • The Anthos bare metal 1.11 minor release is now available. Containerd is the default runtime in Anthos clusters on bare metal in this release. Feature enhancements include:
        • Upgraded Anthos clusters on bare metal to use Kubernetes version 1.22;

        • Added Egress Network Address Translation (NAT) gateway capability to provide persistent, deterministic routing for egress traffic from clusters

        • Enabled IPv4/IPv6 dual-stack support

        • Additional enhancements in the release can be found in the release notes here

    Week of Mar 21 - Mar 25, 2022

    • Google Cloud’s Behnaz Kibria reflects on a recent fireside chat that she moderated with Google Cloud’s Phil Moyer and former SEC Commissioner, Troy Paredes at FIA Boca. The discussion focused on the future of markets and policy, the new technologies that are already paving the way for greater speed and transparency, and what it will take to ensure greater resiliency, performance and security over the longer term. Read the blog.
    • Eventarc adds support for Firebase Alerts. Now you can create Eventarc triggers to send Firebase Alerts events to your favorite destinations that Eventarc supports.
    • Now you can control how your alerts handle missing data from telemetry data streams using alert policies in the Cloud Console or via the API. In cloud ecosystems there are millions of data sources, and often there are pauses or breaks in their telemetry data streams. Configure how this missing data influences your open incidents:

      • Option 1: Missing data is treated as “above the threshold,” and your incidents stay open.

      • Option 2: Missing data is treated as “below the threshold,” and the incident closes after your retest window period.

    Week of Mar 14 - Mar 18, 2022

    • Natural language processing is a critical AI tool for understanding unstructured, often technical healthcare information, like clinical notes and lab reports. See how leading healthcare organizations are exploring NLP to unlock hidden value in their data.
    • A handheld lab: Read how Cue Health is revolutionizing healthcare diagnostics for COVID-19 and beyond—all from the comfort of home.
    • Providing reliable technical support for an increasingly distributed, hybrid workforce is becoming all the more crucial, and challenging. Cloud Customer Care has added a range of new offerings and features for businesses of all sizes to help you find the Google Cloud technical support services that are best for your needs and budget.
    • #GoogleforGames Dev Summit is NOW LIVE. Watch the keynote followed by over 20 product sessions on-demand to help you build high quality games and reach audiences around the world. Watch → g.co/gamedevsummit
    • Meeting (and ideally, exceeding) consumer expectations today is often a heavy lift for many companies—especially those running modern apps on legacy, on-premises databases. Read how Google Cloud database services provide the best options for industry-leading reliability, global scale, and open standards, enabling you to make your next big idea a reality. Read the blog.

    Week of Mar 07 - Mar 11, 2022

    • Learn how Google Cloud Partner Advantage partners help customers solve real-world business challenges in retail and ecommerce through data insights.
    • Introducing Community Security Analytics, an open-source repository of queries for self-service security analytics. Get started analyzing your own Google Cloud logs with BigQuery or Chronicle to detect potential threats to your workloads, and to audit usage of your data. Learn more.
    • On a mission to accelerate the world's adoption of a modern approach to threat management through Autonomic Security Operations, our latest update expands our ASO technology stack with Siemplify, offers a solution to the latest White House Executive Order 14028, introduces a community-based security analytics repository, and announces key R&D initiatives that we’re investing in to bolster threat-informed defenses worldwide. Read more here
    • Account defender, available today in public preview, is a feature in reCAPTCHA Enterprise that takes behavioral detection a step further. It analyzes the patterns of behavior for an individual account, in addition to the patterns of behavior of all user accounts associated with your website. Read more here.
    • Maximize your Cloud Spanner savings with new committed use discounts. Get up to 40% discount on Spanner compute capacity by purchasing committed use discounts. Once you make a commitment to spend a certain amount on an hourly basis on Spanner from a billing account, you can get discounts on instances in different instance configurations, regions, and projects associated with that billing account. This flexibility helps you achieve a high utilization rate of your commitment across regions and projects without manual intervention, saving you time and money. Learn more. 
    • In many places across the globe, March is celebrated as Women’s History Month, and March 8th, specifically, marks the day known around the world as International Women’s Day. Google Cloud, in partnership with Women Techmakers, has created an opportunity to bridge the gaps in the credentialing space by offering a certification journey for Ambassadors of the Women Techmakers community. Learn more.
    • Learn how to accelerate vendor due diligence on Google Cloud by leveraging third party risk management providers.
    • Hybrid work should not derail DEI efforts. If you’re moving to a hybrid work model, here’s how to make diversity, equity and inclusion central to it.
    • Learn how Cloud Data Fusion provides scalable data integration pipelines to help consolidate a customer’s SAP and non-SAP datasets within BigQuery.
    • Hong Kong–based startup TecPal builds and manages smart hardware and software for household appliances all over the world using Google Cloud. Find out how.
    • Eventarc adds support for Firebase Remote Config and Test Lab in preview. Now you can create Eventarc triggers to send Firebase Remote Config or Firebase Test Lab events to your favorite destinations that Eventarc supports. 
    • Anthos Service Mesh Dashboard is now available (public preview) on Anthos clusters on Bare Metal and Anthos clusters on VMware. Customers can now get out-of-the-box telemetry dashboards to see a services-first view of their application in the Cloud Console.
    • Micro Focus Enterprise Server Google Cloud blueprint performs an automated deployment of Enterprise Server inside a new VPC or existing VPC. Learn more.
    • Learn how to wire your application logs with more information without adding a single line of code and get more insights with the new version of the Java library.
    • Pacemaker alerts in Google Cloud: cluster alerting enables the system administrator to be notified about critical events in enterprise workloads on GCP, such as SAP solutions.

    Week of Feb 28 - Mar 04, 2022

    • Announcing the Data Cloud Summit, April 6th!—Ready to dive deep into data? Join us at the Google Data Cloud Summit on Wednesday, April 6, at 9 AM PDT. This three-hour digital event is packed with content and experiences designed to help you unlock innovation in your organization. Learn how Google Cloud technologies across AI, machine learning, analytics, and databases have helped organizations such as Exabeam, Deutsche Bank, and PayPal to break down silos, increase agility, derive more value from data, and innovate faster. Register today for this no cost digital event.
    • Google Cloud addresses concerns about how its customers might be impacted by the invasion of Ukraine. Read more.
    • Eventarc is now HIPAA compliant— Eventarc is covered under the Google Cloud Business Associate Agreement (BAA), meaning it has achieved HIPAA compliance. Healthcare and life sciences organizations can now use Eventarc to send events that require HIPAA compliance.
    • Eventarc trigger for Workflows is now available in Preview. You can now select Workflows as a destination for events originating from any supported event provider.
    • Error Reporting automatically captures exceptions found in logs ingested by Cloud Logging from the following languages: Go, Java, Node.js, PHP, Python, Ruby, and .NET, aggregates them, and then notifies you of their existence.
    • Learn more about how USAA partnered with Google Cloud to transform their operations by leveraging AI to drive efficiency in vehicle insurance claims estimation.
    • Learn how Google Cloud and NetApp’s ability to “burst to cloud,” seamlessly spinning up compute and storage on demand, accelerates EDA design testing.
    • Google Cloud CISO Phil Venables shares his thoughts on the latest security updates from the Google Cybersecurity Action Team.
    • The results are in for the Google Cloud Easy as Pie Hackathon.
    • VPC Flow Logs Org Policy Constraints allow users to enforce VPC Flow Logs enablement across their organization, and impose minimum and maximum sampling rates. VPC Flow Logs are used to understand network traffic for troubleshooting, optimization and compliance purposes.
    • Google Cloud Managed Service for Prometheus is now generally available. Get all of the benefits of open source-compatible monitoring with the ease of use of Google-scale managed services. 
    • Google Cloud Deploy now supports Anthos clusters bringing opinionated, fully managed continuous delivery for hybrid and multicloud workloads. Cloud Deploy provides integrated best practices, security, and metrics from a centralized control plane.
    • Learn Google Workspace’s vision for frontline workers and how our Frontline solution innovations can bridge collaboration and productivity across in-office and remote workforces.
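
The Error Reporting aggregation mentioned above can be pictured with a small sketch. The grouping key here (exception type plus the location that raised it) is an illustrative assumption, not the service's actual grouping rule:

```python
from collections import Counter

def error_group_key(log_line: str) -> str:
    """Derive a coarse grouping key from an exception log line.

    This loosely mimics how an error-aggregation service collapses
    repeated exceptions: group by exception type and raising location,
    ignoring the variable parts of the message.
    """
    # "ValueError: bad id 123 at handler.py:42" -> "ValueError@handler.py:42"
    exc_type, _, rest = log_line.partition(":")
    location = rest.rsplit(" at ", 1)[-1].strip() if " at " in rest else "unknown"
    return f"{exc_type.strip()}@{location}"

logs = [
    "ValueError: bad id 123 at handler.py:42",
    "ValueError: bad id 999 at handler.py:42",
    "KeyError: 'user' at auth.py:10",
]
counts = Counter(error_group_key(line) for line in logs)
```

Two distinct `ValueError` messages collapse into one group because their type and location match, which is the property that keeps the findings list readable at volume.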

    Week of Feb 21 - Feb 25, 2022

    • Read how Paerpay promotes bigger tabs and faster, more pleasant transactions with Google Cloud and the Google for Startups Cloud Program.
    • Learn about the advancements we’ve released for our Google Cloud Marketplace customers and partners in the last few months.
    • BBVA collaborated with Google Cloud to create one of the most successful Google Cloud training programs for employees to date. Read how they did it.
    • Google for Games Developer Summit returns March 15 at 9AM PT! Learn about our latest games solutions and product innovations. It’s online and open to all. Check out the full agenda at g.co/gamedevsummit.
    • Build a data mesh on Google Cloud with Dataplex (now GA 🎉). Read how Dataplex enables customers to centrally manage, monitor, and govern distributed data, and makes it securely accessible to a variety of analytics and data science tools.
    • While understanding what is happening now has great business value, forward-thinking companies like Tyson Foods are taking things a step further, using real-time analytics integrated with artificial intelligence (AI) and business intelligence (BI) to answer the question, “what might happen in the future?”
    • Join us for the first Google Cloud Security Talks of 2022, happening on March 9th. Modernizing SecOps is a top priority for so many organizations. Register to attend and learn how you can enhance your approach to threat detection, investigation and response!
    • Google Cloud introduces its Data Hero series with a profile on Lynn Langit, a data cloud architect, educator, and developer on GCP.
    • Building ML solutions? Check out these guidelines for ensuring quality in each process of the MLOps lifecycle.
    • Eventarc is now Payment Card Industry Data Security Standard (PCI DSS)-compliant.

    Week of Feb 14 - Feb 18, 2022

    • The Google Cloud Retail Digital Pulse - Asia Pacific is an ongoing annual assessment carried out in partnership with IDC Retail Insights to understand the maturity of retail digital transformation in the Asia Pacific region. The study covers 1,304 retailers across eight markets and sub-segments, investigating their digital maturity across five dimensions - strategy, people, data, technology, and process - to arrive at a four-stage Digital Pulse Index, with 4 being the most mature. It provides great insights into the various stages of digital maturity of Asian retailers, their drivers for digitisation, challenges, innovation hotspots, and focus areas with respect to use cases and technologies.
    • Deploying Cloud Memorystore for Redis for any scale: Learn how you can scale Cloud Memorystore for high-volume use cases by leveraging client-side sharding. This blog provides a step-by-step walkthrough that demonstrates how you can adapt your existing application to scale to the highest levels with the help of the Envoy Proxy. Read our blog to learn more.
    • Check out how six SAP customers are driving value with BigQuery.
    • This Black History Month, we're highlighting Black-led startups using Google Cloud to grow their businesses. Check out how DOSS and its co-founder, Bobby Bryant, disrupts the real estate industry with voice search tech and analytics on Google Cloud.
    • Vimeo leverages managed database services from Google Cloud to serve up billions of views around the world each day. Read how it uses Cloud Spanner to deliver a consistent and reliable experience to its users no matter where they are.
    • How can serverless best be leveraged? Can cloud credits be maximized? Are all managed services equal? We dive into top questions for startups.
    • Google introduces a Sustainability value pillar in the GCP Active Assist solution to accelerate our industry leadership in CO2 reduction and environmental protection efforts. An intelligent carbon footprint reduction tool has launched in preview.
    • Central States health insurance CIO Pat Moroney shares highs and lows from his career transforming IT. Read more
    • Traffic Director client authorization for proxyless gRPC services is now generally available. Combine with managed mTLS credentials in GKE to centrally manage access between workloads using Traffic Director. Read more.
    • Cloud Functions (2nd gen) is now in public preview. The next generation of our Cloud Functions Functions-as-a-Service platform gives you more features, control, performance, scalability and events sources. Learn more.
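
The client-side sharding pattern for Memorystore mentioned above boils down to routing each key to a fixed shard by hashing it. A minimal sketch, with hypothetical shard endpoints (in the blog's setup, the Envoy proxy performs this routing for you):

```python
import hashlib

# Hypothetical Memorystore instance endpoints, one per shard.
SHARDS = ["redis-shard-0:6379", "redis-shard-1:6379", "redis-shard-2:6379"]

def shard_for(key: str, shards=SHARDS) -> str:
    """Pick a shard deterministically from the key's hash.

    Client-side sharding spreads keys across several Redis instances;
    a stable hash ensures the same key always routes to the same
    shard, so reads find what writes stored.
    """
    digest = hashlib.md5(key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(shards)
    return shards[index]
```

Note that plain modulo hashing reshuffles most keys when the shard count changes; production setups usually layer consistent hashing on top for that reason.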

    Week of Feb 7 - Feb 11, 2022

    • Now announcing the general availability of the newest instance series in our Compute Optimized family, C2D—powered by 3rd Gen AMD EPYC processors. Read how C2D provides larger instance types, and memory per core configurations ideal for customers with performance-intensive workloads.
    • Digital health startup expands its impact on healthcare equity and diversity with Google Cloud Platform and the Google for Startups Accelerator for Black Founders. Read more.
    • Storage Transfer Service support for agent pools is now generally available (GA). You can use agent pools to create isolated groups of agents as a source or sink entity in a transfer job. This enables you to transfer data from multiple data centers and filesystems concurrently, without creating multiple projects for a large transfer spanning multiple filesystems and data centers. This option is available via the API, the Console, and the gcloud transfer CLI.
    • The five trends driving healthcare and life sciences in 2022 will be powered by accessible data, AI, and partnerships.
    • Learn how COLOPL, Minna Bank and 7-Eleven Japan use Cloud Spanner to solve their scalability, performance and digital transformation challenges.

    Week of Jan 31 - Feb 4, 2022

    • Pub/Sub Lite goes regional. Pub/Sub Lite is a high-volume messaging service with ultra-low cost that now offers regional Lite topics, in addition to existing zonal Lite topics. Unlike zonal topics which are located in a single zone, regional topics are asynchronously replicated across two zones. Multi-zone replication protects from zonal failures in the service. Read about it here.

    • Google Workspace is making it easy for employees to bring modern collaboration to work, even if their organizations are still using legacy tools. Essentials Starter is a no-cost offer designed to help people bring the apps they know and love to use in their personal lives to their work life. Learn more.

    • We’re now offering 30 days free access to role-based Google Cloud training with interactive labs and opportunities to earn skill badges to demonstrate your cloud knowledge. Learn more.

    • Security Command Center (SCC) Premium adds support for additional compliance benchmarks, including CIS Google Cloud Computing Foundations 1.2 and OWASP Top 10 2017 & 2021. Learn more about how SCC helps manage and improve your cloud security posture.

    • Storage Transfer Service now offers Preview support for transfers from self-managed object storage systems via user-managed agents. With this new feature, customers can seamlessly copy petabytes of data from cloud or on-premises object storage to Google Cloud Storage. Object storage sources must be compatible with Amazon S3 APIs. For customers migrating from AWS S3 to GCS, this feature gives an option to control network routes to Google Cloud. Fill out this signup form to access this STS feature.

    Week of Jan 24-Jan 28, 2022

    • Learn how Sabre leveraged a 10-year partnership with Google Cloud to power the travel industry with innovative technology. As Sabre embarked on a cloud transformation, it sought managed database services from Google Cloud that enabled low latency and improved consistency. Sabre discovered how the strengths of both Cloud Spanner and Bigtable supported unique use cases and led to high performance solutions.

    • Storage Transfer Service now offers Preview support for moving data between two filesystems and keeping them in sync on a periodic schedule. This launch offers a managed way to migrate from a self-managed filesystem to Filestore. If you have on-premises systems generating massive amounts of data that needs to be processed in Google Cloud, you can now use Storage Transfer Service to accelerate data transfer from an on-prem filesystem to a cloud filesystem. See Transfer data between POSIX file systems for details.
    • Storage Transfer Service now offers Preview support for preserving POSIX attributes and symlinks when transferring to, from, and between POSIX filesystems. Attributes include the user ID of the owner, the group ID of the owning group, the mode or permissions, the modification time, and the size of the file. See Metadata preservation for details.
    • Bigtable Autoscaling is Generally Available (GA): Bigtable Autoscaling automatically adds or removes capacity in response to the changing demand for your applications. With autoscaling, you only pay for what you need and you can spend more time on your business instead of managing infrastructure.  Learn more.
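
The POSIX attribute preservation described above (mode and modification time carried over with the file) can be sketched with the standard library. Owner and group preservation (`os.chown`) is omitted because it requires elevated privileges:

```python
import os
import shutil

def copy_with_posix_metadata(src: str, dst: str) -> None:
    """Copy a file and preserve its POSIX attributes.

    Mirrors what a transfer service does when metadata preservation
    is enabled: after copying the bytes, carry over the mode
    (permissions) and the access/modification times from the source.
    """
    shutil.copyfile(src, dst)
    st = os.stat(src)
    os.chmod(dst, st.st_mode)           # preserve permissions
    os.utime(dst, (st.st_atime, st.st_mtime))  # preserve timestamps
```

`shutil.copystat` bundles much of this; the explicit calls are spelled out here to show exactly which attributes the bullet above refers to.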

    Week of Jan 17-Jan 21, 2022

    • Sprinklr and Google Cloud join forces to help enterprises reimagine their customer experience management strategies. Hear more from Nirav Sheth, Director of ISV/Marketplace & Partner Sales.
    • Firestore Key Visualizer is Generally Available (GA): Firestore Key Visualizer is an interactive performance-monitoring tool that helps customers observe and maximize Firestore’s performance. Learn more.
    • Like many organizations, Wayfair faced the challenge of deciding which cloud databases they should migrate to in order to modernize their business and operations. Ultimately, they chose Cloud SQL and Cloud Spanner because of the databases’ clear path for shifting workloads as well as the flexibility they both provide. Learn how Wayfair was able to migrate quickly while still being able to serve production traffic at scale.

    Week of Jan 10-Jan 14, 2022

    • Start your 2022 New Year’s resolutions by learning at no cost how to use Google Cloud. Read more to find how to take advantage of these training opportunities.
    • 8 megatrends drive cloud adoption—and improve security for all. Google Cloud CISO Phil Venables explains the eight major megatrends powering cloud adoption, and why they’ll continue to make the cloud more secure than on-prem for the foreseeable future. Read more.

    Week of Jan 3-Jan 7, 2022

    • Google Transfer Appliance announces General Availability of online mode. Customers collecting data at edge locations (e.g. cameras, cars, sensors) can offload to Transfer Appliance and stream that data to a Cloud Storage bucket. Online mode can be toggled to send the data to Cloud Storage over the network, or offline by shipping the appliance. Customers can monitor their online transfers for appliances from Cloud Console.

    Week of Dec 27-Dec 31, 2021

    • The most-read blogs about Google Cloud compute, networking, storage and physical infrastructure in 2021. Read more.

    • Top Google Cloud managed container blogs of 2021.

    • Four cloud security trends that organizations and practitioners should be planning for in 2022—and what they should do about them. Read more.

    • Google Cloud announces the top data analytics stories from 2021 including the top three trends and lessons they learned from customers this year. Read more.

    • Explore Google Cloud’s Contact Center AI (CCAI) and its momentum in 2021. Read more.

    • An overview of the innovations that Google Workspace delivered in 2021 for Google Meet. Read more.

    • Google Cloud’s top artificial intelligence and machine learning posts from 2021. Read more.

    • How we’ve helped break down silos, unearth the value of data, and apply that data to solve big problems. Read more.

    • A recap of the year’s infrastructure progress, from impressive Tau VMs, to industry-leading storage capabilities, to major networking leaps. Read more.

    • Google Cloud CISO Phil Venables shares his thoughts on the latest security updates from the Google Cybersecurity Action Team. Read more.

    • Google Cloud - A cloud built for developers — 2021 year in review. Read more.

    • API management continued to grow in importance in 2021, and Apigee continued to innovate capabilities for customers, new solutions, and partnerships. Read more.

    • Recapping Google’s progress in 2021 toward running on 24/7 carbon-free energy by 2030 — and decarbonizing the electricity system as a whole. Read more.

    Week of Dec 20-Dec 24, 2021

    • And that’s a wrap! After engaging in countless customer interviews, we’re sharing our top 3 lessons learned from our data customers in 2021. Learn what customer data journeys inspired our top picks and what made the cut here.
    • Cloud SQL now shows you minor version information. For more information, see our documentation.
    • Cloud SQL for MySQL now allows you to select your MySQL 8.0 minor version when creating an instance and upgrade MySQL 8.0 minor version. For more information, see our documentation.
    • Cloud SQL for MySQL now supports database auditing. Database auditing lets you track specific user actions in the database, such as table updates, read queries, user privilege grants, and others. To learn more, see MySQL database auditing.

    Week of Dec 12-Dec 17, 2021

    • A critical vulnerability in a widely used logging library, Apache’s Log4j, has become a global security incident. Security researchers around the globe warn that it could have serious repercussions. Two Google Cloud Blog posts describe how Cloud Armor and Cloud IDS both help mitigate the threat.
    • Take advantage of these ten no-cost trainings before 2022. Check them out here.
    • Deploy Task Queues alongside your Cloud Application: Cloud Tasks is now available in 23 GCP Regions worldwide. Read more.
    • Managed Anthos Service Mesh support for GKE Autopilot (Preview): GKE Autopilot with Managed ASM provides ease of use and simplified administration capabilities, allowing customers to focus on their application, not the infrastructure. Customers can now let Google handle the upgrade and lifecycle tasks for both the cluster and the service mesh. Configure Managed ASM with the asmcli experiment in a GKE Autopilot cluster.
    • Policy Troubleshooter for BeyondCorp Enterprise is now generally available! Using this feature, admins can triage access failure events and perform the necessary actions to unblock users quickly. Learn more by registering for Google Cloud Security Talks on December 15 and attending the BeyondCorp Enterprise session. The event is free to attend and sessions will be available on-demand.
    • Google Cloud Security Talks, Zero Trust Edition: This week, we hosted our final Google Cloud Security Talks event of the year, focused on all things zero trust. Google pioneered the implementation of zero trust in the enterprise over a decade ago with our BeyondCorp effort, and we continue to lead the way, applying this approach to most aspects of our operations. Check out our digital sessions on-demand to hear the latest updates on Google’s vision for a zero trust future and how you can leverage our capabilities to protect your organization in today’s challenging threat environment.

    Week of Dec 6-Dec 10, 2021

    • 5 key metrics to measure cloud FinOps impact in 2022 and beyond - Learn about the 5 key metrics to effectively measure the impact of Cloud FinOps across your organization and leverage the metrics to gain insights, prioritize on strategic goals, and drive enterprise-wide adoption. Learn more
    • We announced Cloud IDS, our new network security offering, is now generally available. Cloud IDS, built with Palo Alto Networks’ technologies, delivers easy-to-use, cloud-native, managed, network-based threat detection with industry-leading breadth and security efficacy. To learn more, and to request a 30-day trial credit, see the Cloud IDS webpage.

    Week of Nov 29-Dec 3, 2021

    • Join Cloud Learn, happening from Dec. 8-9: This interactive learning event will have live technical demos, Q&As, career development workshops, and more covering everything from Google Cloud fundamentals to certification prep. Learn more.

    • Get a deep dive into BigQuery Administrator Hub – With BigQuery Administrator Hub you can better manage BigQuery at scale using Resource Charts and Slot Estimator. Learn more about these tools and just how easy they are to use here.

    • New data and AI in Media blog - How data and AI can help media companies better personalize; and what to watch out for. We interviewed Googlers, Gloria Lee, Executive Account Director of Media & Entertainment, and John Abel, Technical Director for the Office of the CTO, to share exclusive insights on how media organizations should think about and ways to make the most out of their data in the new era of direct-to-consumer. Watch our video interview with Gloria and John and read more.

    • Datastream is now generally available (GA): Datastream, a serverless change data capture (CDC) and replication service, allows you to synchronize data across heterogeneous databases, storage systems, and applications reliably and with minimal latency to support real-time analytics, database replication, and event-driven architectures. Datastream currently supports CDC ingestion from Oracle and MySQL to Cloud Storage, with additional sources and destinations coming in the future. Datastream integrates with Dataflow and Cloud Data Fusion to deliver real time replication to a wide range of destinations, including BigQuery, Cloud Spanner and Cloud SQL. Learn more.

    Week of Nov 22 - Nov 26, 2021

    • Security Command Center (SCC) launches new mute findings capability: We’re excited to announce a new “Mute Findings” capability in SCC that helps you gain operational efficiencies by effectively managing findings volume based on your organization’s policies and requirements. SCC presents potential security risks in your cloud environment as ‘findings’ across misconfigurations, vulnerabilities, and threats. With the launch of the ‘mute findings’ capability, you gain a way to reduce findings volume and focus on the security issues that are most relevant to you and your organization. To learn more, read this blog post and watch this short demo video.
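
The mute behavior described above can be modeled as a filter that sets findings aside without deleting them. A sketch with illustrative field names and rule shapes (not SCC's actual finding schema or filter syntax):

```python
def apply_mute_rules(findings, mute_filters):
    """Partition findings into (active, muted) lists.

    A mute rule is modeled as a predicate on a finding dict; findings
    matching any rule are muted rather than deleted, so they remain
    queryable but drop out of the default working view.
    """
    active, muted = [], []
    for finding in findings:
        if any(rule(finding) for rule in mute_filters):
            muted.append(finding)
        else:
            active.append(finding)
    return active, muted
```

The key design point the announcement makes is that muting changes visibility, not retention: a muted finding can still be audited or un-muted later.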

    Week of Nov 15 - Nov 19, 2021

    • Cloud Spanner is our distributed, globally scalable SQL database service that decouples compute from storage, which makes it possible to scale processing resources separately from storage. This means that horizontal upscaling is possible with no downtime for achieving higher performance on dimensions such as operations per second for both reads and writes. The distributed scaling nature of Spanner’s architecture makes it an ideal solution for unpredictable workloads such as online games. Learn how you can get started developing global multiplayer games using Spanner.

    • New Dataflow templates for Elasticsearch released to help customers process and export Google Cloud data into their Elastic Cloud. You can now push data from Pub/Sub, Cloud Storage, or BigQuery into your Elasticsearch deployments in a cloud-native fashion. Read more for a deep dive on how to set up a Dataflow streaming pipeline to collect and export your Cloud Audit logs into Elasticsearch and analyze them in the Kibana UI.

    • We’re excited to announce the public preview of Google Cloud Managed Service for Prometheus, a new monitoring offering designed for scale and ease of use that maintains compatibility with the open-source Prometheus ecosystem. While Prometheus works well for many basic deployments, managing Prometheus can become challenging at enterprise scale. Learn more about the service in our blog and on the website.

    Week of Nov 8 - Nov 12, 2021

    Week of Nov 1 - Nov 5, 2021

    • Time to live (TTL) reduces storage costs, improves query performance, and simplifies data retention in Cloud Spanner by automatically removing unneeded data based on user-defined policies. Unlike custom scripts or application code, TTL is fully managed and designed for minimal impact on other workloads. TTL is generally available today in Spanner at no additional cost. Read more.
    • New whitepaper available: Migrating to .NET Core/5+ on Google Cloud - This free whitepaper, written for .NET developers and software architects who want to modernize their .NET Framework applications, outlines the benefits and things to consider when migrating .NET Framework apps to .NET Core/5+ running on Google Cloud. It also offers a framework with suggestions to help you build a strategy for migrating to a fully managed Kubernetes offering or to Google serverless. Download the free whitepaper.
    • Export from Google Cloud Storage: Storage Transfer Service now offers Preview support for exporting data from Cloud Storage to any POSIX file system. You can use this bidirectional data movement capability to move data in and out of Cloud Storage, on-premises clusters, and edge locations including Google Distributed Cloud. The service provides built-in capabilities such as scheduling, bandwidth management, retries, and data integrity checks that simplify the data transfer workflow. For more information, see Download data from Cloud Storage.
    • Document Translation is now GA! Translate documents in real-time in 100+ languages, and retain document formatting. Learn more about new features and see a demo on how Eli Lilly translates content globally.
    • Announcing the general availability of Cloud Asset Inventory console - We’re excited to announce the general availability of the new Cloud Asset Inventory user interface. In addition to all the capabilities announced earlier in Public Preview, the general availability release provides powerful search and easy filtering capabilities. These capabilities enable you to view details of resources and IAM policies, machine type and policy statistics, and insights into your overall cloud footprint. Learn more about these new capabilities by using the searching resources and searching IAM policies guides. You can get more information about Cloud Asset Inventory using our product documentation.
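
Spanner's TTL (first bullet above) is declared in DDL as a row deletion policy, e.g. `OLDER_THAN(created_at, INTERVAL 30 DAY)`, and executed by a managed background process. The selection logic such a policy expresses can be sketched as follows; the row layout and column name are illustrative:

```python
from datetime import datetime, timedelta, timezone

def rows_to_delete(rows, older_than_days=30, now=None):
    """Return keys of rows whose timestamp exceeds the TTL window.

    'rows' is a list of (key, created_at) tuples. This only shows the
    selection a row deletion policy expresses; in Spanner the deletion
    itself is fully managed and paced to minimize workload impact.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=older_than_days)
    return [key for key, created_at in rows if created_at < cutoff]
```

The contrast the bullet draws is with custom cleanup scripts: the policy is declarative, so there is no cron job or application code to operate.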

    Week of Oct 25 - Oct 29, 2021

    • BigQuery table snapshots are now generally available. A table snapshot is a low-cost, read-only copy of a table's data as it was at a particular time.
    • By establishing a robust value measurement approach to track business value metrics against business goals, we are bringing technology, finance, and business leaders together through the discipline of Cloud FinOps, showing how digital transformation enables organizations to create new innovative capabilities and generate top-line revenue. Learn more.
    • We’ve announced BigQuery Omni, a new multicloud analytics service that allows data teams to perform cross-cloud analytics - across AWS, Azure, and Google Cloud - all from one viewpoint. Learn how BigQuery Omni works and what data and business challenges it solves here.

    Week of Oct 18 - Oct 22, 2021

    • Available now: our newest T2D VM family, based on 3rd Generation AMD EPYC processors. Learn more.
    • In case you missed it — top AI announcements from Google Cloud Next. Catch up on what’s new, see demos, and hear from our customers about how Google Cloud is making AI more accessible, more focused on business outcomes, and fast-tracking the time-to-value.
    • Too much to take in at Google Cloud Next 2021? No worries - here’s a breakdown of the biggest announcements at the 3-day event.
    • Check out the second revision of Architecture Framework, Google Cloud’s collection of canonical best practices.

    Week of Oct 4 - Oct 8, 2021

    • We’re excited to announce Google Cloud’s new goal of equipping more than 40 million people with Google Cloud skills. To help achieve this goal, we’re offering no-cost access to all our training content this month. Find out more here
    • Support for language repositories in Artifact Registry is now generally available. Artifact Registry allows you to store all your language-specific artifacts in one place. Supported package types include Java, Node and Python. Additionally, support for Linux packages is in public preview. Learn more.
    • Want to know what’s the latest with Google ML-Powered intelligence service Active Assist and how to learn more about it at Next’21? Check out this blog.

    Week of Sept 27 - Oct 1, 2021

    • Announcing the launch of Speaker ID. In 2020, customer preference for voice calls increased by 10 percentage points (to 43%) and was by far the most preferred service channel. But most callers still need to pass through archaic authentication processes, which slow down time to resolution and burn through valuable agent time. Speaker ID, from Google Cloud, brings ML-based speaker identification directly to customers and contact center partners, allowing callers to authenticate over the phone using their own voice. Learn more.
    • Your guide to all things AI & ML at Google Cloud Next. Google Cloud Next is coming October 12–14 and if you’re interested in AI & ML, we’ve got you covered. Tune in to hear about real use cases from companies like Twitter, Eli Lilly, Wayfair, and more. We’re also excited to share exciting product news and hands on AI learning opportunities. Learn more about AI at Next and register for free today!
    • It is now simple to use Terraform to configure Anthos features on your GKE clusters. Check out part two of this series which explores adding Policy Controller audits to our Config Sync managed cluster. Learn more.

    Week of Sept 20 - Sept 24, 2021

    • Announcing the webinar, Powering market data through cloud and AI/ML. We’re sponsoring a Coalition Greenwich webinar on September 23rd where we’ll discuss the findings of our upcoming study on how market data delivery and consumption is being transformed by cloud and AI. Moderated by Coalition Greenwich, the panel will feature Trey Berre from CME Group, Brad Levy from Symphony, and Ulku Rowe representing Google Cloud. Register here.
    • New research from Google Cloud reveals five innovation trends for market data. Together with Coalition Greenwich we surveyed exchanges, trading systems, data aggregators, data producers, asset managers, hedge funds, and investment banks to examine both the distribution and consumption of market data and trading infrastructure in the cloud. Learn more about our findings here.
    • If you are looking for a more automated way to manage quotas over a high number of projects, we are excited to introduce a Quota Monitoring Solution from Google Cloud Professional Services. This solution benefits customers who have many projects or organizations and are looking for an easy way to monitor the quota usage in a single dashboard and use default alerting capabilities across all quotas.

      Week of Sept 13 - Sept 17, 2021

      • New storage features help ensure data is never lost. We are announcing extensions to our popular Cloud Storage offering, and introducing two new services, Filestore Enterprise, and Backup for Google Kubernetes Engine (GKE). Together, these new capabilities will make it easier for you to protect your data out-of-the box, across a wide variety of applications and use cases: Read the full article.
      • API management powers sustainable resource management. Water, waste, and energy solutions company, Veolia, uses APIs and API Management platform Apigee to build apps and help their customers build their own apps, too. Learn from their digital and API-first approach here.
      • To support our expanding customer base in Canada, we’re excited to announce that the new Google Cloud Platform region in Toronto is now open. Toronto is the 28th Google Cloud region connected via our high-performance network, helping customers better serve their users and customers throughout the globe. In combination with Montreal, customers now benefit from improved business continuity planning with distributed, secure infrastructure needed to meet IT and business requirements for disaster recovery, while maintaining data sovereignty.
      • Cloud SQL now supports custom formatting controls for CSVs. When performing admin exports and imports, users can now select custom characters for field delimiters, quotes, escapes, and other characters. For more information, see our documentation.

      Week of Sept 6 - Sept 10, 2021

      • Hear how Lowe’s SRE was able to reduce their Mean Time to Recovery (MTTR) by over 80% after adopting Google’s Site Reliability Engineering practices and Google Cloud’s operations suite.

      Week of Aug 30 - Sept 3, 2021

      • A what’s new blog in the what’s new blog? Yes, you read that correctly. Google Cloud data engineers are always hard at work maintaining the hundreds of dataset pipelines that feed into our public datasets repository, but they’re also regularly bringing new ones into the mix. Check out our newest featured datasets and catch a few best practices in our living blog: What are the newest datasets in Google Cloud?
      • Migration success with Operational Health Reviews from Google Cloud’s Professional Service Organization - Learn how Google Cloud’s Professional Services Org is proactively and strategically guiding customers to operate effectively and efficiently in the Cloud, both during and after their migration process.
      • Learn how we simplified monitoring for Google Cloud VMware Engine and Google Cloud operations suite. Read more.

      Week of Aug 23 - Aug 27, 2021

      • Google Transfer Appliance announces preview of online mode. Customers are increasingly collecting data that needs to quickly be transferred to the cloud. Transfer Appliances are being used to quickly offload data from sources (e.g. cameras, cars, sensors) and can now stream that data to a Cloud Storage bucket. Online mode can be toggled as data is copied into the appliance and either send the data offline by shipping the appliance to Google or copy data to Cloud Storage over the network. Read more.
      • Topic retention for Cloud Pub/Sub is now Generally Available. Topic retention is the most comprehensive and flexible way available to retain Pub/Sub messages for message replay. In addition to backing up all subscriptions connected to the topic, new subscriptions can now be initialized from a timestamp in the past. Learn more about the feature here.
      • Vertex Predictions now supports private endpoints for online prediction. Through VPC Peering, Private Endpoints provide increased security and lower latency when serving ML models. Read more.
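
The replay semantics of topic retention (second bullet above) amount to rebuilding a backlog from the retained messages published at or after the seek timestamp. A sketch, with an illustrative (publish_time, payload) message layout:

```python
def replay_from(messages, seek_timestamp):
    """Backlog a new subscription sees after seeking to a timestamp.

    With topic retention, a subscription can be initialized from a
    point in the past; it then receives every retained message whose
    publish time is at or after that instant, regardless of which
    subscriptions existed when the message was published.
    """
    return [payload for publish_time, payload in messages
            if publish_time >= seek_timestamp]
```

This is what distinguishes topic retention from per-subscription retention: the messages are kept on the topic, so even subscriptions created later can replay them.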

      Week of Aug 16-Aug 20, 2021

      • Look for us to take security one step further by adding authorization features for service-to-service communications for gRPC proxyless services, as well as to support other deployment models, where proxyless gRPC services are running somewhere other than GKE, for example Compute Engine. We hope you'll join us and check out the setup guide and give us feedback.
      • Cloud Run now supports VPC Service Controls. You can now protect your Cloud Run services against data exfiltration by using VPC Service Controls in conjunction with Cloud Run’s ingress and egress settings. Read more.
      • Read how retailers are leveraging Google Cloud VMware Engine to move their on-premises applications to the cloud, where they can achieve the scale, intelligence, and speed required to stay relevant and competitive. Read more.
      • A series of new features for BeyondCorp Enterprise, our zero trust offering. We now offer native support for client certificates for eight types of VPC-SC resources. We are also announcing general availability of the on-prem connector, which allows users to secure HTTP- or HTTPS-based on-premises applications outside of Google Cloud. Additionally, three new BeyondCorp attributes are available in Access Context Manager as part of a public preview. Customers can configure custom access policies based on time and date, credential strength, and/or Chrome browser attributes. Read more about these announcements here.
      • We are excited to announce that Google Cloud, working with its partners NAG and DDN, demonstrated the highest performing Lustre file system on the IO500 ranking of the fastest HPC storage systems — quite a feat considering Lustre is one of the most widely deployed HPC file systems in the world.  Read the full article.
      • The Storage Transfer Service for on-premises data API is now available in Preview. Now you can use RESTful APIs to automate your on-prem-to-cloud transfer workflows. Storage Transfer Service is a software service to transfer data over a network. The service provides built-in capabilities such as scheduling, bandwidth management, retries, and data integrity checks that simplify the data transfer workflow.
      • It is now simple to use Terraform to configure Anthos features on your GKE clusters. This is the first part of a three-part series describing how to use Terraform to enable Config Sync. For platform administrators, this natural IaC approach improves auditability and transparency and reduces the risk of misconfigurations or security gaps. Read more.
      • In this commissioned study, “Modernize With AIOps To Maximize Your Impact”, Forrester Consulting surveyed organizations worldwide to better understand how they’re approaching artificial intelligence for IT operations (AIOps) in their cloud environments, and what kind of benefits they’re seeing. Read more.
      • If your organization or development environment has strict security policies which don’t allow for external IPs, it can be difficult to set up a connection between a Private Cloud SQL instance and a Private IP VM. This article contains clear instructions on how to set up a connection from a private Compute Engine VM to a private Cloud SQL instance using a private service connection and the mysqlsh command line tool.
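
The private-connection setup described in the last bullet can be sketched roughly as follows. All resource names and the IP are hypothetical placeholders; the linked article has the authoritative steps.

```shell
# 1. Reserve an address range and create the private services connection
#    that Cloud SQL will use for private IP.
gcloud compute addresses create google-managed-services-default \
    --global --purpose=VPC_PEERING --prefix-length=16 --network=default
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-default --network=default

# 2. Create a Cloud SQL instance with a private IP only (no external IP).
gcloud sql instances create my-private-instance \
    --database-version=MYSQL_8_0 --region=us-central1 \
    --network=default --no-assign-ip

# 3. From a private Compute Engine VM in the same VPC, connect with mysqlsh,
#    substituting the instance's private IP.
mysqlsh --uri=root@PRIVATE_IP_OF_INSTANCE
```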

      Week of Aug 9-Aug 13, 2021

      • Compute Engine users have a new, updated set of VM-level “in-context” metrics, charts, and logs to correlate signals for common troubleshooting scenarios across CPU, Disk, Memory, Networking, and live Processes.  This brings the best of Google Cloud’s operations suite directly to the Compute Engine UI. Learn more.
      • Pub/Sub to Splunk Dataflow template has been updated to address multiple enterprise customer asks, from improved compatibility with the Splunk Add-on for Google Cloud Platform, to more extensibility with user-defined functions (UDFs), to general pipeline reliability enhancements that tolerate failures like transient network issues when delivering data to Splunk. Read more to learn how to take advantage of these latest features.
      • Google Cloud and NVIDIA have teamed up to make VR/AR workloads easier and faster to create, and tetherless! Read more.
      • Register for the Google Cloud Startup Summit, September 9, 2021 at goo.gle/StartupSummit for a digital event filled with inspiration, learning, and discussion. This event will bring together our startup and VC community to discuss the latest trends and insights, headlined by a keynote from Astro Teller, Captain of Moonshots at X, the moonshot factory. Additionally, learn from a variety of technical and business sessions to help take your startup to the next level.
      • Google Cloud and Harris Poll healthcare research reveals COVID-19 impacts on healthcare technology. Learn more.
      • Partial SSO is now available for public preview. If you use a third-party identity provider for single sign-on to Google services, Partial SSO lets you designate a subset of your users to use Google / Cloud Identity as your SAML SSO identity provider (short video and demo).

      Week of Aug 2-Aug 6, 2021

      • Gartner named Google Cloud a Leader in the 2021 Magic Quadrant for Cloud Infrastructure and Platform Services, formerly Infrastructure as a Service. Learn more.
      • Private Service Connect is now generally available. Private Service Connect lets you create private and secure connections to Google Cloud and third-party services with service endpoints in your VPCs. Read more.
      • 30 migration guides designed to help you identify the best way to migrate while meeting common organizational goals, like minimizing time and risk during your migration, identifying the most enterprise-grade infrastructure for your workloads, picking a cloud that aligns with your organization’s sustainability goals, and more. Read more.

      Week of Jul 26-Jul 30, 2021

      • This week we’re hosting our Retail & Consumer Goods Summit, a digital event dedicated to helping leading retailers and brands digitally transform their business. Read more about our consumer packaged goods strategy and a guide to key summit content for brands in this blog from Giusy Buonfantino, Google Cloud’s Vice President of CPG.

      • See how IKEA uses Recommendations AI to provide customers with more relevant product information. Read more.

      • Google Cloud launches a career program for people with autism, designed to hire and support more talented people with autism in the rapidly growing cloud industry. Learn more.

      • Google Cloud follows new API stability tenets that work to minimize unexpected deprecations to our Enterprise APIs. Read more.

      Week of Jul 19-Jul 23, 2021

      • Register and join us for Google Cloud Next, October 12-14, 2021 at g.co/CloudNext for a fresh approach to digital transformation, as well as a few surprises. Next ’21 will be a fully customizable digital adventure for a more personalized learning journey. Find the tools and training you need to succeed, from live, interactive Q&As and informative breakout sessions to educational demos and real-life applications of the latest tech from Google Cloud. Get ready to plug into your cloud community, get informed, and be inspired. Together we can tackle today’s greatest business challenges, and start solving for what’s next.
      • "Application Innovation" takes a front row seat this year– To stay ahead of rising customer expectations and the digital and in-person hybrid landscape, enterprises must know what application innovation means and how to deliver this type of innovation with a small piece of technology that might surprise you. Learn more about the three pillars of app innovation here.
      • We announced Cloud IDS, our new network security offering, which is now available in preview. Cloud IDS delivers easy-to-use, cloud-native, managed, network-based threat detection. With Cloud IDS, customers can enjoy a Google Cloud-integrated experience, built with Palo Alto Networks’ industry-leading threat detection technologies to provide high levels of security efficacy. Learn more.
      • Key Visualizer for Cloud Spanner is now generally available. Key Visualizer is a new interactive monitoring tool that lets developers and administrators analyze usage patterns in Spanner. It reveals trends and outliers in key performance and resource metrics for databases of any size, helping to optimize queries and reduce infrastructure costs. See it in action.
      • The market for healthcare cloud is projected to grow 43%. This means a need for better tech infrastructure, digital transformation & Cloud tools. Learn how Google Cloud Partner Advantage partners help customers solve business challenges in healthcare.

      Week of Jul 12-Jul 16, 2021

      • Simplify VM migrations with Migrate for Compute Engine as a Service: delivers a Google-managed cloud service that enables simple, frictionless, and large-scale enterprise migrations of virtual machines to Google Compute Engine with minimal downtime and risk. API-driven and integrated into your Google Cloud console for ease of use, this service uses agent-less replication to copy data without manual intervention and without VPN requirements. It also enables you to launch non-disruptive validations of your VMs prior to cutover. Rapidly migrate a single application, or execute a sprint of hundreds of systems using migration groups, with confidence. Read more here.
      • The Google Cloud region in Delhi NCR is now open for business, ready to host your workloads. Learn more and watch the region launch event here.
      • Introducing Quilkin: the open-source game server proxy. Developed in collaboration with Embark Studios, Quilkin is an open source UDP proxy, tailor-made for high performance real-time multiplayer games. Read more.
      • We’re making Google Glass on Meet available to a wider network of global customers. Learn more.
      • Transfer Appliance supports Google Managed Encryption Keys — We’re announcing support for Google Managed Encryption Keys with Transfer Appliance, in addition to the currently available Customer Managed Encryption Keys feature. Customers have asked for the Transfer Appliance service to create and manage encryption keys for transfer sessions to improve usability and maintain security. The Transfer Appliance Service can now manage the encryption keys for customers who do not wish to handle a key themselves. Learn more about Using Google Managed Encryption Keys.

      • UCLA builds a campus-wide API program– With Google Cloud's API management platform, Apigee, UCLA created a unified and strong API foundation that removes data friction that students, faculty, and administrators alike face. This foundation not only simplifies how various personas connect to data, but also encourages more innovations in the future. Learn their story.

      • An enhanced region picker makes it easy to choose a Google Cloud region with the lowest CO2 output. Learn more.
      • Amwell and Google Cloud explore five ways telehealth can help democratize access to healthcare. Read more.
      • Major League Baseball and Kaggle launch ML competition to learn about fan engagement. Batter up!
      • We’re rolling out general support of Brand Indicators for Message Identification (BIMI) in Gmail within Google Workspace. Learn more.

      • Learn how DeNA Sports Business created an operational status visualization system that helps determine whether live event attendees have correctly installed Japan’s coronavirus contact tracing app COCOA.

      • Google Cloud CAS provides a highly scalable and available private CA to address the unprecedented growth in certificates in the digital world. Read more about CAS.

      Week of Jul 5-Jul 9, 2021

      • Google Cloud and Call of Duty League launch ActivStat to bring fans, players, and commentators the power of competitive statistics in real-time. Read more.
      • Building applications is a heavy lift due to technical complexity, including the complexity of the backend services used to manage and store data. Firestore alters this by having Google Cloud manage your backend complexity through a complete backend-as-a-service! Learn more.
      • Google Cloud’s new Native App Development skills challenge lets you earn badges that demonstrate your ability to create cloud-native apps. Read more and sign up.

      Week of Jun 28-Jul 2, 2021

      • Storage Transfer Service now offers preview support for Integration with AWS Security Token Service. Security conscious customers can now use Storage Transfer Service to perform transfers from AWS S3 without passing any security credentials. This release will alleviate the security burden associated with passing long-term AWS S3 credentials, which have to be rotated or explicitly revoked when they are no longer needed. Read more.
      • The most popular and surging Google Search terms are now available in BigQuery as a public dataset. View the Top 25 and Top 25 rising queries from Google Trends from the past 30 days, including 5 years of historical data across the 210 Designated Market Areas (DMAs) in the US. Learn more.
      • A new predictive autoscaling capability lets you add additional Compute Engine VMs in anticipation of forecasted demand. Predictive autoscaling is generally available across all Google Cloud regions. Read more or consult the documentation for more information on how to configure, simulate and monitor predictive autoscaling.
      • Messages by Google is now the default messaging app for all AT&T customers using Android phones in the United States. Read more.
      • TPU v4 Pods will soon be available on Google Cloud, providing the most powerful publicly available computing platform for machine learning training. Learn more.
      • Cloud SQL for SQL Server has addressed multiple enterprise customer asks with the GA releases of both SQL Server 2019 and Active Directory integration, as well as the Preview release of Cross Region Replicas. This set of releases works in concert to allow customers to set up a more scalable and secure managed SQL Server environment to address their workloads’ needs. Read more.
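
As a rough sketch, the Google Trends public dataset announced above can be queried from the bq CLI like this. The table name is as published; verify the current schema in the BigQuery console before relying on specific columns.

```shell
# Top 25 search terms from the most recent refresh, per DMA.
bq query --use_legacy_sql=false '
SELECT term, rank, dma_name
FROM `bigquery-public-data.google_trends.top_terms`
WHERE refresh_date = (SELECT MAX(refresh_date)
                      FROM `bigquery-public-data.google_trends.top_terms`)
ORDER BY rank
LIMIT 25'
```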

      Week of Jun 21-Jun 25, 2021

      • Simplified return-to-office with no-code technology—We've just released a solution to your most common return-to-office headaches: make a no-code app customized to solve your business-specific challenges. Learn how to create an automated app where employees can see office room occupancy, check what desks are reserved or open, review disinfection schedules, and more in this blog tutorial.
      • New technical validation whitepaper for running ecommerce applications—Enterprise Strategy Group's analyst outlines the challenges of organizations running ecommerce applications and how Google Cloud helps to mitigate those challenges and handle changing demands with global infrastructure solutions. Download the whitepaper.
      • The full agenda for the Google for Games Developer Summit on July 12th-13th, 2021 is now available. This free digital event features announcements from teams including Stadia, Google Ads, AdMob, Android, Google Play, Firebase, Chrome, YouTube, and Google Cloud. Hear more about how Google Cloud technology creates opportunities for gaming companies to make lasting enhancements for players and creatives. Register at g.co/gamedevsummit
      • BigQuery row-level security is now generally available, giving customers a way to control access to subsets of data in the same table for different groups of users. Row-level security (RLS) extends the principle of least privilege access and enables fine-grained access control policies in BigQuery tables. BigQuery currently supports access controls at the project-, dataset-, table- and column-level. Adding RLS to the portfolio of access controls now enables customers to filter and define access to specific rows in a table based on qualifying user conditions—providing much needed peace of mind for data professionals.
      • Transfer from Azure ADLS Gen 2: Storage Transfer Service offers Preview support for transferring data from Azure ADLS Gen 2 to Google Cloud Storage. Take advantage of a scalable, serverless service to handle data transfer. Read more.
      • reCAPTCHA V2 and V3 customers can now migrate site keys to reCAPTCHA Enterprise in under 10 minutes and without making any code changes. Watch our Webinar to learn more. 
      • Bot attacks are the biggest threat to your business that you probably haven’t addressed yet. Check out our Forbes article to see what you can do about it.
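
The row-level security feature above boils down to a single DDL statement. In this sketch the dataset, table, column, and group are all hypothetical placeholders: members of the group see only rows whose region column matches.

```shell
# Members of sales-apac@example.com querying mydataset.sales
# will only see rows where region = "APAC".
bq query --use_legacy_sql=false '
CREATE ROW ACCESS POLICY apac_only
ON mydataset.sales
GRANT TO ("group:sales-apac@example.com")
FILTER USING (region = "APAC")'
```

Policies are filters, so a user covered by no policy on a protected table sees zero rows rather than an error.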

      Week of Jun 14-Jun 18, 2021

      • A new VM family for scale-out workloads—New AMD-based Tau VMs offer 56% higher absolute performance and 42% higher price-performance compared to general-purpose VMs from any of the leading public cloud vendors. Learn more.
      • New whitepaper helps customers plot their cloud migrations—Our new whitepaper distills the conversations we’ve had with CIOs, CTOs, and their technical staff into several frameworks that can help cut through the hype and the technical complexity to help devise the strategy that empowers both the business and IT. Read more or download the whitepaper.
      • Ubuntu Pro lands on Google Cloud—The general availability of Ubuntu Pro images on Google Cloud gives customers an improved Ubuntu experience, expanded security coverage, and integration with critical Google Cloud features. Read more.
      • Navigating hybrid work with a single, connected experience in Google Workspace—New additions to Google Workspace help businesses navigate the challenges of hybrid work, such as Companion Mode for Google Meet calls. Read more.
      • Arab Bank embraces Google Cloud technology—This Middle Eastern bank now offers innovative apps and services to their customers and employees with Apigee and Anthos. In fact, Arab Bank reports over 90% of their new-to-bank customers are using their mobile apps. Learn more.
      • Google Workspace for the Public Sector events—This June, learn about Google Workspace tips and tricks to help you get things done. Join us for one or more of our learning events tailored for government and higher education users. Learn more.

      Week of Jun 7-Jun 11, 2021

      • The top cloud capabilities industry leaders want for sustained innovation—Multicloud and hybrid cloud approaches, coupled with open-source technology adoption, enable IT teams to take full advantage of the best cloud has to offer. Our recent study with IDG shows just how much of a priority this has become for business leaders. Read more or download the report.
      • Announcing the Firmina subsea cable—Planned to run from the East Coast of the United States to Las Toninas, Argentina, with additional landings in Praia Grande, Brazil, and Punta del Este, Uruguay, Firmina will be the longest open subsea cable in the world capable of running entirely from a single power source at one end of the cable if its other power source(s) become temporarily unavailable—a resilience boost at a time when reliable connectivity is more important than ever. Read more.
      • New research reveals what’s needed for AI acceleration in manufacturing—According to our data, which polled more than 1,000 senior manufacturing executives across seven countries, 76% have turned to digital enablers and disruptive technologies such as data and analytics, cloud, and artificial intelligence (AI) due to the pandemic. And 66% of manufacturers who use AI in their day-to-day operations report that their reliance on AI is increasing. Read more or download the report.
      • Cloud SQL offers even faster maintenance—Cloud SQL maintenance is zippier than ever. MySQL and PostgreSQL planned maintenance typically lasts less than 60 seconds and SQL Server maintenance typically lasts less than 120 seconds. You can learn more about maintenance here.
      • Simplifying Transfer Appliance configuration with Cloud Setup Application—We’re announcing the availability of the Transfer Appliance Cloud Setup Application. It uses the information you provide through simple prompts to configure your Google Cloud permissions, preferred Cloud Storage bucket, and Cloud KMS key for your transfer. Several cloud-console-based manual steps are now simplified into a command-line experience. Read more.
      • Google Cloud VMware Engine is now HIPAA compliant—As of April 1, 2021, Google Cloud VMware Engine is covered under the Google Cloud Business Associate Agreement (BAA), meaning it has achieved HIPAA compliance. Healthcare organizations can now migrate and run their HIPAA-compliant VMware workloads in a fully compatible VMware Cloud Verified stack running natively in Google Cloud with Google Cloud VMware Engine, without changes or re-architecture to tools, processes, or applications. Read more.
      • Introducing container-native Cloud DNS—Kubernetes networking almost always starts with a DNS request. DNS has broad impacts on your application and cluster performance, scalability, and resilience. That is why we are excited to announce the release of container-native Cloud DNS—the native integration of Cloud DNS with Google Kubernetes Engine (GKE) to provide in-cluster Service DNS resolution with Cloud DNS, our scalable and full-featured DNS service. Read more.
      • Welcoming the EU’s new Standard Contractual Clauses for cross-border data transfers—Learn how we’re incorporating the new Standard Contractual Clauses (SCCs) into our contracts to help protect our customers’ data and meet the requirements of European privacy legislation. Read more.
      • Lowe’s meets customer demand with Google SRE practices—Learn how Lowe’s has been able to increase the number of releases they can support by adopting Google’s Site Reliability Engineering (SRE) framework and leveraging their partnership with Google Cloud. Read more.
      • What’s next for SAP on Google Cloud at SAPPHIRE NOW and beyond—As SAP’s SAPPHIRE conference begins this week, we believe businesses have a more significant opportunity than ever to build for their next decade of growth and beyond. Learn more on how we’re working together with our customers, SAP, and our partners to support this transformation. Read more.
      • Support for Node.js, Python and Java repositories for Artifact Registry now in Preview–With today’s announcement, you can not only use Artifact Registry to secure and distribute container images, but also manage and secure your other software artifacts. Read more.
      • Google named a Leader in The Forrester Wave: Streaming Analytics, Q2 2021 report–Learn about the criteria where Google Dataflow was rated 5 out of 5 and why this matters for our customers here.
      • Applied ML Summit this Thursday, June 10–Watch our keynote to learn about predictions for machine learning over the next decade. Engage with distinguished researchers, leading practitioners, and Kaggle Grandmasters during our live Ask Me Anything session. Take part in our modeling workshops to learn how you can iterate faster, and deploy and manage your models with confidence–no matter your level of formal computer science training. Learn how to develop and apply your professional skills, grow your abilities at the pace of innovation, and take your career to the next level. Register now.
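
The container-native Cloud DNS integration announced above is enabled at cluster creation time. A rough sketch follows; the cluster name and zone are hypothetical, and while the feature is in early rollout the flags may require the gcloud beta component.

```shell
# Create a GKE cluster whose in-cluster Service DNS is resolved by
# Cloud DNS (scoped to this cluster) instead of kube-dns.
gcloud beta container clusters create my-cluster \
    --cluster-dns=clouddns \
    --cluster-dns-scope=cluster \
    --zone=us-central1-a
```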

      Week of May 31-Jun 4, 2021

      • Security Command Center now supports CIS 1.1 benchmarks and granular access control–Security Command Center (SCC) now supports CIS benchmarks for Google Cloud Platform Foundation v1.1, enabling you to monitor and address compliance violations against industry best practices in your Google Cloud environment. Additionally, SCC now supports fine-grained access control for administrators that allows you to easily adhere to the principle of least privilege—restricting access based on roles and responsibilities to reduce risk and enabling broader team engagement to address security. Read more.
      • Zero-trust managed security for services with Traffic Director–We created Traffic Director to bring to you a fully managed service mesh product that includes load balancing, traffic management and service discovery. And now, we’re happy to announce the availability of a fully-managed zero-trust security solution using Traffic Director with Google Kubernetes Engine (GKE) and Certificate Authority (CA) Service. Read more.
      • How one business modernized their data warehouse for customer success–PedidosYa migrated from their old data warehouse to Google Cloud's BigQuery. Now with BigQuery, the Latin American online food ordering company has reduced the total cost per query by 5x. Learn more.
      • Announcing new Cloud TPU VMs–New Cloud TPU VMs make it easier to use our industry-leading TPU hardware by providing direct access to TPU host machines, offering a new and improved user experience to develop and deploy TensorFlow, PyTorch, and JAX on Cloud TPUs. Read more.
      • Introducing logical replication and decoding for Cloud SQL for PostgreSQL–We’re announcing the public preview of logical replication and decoding for Cloud SQL for PostgreSQL. By releasing those capabilities and enabling change data capture (CDC) from Cloud SQL for PostgreSQL, we strengthen our commitment to building an open database platform that meets critical application requirements and integrates seamlessly with the PostgreSQL ecosystem. Read more.
      • How 6 businesses are transforming with SAP on Google Cloud–Thousands of organizations globally rely on SAP for their most mission critical workloads. And for many Google Cloud customers, part of a broader digital transformation journey has included accelerating the migration of these essential SAP workloads to Google Cloud for greater agility, elasticity, and uptime. Read six of their stories.
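
The logical replication preview above reduces to one instance flag plus standard PostgreSQL publication setup. This is a sketch with hypothetical instance, user, database, slot, and publication names; see the release blog for the full walkthrough.

```shell
# Enable logical decoding on the Cloud SQL for PostgreSQL instance
# (this triggers a restart).
gcloud sql instances patch my-pg-instance \
    --database-flags=cloudsql.logical_decoding=on

# Then, connected as a user that has been granted the REPLICATION
# attribute (ALTER USER myuser WITH REPLICATION;), create a publication
# and a logical replication slot for change data capture.
psql -h INSTANCE_IP -U myuser -d mydb \
    -c "CREATE PUBLICATION my_pub FOR ALL TABLES;" \
    -c "SELECT pg_create_logical_replication_slot('my_slot', 'pgoutput');"
```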

      Week of May 24-May 28, 2021

      • Google Cloud for financial services: driving your transformation cloud journey–As we welcome the industry to our Financial Services Summit, we’re sharing more on how Google Cloud accelerates a financial organization’s digital transformation through app and infrastructure modernization, data democratization, people connections, and trusted transactions. Read more or watch the summit on demand.
      • Introducing Datashare solution for financial services–We announced the general availability of Datashare for financial services, a new Google Cloud solution that brings together the entire capital markets ecosystem—data publishers and data consumers—to exchange market data securely and easily. Read more.
      • Announcing Datastream in Preview–Datastream, a serverless change data capture (CDC) and replication service, allows enterprises to synchronize data across heterogeneous databases, storage systems, and applications reliably and with minimal latency to support real-time analytics, database replication, and event-driven architectures. Read more.
      • Introducing Dataplex: An intelligent data fabric for analytics at scale–Dataplex provides a way to centrally manage, monitor, and govern your data across data lakes, data warehouses and data marts, and make this data securely accessible to a variety of analytics and data science tools. Read more.
      • Announcing Dataflow Prime–Available in Preview in Q3 2021, Dataflow Prime is a new platform based on a serverless, no-ops, auto-tuning architecture built to bring unparalleled resource utilization and radical operational simplicity to big data processing. Dataflow Prime builds on Dataflow and brings new user benefits with innovations in resource utilization and distributed diagnostics. The new capabilities in Dataflow significantly reduce the time spent on infrastructure sizing and tuning tasks, as well as time spent diagnosing data freshness problems. Read more.
      • Secure and scalable sharing for data and analytics with Analytics Hub–With Analytics Hub, available in Preview in Q3, organizations get a rich data ecosystem by publishing and subscribing to analytics-ready datasets; control and monitoring over how their data is being used; a self-service way to access valuable and trusted data assets; and an easy way to monetize their data assets without the overhead of building and managing the infrastructure. Read more.
      • Cloud Spanner trims entry cost by 90%–Coming soon to Preview, granular instance sizing in Spanner lets organizations run workloads at as low as 1/10th the cost of regular instances, equating to approximately $65/month. Read more.
      • Cloud Bigtable lifts SLA and adds new security features for regulated industries–Bigtable instances with a multi-cluster routing policy across 3 or more regions are now covered by a 99.999% monthly uptime percentage under the new SLA. In addition, new Data Access audit logs can help determine whether sensitive customer information has been accessed in the event of a security incident, and if so, when, and by whom. Read more.
      • Build a no-code journaling app–In honor of Mental Health Awareness Month, Google Cloud's no-code application development platform, AppSheet, demonstrates how you can build a journaling app complete with titles, time stamps, mood entries, and more. Learn how with this blog and video here.
      • New features in Security Command Center—On May 24th, Security Command Center Premium launched the general availability of granular access controls at project- and folder-level and Center for Internet Security (CIS) 1.1 benchmarks for Google Cloud Platform Foundation. These new capabilities enable organizations to improve their security posture and efficiently manage risk for their Google Cloud environment. Learn more.
      • Simplified API operations with AI–Google Cloud's API management platform Apigee applies Google's industry leading ML and AI to your API metadata. Understand how it works with anomaly detection here.
      • This week: Data Cloud and Financial Services Summits–Our Google Cloud Summit series begins this week with the Data Cloud Summit on Wednesday May 26 (Global). At this half-day event, you’ll learn how leading companies like PayPal, Workday, Equifax, and many others are driving competitive differentiation using Google Cloud technologies to build their data clouds and transform data into value that drives innovation. The following day, Thursday May 27 (Global & EMEA) at the Financial Services Summit, discover how Google Cloud is helping financial institutions such as PayPal, Global Payments, HSBC, Credit Suisse, AXA Switzerland and more unlock new possibilities and accelerate business through innovation. Read more and explore the entire summit series.
      • Announcing the Google for Games Developer Summit 2021 on July 12th-13th–With a surge of new gamers and an increase in time spent playing games in the last year, it’s more important than ever for game developers to delight and engage players. To help developers with this opportunity, the games teams at Google are back to announce the return of the Google for Games Developer Summit 2021 on July 12th-13th. Hear from experts across Google about new game solutions they’re building to make it easier for you to continue creating great games, connecting with players and scaling your business. Registration is free and open to all game developers. Register for the free online event at g.co/gamedevsummit to get more details in the coming weeks. We can’t wait to share our latest innovations with the developer community. Learn more.

      Week of May 17-May 21, 2021

      • Best practices to protect your organization against ransomware threats–For more than 20 years Google has been operating securely in the cloud, using our modern technology stack to provide a more defensible environment that we can protect at scale. While the threat of ransomware isn’t new, our responsibility to help protect you from existing or emerging threats never changes. In our recent blog post, we shared guidance on how organizations can increase their resilience to ransomware and how some of our Cloud products and services can help. Read more.

      • Forrester names Google Cloud a Leader in Unstructured Data Security Platforms–Forrester Research has named Google Cloud a Leader in The Forrester Wave: Unstructured Data Security Platforms, Q2 2021 report, and rated Google Cloud highest in the current offering category among the providers evaluated. Read more or download the report.
      • Introducing Vertex AI: One platform, every ML tool you need–Vertex AI is a managed machine learning (ML) platform that allows companies to accelerate the deployment and maintenance of artificial intelligence (AI) models. Read more.
      • Transforming collaboration in Google Workspace–We’re launching smart canvas, a new product experience that delivers the next evolution of collaboration for Google Workspace. Between now and the end of the year, we’re rolling out innovations that make it easier for people to stay connected, focus their time and attention, and transform their ideas into impact. Read more.
      • Developing next-generation geothermal power–At I/O this week, we announced a first-of-its-kind, next-generation geothermal project with clean-energy startup Fervo that will soon begin adding carbon-free energy to the electric grid that serves our data centers and infrastructure throughout Nevada, including our Cloud region in Las Vegas. Read more.
      • Contributing to an environment of trust and transparency in Europe–Google Cloud was one of the first cloud providers to support and adopt the EU GDPR Cloud Code of Conduct (CoC). The CoC is a mechanism for cloud providers to demonstrate how they offer sufficient guarantees to implement appropriate technical and organizational measures as data processors under the GDPR. This week, the Belgian Data Protection Authority, based on a positive opinion by the European Data Protection Board (EDPB), approved the CoC, a product of years of constructive collaboration between the cloud computing community, the European Commission, and European data protection authorities. We are proud to say that Google Cloud Platform and Google Workspace already adhere to these provisions. Learn more.
      • Announcing Google Cloud datasets solutions–We're adding commercial, synthetic, and first-party data to our Google Cloud Public Datasets Program to help organizations increase the value of their analytics and AI initiatives, and we're making available an open source reference architecture for a more streamlined data onboarding process to the program. Read more.
      • Introducing custom samples in Cloud Code–With new custom samples in Cloud Code, developers can quickly access your enterprise’s best code samples via a versioned Git repository directly from their IDEs. Read more.
      • Retention settings for Cloud SQL–Cloud SQL now allows you to configure backup retention settings to protect against data loss. You can retain between 1 and 365 days’ worth of automated backups and between 1 and 7 days’ worth of transaction logs for point-in-time recovery. See the details here.
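      Those retention windows correspond to fields on the instance's backup configuration in the Cloud SQL Admin API. As an illustrative sketch (the specific values here are assumptions, not defaults), a PATCH request body retaining 30 automated backups and 7 days of transaction logs might look like:

      ```json
      {
        "settings": {
          "backupConfiguration": {
            "enabled": true,
            "backupRetentionSettings": {
              "retentionUnit": "COUNT",
              "retainedBackups": 30
            },
            "transactionLogRetentionDays": 7
          }
        }
      }
      ```

      Both values must fall within the documented limits: 1-365 for retained backups and 1-7 days for transaction logs.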
      • Cloud developer’s guide to Google I/O 2021–Google I/O may look a little different this year, but don’t worry, you’ll still get the same first-hand look at the newest launches and projects coming from Google. Best of all, it’s free and available to all (virtually) on May 18-20. Read more.

      Week of May 10-May 14, 2021

      • APIs and Apigee power modern day due diligence–With APIs and Google Cloud's Apigee, business due diligence company DueDil revolutionized the way they harness and share their Big Information Graph (B.I.G.) with partners and customers. Get the full story.
      • Cloud CISO Perspectives: May 2021–It’s been a busy month here at Google Cloud since our inaugural CISO perspectives blog post in April. Here, VP and CISO of Google Cloud Phil Venables recaps our cloud security and industry highlights, offers a sneak peek of what’s ahead from Google at RSA, and more. Read more.
      • 4 new features to secure your Cloud Run services–We announced several new ways to secure Cloud Run environments to make developing and deploying containerized applications easier for developers. Read more.
      • Maximize your Cloud Run investments with new committed use discounts–We’re introducing self-service spend-based committed use discounts for Cloud Run, which let you commit for a year to spending a certain amount on Cloud Run and, in return, receive a 17% discount on the amount you committed. Read more.
      • Google Cloud Armor Managed Protection Plus is now generally available–Cloud Armor, our Distributed Denial of Service (DDoS) protection and Web-Application Firewall (WAF) service on Google Cloud, leverages the same infrastructure, network, and technology that has protected Google’s internet-facing properties from some of the largest attacks ever reported. These same tools protect customers’ infrastructure from DDoS attacks, which are increasing in both magnitude and complexity every year. Deployed at the very edge of our network, Cloud Armor absorbs malicious network- and protocol-based volumetric attacks, while mitigating the OWASP Top 10 risks and maintaining the availability of protected services. Read more.
      • Announcing Document Translation for Translation API Advanced in preview–Translation is critical to many developers and localization providers, whether you’re releasing a document, a piece of software, training materials or a website in multiple languages. With Document Translation, now you can directly translate documents in 100+ languages and formats such as Docx, PPTx, XLSx, and PDF while preserving document formatting. Read more.
      • Introducing BeyondCorp Enterprise protected profiles–Protected profiles enable users to securely access corporate resources from an unmanaged device with the same threat and data protections available in BeyondCorp Enterprise–all from the Chrome Browser. Read more.
      • How reCAPTCHA Enterprise protects unemployment and COVID-19 vaccination portals–With so many people visiting government websites to learn more about the COVID-19 vaccine, make vaccine appointments, or file for unemployment, these web pages have become prime targets for bot attacks and other abusive activities. But reCAPTCHA Enterprise has helped state governments protect COVID-19 vaccine registration portals and unemployment claims portals from abusive activities. Learn more.
      • Day one with Anthos? Here are 6 ideas for how to get started–Once you have your new application platform in place, there are some things you can do to immediately get value and gain momentum. Here are six things you can do to get you started. Read more.
      • The era of the transformation cloud is here–Google Cloud’s president Rob Enslin shares how the era of the transformation cloud has seen organizations move beyond data centers to change not only where their business is done but, more importantly, how it is done. Read more.

      Week of May 3-May 7, 2021

      • Transforming hard-disk drive maintenance with predictive ML–In collaboration with Seagate, we developed a machine learning system that can forecast the probability of a recurring failing disk—a disk that fails or has experienced three or more problems in 30 days. Learn how we did it.
      • Agent Assist for Chat is now in public preview–Agent Assist provides your human agents with continuous support during their calls, and now chats, by identifying the customers’ intent and providing them with real-time recommendations such as articles and FAQs as well as responses to customer messages to more effectively resolve the conversation. Read more.
      • New Google Cloud, AWS, and Azure product map–Our updated product map helps you understand similar offerings from Google Cloud, AWS, and Azure, and you can easily filter the list by product name or other common keywords. Read more or view the map.
      • Join our Google Cloud Security Talks on May 12th–We’ll share expert insights into how we’re working to be your most trusted cloud. Find the list of topics we’ll cover here.
      • Databricks is now GA on Google Cloud–Deploy or migrate Databricks Lakehouse to Google Cloud to combine the benefits of an open data cloud platform with greater analytics flexibility, unified infrastructure management, and optimized performance. Read more.
      • HPC VM image is now GA–The CentOS-based HPC VM image makes it quick and easy to create HPC-ready VMs on Google Cloud that are pre-tuned for optimal performance. Check out our documentation and quickstart guide to start creating instances using the HPC VM image today.
      • Take the 2021 State of DevOps survey–Help us shape the future of DevOps and make your voice heard by completing the 2021 State of DevOps survey before June 11, 2021. Read more or take the survey.
      • OpenTelemetry Trace 1.0 is now available–OpenTelemetry has reached a key milestone: the OpenTelemetry Tracing Specification has reached version 1.0. API and SDK release candidates are available for Java, Erlang, Python, Go, Node.js, and .Net. Additional languages will follow over the next few weeks. Read more.
      • New blueprint helps secure confidential data in AI Platform Notebooks–We’re adding to our portfolio of blueprints with the publication of our Protecting confidential data in AI Platform Notebooks blueprint guide and deployable blueprint, which can help you apply data governance and security policies that protect your AI Platform Notebooks containing confidential data. Read more.
      • The Liquibase Cloud Spanner extension is now GA–Liquibase, an open-source library that works with a wide variety of databases, can be used for tracking, managing, and automating database schema changes. By providing the ability to integrate databases into your CI/CD process, Liquibase helps you more fully adopt DevOps practices. The Liquibase Cloud Spanner extension allows developers to use Liquibase's open-source database library to manage and automate schema changes in Cloud Spanner. Read more.
      • Cloud computing 101: Frequently asked questions–There are a number of terms and concepts in cloud computing, and not everyone is familiar with all of them. To help, we’ve put together a list of common questions, and the meanings of a few of those acronyms. Read more.

      Week of Apr 26-Apr 30, 2021

      • Announcing the GKE Gateway controller, in Preview–GKE Gateway controller, Google Cloud’s implementation of the Gateway API, manages internal and external HTTP/S load balancing for a GKE cluster or a fleet of GKE clusters and provides multi-tenant sharing of load balancer infrastructure with centralized admin policy and control. Read more.
      • See Network Performance for Google Cloud in Performance Dashboard–The Google Cloud performance view, part of the Network Intelligence Center, provides packet loss and latency metrics for traffic on Google Cloud. It allows users to do informed planning of their deployment architecture, as well as determine in real time the answer to the most common troubleshooting question: "Is it Google or is it me?" The Google Cloud performance view is now open for all Google Cloud customers as a public preview. Check it out.
      • Optimizing data in Google Sheets allows users to create no-code apps–Format columns and tables in Google Sheets to prepare your data for transformation into a fully customized, successful app–no coding necessary. Read our four best Google Sheets tips.
      • Automation bots with AppSheet Automation–AppSheet recently released AppSheet Automation, infusing Google AI capabilities to AppSheet's trusted no-code app development platform. Learn step by step how to build your first automation bot on AppSheet here.
      • Google Cloud announces a new region in Israel–Our new region in Israel will make it easier for customers to serve their own users faster, more reliably and securely. Read more.
      • New multi-instance NVIDIA GPUs on GKE–We’re launching support for multi-instance GPUs in GKE (currently in Preview), which will help you drive better value from your GPU investments. Read more.
      • Partnering with NSF to advance networking innovation–We announced our partnership with the U.S. National Science Foundation (NSF), joining other industry partners and federal agencies, as part of a combined $40 million investment in academic research for Resilient and Intelligent Next-Generation (NextG) Systems, or RINGS. Read more.
      • Creating a policy contract with Configuration as Data–Configuration as Data is an emerging cloud infrastructure management paradigm that allows developers to declare the desired state of their applications and infrastructure, without specifying the precise actions or steps for how to achieve it. However, declaring a configuration is only half the battle: you also want policy that defines how a configuration is to be used. This post shows you how.
      • Google Cloud products deliver real-time data solutions–Seven-Eleven Japan built Seven Central, its new platform for digital transformation, on Google Cloud. Powered by BigQuery, Cloud Spanner, and Apigee API management, Seven Central presents easy to understand data, ultimately allowing for quickly informed decisions. Read their story here.

      Week of Apr 19-Apr 23, 2021

      • Extreme PD is now GA–On April 20th, Google Cloud’s Persistent Disk launched general availability of Extreme PD, a high performance block storage volume with provisioned IOPS and up to 2.2 GB/s of throughput. Learn more.
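      For illustration, an Extreme PD volume is created like any other persistent disk, with the IOPS provisioned up front. A sketch of a disk-creation request body in the Compute Engine API (the name, zone, size, and IOPS figure are illustrative assumptions):

      ```json
      {
        "name": "my-extreme-disk",
        "type": "zones/us-central1-a/diskTypes/pd-extreme",
        "sizeGb": "1000",
        "provisionedIops": "80000"
      }
      ```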

      • Research: How data analytics and intelligence tools will play a key role post-COVID-19–A recent Google-commissioned study by IDG highlighted the role of data analytics and intelligent solutions when it comes to helping businesses separate from their competition. The survey of 2,000 IT leaders across the globe reinforced the notion that the ability to derive insights from data will go a long way towards determining which companies win in this new era. Learn more or download the study.

      • Introducing PHP on Cloud Functions–We’re bringing support for PHP, a popular general-purpose programming language, to Cloud Functions. With the Functions Framework for PHP, you can write idiomatic PHP functions to build business-critical applications and integration layers. And with Cloud Functions for PHP, now available in Preview, you can deploy functions in a fully managed PHP 7.4 environment, complete with access to resources in a private VPC network. Learn more.

      • Delivering our 2020 CCAG pooled audit–As our customers increased their use of cloud services to meet the demands of teleworking and aid in COVID-19 recovery, we’ve worked hard to meet our commitment to being the industry’s most trusted cloud, despite the global pandemic. We’re proud to announce that Google Cloud completed an annual pooled audit with the CCAG in a completely remote setting, and were the only cloud service provider to do so in 2020. Learn more.

      • Anthos 1.7 now available–We recently released Anthos 1.7, our run-anywhere Kubernetes platform that’s connected to Google Cloud, delivering an array of capabilities that make multicloud more accessible and sustainable. Learn more.

      • New Redis Enterprise for Anthos and GKE–We’re making Redis Enterprise for Anthos and Google Kubernetes Engine (GKE) available in the Google Cloud Marketplace in private preview. Learn more.

      • Updates to Google Meet–We introduced a refreshed user interface (UI), enhanced reliability features powered by the latest Google AI, and tools that make meetings more engaging—even fun—for everyone involved. Learn more.

      • DocAI solutions now generally available–Document (Doc) AI platform, Lending DocAI, and Procurement DocAI, built on decades of AI innovation at Google, bring powerful and useful solutions across lending, insurance, government and other industries. Learn more.

      • Four consecutive years of 100% renewable energy–In 2020, Google again matched 100 percent of its global electricity use with purchases of renewable energy. All told, we’ve signed agreements to buy power from more than 50 renewable energy projects, with a combined capacity of 5.5 gigawatts–about the same as a million solar rooftops. Learn more.

      • Announcing the Google Cloud region picker–The Google Cloud region picker lets you assess key inputs like price, latency to your end users, and carbon footprint to help you choose which Google Cloud region to run on. Learn more.

      • Google Cloud launches new security solution WAAP–Web App and API Protection (WAAP) combines Google Cloud Armor, Apigee, and reCAPTCHA Enterprise to deliver improved threat protection, consolidated visibility, and greater operational efficiencies across clouds and on-premises environments. Learn more about WAAP here.
      • New in no-code–As discussed in our recent article, no-code hackathons are trending among innovative organizations. Since then, we've outlined how you can host one yourself specifically designed for your unique business innovation outcomes. Learn how here.
      • Google Cloud Referral Program now available—Now you can share the power of Google Cloud and earn product credit for every new paying customer you refer. Once you join the program, you’ll get a unique referral link that you can share with friends, clients, or others. Whenever someone signs up with your link, they’ll get a $350 product credit—that’s $50 more than the standard trial credit. When they become a paying customer, we’ll reward you with a $100 product credit in your Google Cloud account. Available in the United States, Canada, Brazil, and Japan. Apply for the Google Cloud Referral Program.

      Week of Apr 12-Apr 16, 2021

      • Announcing the Data Cloud Summit, May 26, 2021–At this half-day event, you’ll learn how leading companies like PayPal, Workday, Equifax, Zebra Technologies, Commonwealth Care Alliance and many others are driving competitive differentiation using Google Cloud technologies to build their data clouds and transform data into value that drives innovation. Learn more and register at no cost.
      • Announcing the Financial Services Summit, May 27, 2021–In this two-hour event, you’ll learn how Google Cloud is helping financial institutions including PayPal, Global Payments, HSBC, Credit Suisse, and more unlock new possibilities and accelerate business through innovation and better customer experiences. Learn more and register for free: Global & EMEA.
      • How Google Cloud is enabling vaccine equity–In our latest update, we share more on how we’re working with US state governments to help produce equitable vaccination strategies at scale. Learn more.
      • The new Google Cloud region in Warsaw is open–The Google Cloud region in Warsaw is now ready for business, opening doors for organizations in Central and Eastern Europe. Learn more.
      • AppSheet Automation is now GA–Google Cloud’s AppSheet launches general availability of AppSheet Automation, a unified development experience for citizen and professional developers alike to build custom applications with automated processes, all without coding. Learn how companies and employees are reclaiming their time and talent with AppSheet Automation here.
      • Introducing SAP Integration with Cloud Data Fusion–Google Cloud native data integration platform Cloud Data Fusion now offers the capability to seamlessly get data out of SAP Business Suite, SAP ERP and S/4HANA. Learn more.

      Week of Apr 5-Apr 9, 2021

      • New Certificate Authority Service (CAS) whitepaper–“How to deploy a secure and reliable public key infrastructure with Google Cloud Certificate Authority Service” (written by Mark Cooper of PKI Solutions and Anoosh Saboori of Google Cloud) covers security and architectural recommendations for the use of the Google Cloud CAS by organizations, and describes critical concepts for securing and deploying a PKI based on CAS. Learn more or read the whitepaper.
      • Active Assist’s new feature, predictive autoscaling, helps improve response times for your applications–When you enable predictive autoscaling, Compute Engine forecasts future load based on your Managed Instance Group’s (MIG) history and scales it out in advance of predicted load, so that new instances are ready to serve when the load arrives. Without predictive autoscaling, an autoscaler can only scale a group reactively, based on observed changes in load in real time. With predictive autoscaling enabled, the autoscaler works with real-time data as well as with historical data to cover both the current and forecasted load. That makes predictive autoscaling ideal for those apps with long initialization times and whose workloads vary predictably with daily or weekly cycles. For more information, see How predictive autoscaling works or check if predictive autoscaling is suitable for your workload, and to learn more about other intelligent features, check out Active Assist.
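      Enabling predictive autoscaling comes down to setting the predictive method on the autoscaling policy's CPU utilization target. A minimal sketch of an autoscaler configuration (replica counts and target are illustrative assumptions):

      ```json
      {
        "autoscalingPolicy": {
          "minNumReplicas": 2,
          "maxNumReplicas": 20,
          "cpuUtilization": {
            "utilizationTarget": 0.6,
            "predictiveMethod": "OPTIMIZE_AVAILABILITY"
          }
        }
      }
      ```

      With the predictive method left unset, the autoscaler behaves reactively, as described above.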
      • Introducing Dataprep BigQuery pushdown–BigQuery pushdown gives you the flexibility to run jobs using either BigQuery or Dataflow. If you select BigQuery, then Dataprep can automatically determine if data pipelines can be partially or fully translated in a BigQuery SQL statement. Any portions of the pipeline that cannot be run in BigQuery are executed in Dataflow. Utilizing the power of BigQuery results in highly efficient data transformations, especially for manipulations such as filters, joins, unions, and aggregations. This leads to better performance, optimized costs, and increased security with IAM and OAuth support. Learn more.
      • Announcing the Google Cloud Retail & Consumer Goods Summit–The Google Cloud Retail & Consumer Goods Summit brings together technology and business insights, the key ingredients for any transformation. Whether you're responsible for IT, data analytics, supply chains, or marketing, please join! Building connections and sharing perspectives cross-functionally is important to reimagining yourself, your organization, or the world. Learn more or register for free.
      • New IDC whitepaper assesses multicloud as a risk mitigation strategy–To better understand the benefits and challenges associated with a multicloud approach, we supported IDC’s new whitepaper that investigates how multicloud can help regulated organizations mitigate the risks of using a single cloud vendor. The whitepaper looks at different approaches to multi-vendor and hybrid clouds taken by European organizations and how these strategies can help organizations address concentration risk and vendor-lock in, improve their compliance posture, and demonstrate an exit strategy. Learn more or download the paper.
      • Introducing request priorities for Cloud Spanner APIs–You can now specify request priorities for some Cloud Spanner APIs. By assigning a HIGH, MEDIUM, or LOW priority to a specific request, you can now convey the relative importance of workloads, to better align resource usage with performance objectives. Learn more.
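      In the REST API, the priority is carried in the request options of a supported call such as executeSql. A sketch of a low-priority query request (the SQL and parameter are illustrative assumptions):

      ```json
      {
        "sql": "SELECT OrderId FROM Orders WHERE Status = @status",
        "params": { "status": "PENDING" },
        "requestOptions": {
          "priority": "PRIORITY_LOW"
        }
      }
      ```

      Marking a background or batch query PRIORITY_LOW this way signals that latency-sensitive traffic should be served first when resources are constrained.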
      • How we’re working with governments on climate goals–Google Sustainability Officer Kate Brandt shares more on how we’re partnering with governments around the world to provide our technology and insights to drive progress in sustainability efforts. Learn more.

      Week of Mar 29-Apr 2, 2021

      • Why Google Cloud is the ideal platform for Block.one and other DLT companies–Late last year, Google Cloud joined the EOS community, a leading open-source platform for blockchain innovation and performance, and is taking steps to support the EOS Public Blockchain by becoming a block producer (BP). At the time, we outlined how our planned participation underscores the importance of blockchain to the future of business, government, and society. We're sharing more on why Google Cloud is uniquely positioned to be an excellent partner for Block.one and other distributed ledger technology (DLT) companies. Learn more.
      • New whitepaper: Scaling certificate management with Certificate Authority Service–As Google Cloud’s Certificate Authority Service (CAS) approaches general availability, we want to help customers understand the service better. Customers have asked us how CAS fits into our larger security story and how CAS works for various use cases. Our new white paper answers these questions and more. Learn more and download the paper.
      • Build a consistent approach for API consumers–Learn the differences between REST and GraphQL, as well as how to apply REST-based practices to GraphQL. No matter the approach, discover how to manage and treat both options as API products here.

      • Apigee X makes it simple to apply Cloud CDN to APIs–With Apigee X and Cloud CDN, organizations can expand their API programs' global reach. Learn how to deploy APIs across 24 regions and 73 zones here.

      • Enabling data migration with Transfer Appliances in APAC—We’re announcing the general availability of Transfer Appliances TA40/TA300 in Singapore. Customers are looking for fast, secure, and easy-to-use options to migrate their workloads to Google Cloud, and we are addressing their needs with Transfer Appliances globally in the US, EU and APAC. Learn more about Transfer Appliances TA40 and TA300.

      • Windows Authentication is now supported on Cloud SQL for SQL Server in public preview—We’ve launched seamless integration with Google Cloud’s Managed Service for Microsoft Active Directory (AD). This capability is a critical requirement to simplify identity management and streamline the migration of existing SQL Server workloads that rely on AD for access control. Learn more or get started.

      • Using Cloud AI to whip up new treats with Mars Maltesers—Maltesers, a popular British candy made by Mars, teamed up with our own AI baker and ML engineer extraordinaire, Sara Robinson, to create a brand new dessert recipe with Google Cloud AI. Find out what happened (recipe included).

      • Simplifying data lake management with Dataproc Metastore, now GA–Dataproc Metastore, a fully managed, serverless technical metadata repository based on the Apache Hive metastore, is now generally available. Enterprises building and migrating open source data lakes to Google Cloud now have a central and persistent metastore for their open source data analytics frameworks. Learn more.

      • Introducing the Echo subsea cable—We announced our investment in Echo, the first-ever cable to directly connect the U.S. to Singapore with direct fiber pairs over an express route. Echo will run from Eureka, California to Singapore, with a stop-over in Guam, and plans to also land in Indonesia. Additional landings are possible in the future. Learn more.

      Week of Mar 22-Mar 26, 2021

      • 10 new videos bring Google Cloud to life—The Google Cloud Tech YouTube channel’s latest video series explains cloud tools for technical practitioners in about 5 minutes each. Learn more.
      • BigQuery named a Leader in the 2021 Forrester Wave: Cloud Data Warehouse, Q1 2021 report—Forrester gave BigQuery a score of 5 out of 5 across 19 different criteria. Learn more in our blog post, or download the report.
      • Charting the future of custom compute at Google—To meet users’ performance needs at low power, we’re doubling down on custom chips that use System on a Chip (SoC) designs. Learn more.
      • Introducing Network Connectivity Center—We announced Network Connectivity Center, which provides a single management experience to easily create, connect, and manage heterogeneous on-prem and cloud networks leveraging Google’s global infrastructure. Network Connectivity Center serves as a vantage point to seamlessly connect VPNs, partner and dedicated interconnects, as well as third-party routers and Software-Defined WANs, helping you optimize connectivity, reduce operational burden and lower costs—wherever your applications or users may be. Learn more.
      • Making it easier to get Compute Engine resources for batch processing—We announced a new method of obtaining Compute Engine instances for batch processing that accounts for availability of resources in zones of a region. Now available in preview for regional managed instance groups, you can do this simply by specifying the ANY value in the API. Learn more.
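      Concretely, this is expressed through the distribution policy on a regional managed instance group. A sketch of the relevant fragment of the instance group manager resource (the group name, size, and zone list are illustrative assumptions):

      ```json
      {
        "name": "batch-workers",
        "targetSize": 100,
        "distributionPolicy": {
          "targetShape": "ANY",
          "zones": [
            { "zone": "zones/us-central1-a" },
            { "zone": "zones/us-central1-b" },
            { "zone": "zones/us-central1-f" }
          ]
        }
      }
      ```

      With the ANY target shape, the group places instances wherever resources are available across the listed zones, rather than spreading them evenly.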
      • Next-gen virtual automotive showrooms are here, thanks to Google Cloud, Unreal Engine, and NVIDIA—We teamed up with Unreal Engine, the open and advanced real-time 3D creation game engine, and NVIDIA, inventor of the GPU, to launch new virtual showroom experiences for automakers. Taking advantage of the NVIDIA RTX platform on Google Cloud, these showrooms provide interactive 3D experiences, photorealistic materials and environments, and up to 4K cloud streaming on mobile and connected devices. Today, in collaboration with MHP, the Porsche IT consulting firm, and MONKEYWAY, a real-time 3D streaming solution provider, you can see our first virtual showroom, the Pagani Immersive Experience Platform. Learn more.
      • Troubleshoot network connectivity with Dynamic Verification (public preview)—You can now check packet loss rate and one-way network latency between two VMs on GCP. This capability is an addition to existing Network Intelligence Center Connectivity Tests which verify reachability by analyzing network configuration in your VPCs. See more in our documentation.
      • Helping U.S. states get the COVID-19 vaccine to more people—In February, we announced our Intelligent Vaccine Impact solution (IVIs) to help communities rise to the challenge of getting vaccines to more people quickly and effectively. Many states have deployed IVIs, and have found it able to meet demand and easily integrate with their existing technology infrastructures. Google Cloud is proud to partner with a number of states across the U.S., including Arizona, the Commonwealth of Massachusetts, North Carolina, Oregon, and the Commonwealth of Virginia to support vaccination efforts at scale. Learn more.

      Week of Mar 15-Mar 19, 2021

      • A2 VMs now GA: The largest GPU cloud instances with NVIDIA A100 GPUs—We’re announcing the general availability of A2 VMs based on the NVIDIA Ampere A100 Tensor Core GPUs in Compute Engine. This means customers around the world can now run their NVIDIA CUDA-enabled machine learning (ML) and high performance computing (HPC) scale-out and scale-up workloads more efficiently and at a lower cost. Learn more.
      • Earn the new Google Kubernetes Engine skill badge for free—We’ve added a new skill badge this month, Optimize Costs for Google Kubernetes Engine (GKE), which you can earn for free when you sign up for the Kubernetes track of the skills challenge. The skills challenge provides 30 days free access to Google Cloud labs and gives you the opportunity to earn skill badges to showcase different cloud competencies to employers. Learn more.
      • Now available: carbon free energy percentages for our Google Cloud regions—Google first achieved carbon neutrality in 2007, and since 2017 we’ve purchased enough solar and wind energy to match 100% of our global electricity consumption. Now we’re building on that progress to target a new sustainability goal: running our business on carbon-free energy 24/7, everywhere, by 2030. Beginning this week, we’re sharing data about how we are performing against that objective so our customers can select Google Cloud regions based on the carbon-free energy supplying them. Learn more.
      • Increasing bandwidth to C2 and N2 VMs—We announced the public preview of 100, 75, and 50 Gbps high-bandwidth network configurations for General Purpose N2 and Compute Optimized C2 Compute Engine VM families as part of continuous efforts to optimize our Andromeda host networking stack. This means we can now offer higher-bandwidth options on existing VM families when using the Google Virtual NIC (gVNIC). These VMs were previously limited to 32 Gbps. Learn more.
      • New research on how COVID-19 changed the nature of IT—To learn more about the impact of COVID-19 and the resulting implications to IT, Google commissioned a study by IDG to better understand how organizations are shifting their priorities in the wake of the pandemic. Learn more and download the report.

      • New in API security—Google Cloud Apigee API management platform's latest release, Apigee X, works with Cloud Armor to protect your APIs with advanced security technology including DDoS protection, geo-fencing, OAuth, and API keys. Learn more about our integrated security enhancements here.

      • Troubleshoot errors more quickly with Cloud Logging—The Logs Explorer now automatically breaks down your log results by severity, making it easy to spot spikes in errors at specific times. Learn more about our new histogram functionality here.

      Week of Mar 8-Mar 12, 2021

      • Introducing #AskGoogleCloud on Twitter and YouTube—Our first segment on March 12th features Developer Advocates Stephanie Wong, Martin Omander, and James Ward, who will answer questions on the best workloads for serverless, the differences between “serverless” and “cloud native,” how to accurately estimate costs for using Cloud Run, and much more. Learn more.
      • Learn about the value of no-code hackathons—Google Cloud’s no-code application development platform, AppSheet, helps to facilitate hackathons for “non-technical” employees with no coding necessary to compete. Learn about Globe Telecom’s no-code hackathon as well as their winning AppSheet app here.
      • Introducing Cloud Code Secret Manager Integration—Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud. Integrating Cloud Code with Secret Manager brings the powerful capabilities of both these tools together so you can create and manage your secrets right from within your preferred IDE, whether that be VS Code, IntelliJ, or Cloud Shell Editor. Learn more.
      • Flexible instance configurations in Cloud SQL—Cloud SQL for MySQL now supports flexible instance configurations which offer you the extra freedom to configure your instance with the specific number of vCPUs and GB of RAM that fits your workload. To set up a new instance with a flexible instance configuration, see our documentation here.
      • The Cloud Healthcare Consent Management API is now generally available—The Healthcare Consent Management API is now GA, giving customers the ability to scale consent management to meet growing needs, particularly for managing health data in new care and research scenarios. Learn more.
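The Cloud Code–Secret Manager integration above ultimately addresses each secret by a fully qualified resource name. A minimal sketch of building that name (the project and secret IDs here are hypothetical, and the client call is only noted in a comment):

```python
def secret_version_name(project_id: str, secret_id: str, version: str = "latest") -> str:
    """Build the resource name Secret Manager uses to address a secret version."""
    return f"projects/{project_id}/secrets/{secret_id}/versions/{version}"

# With the google-cloud-secret-manager client, this name would be passed to
# access_secret_version() to fetch the secret payload.
name = secret_version_name("my-project", "db-password")
print(name)  # → projects/my-project/secrets/db-password/versions/latest
```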

      Week of Mar 1-Mar 5, 2021

      • Cloud Run is now available in all Google Cloud regions. Learn more.
      • Introducing Apache Spark Structured Streaming connector for Pub/Sub Lite—We’re announcing the release of an open source connector to read streams of messages from Pub/Sub Lite into Apache Spark. The connector works in all Apache Spark 2.4.x distributions, including Dataproc, Databricks, and manual Spark installations. Learn more.
      • Google Cloud Next ‘21 is October 12-14, 2021—Join us and learn how the most successful companies have transformed their businesses with Google Cloud. Sign-up at g.co/cloudnext for updates. Learn more.
      • Hierarchical firewall policies now GA—Hierarchical firewalls provide a means to enforce firewall rules at the organization and folder levels in the GCP Resource Hierarchy. This allows security administrators at different levels in the hierarchy to define and deploy consistent firewall rules across a number of projects so they're applied to all VMs in currently existing and yet-to-be-created projects. Learn more.
      • Announcing the Google Cloud Born-Digital Summit—Over this half-day event, we’ll highlight proven best-practice approaches to data, architecture, diversity & inclusion, and growth with Google Cloud solutions. Learn more and register for free.
      • Google Cloud products in 4 words or less (2021 edition)—Our popular “4 words or less Google Cloud developer’s cheat sheet” is back and updated for 2021. Learn more.
      • Gartner names Google a leader in its 2021 Magic Quadrant for Cloud AI Developer Services report—We believe this recognition is based on Gartner’s evaluation of Google Cloud’s language, vision, conversational, and structured data services and solutions for developers. Learn more.
      • Announcing the Risk Protection Program—The Risk Protection Program offers customers peace of mind through the technology to secure their data, the tools to monitor the security of that data, and an industry-first cyber policy offered by leading insurers. Learn more.
      • Building the future of work—We’re introducing new innovations in Google Workspace to help people collaborate and find more time and focus, wherever and however they work. Learn more.

      • Assured Controls and expanded Data Regions—We’ve added new information governance features in Google Workspace to help customers control their data based on their business goals. Learn more.

      Week of Feb 22-Feb 26, 2021

      • 21 Google Cloud tools explained in 2 minutes—Need a quick overview of Google Cloud core technologies? Quickly learn these 21 Google Cloud products—each explained in under two minutes. Learn more.

      • BigQuery materialized views now GA—Materialized views (MVs) are precomputed views that periodically cache the results of a query, giving customers increased performance and efficiency. Learn more.
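A materialized view is defined with ordinary DDL over an aggregation query; BigQuery then keeps the precomputed results fresh on its own. A sketch of what that DDL looks like (the dataset, table, and column names here are hypothetical):

```python
# Hypothetical dataset and table names, for illustration only.
mv_ddl = """
CREATE MATERIALIZED VIEW my_dataset.daily_totals AS
SELECT order_date, SUM(amount) AS total_amount
FROM my_dataset.orders
GROUP BY order_date
"""

# A client such as google-cloud-bigquery would submit this DDL as a query job;
# subsequent queries against daily_totals read the cached aggregate instead of
# rescanning the base table.
print("MATERIALIZED VIEW" in mv_ddl)  # → True
```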

      • New in BigQuery BI Engine—We’re extending BigQuery BI Engine to work with any BI or custom dashboarding applications that require sub-second query response times. In this preview, BI Engine will work seamlessly with Looker and other popular BI tools such as Tableau and Power BI without requiring any change to the BI tools. Learn more.

      • Dataproc now supports Shielded VMs—All Dataproc clusters created using Debian 10 or Ubuntu 18.04 operating systems now use Shielded VMs by default, and customers can provide their own configurations for secure boot, vTPM, and Integrity Monitoring. This feature is just one of the many ways customers who have migrated their Hadoop and Spark clusters to GCP experience continued improvements to their security postures without any additional cost.

      • New Cloud Security Podcast by Google—Our new podcast brings you stories and insights on security in the cloud, delivering security from the cloud, and, of course, on what we’re doing at Google Cloud to help keep customer data safe and workloads secure. Learn more.

      • New in Conversational AI and Apigee technology—Australian retailer Woolworths provides seamless customer experiences with their virtual agent, Olive. Apigee API Management and Dialogflow technology allows customers to talk to Olive through voice and chat. Learn more.

      • Introducing GKE Autopilot—GKE already offers an industry-leading level of automation that makes setting up and operating a Kubernetes cluster easier and more cost effective than do-it-yourself and other managed offerings. Autopilot represents a significant leap forward. In addition to the fully managed control plane that GKE has always provided, using the Autopilot mode of operation automatically applies industry best practices and can eliminate all node management operations, maximizing your cluster efficiency and helping to provide a stronger security posture. Learn more.

      • Partnering with Intel to accelerate cloud-native 5G—As we continue to grow cloud-native services for the telecommunications industry, we’re excited to announce a collaboration with Intel to develop reference architectures and integrated solutions for communications service providers to accelerate their deployment of 5G and edge network solutions. Learn more.

      • Veeam Backup for Google Cloud now available—Veeam Backup for Google Cloud automates Google-native snapshots to securely protect VMs across projects and regions with ultra-low RPOs and RTOs, and store backups in Google Object Storage to enhance data protection while ensuring lower costs for long-term retention.

      • Migrate for Anthos 1.6 GA—With Migrate for Anthos, customers and partners can automatically migrate and modernize traditional application workloads running in VMs into containers running on Anthos or GKE. Included in this new release: 

        • In-place modernization for Anthos on AWS (Public Preview) to help customers accelerate on-boarding to Anthos AWS while leveraging their existing investment in AWS data sources, projects, VPCs, and IAM controls.

        • Additional Docker registries and artifacts repositories support (GA) including AWS ECR, basic-auth docker registries, and AWS S3 storage to provide further flexibility for customers using Anthos Anywhere (on-prem, AWS, etc.).

        • HTTPS Proxy support (GA) to enable M4A functionality (access to external image repos and other services) where a proxy is used to control external access.

      Week of Feb 15-Feb 19, 2021

      • Introducing Cloud Domains in preview—Cloud Domains simplify domain registration and management within Google Cloud, improve the custom domain experience for developers, increase security, and support stronger integrations around DNS and SSL. Learn more.

      • Announcing Databricks on Google Cloud—Our partnership with Databricks enables customers to accelerate Databricks implementations by simplifying their data access, by jointly giving them powerful ways to analyze their data, and by leveraging our combined AI and ML capabilities to impact business outcomes. Learn more.

      • Service Directory is GA—As the number and diversity of services grows, it becomes increasingly challenging to maintain an inventory of all of the services across an organization. Last year, we launched Service Directory to help simplify the problem of service management. Today, it’s generally available. Learn more.

      Week of Feb 8-Feb 12, 2021

      • Introducing Bare Metal Solution for SAP workloads—We’ve expanded our Bare Metal Solution—dedicated, single-tenant systems designed specifically to run workloads that are too large or otherwise unsuitable for standard, virtualized environments—to include SAP-certified hardware options, giving SAP customers great options for modernizing their biggest and most challenging workloads. Learn more.

      • 9TB SSDs bring ultimate IOPS/$ to Compute Engine VMs—You can now attach 6TB and 9TB Local SSD to second-generation general-purpose N2 Compute Engine VMs, for great IOPS per dollar. Learn more.

      • Supporting the Python ecosystem—As part of our longstanding support for the Python ecosystem, we are happy to increase our support for the Python Software Foundation, the non-profit behind the Python programming language, ecosystem and community. Learn more.

      • Migrate to regional backend services for Network Load Balancing—We now support backend services with Network Load Balancing—a significant enhancement over the prior approach, target pools, providing a common unified data model for all our load-balancing family members and accelerating the delivery of exciting features on Network Load Balancing. Learn more.

      Week of Feb 1-Feb 4, 2021

      • Apigee launches Apigee X—Apigee celebrates its 10 year anniversary with Apigee X, a new release of the Apigee API management platform. Apigee X harnesses the best of Google technologies to accelerate and globalize your API-powered digital initiatives. Learn more about Apigee X and digital excellence here.
      • Celebrating the success of Black founders with Google Cloud during Black History Month—February is Black History Month, a time for us to come together to celebrate and remember the important people and history of African heritage. Over the next four weeks, we will highlight four Black-led startups and how they use Google Cloud to grow their businesses. Our first feature highlights TQIntelligence and its founder, Yared.

      Week of Jan 25-Jan 29, 2021

      • BeyondCorp Enterprise now generally available—BeyondCorp Enterprise is a zero trust solution, built on Google’s global network, which provides customers with simple and secure access to applications and cloud resources and offers integrated threat and data protection. To learn more, read the blog post, visit our product homepage, and register for our upcoming webinar.

      Week of Jan 18-Jan 22, 2021

      • Cloud Operations Sandbox now available—Cloud Operations Sandbox is an open-source tool that helps you learn SRE practices from Google and apply them on cloud services using Google Cloud’s operations suite (formerly Stackdriver), with everything you need to get started in one click. You can read our blog post, or get started by visiting cloud-ops-sandbox.dev, exploring the project repo, and following along in the user guide.

      • New data security strategy whitepaper—Our new whitepaper shares our best practices for how to deploy a modern and effective data security program in the cloud. Read the blog post or download the paper.   

      • WebSockets, HTTP/2 and gRPC bidirectional streams come to Cloud Run—With these capabilities, you can deploy new kinds of applications to Cloud Run that were not previously supported, while taking advantage of serverless infrastructure. These features are now available in public preview for all Cloud Run locations. Read the blog post or check out the WebSockets demo app or the sample h2c server app.

      • New tutorial: Build a no-code workout app in 5 steps—Looking to crush your new year’s resolutions? Using AppSheet, Google Cloud’s no-code app development platform, you can build a custom fitness app that can do things like record your sets, reps and weights, log your workouts, and show you how you’re progressing. Learn how.

      Week of Jan 11-Jan 15, 2021

      • State of API Economy 2021 Report now available—Google Cloud details the changing role of APIs in 2020 amidst the COVID-19 pandemic, informed by a comprehensive study of Apigee API usage behavior across industry, geography, enterprise size, and more. Discover these 2020 trends along with a projection of what to expect from APIs in 2021. Read our blog post here or download and read the report here.
      • New in the state of no-code—Google Cloud's AppSheet looks back at the key no-code application development themes of 2020. AppSheet contends the rising number of citizen developer app creators will ultimately change the state of no-code in 2021. Read more here.

      Week of Jan 4-Jan 8, 2021

      • Last year's most popular API posts—In an arduous year, thoughtful API design and strategy are critical to empowering developers and companies to use technology for global good. Google Cloud looks back at the must-read API posts in 2020. Read it here.

      Week of Dec 21-Dec 25, 2020

      Week of Dec 14-Dec 18, 2020

      • Memorystore for Redis enables TLS encryption support (Preview)—With this release, you can now use Memorystore for applications requiring sensitive data to be encrypted between the client and the Memorystore instance. Read more here.
      • Monitoring Query Language (MQL) for Cloud Monitoring is now generally available—Monitoring Query Language provides developers and operators on IT and development teams with powerful metric querying, analysis, charting, and alerting capabilities. This functionality is needed for Monitoring use cases that include troubleshooting outages, root cause analysis, custom SLI / SLO creation, reporting and analytics, complex alert logic, and more. Learn more.
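To give a flavor of MQL, here is a representative query that fetches CPU utilization for Compute Engine instances and aggregates it over one-minute windows. The resource type, metric path, and window are illustrative assumptions; consult the MQL reference for exact syntax:

```python
# A representative MQL query, shown as a string. The metric and alignment
# window are examples only -- verify the shape against the MQL reference.
mql_query = """
fetch gce_instance
| metric 'compute.googleapis.com/instance/cpu/utilization'
| group_by 1m, [value_utilization_mean: mean(value.utilization)]
| every 1m
"""
print("fetch gce_instance" in mql_query)  # → True
```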

      Week of Dec 7-Dec 11, 2020

      • Memorystore for Redis now supports Redis AUTH—With this release you can now use the OSS Redis AUTH feature with Memorystore for Redis instances. Read more here.
      • New in serverless computing—Google Cloud API Gateway and its service-first approach to developing serverless APIs helps organizations accelerate innovation by eliminating scalability and security bottlenecks for their APIs. Discover more benefits here.
      • Environmental Dynamics, Inc. makes a big move to no-code—The environmental consulting company EDI built and deployed 35+ business apps, with no coding skills necessary, using Google Cloud’s AppSheet. This no-code effort not only empowered field workers, but also saved employees over 2,550 hours a year. Get the full story here.
      • Introducing Google Workspace for Government—Google Workspace for Government is an offering that brings the best of Google Cloud’s collaboration and communication tools to the government with pricing that meets the needs of the public sector. Whether it’s powering social care visits, employment support, or virtual courts, Google Workspace helps governments meet the unique challenges they face as they work to provide better services in an increasingly virtual world. Learn more.

      Week of Nov 30-Dec 4, 2020

      • Google enters agreement to acquire Actifio—Actifio, a leader in backup and disaster recovery (DR), offers customers the opportunity to protect virtual copies of data in their native format, manage these copies throughout their entire lifecycle, and use these copies for scenarios like development and test. This planned acquisition further demonstrates Google Cloud’s commitment to helping enterprises protect workloads on-premises and in the cloud. Learn more.
      • Traffic Director can now send traffic to services and gateways hosted outside of Google Cloud—Traffic Director support for Hybrid Connectivity Network Endpoint Groups (NEGs), now generally available, enables services in your VPC network to interoperate more seamlessly with services in other environments. It also enables you to build advanced solutions based on Google Cloud's portfolio of networking products, such as Cloud Armor protection for your private on-prem services. Learn more.
      • Google Cloud launches the Healthcare Interoperability Readiness Program—This program, powered by APIs and Google Cloud’s Apigee, helps patients, doctors, researchers, and healthcare technologists alike by making patient data and healthcare data more accessible and secure. Learn more here.
      • Container Threat Detection in Security Command Center—We announced the general availability of Container Threat Detection, a built-in service in Security Command Center. This release includes multiple detection capabilities to help you monitor and secure your container deployments in Google Cloud. Read more here.
      • Anthos on bare metal now GA—Anthos on bare metal opens up new possibilities for how you run your workloads, and where. You can run Anthos on your existing virtualized infrastructure, or eliminate the dependency on a hypervisor layer to modernize applications while reducing costs. Learn more.

      Week of Nov 23-27, 2020

      • Tuning control support in Cloud SQL for MySQL—We’ve made all 80 flags that were previously in preview now generally available (GA), empowering you with the controls you need to optimize your databases. See the full list here.
      • New in BigQuery ML—We announced the general availability of boosted trees using XGBoost, deep neural networks (DNNs) using TensorFlow, and model export for online prediction. Learn more.
      • New AI/ML in retail report—We recently commissioned a survey of global retail executives to better understand which AI/ML use cases across the retail value chain drive the highest value and returns in retail, and what retailers need to keep in mind when going after these opportunities. Learn more or read the report.
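The newly GA Cloud SQL for MySQL flags mentioned above are applied with `gcloud sql instances patch` and its `--database-flags` option. A small sketch that assembles such a command (the instance name and flag value are hypothetical; check the Cloud SQL flag list for supported names):

```python
def patch_flags_cmd(instance: str, flags: dict) -> list:
    """Build a gcloud command that applies database flags to a Cloud SQL instance."""
    joined = ",".join(f"{k}={v}" for k, v in sorted(flags.items()))
    return ["gcloud", "sql", "instances", "patch", instance,
            f"--database-flags={joined}"]

cmd = patch_flags_cmd("my-instance", {"max_connections": "500"})
print(" ".join(cmd))
# → gcloud sql instances patch my-instance --database-flags=max_connections=500
```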

      Week of Nov 16-20, 2020

      • New whitepaper on how AI helps the patent industry—Our new paper outlines a methodology to train a BERT (bidirectional encoder representation from transformers) model on over 100 million patent publications from the U.S. and other countries using open-source tooling. Learn more or read the whitepaper.
      • Google Cloud support for .NET 5.0—Learn more about our support of .NET 5.0, as well as how to deploy it to Cloud Run.
      • .NET Core 3.1 now on Cloud Functions—With this integration you can write cloud functions using your favorite .NET Core 3.1 runtime with our Functions Framework for .NET for an idiomatic developer experience. Learn more.
      • Filestore Backups in preview—We announced the availability of the Filestore Backups preview in all regions, making it easier to migrate your business continuity, disaster recovery and backup strategy for your file systems in Google Cloud. Learn more.
      • Introducing Voucher, a service to help secure the container supply chain—Developed by the Software Supply Chain Security team at Shopify to work with Google Cloud tools, Voucher evaluates container images created by CI/CD pipelines and signs those images if they meet certain predefined security criteria. Binary Authorization then validates these signatures at deploy time, ensuring that only explicitly authorized code that meets your organizational policy and compliance requirements can be deployed to production. Learn more.
      • 10 most watched from Google Cloud Next ‘20: OnAir—Take a stroll through the 10 sessions that were most popular from Next OnAir, covering everything from data analytics to cloud migration to no-code development. Read the blog.
      • Artifact Registry is now GA—With support for container images, Maven, npm packages, and additional formats coming soon, Artifact Registry helps your organization benefit from scale, security, and standardization across your software supply chain. Read the blog.

      Week of Nov 9-13, 2020

      • Introducing the Anthos Developer Sandbox—The Anthos Developer Sandbox gives you an easy way to learn to develop on Anthos at no cost, available to anyone with a Google account. Read the blog.
      • Database Migration Service now available in preview—Database Migration Service (DMS) makes migrations to Cloud SQL simple and reliable. DMS supports migrations of self-hosted MySQL databases—either on-premises or in the cloud, as well as managed databases from other clouds—to Cloud SQL for MySQL. Support for PostgreSQL is currently available for limited customers in preview, with SQL Server coming soon. Learn more.
      • Troubleshoot deployments or production issues more quickly with new logs tailing—We’ve added support for a new API to tail logs with low latency. Using gcloud, it allows you the convenience of tail -f with the powerful query language and centralized logging solution of Cloud Logging. Learn more about this preview feature.
      • Regionalized log storage now available in 5 new regions in preview—You can now select where your logs are stored from one of five regions in addition to global—asia-east1, europe-west1, us-central1, us-east1, and us-west1. When you create a logs bucket, you can set the region in which you want to store your logs data. Get started with this guide.
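The log-tailing preview above is surfaced through `gcloud alpha logging tail`, which takes a standard Logging query as its argument. A sketch of composing that invocation (the command shape is an assumption from the CLI at preview time; verify against `gcloud alpha logging tail --help`):

```python
def tail_cmd(log_filter: str) -> list:
    """Build a gcloud invocation for low-latency log tailing.

    The flag shape is assumed; confirm with `gcloud alpha logging tail --help`.
    """
    return ["gcloud", "alpha", "logging", "tail", log_filter]

cmd = tail_cmd('resource.type="gce_instance" severity>=ERROR')
print(cmd[:4])  # → ['gcloud', 'alpha', 'logging', 'tail']
```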

      Week of Nov 2-6, 2020

      • Cloud SQL adds support for PostgreSQL 13—Shortly after its community GA, Cloud SQL has added support for PostgreSQL 13. You get access to the latest features of PostgreSQL while Cloud SQL handles the heavy operational lifting, so your team can focus on accelerating application delivery. Read more here.
      • Apigee creates value for businesses running on SAP—Google Cloud’s API Management platform Apigee is optimized for data insights and data monetization, helping businesses running on SAP innovate faster without fear of SAP-specific challenges to modernization. Read more here.
      • Document AI platform is live—The new Document AI (DocAI) platform, a unified console for document processing, is now available in preview. You can quickly access all parsers, tools and solutions (e.g. Lending DocAI, Procurement DocAI) with a unified API, enabling an end-to-end document solution from evaluation to deployment. Read the full story here or check it out in your Google Cloud console.
      • Accelerating data migration with Transfer Appliances TA40 and TA300—We’re announcing the general availability of new Transfer Appliances. Customers are looking for fast, secure and easy to use options to migrate their workloads to Google Cloud and we are addressing their needs with next generation Transfer Appliances. Learn more about Transfer Appliances TA40 and TA300.

      Week of Oct 26-30, 2020

      • B.H., Inc. accelerates digital transformation—The Utah-based contracting and construction company BHI eliminated its IT backlog when non-technical employees were empowered to build equipment inspection, productivity, and other custom apps by choosing Google Workspace and the no-code app development platform, AppSheet. Read the full story here.
      • Globe Telecom embraces no-code development—Google Cloud’s AppSheet empowers Globe Telecom employees to do more innovating with less code. The global communications company kickstarted their no-code journey by combining the power of AppSheet with a unique adoption strategy. As a result, AppSheet helped Globe Telecom employees build 59 business apps in just 8 weeks. Get the full story.
      • Cloud Logging now allows you to control access to logs via Log Views—Building on the control offered via Log Buckets (blog post), you can now configure who has access to logs based on the source project, resource type, or log name, all using standard IAM controls. Log Views, currently in preview, can help you build a system using the principle of least privilege, limiting sensitive logs to only users who need this information. Learn more about Log Views.
      • Document AI is HIPAA compliant—Document AI now supports HIPAA compliance. Healthcare and life science customers, such as health care providers, health plans, and life science organizations, can unlock insights by quickly extracting structured data from medical documents while safeguarding individuals’ protected health information (PHI). Learn more about Google Cloud’s nearly 100 products that support HIPAA compliance.

      Week of Oct 19-23, 2020

      • Improved security and governance in Cloud SQL for PostgreSQL—Cloud SQL for PostgreSQL now integrates with Cloud IAM (preview) to provide simplified and consistent authentication and authorization. Cloud SQL has also enabled PostgreSQL Audit Extension (preview) for more granular audit logging. Read the blog.
      • Announcing the AI in Financial Crime Compliance webinar—Our executive digital forum will feature industry executives, academics, and former regulators who will discuss how AI is transforming financial crime compliance on November 17. Register now.
      • Transforming retail with AI/ML—New research provides insights on high value AI/ML use cases for food, drug, mass merchant and specialty retail that can drive significant value and build resilience for your business. Learn what the top use cases are for your sub-segment and read real world success stories. Download the ebook here and view this companion webinar which also features insights from Zulily.
      • New release of Migrate for Anthos—We’re introducing two important new capabilities in the 1.5 release of Migrate for Anthos, Google Cloud's solution to easily migrate and modernize applications currently running on VMs so that they instead run on containers in Google Kubernetes Engine or Anthos. The first is GA support for modernizing IIS apps running on Windows Server VMs. The second is a new utility that helps you identify which VMs in your existing environment are the best targets for modernization to containers. Start migrating or check out the assessment tool documentation (Linux | Windows).
      • New Compute Engine autoscaler controls—New scale-in controls in Compute Engine let you limit the VM deletion rate by preventing the autoscaler from reducing a MIG's size by more VM instances than your workload can tolerate to lose. Read the blog.
      • Lending DocAI in preview—Lending DocAI is a specialized solution in our Document AI portfolio for the mortgage industry that processes borrowers’ income and asset documents to speed up loan applications. Read the blog, or check out the product demo.

      Week of Oct 12-16, 2020

      • New maintenance controls for Cloud SQL—Cloud SQL now offers maintenance deny period controls, which allow you to prevent automatic maintenance from occurring during a 90-day time period. Read the blog.
      • Trends in volumetric DDoS attacks—This week we published a deep dive into DDoS threats, detailing the trends we’re seeing and giving you a closer look at how we prepare for multi-terabit attacks so your sites stay up and running. Read the blog.
      • New in BigQuery—We shared a number of updates this week, including new SQL capabilities, more granular control over your partitions with time unit partitioning, the general availability of Table ACLs, and BigQuery System Tables Reports, a solution that aims to help you monitor BigQuery flat-rate slot and reservation utilization by leveraging BigQuery’s underlying INFORMATION_SCHEMA views. Read the blog.
      • Cloud Code makes YAML easy for hundreds of popular Kubernetes CRDs—We announced authoring support for more than 400 popular Kubernetes CRDs out of the box, any existing CRDs in your Kubernetes cluster, and any CRDs you add from your local machine or a URL. Read the blog.
      • Google Cloud’s data privacy commitments for the AI era—We’ve outlined how our AI/ML Privacy Commitment reflects our belief that customers should have both the highest level of security and the highest level of control over data stored in the cloud. Read the blog.

      • New, lower pricing for Cloud CDN—We’ve reduced cache fill charges (for content fetched from your origin) across the board by up to 80%. Together with our recent introduction of a new set of flexible caching capabilities, this makes it even easier to use Cloud CDN to optimize the performance of your applications. Read the blog.

      • Expanding the BeyondCorp Alliance—Last year, we announced our BeyondCorp Alliance with partners that share our Zero Trust vision. Today, we’re announcing new partners to this alliance. Read the blog.

      • New data analytics training opportunities—Throughout October and November, we’re offering a number of no-cost ways to learn data analytics, with trainings for beginners to advanced users. Learn more.

      • New BigQuery blog series—BigQuery Explained provides overviews on storage, data ingestion, queries, joins, and more. Read the series.

      Week of Oct 5-9, 2020

      • Introducing the Google Cloud Healthcare Consent Management API—This API gives healthcare application developers and clinical researchers a simple way to manage individuals’ consent of their health data, particularly important given the new and emerging virtual care and research scenarios related to COVID-19. Read the blog.

      • Announcing Google Cloud buildpacks—Based on the CNCF buildpacks v3 specification, these buildpacks produce container images that follow best practices and are suitable for running on all of our container platforms: Cloud Run (fully managed), Anthos, and Google Kubernetes Engine (GKE). Read the blog.

      • Providing open access to the Genome Aggregation Database (gnomAD)—Our collaboration with Broad Institute of MIT and Harvard provides free access to one of the world's most comprehensive public genomic datasets. Read the blog.

      • Introducing HTTP/gRPC server streaming for Cloud Run—Server-side HTTP streaming for your serverless applications running on Cloud Run (fully managed) is now available. This means your Cloud Run services can serve larger responses or stream partial responses to clients during the span of a single request, enabling quicker server response times for your applications. Read the blog.

      • New security and privacy features in Google Workspace—Alongside the announcement of Google Workspace we also shared more information on new security features that help facilitate safe communication and give admins increased visibility and control for their organizations. Read the blog.

      • Introducing Google Workspace—Google Workspace includes all of the productivity apps you know and use at home, at work, or in the classroom—Gmail, Calendar, Drive, Docs, Sheets, Slides, Meet, Chat and more—now more thoughtfully connected. Read the blog.

      • New in Cloud Functions: languages, availability, portability, and more—We extended Cloud Functions—our scalable pay-as-you-go Functions-as-a-Service (FaaS) platform that runs your code with zero server management—so you can now use it to build end-to-end solutions for several key use cases. Read the blog.

      • Announcing the Google Cloud Public Sector Summit, Dec 8-9—Our upcoming two-day virtual event will offer thought-provoking panels, keynotes, customer stories and more on the future of digital service in the public sector. Register at no cost.

    • Empowering everyday innovation to build a more adaptive business Tue, 16 Aug 2022 16:00:00 -0000

      Innovation is often associated with big wins: increasing a competitive advantage, developing a new product, or disrupting a category. But these outcomes are almost always enabled by advancements that occur on a smaller scale—in the ways organizations regularly carve out time for creative thinking, experiment with processes, and deploy technology to collaborate. 

      This approach to innovation frames it as an ongoing exploration of modes of working that make our employees more curious, our jobs more efficient, and our businesses more resilient. When teams are regularly empowered to experiment, the emerging discoveries and productivity gains can help organizations weather unforeseen challenges and uncover new opportunities.

      Viewing innovation as a daily practice also reminds us that it’s not a lightning strike. Rather, it’s an organizational habit that we must intentionally develop and cultivate over time.

      The right mindset, culture, and tools can help teams—especially distributed teams—reliably produce breakthroughs that advance the business. Equipped with resources that support knowledge-sharing, collaboration, and experimentation, you can build a culture of innovation that empowers your employees to build a more creative, productive workplace for everyone.

      Creating an environment where innovation can thrive 

      At the end of the day, it’s up to each organization to determine how to best adopt innovation as a daily practice. But, as a starting point, there are certain values you can introduce to help you build a culture that champions new approaches to thinking and working. 

      It starts with creating an atmosphere of psychological safety. In order for employees and teams to take the risks that innovation requires, they must feel comfortable speaking up, asking questions, and making mistakes. Invite people to indulge their curiosity and inquire about what they don’t understand. Acknowledge there will be setbacks as they test out hypotheses, as trial-and-error is often the catalyst for developing new perspectives and processes. And impress upon your teams that failure is not only permitted, but valued, when it leads to learning and new understanding. 

      Additionally, innovation and inclusivity work together, so invest in leaders and managers who embrace diverse perspectives and listen to their teams. When employees feel heard, they are “3.5 times more likely to contribute to innovation potential.” But listening is just one-half of the equation. Many times, in order for innovative ideas to break through, they also need a little extra push from leadership. As Patricia Satterstrom, Michaela J. Kerrissey, and Julia DiBenigno emphasize in the Harvard Business Review, organizations bolster good ideas by supporting “voice cultivation,” which they define as “the collective, social process through which employees help lower-power team members’ voiced ideas reach implementation.” To help your organization become more innovative, encourage your senior-level employees to amplify their junior colleagues’ contributions.

      [Chart: “How virtual collaboration can help build a better working world.” EY, Feb. 8, 2022]

      This idea is related to another driver for fostering innovation across your organization: a willingness to experiment with team structure. While many business operations require hierarchy to function smoothly, some aspects of the creative process, such as ideation, may benefit from a flatter approach. A good idea can come from anyone, so consider creating a cross-functional innovation committee consisting of employees of all levels to lead company-wide brainstorming activities. Or set up “office hours” for junior employees to share their new ideas with senior team members.

      Finally, innovation thrives in a connected, collaborative environment. Equip your hybrid workforce with seamless, secure technology and tools that enable them to contribute and collaborate most effectively. Create opportunities, such as team-building exercises, for your employees to deepen relationships with their teammates as well as colleagues outside of their department. After all, the closer you are with someone, the easier it is to exchange and pursue creative or unconventional ideas together. 

      Putting innovation into practice

      Cultivating organizational values like safety, inclusivity, and collaboration is the first step toward building innovation into a daily habit. To strengthen your organization’s innovation muscle, consider the following exercises and tools that can help your teams be more creative every day.

      Inhabit a beginner’s mindset. Curiosity is a skill that can be learned, so encourage your employees (and yourself!) to tap into their inner child and embrace the urge to ask “why” and “how.”

      • Host and facilitate regular brainstorming exercises devoted to innovation. You might devote different sessions to specific topics and open-floor discussions. Virtual tools, like Google Meet for working in real time or Spaces for ongoing, staggered conversation, ensure your hybrid workforce can participate from anywhere.

      • Screen for curiosity in prospective hires, and incorporate a “curiosity filter” in performance reviews. Use Google Forms to create surveys to help hiring teams and department managers assess people’s appetite for curiosity, creativity, and experimentation.

      Experiment a little bit each day. In particular, encouraging your employees to use technology in new or expanded ways can empower them to get creative. 

      • Schedule knowledge-sharing sessions or create a chat thread in which employees can exchange tips and best practices for maximizing workplace technology and tools.

      • Invite employees closest to the business problem to come up with the solution. With AppSheet, your teams can build no-code mobile and desktop apps to automate workflows, promote collaboration, and simplify other workplace tasks.

      Encourage your employees to focus on innovation when it’s right for them. This kind of work often requires a mix of quiet, heads-down time for individuals and group collaboration. Since people’s preferences for each type of work may depend on their daily workload and schedule, give your employees flexibility to decide when and how to contribute to innovative efforts. 

      • Encourage your employees to use Google Calendar to block out Focus Time devoted to activities that support innovation, such as trends research or conceptual development.

      • Help your teams find their preferred tools for individual and collective work as well as same-time and staggered collaboration. Interactive tools like Jamboard and Miro in Google Meet can enhance real-time working sessions. And setting up a dedicated innovation Space lets each person participate on their own time, when it’s best for them.

      Adopt an iterative approach. Building a creative culture takes time, and it’s an ongoing, multi-step process — much like creating a new product. First, you implement practices that unleash creativity. Once you’ve surfaced a set of good ideas, you identify the best ones to move into development. Finally, you put those ideas to the test, launching and fine-tuning them based on the available data and user feedback. If they’re working, you refine and scale; if they’re not, it might be time to cut your losses and learn from the failure. The same methodology is applicable to daily innovation. Sometimes a discovery can benefit the whole organization, so periodically examine people’s individual creative pursuits to determine if any of their breakthroughs should become standard practice.

      The time is now

      Fostering a culture of experimentation and innovation takes hard work, patience, and a vision for the future. It can feel like a momentous undertaking, but our current moment is primed for it. Thanks to the hybrid work revolution, business leaders have an opportunity to reimagine how we work, collaborate, and innovate to build stronger, more adaptive businesses. Over time, the small steps you take to help your organization develop a daily practice of innovation — like championing creativity and welcoming experimentation with everyday processes and tools — can add up to transformative outcomes.

      Related Article

      How organizations can rethink their approach to time management coaching

      How to better enable employees to be successful and impactful in a hybrid work world.

      Read Article
    • Join us for a show-and-tell edition of Google Cloud Security Talks Tue, 16 Aug 2022 16:00:00 -0000

      If you’re new to Security Talks, you should know that this program is part of an ongoing series where we bring together experts from the Google Cloud security team, including the Google Cybersecurity Action Team and Office of the CISO, and the greater industry to share information on our latest security products, innovations, and best practices. 

      The Q3 installment of Google Cloud Security Talks on Aug. 31 is a special show-and-tell edition. We’re not just going to share what you need to know about our portfolio of products; we’re also going to show you how to use them. This round of Security Talks is aimed at practitioners: sessions will spotlight key use cases and include how-to demonstrations of Google Cloud products. You’ll be able to glean best practices and see how to apply the same scenarios in your own environment.

      Our agenda is packed with insightful sessions across Zero Trust, security operations, secure cloud, and more, including:

      • How to leverage SOAR to grow your automated response playbook library’s value - but not the complexity

      • The ins and outs of protecting critical apps from fraud and bots

      • How to create and manage compliant environments in Google Cloud

      • How to get started with network-based threat detection in Google Cloud

      • Guidance on where to begin your Zero Trust journey

      • Tips for succeeding with your cloud data security strategy

      • Google Cloud’s latest security innovations and product updates

      And don’t miss the live Cloud Security Podcast roundtable featuring Mandiant Senior Director Robert Wallace, Cybereason Security Strategy Director Ken Westin, and our own Office of the CISO Director of Financial Services Alicja Cade in conversation with host Anton Chuvakin. Our esteemed panel will dig into the latest security trends and how to apply what we’ve learned from them to your own environment.  

      We’re looking forward to seeing you there. Sign up today to reserve your virtual seat. Google Cloud Security Talks is 100% digital and free to attend, and all sessions will be available on demand after the event. Until then, stay secure.

      Related Article

      Join us for Google Cloud Security Talks: Zero Trust edition

      Join us for Google Cloud Security Talks with sessions focused on zero trust. Learn how you can protect your users and critical information.

      Read Article
    • A visual tour of Google Cloud certifications Tue, 16 Aug 2022 16:00:00 -0000

      Interested in becoming Google Cloud certified? Wondering which Google Cloud certification is right for you? We’ve got you covered.

      Check out the latest #GCPSketchnote illustration, a framework to help you determine which Google Cloud certification is best suited to validate your current skill set and propel you toward future cloud career goals.


      Follow the arrows to see where you land, and for tips on how to prepare for your certification while earning completion badges and skill badges on our on-demand learning platform, Google Cloud Skills Boost.

      Where will your certification journey take you?

      Get started preparing for your certification today. New users are eligible for a 30-day no-cost trial on Google Cloud Skills Boost.

      Related Article

      Meet the new Professional Cloud Database Engineer certification

      Google Cloud launches a new Professional certification.

      Read Article
    • Simplify model serving with custom prediction routines on Vertex AI Tue, 16 Aug 2022 14:00:00 -0000

      The data received at serving time is rarely in the format your model expects. Numerical columns need to be normalized, features created, image bytes decoded, input values validated. Transforming the data can be as important as the prediction itself. That’s why we’re excited to announce custom prediction routines on Vertex AI, which simplify the process of writing pre- and post-processing code. 

      With custom prediction routines, you can provide your data transformations as Python code, and behind the scenes the Vertex AI SDK will build a custom container that you can test locally and deploy to the cloud. 

      Understanding custom prediction routines

      The Vertex AI pre-built containers handle prediction requests by performing the prediction operation of the machine learning framework. Prior to custom prediction routines, if you wanted to preprocess the input before the prediction is performed, or postprocess the model’s prediction before returning the result, you would need to build a custom container from scratch.

      Building a custom serving container requires writing an HTTP server that wraps the trained model, translates HTTP requests into model inputs, and translates model outputs into responses. You can see an example here showing how to build a model server with FastAPI.
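      To make that boilerplate concrete, here is a minimal sketch using only the Python standard library (illustrative only; the linked example uses FastAPI, and names like model_predict are made up for this sketch):

```python
import json
from http.server import BaseHTTPRequestHandler

def model_predict(instances):
    # Stand-in for a real model.predict call.
    return [sum(row) for row in instances]

def handle_request(body: bytes) -> bytes:
    payload = json.loads(body)                          # HTTP request -> model inputs
    preds = model_predict(payload["instances"])         # run the model
    return json.dumps({"predictions": preds}).encode()  # model outputs -> HTTP response

class PredictHandler(BaseHTTPRequestHandler):
    """Minimal HTTP wrapper around the model."""
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        response = handle_request(self.rfile.read(length))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(response)
```

      Even this toy version has to handle request parsing, response serialization, and HTTP plumbing before any model logic runs.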

      With custom prediction routines, Vertex AI provides the serving-related components for you, so that you can focus on your model and data transformations.

      The predictor

      The predictor class is responsible for the ML-related logic in a prediction request: loading the model, getting predictions, and applying custom preprocessing and postprocessing. To write custom prediction logic, you’ll subclass the Vertex AI Predictor interface. In most cases, customizing the predictor is all you’ll require, but check out this notebook if you’d like to see an example of customizing the request handler.

      This release of custom prediction routines comes with reusable XGBoost and Sklearn predictors, but if you need to use a different framework you can create your own by subclassing the base predictor.

      You can see an example predictor implementation below, specifically the reusable Sklearn predictor. This is all the code you would need to write in order to build this custom model server.

```python
import joblib
import numpy as np

from google.cloud.aiplatform.utils import prediction_utils
from google.cloud.aiplatform.prediction.predictor import Predictor

class SklearnPredictor(Predictor):
    """Default Predictor implementation for Sklearn models."""

    def __init__(self):
        return

    def load(self, artifacts_uri: str):
        prediction_utils.download_model_artifacts(artifacts_uri)
        self._model = joblib.load("model.joblib")

    def preprocess(self, prediction_input: dict) -> np.ndarray:
        instances = prediction_input["instances"]
        return np.asarray(instances)

    def predict(self, instances: np.ndarray) -> np.ndarray:
        return self._model.predict(instances)

    def postprocess(self, prediction_results: np.ndarray) -> dict:
        return {"predictions": prediction_results.tolist()}
```

      A predictor implements four methods: 

      • Load: Loads in the model artifacts, and any optional preprocessing artifacts such as an encoder you saved to a pickle file.

      • Preprocess: Performs the logic to preprocess the input data before the prediction request. By default, the preprocess method receives a dictionary that contains all the data in the request body after it has been deserialized from JSON. 

      • Predict: Performs the prediction, which will look something like model.predict(instances) depending on what framework you’re using.

      • Postprocess: Postprocesses the prediction results before returning them to the end user. By default, the output of the postprocess method is serialized into a JSON object and returned as the response body.
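      For instance, with the standard "instances" request format, the dictionary that preprocess receives looks like this (the values are illustrative):

```python
import json

# A request body in the standard Vertex AI "instances" format.
request_body = '{"instances": [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]}'

# After JSON deserialization, this dictionary is what preprocess receives.
prediction_input = json.loads(request_body)
```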

      You can customize as many of the above methods as your use case requires. To customize, all you need to do is subclass the predictor and save your new custom predictor to a Python file. 

      Let’s take a deeper look at how you might customize each one of these methods.


      The load method is where you load in any artifacts from Cloud Storage. This includes the model, but can also include custom preprocessors. 

      For example, let’s say you wrote the following preprocessor to scale numerical features, and stored it as a pickle file called preprocessor.pkl in Cloud Storage.

```python
class MySimpleScaler(object):
    def __init__(self):
        self._means = None
        self._stds = None

    def preprocess(self, data):
        if self._means is None:  # during training only
            self._means = np.mean(data, axis=0)

        if self._stds is None:  # during training only
            self._stds = np.std(data, axis=0)
            if not self._stds.all():
                raise ValueError("At least one column has standard deviation of 0.")

        return (data - self._means) / self._stds
```

      When customizing the predictor, you would write a load method to read the pickle file, similar to the following, where artifacts_uri is the Cloud Storage path to your model and preprocessing artifacts.

```python
# Requires: from google.cloud import storage; import pickle
def load(self, artifacts_uri: str):
    """Loads the preprocessor artifacts."""
    super().load(artifacts_uri)
    gcs_client = storage.Client()
    with open("preprocessor.pkl", "wb") as preprocessor_f:
        gcs_client.download_blob_to_file(
            f"{artifacts_uri}/preprocessor.pkl", preprocessor_f
        )

    with open("preprocessor.pkl", "rb") as f:
        preprocessor = pickle.load(f)

    self._preprocessor = preprocessor
```


      The preprocess method is where you write the logic to perform any preprocessing needed for your serving data. It can be as simple as just applying the preprocessor you loaded in the load method as shown below:

```python
def preprocess(self, prediction_input):
    inputs = super().preprocess(prediction_input)
    return self._preprocessor.preprocess(inputs)
```

      Instead of loading in a preprocessor, you might write the preprocessing directly in the preprocess method. For example, you might need to check that your inputs are in the format you expect. Let’s say your model expects the feature at index 3 to be a string in its abbreviated form. You want to check that at serving time the value for that feature is abbreviated.

```python
def preprocess(self, prediction_input):
    inputs = super().preprocess(prediction_input)
    clarity_dict = {"Flawless": "FL",
                    "Internally Flawless": "IF",
                    "Very Very Slightly Included": "VVS1",
                    "Very Slightly Included": "VS2",
                    "Slightly Included": "S12",
                    "Included": "I3"}
    for sample in inputs:
        if sample[3] not in clarity_dict.values():
            sample[3] = clarity_dict[sample[3]]
    return inputs
```

      There are numerous other ways you could customize the preprocessing logic. You might need to tokenize text for a language model, generate new features, or load data from an external source.
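      For example, a hypothetical preprocess override for a text model might lowercase and tokenize each instance before prediction (a sketch only; this helper is not part of the Vertex AI SDK):

```python
# Hypothetical preprocessing for a text model: lowercase and
# whitespace-tokenize each instance before prediction.
def tokenize_instances(prediction_input: dict) -> list:
    instances = prediction_input["instances"]
    return [text.lower().split() for text in instances]

tokens = tokenize_instances({"instances": ["Custom Prediction Routines"]})
```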


      The predict method usually just calls model.predict, and generally doesn't need to be customized unless you're building your predictor from scratch instead of starting from a reusable predictor.


      The postprocess method handles the final step. Sometimes the model prediction is only the beginning: after you get a prediction from the model, you might need to transform it to make it valuable to the end user. This might be something as simple as converting the numerical class label returned by the model to a string label, as shown below.

```python
def postprocess(self, prediction_results):
    label_dict = {0: 'rose',
                  1: 'daisy',
                  2: 'dandelion',
                  3: 'tulip',
                  4: 'sunflower'}
    return {"predictions": [label_dict[class_num] for class_num in prediction_results]}
```

      Or you could implement additional business logic. For example, you might want to only return a prediction if the model’s confidence is above a certain threshold. If it’s below, you want the input to be sent to a human instead to double check.

```python
def postprocess(self, prediction_results):
    returned_predictions = []
    for result in prediction_results:
        if result > self._confidence_threshold:
            returned_predictions.append(result)
        else:
            returned_predictions.append("confidence too low for prediction")
    return {"predictions": returned_predictions}
```

      Just like with preprocessing, there are numerous ways you can postprocess your data with custom prediction routines. You might need to detokenize text for a language model, convert the model output into a more readable format for the end user, or even call a Vertex AI Matching Engine index endpoint to search for data with a similar embedding.
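      As one more illustration, a hypothetical postprocess might turn raw per-class probabilities into a readable response (a sketch only; the labels and output shape are made up):

```python
# Hypothetical postprocessing: convert per-class probabilities into
# labeled, human-readable predictions.
def probabilities_to_labels(prediction_results: list) -> dict:
    labels = ["cat", "dog"]
    readable = []
    for probs in prediction_results:
        best = max(range(len(probs)), key=lambda i: probs[i])
        readable.append({"label": labels[best], "confidence": probs[best]})
    return {"predictions": readable}

response = probabilities_to_labels([[0.2, 0.8], [0.9, 0.1]])
```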

      Local Testing

      When you’ve written your predictor, you’ll want to save the class out to a Python file. Then you can build your image with the command below, where LOCAL_SOURCE_DIR is a local directory that contains the Python file where you saved your custom predictor.

```python
from google.cloud.aiplatform.prediction import LocalModel
from src_dir.predictor import MyCustomPredictor
import os

local_model = LocalModel.build_cpr_model(
    LOCAL_SOURCE_DIR,
    f"{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}",
    predictor=MyCustomPredictor,
    requirements_path=os.path.join(LOCAL_SOURCE_DIR, "requirements.txt"),
)
```

      Once the image is built, you can test it out by deploying it to a local endpoint and then calling the predict method and passing in the request data. You’ll set artifact_uri to the path in Cloud Storage where you’ve saved your model and any artifacts needed for preprocessing or postprocessing. You can also use a local path for testing.

```python
with local_model.deploy_to_local_endpoint(
    artifact_uri=f"{BUCKET_NAME}/{MODEL_ARTIFACT_DIR}",
    credential_path=CREDENTIALS_FILE,
) as local_endpoint:
    predict_response = local_endpoint.predict(
        request_file=INPUT_FILE,
        headers={"Content-Type": "application/json"},
    )
```

      Deploy to Vertex AI

      After testing the model locally to confirm that the predictions work as expected, the next steps are to push the image to Artifact Registry, import the model to the Vertex AI Model Registry, and optionally deploy it to an endpoint if you want online predictions.

```python
from google.cloud import aiplatform

# push image
local_model.push_image()

# upload to the Model Registry
model = aiplatform.Model.upload(
    local_model=local_model,
    display_name=MODEL_DISPLAY_NAME,
    artifact_uri=f"{BUCKET_NAME}/{MODEL_ARTIFACT_DIR}",
)

# deploy to an endpoint
endpoint = model.deploy(machine_type="n1-standard-4")
```

      When the model has been uploaded to Vertex AI and deployed, you’ll be able to see it in the model registry. And then you can make prediction requests like you would with any other model you have deployed on Vertex AI. 

```python
# get prediction
endpoint.predict(instances=PREDICTION_DATA)
```

      What’s next

      You now know the basics of how to use custom prediction routines to add powerful customization to your serving workflows without having to worry about model servers or building Docker containers. To get hands-on experience with an end-to-end example, check out this codelab. It’s time to start writing some custom prediction code of your own!

    • How autonomic data security can help define cloud’s future Mon, 15 Aug 2022 16:00:00 -0000

      “Ninety percent of all data today was created in the last two years—that’s 2.5 quintillion bytes of data per day,” according to business data analytics company Domo. That would be a mind-bending statistic, except that it’s already five years old. 

      As data usage has expanded and changed drastically in the past five years, so have your business needs for data. Technologies such as cloud computing and AI have changed how we use data, derive value from data, and glean insights from data. Your organization is no longer just crunching and re-crunching the same data sets. Data moves, shifts, and replicates as you mingle data sets and gain new value in the process, as we say in our Data Cloud story. All the while, your data resides in, and is being created in, new places.

      Data lives in a myriad of locations now and requires access from different locations and mediums, yet many of today’s security models are not geared towards this. In short, your data has fallen out of love with your security model, but attackers have not. So, how do we realign data and security so they are once again in a healthy relationship?

      Google Cloud, as a leader in cloud data management and cloud security, is uniquely positioned to define and lead this effort. We've identified some challenges around the classic approach to data security and the changes triggered by the near-ubiquity of the cloud. The case is compelling for adopting a modern approach to data security. We contend that the optimal way forward is with autonomic data security.

      A relatively new concept, autonomic data security is security that’s been integrated with data throughout its lifecycle. It can make things easier on users by freeing them from defining and redefining myriad rules about who can do what, when, where. It’s an approach that keeps pace with constantly evolving cyberthreats and business changes. 

      Autonomic data security can help you keep your IT assets more secure and can make your business and IT processes speedier. For example, data sharing with partners and data access decisions simultaneously become faster and more secure. This may sound like magic, but in fact it relies on a constant willingness to change and adapt to both business changes and threat evolution.


      Taking the precepts, concepts, and forward-looking solutions presented in this paper into consideration, we strongly believe that now is the right time to assess where you and your business are when it comes to data security. Cloud also brings an incredible scale of computing. Where gigabytes once roamed, petabytes are now common. This means that many data security approaches, especially the manual ones, are no longer practical. 

      To prepare for the future of data security, we recommend you challenge your current model and assumptions and ask critical questions, evaluate where you are, and then start to put a plan in place of how you could start incorporating the autonomic data security pillars into your data security model.

      There are two sets of questions organizations need to answer as they start this journey. The first set will help you identify the nature and status of your data, and inform the answers to the second set.

      • What data do I have?

      • Who owns it?

      • Is it sensitive?

      • How is it used?

      • What is the value in storing the data?

      The second set focuses on higher-level problems:

      • What is my current approach to data security? 

      • Where does it fail to support the business and counter the threats?

      • Does it support my business? 

      • Should I consider making a change? And if yes, in what direction?

      The path to improved data security starts by asking the right questions. You can read the full Autonomic Data Security paper for a more in-depth exploration here and learn more about the idea in this podcast episode.

      Related Article

      [Infographic] Achieving Autonomic Security Operations: Why metrics matter (but not how you think)

      Metrics can be a vital asset - or a terrible failure - for keeping organizations safe. Follow these tips to ensure security teams are tra...

      Read Article
    • Best practices of migrating Hive ACID Tables to BigQuery Mon, 15 Aug 2022 16:00:00 -0000

      Are you looking to migrate a large number of Hive ACID tables to BigQuery? 

      ACID enabled Hive tables support transactions that accept updates and delete DML operations. In this blog, we will explore migrating Hive ACID tables to BigQuery. The approach explored in this blog works for both compacted (major / minor) and non-compacted Hive tables. Let’s first understand the term ACID and how it works in Hive.

      ACID stands for four traits of database transactions:  

      • Atomicity (an operation either succeeds completely or fails, it does not leave partial data)

      • Consistency (once an application performs an operation the results of that operation are visible to it in every subsequent operation)

      • Isolation (an incomplete operation by one user does not cause unexpected side effects for other users)

      • Durability (once an operation is complete it will be preserved even in the face of machine or system failure)

      Starting with version 0.14, Hive supports all ACID properties, which enables it to use transactions, create transactional tables, and run Insert, Update, and Delete queries on tables.

      The files underlying a Hive ACID table are stored in the ORC ACID format. To support ACID features, Hive stores table data in a set of base files and records all insert, update, and delete operations in delta files. At read time, the reader merges the base file and delta files to present the latest data. As operations modify the table, many delta files accumulate and need to be compacted to maintain adequate performance. There are two types of compaction: minor and major.

      • Minor compaction takes a set of existing delta files and rewrites them to a single delta file per bucket.

      • Major compaction takes one or more delta files and the base file for the bucket and rewrites them into a new base file per bucket. Major compaction is more expensive but is more effective.

      Organizations configure automatic compactions, but they also need to perform manual compactions when automatic compaction fails. If compaction is not performed for a long time after a failure, many small delta files accumulate. Running compaction on a large number of small delta files is a very resource-intensive operation and can itself run into failures.
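
      The two compaction types described above can be sketched with a toy model, where each delta file is a list of row events. This is illustrative only (hypothetical data structures, not Hive's actual ORC file layout):

```python
# Minimal sketch of Hive-style compaction, assuming each delta file is
# modeled as an ordered list of (op, rowId, value) events.

def minor_compact(delta_files):
    """Minor compaction: rewrite a set of delta files into a single delta."""
    merged = []
    for delta in delta_files:
        merged.extend(delta)
    return merged

def major_compact(base_file, delta_files):
    """Major compaction: fold all deltas into a new base file, keeping the
    latest version of each row and dropping deleted rows."""
    rows = {row_id: value for (row_id, value) in base_file}
    for delta in delta_files:
        for op, row_id, value in delta:
            if op == "delete":
                rows.pop(row_id, None)
            else:  # insert or update
                rows[row_id] = value
    return sorted(rows.items())

base = [(1, "James"), (2, "Ann")]
deltas = [[("insert", 3, "Jeff")], [("update", 2, "Anne")], [("delete", 1, None)]]
print(major_compact(base, deltas))  # -> [(2, 'Anne'), (3, 'Jeff')]
```

      Major compaction does more work per run (it rewrites the base file), which is why it is more expensive but more effective at eliminating small files.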

      Some of the issues with Hive ACID tables are:

      • NameNode capacity problems due to small delta files.

      • Table Locks during compaction.

      • Running major compactions on Hive ACID tables is a resource intensive operation.

      • Longer time taken for data replication to DR due to small files.

      Benefits of migrating Hive ACIDs to BigQuery

      Some of the benefits of migrating Hive ACID tables to BigQuery are:

      • Once data is loaded into managed BigQuery tables, BigQuery manages and optimizes the data in its internal storage and handles compaction, so there is no small-file issue like the one in Hive ACID tables.

      • The locking issue is resolved, as the BigQuery Storage Read API is gRPC-based and highly parallelized.

      • As ORC files are completely self-describing, there is no dependency on Hive Metastore DDL. BigQuery has a built-in schema inference feature that can infer the schema from an ORC file and supports schema evolution, without any need for tools like Apache Spark to perform schema inference.

      Hive ACID table structure and sample data

      Here is the schema of the sample Hive ACID table “employee_trans”:

      hive> show create table employee_trans;
      OK
      CREATE TABLE `employee_trans`(
        `id` int,
        `name` string,
        `age` int,
        `gender` string)
      ROW FORMAT SERDE
        'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
      STORED AS INPUTFORMAT
        'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
      OUTPUTFORMAT
        'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
      LOCATION
        'hdfs://hive-cluster-m/user/hive/warehouse/aciddb.db/employee_trans'
      TBLPROPERTIES (
        'bucketing_version'='2',
        'transactional'='true',
        'transactional_properties'='default',
        'transient_lastDdlTime'='1657906607')

      This sample ACID table “employee_trans” has 3 records.

      hive> select * from employee_trans;
      OK
      1    James    30    M
      3    Jeff     45    M
      2    Ann      40    F
      Time taken: 0.1 seconds, Fetched: 3 row(s)

      For every insert, update and delete operation, small delta files are created. This is the underlying directory structure of the Hive ACID enabled table.

      hdfs://hive-cluster-m/user/hive/warehouse/aciddb.db/employee_trans/delete_delta_0000005_0000005_0000
      hdfs://hive-cluster-m/user/hive/warehouse/aciddb.db/employee_trans/delete_delta_0000006_0000006_0000
      hdfs://hive-cluster-m/user/hive/warehouse/aciddb.db/employee_trans/delta_0000001_0000001_0000
      hdfs://hive-cluster-m/user/hive/warehouse/aciddb.db/employee_trans/delta_0000002_0000002_0000
      hdfs://hive-cluster-m/user/hive/warehouse/aciddb.db/employee_trans/delta_0000003_0000003_0000
      hdfs://hive-cluster-m/user/hive/warehouse/aciddb.db/employee_trans/delta_0000004_0000004_0000
      hdfs://hive-cluster-m/user/hive/warehouse/aciddb.db/employee_trans/delta_0000005_0000005_0000

      These ORC files in an ACID table are extended with several columns:

      struct<
        operation: int,
        originalTransaction: bigInt,
        bucket: int,
        rowId: bigInt,
        currentTransaction: bigInt,
        row: struct<...>
      >

      Steps to Migrate Hive ACID tables to BigQuery

      Migrate underlying Hive table HDFS data

      Copy the files under the employee_trans HDFS directory and stage them in GCS. You can use either the HDFS2GCS solution or Distcp. The HDFS2GCS solution uses open source technologies to transfer data and provides several benefits, such as status reporting, error handling, fault tolerance, incremental/delta loading, rate throttling, start/stop, checksum validation, and byte-to-byte comparison. Here is the high-level architecture of the HDFS2GCS solution. Please refer to the public GitHub repository HDFS2GCS to learn more about this tool.

      The source location may contain extra files that we don’t necessarily want to copy. Here, we can use filters based on regular expressions to do things such as copying files with the .ORC extension only.


      Load ACID Tables as-is to BigQuery

      Once the underlying Hive ACID table files are copied to GCS, use the bq load tool to load the data into a BigQuery base table. This base table will have all the change events.


      Data verification

      Run “select *” on the base table to verify that all the changes are captured.

      Note: Use of “select * …” is used for demonstration purposes and is not a stated best practice.


      Loading to target BigQuery table

      The following query selects only the latest version of each record from the base table, discarding the intermediate delete and update operations.

      You can either load the results of this query into a target table using scheduled query on-demand with the overwrite option or alternatively, you can also create this query as a view on the base table to get the latest records from the base table directly.

      WITH
        latest_records_desc AS (
          SELECT
            Row.*,
            operation,
            ROW_NUMBER() OVER (PARTITION BY originalTransaction ORDER BY originalTransaction ASC, bucket ASC, rowId ASC, currentTransaction DESC) AS rownum
          FROM
            `hiveacid-sandbox.hivetobq.basetable` )
      SELECT id, name, age, gender
      FROM
        latest_records_desc
      WHERE
        rownum = 1
        AND operation != 2
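
      As a cross-check of the query's logic, here is a rough in-memory Python analogue, assuming operation code 2 marks a delete (per the ORC ACID struct shown earlier). The sample records are made-up stand-ins for the base table:

```python
# Keep only the newest version of each row and drop deleted rows,
# mirroring the ROW_NUMBER()/operation != 2 logic of the query above.

from itertools import groupby

def latest_records(events):
    keyfunc = lambda e: (e["originalTransaction"], e["bucket"], e["rowId"])
    out = []
    for _, group in groupby(sorted(events, key=keyfunc), key=keyfunc):
        latest = max(group, key=lambda e: e["currentTransaction"])
        if latest["operation"] != 2:  # 2 = delete
            out.append(latest["row"])
    return out

events = [
    {"originalTransaction": 1, "bucket": 0, "rowId": 0,
     "currentTransaction": 1, "operation": 0, "row": {"id": 1, "name": "James"}},
    {"originalTransaction": 1, "bucket": 0, "rowId": 0,
     "currentTransaction": 5, "operation": 2, "row": {"id": 1, "name": "James"}},
    {"originalTransaction": 2, "bucket": 0, "rowId": 0,
     "currentTransaction": 2, "operation": 0, "row": {"id": 2, "name": "Ann"}},
]
print(latest_records(events))  # -> [{'id': 2, 'name': 'Ann'}]
```

      James was inserted and later deleted, so only Ann's latest version survives, which is exactly what the target table should contain.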

      Once the data is loaded into the target BigQuery table, you can perform validation using the steps below:

      a. Use the Data Validation Tool (DVT) to validate the Hive ACID table against the target BigQuery table. DVT provides an automated and repeatable solution for schema and data validation tasks. This tool supports the following validations:

      • Column validation (count, sum, avg, min, max, group by)

      • Row validation (BQ, Hive, and Teradata only)

      • Schema validation

      • Custom Query validation

      • Ad hoc SQL exploration
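
      To make the column-validation idea concrete, here is a simplified sketch in the spirit of DVT: compute the same aggregates on the source and target and compare them. The in-memory rows are hypothetical stand-ins for Hive and BigQuery query results:

```python
# Compare per-column aggregates (count, sum, min, max) between a
# source and a target table; a mismatch flags a migration problem.

def column_stats(rows, column):
    values = [r[column] for r in rows]
    return {"count": len(values), "sum": sum(values),
            "min": min(values), "max": max(values)}

def validate_column(source_rows, target_rows, column):
    src = column_stats(source_rows, column)
    tgt = column_stats(target_rows, column)
    return {stat: src[stat] == tgt[stat] for stat in src}

hive_rows = [{"age": 30}, {"age": 45}, {"age": 40}]
bq_rows = [{"age": 30}, {"age": 45}, {"age": 40}]
print(validate_column(hive_rows, bq_rows, "age"))
```

      DVT runs these aggregates as SQL on both systems rather than in memory, but the comparison principle is the same.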

      b. If you have analytical HiveQLs running on this ACID table, translate them using the BigQuery SQL translation service and point to the target BigQuery table. 

      Hive DDL Migration (Optional)

      Since ORC is self-describing, leverage BigQuery’s schema inference feature when loading.

      There is no dependency to extract Hive DDLs from Metastore. 

      But if you have an organization-wide policy to pre-create datasets and tables before migration, this step will be useful and will be a good starting point. 

      a. Extract Hive ACID DDL dumps and translate them using BigQuery translation service to create equivalent BigQuery DDLs. 

      There is a Batch SQL translation service that bulk-translates exported HQL (Hive Query Language) scripts from a source metadata bucket in Google Cloud Storage into BigQuery-equivalent SQL in a target GCS bucket.

      You can also use the BigQuery interactive SQL translator, a live, real-time SQL translation tool across multiple SQL dialects, to translate a query in the HQL dialect into a BigQuery Standard SQL query. This tool can reduce the time and effort needed to migrate SQL workloads to BigQuery.

      b. Create managed BigQuery tables using the translated DDLs. 

      Here is a screenshot of the translation service in the BigQuery console. Click “Translate” to translate the HiveQLs and “Run” to execute the query. For creating tables from batch-translated bulk SQL queries, you can use the Airflow BigQuery operator (BigQueryInsertJobOperator) to run multiple queries.


      After the DDLs are converted, copy the ORC files to GCS and perform ELT in BigQuery. 

      The pain points of Hive ACID tables are resolved by migrating to BigQuery. Once the ACID tables are migrated, you can leverage BigQuery ML and GeoViz capabilities for real-time analytics. If you are interested in exploring more, please check out the additional resources section.

      Additional Resources

      Related Article

      Scheduling a command in GCP using Cloud Run and Cloud Scheduler

      How to efficiently and quickly schedule commands like Gsutil using Cloud Run and Cloud Scheduler.

      Read Article
    • Controlling your BigQuery costs Mon, 15 Aug 2022 12:18:00 -0000

      Are you worried about controlling your BigQuery costs across multiple projects? In this blog, you will learn about the different guardrails BigQuery provides to limit costs, and monitor BigQuery consumption. Also, learn how to design the warehouse that scales seamlessly while keeping costs under your control.

      1. Set up user-level and project-level quotas to cap usage

      If you have multiple BigQuery projects and users, you can manage costs by requesting a custom quota that specifies a limit on the amount of query data processed per day. Creating a custom quota on query data lets you control costs at the project-level or at the user-level.

      • Project-level custom quotas limit the aggregate usage of all users in that project.

      • User-level custom quotas are separately applied to all users and service accounts within a project.

      It is not possible to assign a custom quota to a specific user or service account.
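
      A toy model can show how the two quota levels interact: every query charges against both the caller's per-user allowance and the project-wide cap, and the first limit reached blocks the query. This is illustrative only, not BigQuery's implementation:

```python
# Simulate daily custom quotas: per-user limits apply to each user
# separately, while the project limit caps aggregate usage.

class QuotaExceeded(Exception):
    pass

class DailyQuota:
    def __init__(self, project_limit_bytes, user_limit_bytes):
        self.project_limit = project_limit_bytes
        self.user_limit = user_limit_bytes
        self.project_used = 0
        self.user_used = {}

    def charge(self, user, query_bytes):
        if self.user_used.get(user, 0) + query_bytes > self.user_limit:
            raise QuotaExceeded(f"user quota exhausted for {user}")
        if self.project_used + query_bytes > self.project_limit:
            raise QuotaExceeded("project quota exhausted")
        self.user_used[user] = self.user_used.get(user, 0) + query_bytes
        self.project_used += query_bytes

# 10 GiB/day for the project, 2 GiB/day per user.
quota = DailyQuota(project_limit_bytes=10 * 2**30, user_limit_bytes=2 * 2**30)
quota.charge("alice@example.com", 2**30)  # 1 GiB: allowed
```

      A third 1 GiB query from the same user would exceed her 2 GiB daily allowance and fail, even though the project as a whole still has headroom.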

      Best practice: Create custom cost controls based on the maximum amount of data processed in a day. Start small with a few GBs; it is easy to increase the limit as needed.

      Steps to set up: Create custom cost controls | BigQuery | Google Cloud 

      1. Go to the Quotas page in your Google Cloud console: Working with quotas | Documentation | Google Cloud

      2. Select the BigQuery API 

      3. Change the Query usage per day per user and Query usage per day quota from Unlimited to limited GBs/TBs (see the screenshot below)

      BigQuery Costs Setup

      2. Limit query costs by restricting the number of bytes billed

      You can limit the number of bytes billed for a query using the maximum bytes billed setting. When you set maximum bytes billed, the number of bytes the query will read is estimated before the query executes. If the estimate exceeds the limit, the query fails without incurring a charge.

      If a query fails because of the maximum bytes billed setting, an error like the following is returned:

      Error: Query exceeded limit for bytes billed: 1000000. 10485760 or higher required.
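
      The behavior can be sketched in a few lines: the guard compares the pre-execution scan estimate against the configured cap, so an over-limit query is rejected before any bytes are billed. A minimal sketch, not the BigQuery implementation:

```python
# Reject a query up front when its estimated scan size exceeds the
# maximum-bytes-billed cap; otherwise run it at no extra risk.

def run_query(estimated_bytes, maximum_bytes_billed):
    if estimated_bytes > maximum_bytes_billed:
        raise RuntimeError(
            f"Query exceeded limit for bytes billed: {maximum_bytes_billed}. "
            f"{estimated_bytes} or higher required.")
    return "query executed"

print(run_query(estimated_bytes=500_000, maximum_bytes_billed=1_000_000))
```

      Note that the error message tells you the limit you would need to set for the query to run, mirroring the real error shown above.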

      Best practice: Use the maximum bytes billed setting to limit query costs. Start small with a few GBs. It is easy to increase the limit as needed. 

      Steps to set up: Control costs in BigQuery Guide 

      3. Create Budgets and alerts on GCP to catch any cost spikes

      Avoid surprises on your bill by creating Cloud Billing budgets to monitor all of your Google Cloud charges in one place. A budget enables you to track your actual Google Cloud spend against your planned spend. After you've set a budget amount, you set budget alert threshold rules that are used to trigger email notifications. Budget alert emails help you stay informed about how your spend is tracking against your budget. You can also use budgets to automate cost control responses.

      More information - Create, edit, or delete budgets and budget alerts | Cloud Billing | Google Cloud 

      Best practice: Setting up a budget to track spend is highly recommended. Set threshold rules to trigger email alert notifications: when your actual or forecasted costs exceed a percentage of your budget (based on the rules you set), you will get alert emails.

      Steps to set up: To create a new budget:

      1. Create and name the budget

      2. Set the budget scope

      3. Set the budget amount

      4. Set the budget threshold rules and actions

      5. Click finish to save the new budget
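
      The threshold rules in step 4 reduce to a simple comparison: an alert fires for each configured percentage the actual spend has crossed. A small illustration (made-up numbers, not the Cloud Billing API):

```python
# Given a budget amount and threshold percentages, return which
# alert thresholds the current spend has crossed.

def triggered_alerts(budget_amount, actual_spend, threshold_percents):
    return [p for p in threshold_percents
            if actual_spend >= budget_amount * p / 100]

# $550 spent against a $1000 budget crosses only the 50% threshold.
print(triggered_alerts(1000, 550, [50, 90, 100]))  # -> [50]
```

      Cloud Billing evaluates the same kind of rules against either actual or forecasted spend, and can additionally drive automated cost-control responses.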

      4. Write good queries

      Here are a few best practices:

      • Avoid SELECT * (note: applying a LIMIT clause to a query does not affect the amount of data that is read)

      • Don't run queries to explore or preview table data 

      • Dry run and always estimate the costs before running the query

      • Only select the data needed

      Best practice: Always make new BigQuery users aware of these best practices. The following two documents are a must-read.

      Control costs in BigQuery | Google Cloud 

      Optimize query computation | BigQuery | Google Cloud

      5. Partition the table so that BigQuery users are forced to specify a WHERE clause

      A partitioned table is a special table that is divided into segments, called partitions, that make it easier to manage and query your data. By dividing a large table into smaller partitions, you can improve query performance, and you can control costs by reducing the number of bytes read by a query. 

      With partitioned tables, users can be required to specify a WHERE clause, which enforces the constraint and limits full table scans.

      If a query uses a qualifying filter on the value of the partitioning column, BigQuery can scan the partitions that match the filter and skip the remaining partitions. This process is called partition pruning.

      Partition pruning is the mechanism BigQuery uses to eliminate unnecessary partitions from the input scan. The pruned partitions are not included when calculating the bytes scanned by the query. In general, partition pruning helps reduce query cost.
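
      The cost effect of pruning is easy to model: only partitions matching the filter contribute to bytes scanned, and hence to on-demand cost. A minimal sketch with hypothetical partition sizes:

```python
# Model a date-partitioned table as {partition_date: size_in_bytes};
# with no filter every partition is scanned, with a qualifying filter
# only matching partitions are.

def bytes_scanned(partitions, date_filter=None):
    if date_filter is None:
        return sum(partitions.values())        # full table scan
    return sum(size for date, size in partitions.items()
               if date_filter(date))           # pruned scan

table = {"2022-08-13": 5 * 10**9,
         "2022-08-14": 5 * 10**9,
         "2022-08-15": 5 * 10**9}
print(bytes_scanned(table))                               # full scan: 15 GB
print(bytes_scanned(table, lambda d: d == "2022-08-15"))  # pruned: 5 GB
```

      Filtering to one day scans a third of the table in this example, and on-demand billing is proportional to the bytes scanned.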

      Best practice: Implement partitioning where possible. This not only improves performance but also leads to cost-efficient queries. We highly recommend reading the following two documents.

      Query partitioned tables | BigQuery | Google Cloud 

      Control costs in BigQuery | Google Cloud 

      Steps to set up: Creating partitioned tables | BigQuery | Google Cloud 

      6. Reservations - Start small and incrementally add slots based on usage

      BigQuery offers two pricing models for analytics: on-demand and flat-rate.

      By default, you are billed according to the on-demand pricing model. Using BigQuery Reservations, you can switch to flat-rate pricing by purchasing commitments. Commitments are purchased in units of BigQuery slots. The cost of all bytes processed is included in the flat-rate price.

      Flat-rate pricing offers predictable and consistent costs. You know up-front what you are spending.

      More information - Introduction to Reservations | BigQuery | Google Cloud 

      Best practice: Use the BigQuery slot estimator to understand your on-demand slot consumption. Once your slot usage goes above 100 and is relatively steady, start thinking about Flex slots or monthly or annual reservations.

      Steps to set up: Get started with reservations | BigQuery | Google Cloud

      7. Monitoring the BigQuery metrics

      BigQuery provides its native admin panel with overview metrics for monitoring. BigQuery is also well integrated with existing GCP services like Cloud Logging to provide detailed logs of individual events and Cloud Monitoring dashboards for analytics, reporting and alerting on BigQuery usage and events. 

      More information- BigQuery Admin Reference Guide

      Best practice: 

      1. The key to successful monitoring is to enable proactive alerts. For example, setting up alerts when the reservation slot utilization rate crosses a predetermined threshold. 

      2. Also, it’s important to enable the individual users and teams in the organization to monitor their workloads using a self-service analytics framework or dashboard. This allows the users to monitor trends for forecasting resource needs and troubleshoot overall performance.

      3. Understand and leverage INFORMATION_SCHEMA for real-time reports and alerts. Review more examples of job stats and a technical deep-dive on INFORMATION_SCHEMA in this blog.

      Steps to set up: 

      1. To get started quickly with monitoring on BigQuery, you can leverage publicly available data studio dashboard and related github resources. 

      2. Looker also provides BigQuery Performance Monitoring Block for monitoring BigQuery usage. 

      3. Blog on how to implement a fully serverless solution for near–real-time cost monitoring using readily available log data - Taking a practical approach to BigQuery cost monitoring | Google Cloud Blog 

      8. Training

      Here is a list of training sessions that will be useful for new BigQuery users.


      Hands on labs: 

      Data engineering and smart analytics learning path: 

    • Securing apps for Googlers using Anthos Service Mesh Fri, 12 Aug 2022 19:00:00 -0000

      Hi there! I'm David Challoner from Access Site Reliability Engineering (SRE), here with Anthony Bushong from Developer Relations to talk about how Corp Eng is adopting Anthos Service Mesh internally at Google. 

      Corp Eng is Google's take on "Enterprise IT". A big part of the Corp Eng mission is running the first- and third-party software that powers internal business processes - from legal and finance to floor planning and even the app hosting our cafe menus - all with the same security and production standards as any of Google's first-party applications.

      Googlers need to access these applications, which sometimes then need to access other applications or other Google Cloud services. This traffic can cross different trust boundaries which can trigger different policies.

      Access SRE runs the systems that mediate this access, and we implemented Anthos Service Mesh as part of our solution to secure the way Googlers access these applications.

      But why?

      You can probably tell, but the applications Corp Eng is responsible for have disparate requirements. This often means that certain applications are tied to disparate infrastructure due to legal, business or technical reasons - which can be challenging when those infrastructures work and operate differently.

      Enter Anthos. Google Cloud built Anthos to provide a consistent platform interface unifying the experience of working with apps on these varying underlying infrastructures, with the Kubernetes API at its foundation.

      So when searching for the right tool to build a common authorization framework to mediate access to CorpEng services, we turned to Anthos - specifically Anthos Service Mesh, powered by the open-source project, Istio. Whether these services were deployed in Google Cloud, in Corp Eng data centers, or at the edge onsite at actual Google campuses, Anthos Service Mesh delivered a consistent means for us to program secure connectivity.

      To frame the impact ASM had on our organization, it's helpful to introduce the roles of the folks who manage and use it:


      Figure 1 - Anthos Service Mesh empowers multiple people across different roles to connect services securely

      For security stakeholders, ASM provides an extensible policy enforcement point running next to each application capable of provisioning a certificate based on the identity of the workload and enforcing mandatory fine-grained application-aware access controls.

      For platform operators, ASM is delivered as a managed product, which reduces operational overhead by providing out-of-the-box release channels, maintenance windows, and a published Service Level Objective (SLO).

      For service owners, ASM enables the decoupling of their applications from networking concerns, while also providing features like rate limiting, load shedding, request tracing, monitoring, and more. Features like these were typically only available for applications that ran on Borg, Google's first-party cluster manager that ultimately inspired the creation of Kubernetes.

      In sum, we were able to secure access to a plethora of different services with minimal operational overhead, all while providing service owners granular traffic control.

      Let's see what this looks like in practice!

      The architecture


      Figure 2 - High-level architecture for Corp Eng services and Anthos

      In this flow, user access first reaches the Google Cloud Global Load Balancer [1], configured with Identity Aware Proxy (IAP) and Cloud Armor. IAP is the publicly available implementation of Google's internal philosophy of BeyondCorp, providing an authentication layer that works from untrusted networks without the need for a VPN. 

      Once a user is authenticated, their request then flows to the Ingress Gateway provided by Anthos Service Mesh [2]. This provides additional checks that traffic flows to services only when the request has come through IAP, while also enforcing mutual TLS (mTLS) between the Anthos Service Mesh Gateway to the Corp services owned by various teams.

      Finally, additional policies are enforced by the sidecar running in every single service Pod [3]. Policies are pulled from source control using Anthos Config Management [4], and are propagated to all sidecars by the managed control plane provided by Anthos Service Mesh [5].

      Managing the mesh

      If you're not familiar with how Istio works, it follows the pattern of a control plane and a data plane. We talked a little bit about the data plane - it is made up of the sidecar containers running alongside all of our service Pods. The control plane, however, is what's responsible for updating these sidecars with the policies we want to enforce:


      Figure 3 - High-level architecture for Istio

      Thus, it is critical for us to ensure that the control plane is healthy. This is where Anthos Service Mesh gives our platform owners a huge advantage with its support for a fully-managed control plane. To provision cloud resources, like many other companies, our organization uses Terraform, the popular open-source infrastructure as code project. This gave us a declarative and familiar means for provisioning the Anthos Service Mesh control plane. 

      First, you enable the managed control plane feature for GKE by creating the google_gke_hub_feature resource below using Terraform.

      resource "google_gke_hub_feature" "feature_asm" {
        name     = "servicemesh"
        location = "global"
        provider = google-beta
      }

      Keep in mind that at publication time, this is only available via the google-beta provider in Terraform.

      Once created, we then provision a ControlPlaneRevision custom resource in a GKE cluster to spin up a managed control plane for ASM in that cluster:

      apiVersion: mesh.cloud.google.com/v1alpha1
      kind: ControlPlaneRevision
      metadata:
        name: asm-managed
        namespace: istio-system
      spec:
        type: managed_service
        channel: regular

      Using this custom resource, we are able to set the release channel for the ASM managed control plane. This allows for our platform team to define the pace of upgrades in accordance with our team's needs.

      In addition to managing the control plane, ASM also provides management functionality around the data plane to ensure each sidecar Envoy is kept up to date with the latest security updates and is compatible with the control plane - one less thing for service operators to worry about. It does this using Kubernetes Mutating Admission Webhooks and Namespace labels to modify our Pod workload definitions to inject the appropriate sidecar proxy version.

      Syncing mandatory access policies

      With the core Anthos Service Mesh components in place, our security practitioners can define consistent, mandatory security policies for every single GKE cluster, using Istio APIs.

      For example, one policy is enforcing strict mTLS between Pods using automatically provisioned workload identity certificates. Earlier, we talked about how this is enforced at the Istio Gateway; that same policy enforces mTLS between all Pods in our cluster.

      Figure 4 - A high-level diagram of mutual TLS

      Another policy we implement is denying all egress traffic by default, requiring service teams to explicitly declare their outbound dependencies. The following is an example of using an Istio Service Entry to allow granular access to a specific external service - in this case, Google. This helps prevent unintended access to external services.

      apiVersion: networking.istio.io/v1alpha3
      kind: ServiceEntry
      metadata:
        name: google
      spec:
        hosts:
        - www.google.com
        ports:
        - number: 443
          name: https
          protocol: HTTPS
        resolution: DNS
        location: MESH_EXTERNAL

      These policies are automatically synced to all service mesh namespaces in each cluster using Anthos Config Management. By using our internal source control system as a source of truth, Anthos Config Management can sync and reconcile policies across all of our GKE clusters, ensuring that these policies are in place for every single one of our services. You can find more details about our implementation of Anthos Config Management here.

      With this in place, our team plans on eventually migrating away from security automation that operates solely based on explicit IP, port and protocol policies. 

      Integration with Identity-aware Proxy

      The publicly available version of the BeyondCorp proxy used by Corp Eng is called Identity-Aware Proxy (IAP), which offers an integration with Anthos Service Mesh. IAP allows you to authenticate users trying to access your services and apply Context-Aware Access policies. This integration comes with two main benefits:

      • Ensuring that user traffic to services in the service mesh only come through Identity-aware Proxy

      • Enforcing Context-aware access (CAA) trust levels for devices, defined by multiple device signals we collect

      Identity-aware Proxy allows us to capture this information in a Request Context Token (RCToken), which is a JSON Web Token (JWT) created by Identity-aware Proxy that can be verified by ASM. IAP inserts this JWT into the Ingress-Authorization header. Using Istio Authorization Policies similar to the following policy, any requests without this JWT are denied:

      apiVersion: security.istio.io/v1beta1
      kind: AuthorizationPolicy
      metadata:
        name: iap-gateway-require-jwt
        namespace: istio-system
      spec:
        selector:
          matchLabels:
            app: istio-iap-ingressgateway
        action: DENY
        rules:
        - from:
          - source:
              notRequestPrincipals: ["*"]

      Here is an example policy that requires a fullyTrustedDevice access level - this might be a device in your organization that is known to be corporate-owned, fully updated, and running an IT-approved configuration:

      apiVersion: security.istio.io/v1beta1
      kind: AuthorizationPolicy
      metadata:
        name: require-fully-trusted-device
        namespace: fooService
      spec:
        selector:
          matchLabels:
            app: fooService
        action: ALLOW
        rules:
        - from:
          - source:
              requestPrincipals: ["*"]
          when:
          - key: request.auth.claims[google.access_levels]
            values: ["accessPolicies/$orgId/accessLevels/fullyTrustedDevice"]

      This allows our security team not only to secure service-to-service communications and outbound calls from services, but also to require that incoming requests come from authenticated users on trusted devices.
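
      Conceptually, the ALLOW rule above boils down to a check on the verified RCToken claims: the request must have an authenticated principal and carry the required access level. A minimal sketch, assuming the JWT has already been verified and decoded into a claims dict ("$orgId" and the access-level name mirror the example policy, and the claim layout is simplified for illustration):

```python
# Evaluate a simplified trusted-device rule against decoded JWT claims.
# Hypothetical claim shape; real RCToken claims are richer.

REQUIRED_LEVEL = "accessPolicies/$orgId/accessLevels/fullyTrustedDevice"

def request_allowed(claims):
    """Allow only authenticated requests from fully trusted devices."""
    levels = claims.get("google.access_levels", [])
    return claims.get("sub") is not None and REQUIRED_LEVEL in levels

print(request_allowed({"sub": "user@example.com",
                       "google.access_levels": [REQUIRED_LEVEL]}))  # -> True
```

      In the mesh, this evaluation happens in the sidecar and gateway via the Istio AuthorizationPolicy, not in application code.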

      Enabling service teams

      As an SRE, one of our priorities is ensuring Service-level indicators (SLIs), SLOs, and Service-level agreements (SLAs) exist for services. Anthos Service Mesh helps us empower service owners to do this for their services, as it exposes horizontal request metrics like latency and availability to all services in the mesh. 

      Before Anthos Service Mesh, each application had to export these metrics separately (if at all). With ASM, service owners can easily define their service's SLOs in the Cloud console or via Terraform using these horizontally exported metrics. This then allows us to integrate SLOs into our higher-level service definitions so we can enable SLO monitoring and alerting by default. See the SRE book for more details on SLOs and error budgets.
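
      To illustrate the error-budget idea with the kind of request metrics ASM exports, here is a small sketch (the numbers are made up):

```python
# For an availability SLO, the error budget is the number of failures
# the SLO permits; we compute the fraction of that budget still unspent.

def error_budget_remaining(slo, total_requests, failed_requests):
    allowed_failures = (1 - slo) * total_requests
    return 1 - failed_requests / allowed_failures

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 250 failures so far leaves roughly 75% of the budget.
print(error_budget_remaining(0.999, 1_000_000, 250))  # approximately 0.75
```

      When the remaining budget approaches zero, teams typically slow down risky releases until reliability recovers.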

      The takeaway

      ASM is a powerful tool that enterprises can use to modernize their IT infrastructure.  It provides:

      • A shared environment-agnostic enforcement point to manage security policy

      • A unified way to provision identities and describe application dependencies

      This also enables previously unheard-of operational capabilities, such as distributed tracing and incremental canary rollouts, which were difficult to find in the typical enterprise application landscape.

      Because it can be incrementally adopted and composed with existing authorization systems to close gaps, barriers to adoption are low, and we recommend you start evaluating it today!

    • Building security guardrails for developers with Google Cloud Fri, 12 Aug 2022 16:00:00 -0000

      Organizations across the globe are using the cloud to drive innovation. Developers are using cloud technology as the engine to test new ideas, fail fast, and automate scalability. Innovation on the cloud requires freedom and flexibility to run experiments and make mistakes. For many organizations with security top of mind, their concern is “How do I balance security and innovation?” 

      As members of the Security Practice in Google Cloud’s Professional Services Organization, we regularly help customers solve this question and many more cloud security challenges. Our global team works across industries to bring Google Cloud security expertise directly to customers. We specialize in cloud security domains such as cloud-native compliance, zero trust architecture, application security, data protection, and security operations.

      In this post, we will provide a couple of examples of how we advise our customers to configure preventive security controls using Google Cloud’s native capabilities and industry best practices. Preventive security controls, also known as security “guardrails”, allow developers the flexibility to innovate within the boundaries of defined security policies, preventing a misconfiguration or vulnerability before it becomes exploitable.

      Infrastructure as Code: Deploying securely 

      It can be difficult to solve organizational security challenges solely with technology. Mature cloud security programs are a blend of repeatable operational processes and automated controls. To help ensure developers are innovating on a secure baseline, we recommend customers design a centralized process for developers to request new GCP projects and register workloads. This gives the security team the ability to properly configure GCP projects with defined security parameters. To help enable repeatability and consistency, we automate the process using Google Cloud Project Factory to centrally deploy opinionated projects to developers.

      The goal of guardrails is to prevent security violations before they can impact the production platform. Stopping a security issue before it occurs can be an effective risk-mitigation tactic. Traditionally, organizations have used cumbersome change-management processes to manually control deployments and evaluate security posture. On Google Cloud, we work with customers to design Infrastructure as Code (IaC) pipelines that define security policy checks and automatically validate posture before deployment. A typical design pattern uses “policy-as-code” tools, such as Terraform Validator, to enforce security guardrails for developers as part of the CI/CD pipeline. This design allows customers to configure security constraints based on their specific requirements or risk tolerance.
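      To make the pattern concrete, here is a minimal policy-as-code style check sketched in Python. Terraform Validator itself consumes Terraform plans and constraint files, so the function and field names below are illustrative assumptions only, not its actual interface:

```python
# Minimal policy-as-code sketch (illustrative; not Terraform Validator).
# Each planned resource is evaluated against security constraints
# before deployment, and any violation blocks the CI/CD pipeline.

def check_bucket(resource: dict) -> list:
    """Return a list of policy violations for a storage bucket config."""
    violations = []
    if resource.get("public_access_prevention") != "enforced":
        violations.append("bucket must enforce public access prevention")
    if not resource.get("uniform_bucket_level_access", False):
        violations.append("bucket must use uniform bucket-level access")
    return violations

plan = {"name": "dev-bucket", "public_access_prevention": "inherited"}
issues = check_bucket(plan)  # two violations: fail the pipeline run
```

      In a real pipeline the same idea runs against every resource in the plan, so a developer gets fast feedback at merge time rather than a rejected change request later.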

      Building preventive controls using GCP native capabilities 

      Google Cloud works to deliver the industry’s most trusted cloud, offering native platform and product capabilities that enable organization-wide preventive security controls. We collaborate with security teams to design foundational architecture and recommend security services to meet their requirements. The following Google Cloud services are commonly used to implement security guardrails for developers:

      • Organization Policy - Provides centralized, programmatic control over how the organization's resources are deployed. Security teams can select from a list of available constraints to restrict how a resource is configured, preventing a potential misconfiguration from occurring. For example, the organization policy constraint constraints/storage.publicAccessPrevention prevents a developer from publicly exposing a Cloud Storage bucket.

      • VPC Service Controls - Prevents unauthorized data movement by isolating GCP resources and restricting data flows with fine-grained rules. VPC Service Controls enable context-based perimeter security for API-based services. Developers working on a protected service within a VPC Service Controls perimeter are restricted to the rules defined by the administrator, helping to mitigate the risk of data exfiltration. For example, customers can configure VPC Service Controls to limit BigQuery access to a developer's specified location or device.

      • Cloud IAM - Enables granular access control to ensure developers only have access to specific Google Cloud resources. Security teams can apply the principle of least privilege, preventing overly permissive roles and reducing the overall attack surface of the platform.
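      As an illustration of the least-privilege guardrail, the sketch below flags IAM bindings that grant basic roles, which are generally too broad for developers. The helper name and policy structure are assumptions for illustration, though the role names are real GCP basic roles:

```python
# Hypothetical guardrail sketch: flag overly permissive basic roles
# in an IAM policy before it is applied. The binding structure here
# mirrors the shape of a GCP IAM policy's bindings list.

OVERLY_PERMISSIVE = {"roles/owner", "roles/editor"}

def flag_broad_bindings(bindings: list) -> list:
    """Return (role, member) pairs that violate least privilege."""
    return [(b["role"], m)
            for b in bindings
            for m in b.get("members", [])
            if b["role"] in OVERLY_PERMISSIVE]

policy = [
    {"role": "roles/editor",
     "members": ["user:dev@example.com"]},
    {"role": "roles/storage.objectViewer",
     "members": ["user:dev@example.com"]},
]
violations = flag_broad_bindings(policy)
# flags only the roles/editor binding; the narrow viewer role passes
```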

      These native GCP services are supported by Infrastructure as Code pipelines. To help ensure consistent protection for developers, preventive security services should be configured and deployed with the previously discussed managed IaC pipeline. Building a repeatable automated pattern for IaC deployment will simplify the process for developers and protect the environment with the defined security guardrails. 

      For more information on building a secure Google Cloud deployment, check out the Security Foundations Blueprint.

    • Building a scalable MLOps system with Vertex AI AutoML and Pipeline Fri, 12 Aug 2022 13:00:00 -0000

      When you build a Machine Learning (ML) product, consider at least two MLOps scenarios. First, the model is replaceable, as breakthrough algorithms are introduced in academia or industry. Second, the model itself has to evolve with the data in the changing world. 

      We can handle both scenarios with the services provided by Vertex AI. For example:

      • AutoML capability automatically identifies the best model based on your budget, data, and settings.

      • You can easily manage the dataset with Vertex Managed Datasets by creating a new dataset or adding data to an existing dataset.  

      • You can build an ML pipeline to automate a series of steps that start with importing a dataset and end with deploying a model using Vertex Pipelines.

      This blog post shows you how to build this system. You can find the full notebook for reproduction here. Many folks focus on the ML pipeline when it comes to MLOps, but there are more parts to building MLOps as a “system”. In this post, you will see how Google Cloud Storage (GCS) and Google Cloud Functions manage data and handle events in the MLOps system.


      Figure 1: Overall MLOps Architecture

      Figure 1 shows the overall architecture presented in this blog. We cover the components and their connection in the context of two common workflows of the MLOps system.


      Vertex AI is at the heart of this system, leveraging Vertex Managed Datasets, AutoML, Predictions, and Pipelines. We can create and manage a dataset as it grows using Vertex Managed Datasets. Vertex AutoML selects the best model without requiring you to know much about modeling. Vertex Predictions creates an endpoint (REST API) to which clients communicate.

      It is a simple, fully managed, yet fairly complete end-to-end MLOps workflow that moves from a dataset to a trained, deployed model. This workflow can be written programmatically with Vertex Pipelines, which outputs a specification for the ML pipeline that lets you re-run the pipeline whenever and wherever you want. You specify when and how to trigger the pipeline using Cloud Functions and Cloud Storage.

      Cloud Functions is a serverless way to deploy your code in Google Cloud. In this particular project, it triggers the pipeline by listening for changes at the specified Cloud Storage location. Specifically, when a new dataset is added (for example, a new span number is created), the pipeline is triggered to train on the dataset, and a new model is deployed.
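      A minimal sketch of the decision such a GCS-triggered function might make before launching the pipeline; the helper names and span-N/ path layout are assumptions, not the project's actual code:

```python
# Illustrative sketch (names and path layout assumed): the part of a
# GCS-triggered Cloud Function that decides whether an uploaded object
# belongs to a new SPAN folder and should trigger the Vertex Pipeline.
import re

def extract_span(object_name: str):
    """Return the span number from a path like 'span-2/train/img.png'."""
    match = re.match(r"span-(\d+)/", object_name)
    return int(match.group(1)) if match else None

def should_trigger(object_name: str, known_spans: set) -> bool:
    """Trigger only for objects landing in a not-yet-processed span."""
    span = extract_span(object_name)
    return span is not None and span not in known_spans

# A file in a new span-2 folder triggers the pipeline; a file in the
# already-processed span-1 folder does not.
should_trigger("span-2/train/cat.png", known_spans={1})
should_trigger("span-1/train/dog.png", known_spans={1})
```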


      In this MLOps system, you prepare the dataset with either Vertex Dataset’s built-in user interface (UI) or any external tool you prefer. You then upload the prepared dataset into the designated GCS bucket under a new folder named SPAN-NUMBER. Cloud Functions detects the change in the GCS bucket and triggers the Vertex Pipeline to run the jobs, from AutoML training to endpoint deployment.

      The Vertex Pipeline first checks whether there is an existing dataset created previously. If the dataset is new, the pipeline creates a new Vertex Dataset by importing the data from the GCS location and emits the corresponding artifact. Otherwise, it adds the additional data to the existing Vertex Dataset and emits an artifact.

      When the Vertex Pipeline recognizes the dataset as new, it trains a new AutoML model and deploys it by creating a new endpoint. If the dataset isn't new, it tries to retrieve the model ID from Vertex Model and determines whether a new AutoML model or an updated AutoML model is needed. The second branch checks whether an AutoML model has already been created, and creates a new model if it hasn't. When the model is trained, the corresponding component emits an artifact as well.

      Directory structure that reflects different distributions

      In this project, I have created two subsets of the CIFAR-10 dataset, SPAN-1 and SPAN-2. A more general version of this project can be found here, which shows how to build training and batch evaluation pipelines. The pipelines can be set up to cooperate so they can evaluate the currently deployed model and trigger the retraining process.

      ML Pipeline with Kubeflow Pipelines (KFP)

      We chose Kubeflow Pipelines to orchestrate the pipeline. There are a few things I would like to highlight. First, it’s good to know how to create branches with conditional statements in KFP. Second, you need to explore the AutoML API specification to fully leverage AutoML capabilities, such as training a model based on a previously trained one. Last, you need a way to emit artifacts for Vertex Dataset and Vertex Model that Vertex AI can recognize. Let’s go through these one by one.

      Branching strategy

      In this project, there are two main branches, plus two sub-branches inside the second main branch. The main branches split the pipeline based on whether there is an existing Vertex Dataset. The sub-branches apply within the second main branch, which is selected when there is an existing Vertex Dataset; they look up the list of models and decide whether to train an AutoML model from scratch or from a previously trained one.

      ML pipelines written in KFP can have conditions with a special syntax of kfp.dsl.Condition. For instance, we can define the branches as follows:

      from google_cloud_pipeline_components import aiplatform as gcc_aip
      import kfp

      # try to get Vertex Dataset ID
      dataset_op = get_dataset_id(...)

      with kfp.dsl.Condition(dataset_op.outputs['Output'] == 'None',
                             name="create dataset"):
          # Create Vertex Dataset, train AutoML from scratch, deploy model
          ...

      with kfp.dsl.Condition(dataset_op.outputs['Output'] != 'None',
                             name="update dataset"):
          # Update existing Vertex Dataset
          ...

          # try to get Vertex Model ID
          model_op = get_model_id(...)

          with kfp.dsl.Condition(model_op.outputs['Output'] == 'None',
                                 name='model not exist'):
              # Train AutoML from scratch, deploy model
              ...

          with kfp.dsl.Condition(model_op.outputs['Output'] != 'None',
                                 name='model exist'):
              # Train AutoML based on the previously trained one, deploy model
              ...

      get_dataset_id and get_model_id are custom KFP components used to determine whether an existing Vertex Dataset or Vertex Model exists, respectively. Both return 'None' if no existing resource is found, and the resource name otherwise. They also emit Vertex AI-aware artifacts; you will see what this means in the next section.

      Emit Vertex AI-aware artifacts

      Artifacts track the path of each experiment in the ML pipeline and display metadata in the Vertex Pipelines UI. When Vertex AI-aware artifacts are emitted in the pipeline, the Vertex Pipelines UI displays links to the corresponding internal services, such as Vertex Dataset, so that users can visit a web page for more information.

      So how can you write a custom component that generates Vertex AI-aware artifacts? The custom component should declare Output[Artifact] in its parameters, and then set the resourceName key of the metadata attribute to a specially formatted string.

      The following code example is the actual definition of get_dataset_id used in the previous code snippet:

      @component(
          packages_to_install=["google-cloud-aiplatform",
                               "google-cloud-pipeline-components"]
      )
      def get_dataset_id(project_id: str,
                         location: str,
                         dataset_name: str,
                         dataset_path: str,
                         dataset: Output[Artifact]) -> str:
          from google.cloud import aiplatform
          from google.cloud.aiplatform.datasets.image_dataset import ImageDataset
          from google_cloud_pipeline_components.types.artifact_types import VertexDataset

          aiplatform.init(project=project_id, location=location)

          datasets = aiplatform.ImageDataset.list(project=project_id,
                                                  location=location,
                                                  filter=f'display_name={dataset_name}')

          if len(datasets) > 0:
              resource_name = (f'projects/{project_id}/locations/{location}'
                               f'/datasets/{datasets[0].name}')
              dataset.metadata['resourceName'] = resource_name
              return resource_name
          else:
              return 'None'

      As you can see, the dataset is declared in the parameters as Output[Artifact]. Even though it appears as a parameter, it is emitted automatically; you just need to provide the necessary data as if it were a function variable.

      The component retrieves the list of Vertex Datasets by calling the aiplatform.ImageDataset.list API. If the list is empty, it simply returns 'None'. Otherwise, it returns the resource name of the Vertex Dataset it found, and at the same time sets dataset.metadata['resourceName'] to that resource name. The Vertex AI-aware resource name follows a special string format: 'projects/<project-id>/locations/<location>/<vertex-resource-type>/<resource-name>'.

      The <vertex-resource-type> can be anything that points to an internal Vertex AI service. For instance, if you want to specify that the artifact is a Vertex Model, you should replace <vertex-resource-type> with models. The <resource-name> is the unique ID of the resource, accessible via the name attribute of the resource found by the aiplatform API. The other custom component, get_model_id, is written in a very similar way.
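      A tiny hypothetical helper makes the format explicit (the function name is mine, not part of the project or the Vertex AI SDK):

```python
# Hypothetical helper for composing the Vertex AI-aware resourceName
# string described above.

def vertex_resource_name(project_id: str, location: str,
                         resource_type: str, resource_id: str) -> str:
    """Compose 'projects/<p>/locations/<l>/<type>/<id>'."""
    return (f"projects/{project_id}/locations/{location}/"
            f"{resource_type}/{resource_id}")

# e.g. for a VertexModel artifact:
name = vertex_resource_name("my-project", "us-central1",
                            "models", "1234567890")
# "projects/my-project/locations/us-central1/models/1234567890"
```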

      AutoML based on the previous model

      You sometimes want to train a new model on top of the previous best model. When that is possible, the new model will probably be much better than one trained from scratch, because it leverages previously learned knowledge.

      Luckily, Vertex AutoML comes with the ability to train a model using a previous model. AutoMLImageTrainingJobRunOp component lets you train a model by simply providing the base_model argument as follows:
      training_job_run_op = \
          gcc_aip.AutoMLImageTrainingJobRunOp(
              ...,  # other arguments elided
              base_model=model_op.outputs['model'],
          )

      When training a new AutoML model from scratch, base_model is left as None, its default value. However, you can set it to a VertexModel artifact, and the component will instead trigger an AutoML training job based on that model.

      One thing to be careful of is that VertexModel artifacts can't be constructed in the typical Python way. That means you can't create a VertexModel artifact instance by setting the ID found in the Vertex Model dashboard. The only way to create one is to set the metadata['resourceName'] parameter properly. The same rule applies to other Vertex AI-related artifacts, such as VertexDataset. You can see how a VertexDataset artifact is constructed properly to get an existing Vertex Dataset and import additional data into it in the full notebook of this project here.


      You can reproduce the same result from this project with the free $300 credit when you create a new GCP account.

      At the time of this blog post, Vertex Pipelines costs about $0.03/run, and the type of underlying VM for each pipeline component is e2-standard-4, which costs about $0.134/hour. Vertex AutoML training costs about $3.465/hour for image classification. GCS holds the actual data, which costs about $2.40/month for 100GiB capacity, and Vertex Dataset is free. 

      To simulate two different branches, the entire experiment took about one to two hours, and the total cost for this project is approximately $16.59. Please find more detailed pricing information about Vertex AI here.


      Many people underestimate the capability of AutoML, but it is a great alternative for app and service developers who have little ML background. Vertex AI is a great platform that provides AutoML as well as Pipelines features to automate the ML workflow. In this article, I have demonstrated how to set up and run a basic MLOps workflow, from data ingestion to training a model based on the previously achieved best one, to deploying the model to the Vertex AI platform. With this, we can let our ML model automatically adapt to changes in a new dataset. What's left for you to implement is a model-monitoring system to detect data/model drift; one example can be found here.

    • Snooze your alert policies in Cloud Monitoring Thu, 11 Aug 2022 16:00:00 -0000

      Does your development team want to snooze alerts during non-business hours? Or proactively prevent the creation of expected alerts during an upcoming maintenance window? Cloud Alerting in Google Cloud's operations suite now supports the ability to snooze alert policies for a given period of time. You create a Snooze by specifying alert policies and a time period. During this window, if an alert policy is violated, no incidents or notifications are created; when the window ends, alerting behavior resumes as normal.

      Your team can use this feature in a variety of ways. One example is to avoid being paged for non-production environments over the weekend. Another way is to plan for a known maintenance window or cutover period. You can also quiet the noise during a growing outage, among other approaches. 

      To create a Snooze, go to Monitoring > Alerting, find the new Snoozes table, and click Create Snooze. Provide the name of the Snooze and the time period, and select the desired alert policies. After you select the criteria, a table lists recent incidents that match them; events like those won't cause an alert while the Snooze is active.


      You will see a timeline visualization of all past, active, and upcoming Snoozes. If you’d like to adjust the duration, you can go back and edit the details. For more information, please see the documentation.


      In the future, we’ll expand this functionality to allow snoozing by labels. You’ll be able to temporarily silence alerts by resource, system, metric, and custom labels, which will allow you to snooze all alert policies in a specific environment, zone, or team. This functionality will also be supported in the API, allowing you to create Snoozes programmatically for regularly repeating events.

    • Accelerate your developer productivity with Query Library Thu, 11 Aug 2022 16:00:00 -0000

      Our goal in Cloud Logging is to help increase developer productivity by streamlining the troubleshooting process. The time spent writing and executing a query and then analyzing the errors can impact developer productivity. Whether you’re troubleshooting an issue or analyzing your logs, finding the right logs quickly is critical.

      That’s why we recently launched a Query Library and other new features to make querying your logs even easier. The Query Library in Cloud Logging makes it easier to find logs faster by using common queries.

      Build queries faster with our templates

      The new text search and drop-down features are designed to make querying something you can achieve with a few mouse clicks; they automatically generate the necessary Logging query language for you. The Query Library extends this simplicity with templates for common GCP queries.


      The Query Library is located in the query builder bar next to Suggested queries. To help you find the most relevant queries, you’ll notice the following details:

      • Query categories – Each query is broken down into categories that can be used to easily narrow down to relevant queries. 

      • Query occurrences – To help you pick queries that have the most useful results, sparklines are displayed for queries that have logs in your project. 

      • Query details – Each query has a description along with the Logging query itself

      • Run/Stream – Run the query or start streaming logs right from the library

      • Save – Save the query in your list of saved queries


      The road ahead

      We’re committed to making Logs Explorer the best place to troubleshoot your applications running on Google Cloud. Over the coming months, we have many more changes planned to make Logs Explorer both easier and more powerful for all users. If you haven’t already, get started with the Logs Explorer and join the discussion in our Cloud Operations page on the Google Cloud Community site.

    • Google Cloud and Apollo24|7: Building Clinical Decision Support System (CDSS) together Thu, 11 Aug 2022 16:00:00 -0000

      Clinical Decision Support System (CDSS) is an important technology for the healthcare industry that analyzes data to help healthcare professionals make decisions related to patient care. The market size for the global clinical decision support system appears poised for expansion, with one study predicting a compound annual growth rate (CAGR) of 10.4%, from 2022 to 2030, to $10.7 billion.

      For any health organization that wants to build a CDSS system, one key block is to locate and extract the medical entities that are present in the clinical notes, medical journals, discharge summaries, etc. Along with entity extraction, the other key components of the CDSS system are capturing the temporal relationships, subjects, and certainty assessments.

      At Google Cloud, we know how critical it is for the healthcare industry to build CDSS systems, so we worked with Apollo 24|7, the largest multi-channel digital healthcare platform in India, to build the key blocks of their CDSS solution. 

      We helped them to parse the discharge summaries and prescriptions to extract the medical entities. These entities can then be used to build a recommendation engine that would help doctors with the “Next Best Action” recommendation for medicines, lab tests, etc.

      Let’s take a sneak peek at Apollo 24|7’s entity extraction solutions, and the various Google AI technologies that were tested to form the technology stack. 

      Datasets Used

      To perform our experiments on entity extraction, we used two types of datasets. 

      1. i2b2 Dataset - i2b2 is an open-source clinical data warehousing and analytics research platform that provides annotated deidentified patient discharge summaries made available to the community for research purposes. This dataset was primarily used for training and validation of the models.

      2. Apollo 24|7’s Dataset - De-identified doctor’s notes from Apollo24|7 were used for testing. Doctors annotated them to label the entities and offset values. 

      Experimentation and choosing the right approach — Four models put to test

      For entity extraction, both Google Cloud products and open-source approaches were explored. Below are the details:

      1. Healthcare Natural Language API: This is a no-code approach that provides machine learning solutions for deriving insights from medical text. Using this, we parsed unstructured medical text and then generated a structured data representation of the medical knowledge entities stored in the data for downstream analysis and automation. The process includes:

      • Extract information about medical concepts like diseases, medications, medical devices, procedures, and their clinically relevant attributes;

      • Map medical concepts to standard medical vocabularies such as RxNorm, ICD-10, MeSH, and SNOMED CT (US users only);

      • Derive medical insights from text and integrate them with data analytics products in Google Cloud.

      The advantage of using this approach is that it not only extracts a wide range of entity types like MED_DOSE, MED_DURATION, LAB_UNIT, LAB_VALUE, etc, but also captures functional features such as temporal relationships, subjects, and certainty assessments, along with the confidence scores. Since it is available on Google Cloud, this offers long-term product support. It is also the only fully-managed NLP service among all the approaches tested and hence, it requires the least effort to implement and manage. 

      One thing to keep in mind is that because the Healthcare NL API offers pre-trained natural language models, it currently cannot be used to train custom entity extraction models on custom annotated medical text or to extract custom entities. That has to be done via AutoML Entity Extraction for Healthcare, another Google Cloud service for custom model development. Custom model development is important for adapting pre-trained models to new languages or region-specific natural language processing, such as medical terms whose use may be more prevalent in India than in other regions.

      2. Vertex AutoML Entity Extraction for Healthcare: This is a low-code approach that’s already available on Google Cloud. We used AutoML Entity Extraction to build and deploy custom machine learning models that analyzed documents, categorized them, and identified entities within them. This custom machine learning model was trained on the annotated dataset provided by the Apollo 24|7 team.

      The advantage of AutoML Entity Extraction is that it gives the option to train on a new dataset. However, one of the prerequisites to keep in mind is that it needs a little pre-processing to capture the input data in the required JSONL format. Since this is an AutoML model just for Entity Extraction, it does not extract relationships, certainty assessments, etc.

      3. BERT-based Models on Vertex AI: Vertex AI is Google Cloud’s fully managed unified AI platform to build, deploy, and scale ML models faster, with pre-trained and custom tooling. We experimented with multiple custom approaches based on pre-trained BERT-based models, which have shown state-of-the-art performance in many natural language tasks. To gain better contextual understanding of medical terms and procedures, these BERT-based approaches are explicitly trained on medical domain data. Our experiments were based on BioClinical BERT, BioLink BERT, Blue BERT trained on Pubmed dataset, and Blue BERT trained on Pubmed + MIMIC datasets.

      The major advantage of these BERT-based models is that they can be finetuned on any Entity Recognition task with minimal efforts. 

      However, since this is a custom approach, it requires some technical expertise. Additionally, it does not extract relationships, certainty assessments, etc. This is one of the main limitations of using BERT-based models. 

      4. ScispaCy on Vertex AI: We used Vertex AI to perform experiments based on ScispaCy, which is a Python package containing spaCy models for processing biomedical, scientific or clinical text. 

      Along with Entity Extraction, Scispacy on Vertex AI provides additional components like Abbreviation Detector, Entity Linking, etc. However, when compared to other models, it was less precise, with too many junk phrases, like “Admission Date,” captured as entities.

      “Exploring multiple approaches and understanding the pros and cons of each approach helped us to decide the one that would fit our business requirements,” said Abdussamad M, Engineering Lead at Apollo 24|7.

      Evaluation Strategy

      To match the parsed entities with the test data labels, we used extensive matching logic comprising the following four methods:

      1. Exact Match - Exact match captures entities where the model output and the entities in the test dataset match. Here, the offset values of the entities have also been considered. For example, the entity “gastrointestinal infection” that is present as-is in both the model output and the test label will be considered an “Exact Match.” 

      2. Match-Score Logic - We used a scoring logic for matching the entities. For each word in the test data labels, every word in the model output is matched along with the offset. A score is calculated between the entities and based on the threshold, it is considered as a match.

      3. Partial Match - In this matching logic, entities like “hypertension” and “hypertensive” are matched using fuzzy logic.

      4. UMLS Abbreviation Lookup - We also observed that the medical text had some abbreviations, like AP meaning abdominal pain. These were first expanded by doing a lookup on the respective UMLS (Unified Medical Language System) tables and then passed to the individual entity extraction models. 
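      The exact and partial matching ideas above can be sketched as follows; the 0.8 threshold and the use of Python's difflib are illustrative assumptions, not the project's actual implementation:

```python
# Sketch of exact and fuzzy partial matching (threshold and helper
# names are assumptions, not the project's actual code).
from difflib import SequenceMatcher

def exact_match(predicted: str, label: str) -> bool:
    """Case-insensitive exact match between entity strings."""
    return predicted.lower() == label.lower()

def partial_match(predicted: str, label: str,
                  threshold: float = 0.8) -> bool:
    """Fuzzy match, so 'hypertension' and 'hypertensive' count."""
    ratio = SequenceMatcher(None, predicted.lower(),
                            label.lower()).ratio()
    return ratio >= threshold

exact_match("Fever", "fever")                  # matches
partial_match("hypertension", "hypertensive")  # matches via fuzzy ratio
```

      A production version would also compare character offsets, as the Match-Score logic above describes.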

      Performance Metrics

      We used precision and recall metrics to compare the outcomes of different models/experiments. 

      Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. 

      The below example shows how to calculate these metrics for a given sample.

      Example sample: “Krish has fever, headache and feels uncomfortable”

      Expected Entities: [“fever”, “headache”]

      Model Output: [“fever”, “feels”, “uncomfortable”]
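      Working through the example in code (a generic precision/recall computation, not project-specific):

```python
# Precision and recall for the example above.

def precision_recall(expected, predicted):
    """Compute (precision, recall) over two entity lists."""
    tp = len(set(expected) & set(predicted))  # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(expected) if expected else 0.0
    return precision, recall

expected = ["fever", "headache"]
predicted = ["fever", "feels", "uncomfortable"]
p, r = precision_recall(expected, predicted)
# precision = 1/3: one of the three predicted entities is relevant
# recall    = 1/2: one of the two expected entities was retrieved
```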




      Experimentation Results

      The following table captures the results of the above experiments on Apollo24|7’s internal datasets.

      (Table: entity extraction results on Apollo 24|7's internal datasets)

      Finally, the Blue BERT model trained on the PubMed dataset had the best performance metrics, with an 81% improvement over Apollo 24|7's baseline model, with the Healthcare Natural Language API providing the context, relationships, and codes. This performance could be further improved by implementing an ensemble of these two models.

      “With the Blue BERT model giving the best performance for entity extraction on Vertex AI and the Healthcare NL API being able to extract the relationships, certainty assessments, etc., we finally decided to go with an ensemble of these two approaches,” Abdussamad added.

      Fast track end-to-end deployment with Google Cloud AI Services (AIS)

      Google AIS (Professional Services Organization) helped Apollo24|7 build the key blocks of its CDSS.

      The partnership between Google Cloud and Apollo 24|7 is just one of the latest examples of how we’re providing AI-powered solutions to solve complex problems and help organizations drive the outcomes they need. To learn more about Google Cloud’s AI services, visit our AI & ML Products page, and to learn more about Google Cloud solutions for healthcare, explore our Google Cloud Healthcare Data Engine page.


      We’d like to give special thanks to Nitin Aggarwal, Gopala Dhar and Kartik Chaudhary for their support and guidance throughout the project. We are also thankful to Manisha Yadav, Santosh Gadgei and Vasantha Kumar for implementing the GCP infrastructure. We are grateful to the Apollo team (Chaitanya Bharadwaj, Abdussamad GM, Lavish M, Dinesh Singamsetty, Anmol Singh and Prithwiraj) and our partner team from HCL/Wipro (Durga Tulluru and Praful Turanur) who partnered with us in delivering this successful project. Special thanks to the Cloud Healthcare NLP API team (Donny Cheung, Amirhossein Simjour, and Kalyan Pamarthy).

      Related Article

      HIMSS 2022: Improving health through data interoperability and natural language processing

      At HIMSS 2022, Google Cloud showcases how data interoperability and natural language processing can help improve health outcomes.


    Google has many products and the following is a list of its products: Android Auto, Android OS, Android TV, Calendar, Cardboard, Chrome, Chrome Enterprise, Chromebook, Chromecast, Connected Home, Contacts, Digital Wellbeing, Docs, Drive, Earth, Finance, Forms, Gboard, Gmail, Google Alerts, Google Analytics, Google Arts & Culture, Google Assistant, Google Authenticator, Google Chat, Google Classroom, Google Duo, Google Expeditions, Google Family Link, Google Fi, Google Files, Google Find My Device, Google Fit, Google Flights, Google Fonts, Google Groups, Google Home App, Google Input Tools, Google Lens, Google Meet, Google One, Google Pay, Google Photos, Google Play, Google Play Books, Google Play Games, Google Play Pass, Google Play Protect, Google Podcasts, Google Shopping, Google Street View, Google TV, Google Tasks, Hangouts, Keep, Maps, Measure, Messages, News, PhotoScan, Pixel, Pixel Buds, Pixelbook, Scholar, Search, Sheets, Sites, Slides, Snapseed, Stadia, Tilt Brush, Translate, Travel, Trusted Contacts, Voice, Waze, Wear OS by Google, YouTube, YouTube Kids, YouTube Music, YouTube TV, and YouTube VR

    Google News

    Think with Google

    Google AI Blog

    Android Developers Blog

    Google Developers Blog




    Computerworld » Google

    • The best buried Android 13 treasures Wed, 17 Aug 2022 03:00:00 -0700

      If you've got a Google Pixel phone, it's time to rejoice — for some shiny new software is on its way to you this week.

      Android 13, Google's latest and greatest Android version, officially made its landing on Monday and is in the midst of rolling out to current Pixel devices as we speak.

      That means if you've got a Pixel 4 or higher, it should be showing up in your sweaty person-palms any moment now. And that, in turn, means you've got some spectacular new Googley goodies to explore.


    • 10 out-of-sight Google Pixel settings worth surfacing Fri, 12 Aug 2022 02:45:00 -0700

      With some phones, the hardware itself is the primary point of appeal.

      Google's self-made Pixel devices take a decidedly different approach. Sure, the shells around the phones are as shiny and purty as any of 'em — but it's what's inside that really sets the Pixel apart.

      Plain and simple, Google's Android software is in a league of its own. And aside from the thoughtfully designed, platform-consistent interface and the lack of obnoxious and often over-the-top experience-harming additions so many other manufacturers love to lard into their Android environments, Pixels are packed with genuinely useful features that tap into Google's high-tech smarts and make your life easier in some small but significant ways.


    • In a hybrid workforce world, what happens to all that office space? Thu, 11 Aug 2022 04:03:00 -0700

      Offices are getting smaller — or at least companies that own or lease office space are now using less of it, according to the 2022 Office Space Report compiled by workplace management software maker Robin Powered.

      The company surveyed 247 business owners, facilities managers and those in charge of office space. The survey was aimed at getting a better idea of what companies plan to do with all their cubicles, meeting rooms and offices in the aftermath of workplace changes brought about by the COVID-19 pandemic, the move to remote and hybrid work, and the Great Resignation.


    • Companies move to drop college degree requirements for new hires, focus on skills Wed, 10 Aug 2022 03:04:00 -0700

      At Google, a four-year degree is not required for almost any role at the company — and a computer science degree isn't required for most software engineering or product manager positions. “Our focus is on demonstrated skills and experience, and this can come through degrees or it can come through relevant experience,” said Tom Dewaele, Google’s vice president of People Experience.

      Similarly, Bank of America has refocused its hiring efforts on a skills-based approach. “We recognize that prospective talent think they need a degree to work for us, but that is not the case,” said Christie Gragnani-Woods, a Bank of America Global Talent Acquisition executive. “We are dedicated to recruiting from a diverse talent pool to provide an equal opportunity for all to find careers in financial services, including those that don’t require a degree.”


    • 7 Gboard settings that'll supercharge your Android typing Wed, 10 Aug 2022 03:00:00 -0700

      If there's one place where saved seconds can seriously add up, it's in your smartphone's on-screen keyboard.

      This doesn't get nearly enough attention among average tech-totin' animals, but Android has an awesome advantage over that (cough, cough) other mobile platform when it comes to text input. All it takes is two minutes of trying to type text on an iDevice to see just how much of a good thing we've got goin' (and to make yourself want to gouge your eyes out with the nearest overpriced Apple accessory).

      And you know what? While we've got no shortage of commendable Android keyboard apps to choose from, Google's own Gboard keyboard really is the perfect example of how simple, effective, and expandable the Android typing experience can be. Gboard works well right out of the virtual box, and once you start poking around in the mustiest corners of its settings, you'll uncover some tucked-away treasures that can inject all sorts of seconds-saving sorcery into your Android input process.


    • Tech hiring enters the Big Freeze Thu, 04 Aug 2022 05:28:00 -0700

      As the global economic downturn continues to deepen, many technology companies are reacting to fears of an incoming recession by putting the brakes on hiring.

      While lowering payroll costs might seem like an easy way to reduce spending right now, the job landscape remains in a state of flux, with research showing that workers are just as pessimistic about the economic climate as their employers.

      As a result, 60% of US job seekers say they feel more urgency to find a job now, before market conditions change for the worse. This could leave companies that have decided to stop hiring with a talent drain they are unable to plug.


    • The awkward thing about Android 13 Thu, 04 Aug 2022 03:00:00 -0700

      Android 13 may be one of Google's strangest Android versions yet. And considering the company we're talking about here, my goodness, that's really saying something.

      Android 13 — currently in the final phase of its beta development and expected to be launched any moment now — is without a doubt one of the most shape-shifting software updates in Android's history. It'll completely change the way Android looks, feels, and acts and open the door to a whole new side of growth for the platform.


    • 7 Google Pixel settings you should change this second Sun, 31 Jul 2022 07:56:00 -0700

      Part of the Pixel's primary appeal is the phone's phenomenal software. All Android experiences are not created equal, as anyone who's spent seven seconds with an out-of-the-box Samsung setup can tell you, and Google's clean and simple approach to Android is a huge piece of what makes a Pixel so pleasant to use.

      Still, while a Pixel may be perfectly peachy from the moment you power it on, Google's smartphone software is full of hidden features and advanced options that can make your experience even more exceptional.

      And whether you're setting up a shiny new Pixel 6a right now or cradling an older Pixel model in your suspiciously sticky paw, taking the time to think through some of your phone's most easily overlooked settings can take your Pixel adventure to a whole new level.


    • The story behind Google’s in-house desktop Linux Thu, 28 Jul 2022 03:00:00 -0700

      If you look around Google's Mountain View, CA offices, you'll see Windows machines, Chromebooks, Macs — and gLinux desktops. G what, you ask? Well, in addition to relying on Linux for its servers, Google has its very own Linux desktop distribution.

      You can't get it — darn it! — but for more than a decade, Google has been baking and eating its own homemade Linux desktop distribution. The first version was Goobuntu. (As you'd guess from the name, it was based on Ubuntu.)


    • 3 little-known location tricks for Google Assistant on Android Fri, 22 Jul 2022 02:45:00 -0700

      I don't know if you've noticed, but our favorite virtual helper is losing some of its location-sensing smarts.

      Yes, oh yes: For reasons unknown, Google's in the midst of quietly pulling back Assistant's ability to handle reminders based on your physical location — y'know, the "remind me to do something when I get to this place" sorts of commands that have long been possible on Android. It's part of a broader realignment of Google features that seems to be placing more emphasis on the company's Tasks service and its integration with other Google apps.


    • How to stay smart about Android app permissions Wed, 20 Jul 2022 03:00:00 -0700
    • Old PCs can find new life with Google ChromeOS Flex Tue, 19 Jul 2022 04:30:00 -0700

      I'm the kind of person who runs computing hardware until it's dead, dead, dead.

      I mean, I still have a 1984 KayPro luggable that boots. I keep that one because of nostalgia, but I also have more than a dozen Macs and PCs that first saw the light of day in the late 2000s and early 2010s, still doing valuable work.

      How? I run Linux on them.

      While Linux isn't for everyone, with Google's release of ChromeOS Flex, it's now easy to turn an otherwise obsolete computer into a useful Chromebook.

      This can be really useful.

      For example, I know many businesses have rooms filled with old, dusty PCs. ChromeOS Flex can give them new life. That, in turn, enables you to cut down on computer costs drastically. For example, a new Dell OptiPlex 3000 Small Form Factor business PC will cost you about $1,000. But a Dell Optiplex 3020 from 2013 that ran Windows 7 can be revitalized with ChromeOS Flex for nothing.


    • Cookie conundrum: The loss of third-party trackers could diminish your privacy Fri, 01 Jul 2022 04:30:00 -0700

      Third-party cookies may be going away in 18 months, but will that achieve Google’s stated intentions of creating a “more privacy-first web?”

      Chris Matty doesn’t think so.

      In fact, he believes the death of the invasive little trackers could paradoxically make our online identities less secure.

      And he believes the motivations of Apple and Google, which have advocated for an end to this form of passive surveillance, are motivated by goals that are less altruistic than they may seem.

      Matty is the founder and chief revenue officer of Versium, a business-to-business omnichannel marketing firm that profiles online visitors without using cookies. Instead, it harvests data from various third-party sources in a process that complies with the California Consumer Privacy Act and then uses deterministic algorithms to make what is essentially an educated guess about the identity of visitors.


    • 10 next-level tricks for your Pixel Clock app Fri, 01 Jul 2022 02:45:00 -0700

      Google's Pixel phones are practically overflowing with useful stuff. And some of the best options of all are things you've probably never even noticed.

      That's true for the Pixel's core Android software as well as its Google-made apps — everything from the excellent calling-related features in the Pixel Phone app to the expanded array of advanced options in Google Assistant on Pixels.

      It's even true in apps that seem so basic and utilitarian, you'd never think they had anything interesting lurking in their dusty virtual corners.


    • FCC commissioner wants Apple, Google to remove TikTok from App Stores Wed, 29 Jun 2022 05:34:00 -0700

      FCC Commissioner Brendan Carr has written to Apple and Google to request that both companies remove the incredibly popular TikTok app from their stores, citing a threat to national security.

      Is your data going TikTok?

      Carr warns the app collects huge quantities of data and cited a recent report that claimed the company has accessed sensitive data collected from Americans. He argues that TikTok's "pattern of conduct and misrepresentations regarding the unfettered access that persons in Beijing have to sensitive U.S. data...puts it out of compliance" with App Store security and privacy policies.


    • Italian spyware firm is hacking into iOS and Android devices, Google says Fri, 24 Jun 2022 08:51:00 -0700

      Google's Threat Analysis Group (TAG) has identified Italian vendor RCS Lab as a spyware offender, developing tools that are being used to exploit zero-day vulnerabilities to effect attacks on iOS and Android mobile users in Italy and Kazakhstan.

      According to a Google blog post on Thursday, RCS Lab uses a combination of tactics, including atypical drive-by downloads, as initial infection vectors. The company has developed tools to spy on the private data of the targeted devices, the post said.


    • 14 ways Google Lens can save you time on Android Wed, 22 Jun 2022 03:00:00 -0700

      Psst: Come close. Your Android phone has a little-known superpower — a futuristic system for bridging the physical world around you and the digital universe on your device. It's one of Google's best-kept secrets. And it can save you tons of time and effort.

      It's a little somethin' called Google Lens, and it's been lurking around on Android and quietly getting more and more capable for years. Google doesn't make a big deal about it, weirdly enough, and you really have to go out of your way to even realize it exists. But once you uncover it, well, you'll feel like you have a magic wand in your pocket.

      At its core, Google Lens is best described as a search engine for the real world. It uses artificial intelligence to identify text and objects both within images and in a live view from your phone's camera, and it then lets you learn about and interact with those elements in all sorts of interesting ways. But while Lens's ability to, say, identify a flower, look up a book, or give you info about a landmark is certainly impressive, it's the system's more mundane-seeming productivity powers that are far more likely to find a place in your day-to-day life.


    • The killer calendar app your Chromebook's been missing Wed, 15 Jun 2022 03:00:00 -0700

      Let me just go on the record as saying: The Google Calendar website is fine.

      And fine really is the most appropriate word here. Google's default desktop Calendar interface is perfectly functional, and it gets the job done.

      It's good enough, in fact — until you experience a truly exceptional Chrome OS calendar alternative and realize how much more efficient, effective, and generally enjoyable your Chromebook-based agenda juggling could be.

      I've been raving endlessly about my favorite Google-connecting desktop calendar app of the moment, the recently-acquired Cron, and lemme tell ya: Phenomenal doesn't even begin to describe it.


    • 6 custom Android shortcuts that'll supercharge your efficiency Wed, 08 Jun 2022 03:00:00 -0700

      Quick: When's the last time you really, truly thought about your Android phone's Quick Settings setup?

      If you're like most mammals I know, the answer probably ranges somewhere between "eons ago" and "never." And it's no surprise: Android's Quick Settings area is one of those things that's just sort of there. It's convenient, sure, but it's all too easy to forget that it's completely customizable — and expandable, too. It can turn into an invaluable home for your own custom Android shortcuts, if you take the time to build it up accordingly.

      The challenge, aside from simply remembering that you can expand that area of your phone's interface, is knowing where to begin. Google doesn't exactly have any great way of tracking down and identifying apps that offer Quick Settings additions, and even when you have an app with a cool Quick Settings option on your phone, you might not even realize it's there.


    • 6 secret shortcuts in Chrome on Android Wed, 01 Jun 2022 03:00:00 -0700

      Goodness gracious, I sure do love saving seconds. And if there's one area where wasted moments are just begging to be reclaimed, it's within the shiny Chrome browser on your favorite Android phone.

      Google's Android Chrome app is an absolute gold mine when it comes to hidden shortcuts and underappreciated time-savers. And despite the fact that we went over a ton of top-notch time-savers for the Chrome Android environment a handful of months back, I kept thinking to myself: "Gee wilikers, Mr. Wigglesby, there's gotta be more."


