Home

Google LLC is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, a search engine, cloud computing, software, and hardware. Google was founded in September 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California.

Google began in January 1996 as a research project by Page and Brin. The project initially involved an unofficial "third founder", Scott Hassan, the original lead programmer who wrote much of the code for the original Google Search engine, but he left before Google was officially founded as a company.


Google Blog

  • AG Paxton’s false claims still don’t add up Fri, 21 Jan 2022 17:00:00 +0000

    Today we’re filing a motion asking the court to dismiss Texas Attorney General Ken Paxton’s antitrust lawsuit over our advertising technology (“ad tech”) business.

    This lawsuit has now been rewritten three times. With each version, AG Paxton follows the same pattern: make inaccurate and inflammatory allegations, publicize them widely, and repeat. This playbook may generate attention, but it doesn’t make for a credible antitrust lawsuit.

    AG Paxton's allegations are more heat than light, and we don't believe they meet the legal standard to send this case to trial. The complaint misrepresents our business, products and motives, and we are moving to dismiss it based on its failure to offer plausible antitrust claims.

    Why this lawsuit misses the law and the facts

    At its heart, AG Paxton is asking the court to force us to share user data and design our products in a way that helps our competitors rather than our customers or consumers. But American antitrust law doesn’t require companies to give information to, or design products specifically for, rivals. This lawsuit also fails to acknowledge that ad tech is a highly dynamic industry with countless competitors, where competition has reduced fees, encouraged new entrants, increased investment and expanded options for advertisers and publishers alike.

    Correcting AG Paxton’s false and misleading allegations

    AG Paxton overlooks or misstates a long list of clear facts. We want to publicly and unequivocally refute the more egregious allegations:

    • We don’t force “tying”: A central allegation in AG Paxton’s lawsuit is that publishers are forced to use our ad server in order to access our ad exchange. This allegation is simply wrong, and AG Paxton offers no evidence to prove otherwise. If a publisher wants to use our ad exchange with a different ad server, they are free to do so.
    • Header bidding is thriving: A core claim is that we prevented rivals from using a technology, header bidding, through our Open Bidding program. But again, the facts don’t support that. Since we launched Open Bidding, header bidding’s popularity has continued to grow. Recent surveys show a vast majority of publishers currently use header bidding. We simply haven’t held header bidding back.
    • Our auctions are fair: The complaint uses deliberately inflammatory rhetoric to accuse us of a litany of wrongdoings: “misleading” publishers, “rigging” auctions through special data access, running “third price auctions,” “pocketing the difference.” But quite simply, we have – provably – done none of these things. AG Paxton is distorting various optimizations that we have created to improve publisher yields and returns for advertisers. To be clear, contrary to his claims, these optimizations did not and do not result in Google “pocketing” additional revenue share and do not make auctions unfair. And our auction was always a second price auction until 2019, when it became a first price auction (the sketch after this list illustrates the difference).
    • Out-of-date claims: More broadly, much of AG Paxton's lawsuit is based on outdated information that bears no relation to our current products or business in this dynamic industry (and in any event never amounted to a violation of antitrust laws).
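
    For readers unfamiliar with the terminology: in a second price auction the highest bidder wins but pays the runner-up’s bid, while in a first price auction the winner pays what they bid. A minimal sketch of the difference, with invented bidders and bids (illustrative only, not Google’s auction code):

    ```python
    # Illustrate first- vs. second-price auctions with invented bids.
    def run_auction(bids: dict[str, float], second_price: bool) -> tuple[str, float]:
        """Return (winner, price paid) for a bidder -> bid mapping."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner, top_bid = ranked[0]
        # Second price: the winner pays the runner-up's bid, not their own.
        price = ranked[1][1] if second_price and len(ranked) > 1 else top_bid
        return winner, price

    bids = {"exchange_a": 2.40, "exchange_b": 2.10, "exchange_c": 1.75}
    print(run_auction(bids, second_price=True))   # ('exchange_a', 2.1)
    print(run_auction(bids, second_price=False))  # ('exchange_a', 2.4)
    ```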

    Facebook Audience Network’s participation in Open Bidding

    The allegation that has generated the most attention is that we somehow “colluded” with Facebook Audience Network (FAN) through our Open Bidding agreement. That’s simply not true.

    To set the record straight, we are today including the full text of our agreement with FAN in our motion to the court. Here are some facts that contradict AG Paxton's claims:

    • This is far from a secret deal: We announced FAN’s participation as one of over 25 partners in our Open Bidding program, all of whom have signed their own agreements to participate.
    • This is a procompetitive agreement: FAN’s participation benefits advertisers because it gives them additional ways to reach their desired audiences. And it benefits publishers because it introduces additional bidders to compete for their ad space, earning them higher returns. In fact, if FAN weren’t a part of Open Bidding, AG Paxton might have claimed we were preventing a rival from accessing our products and depriving publishers of additional revenue.
    • FAN’s involvement is not exclusive: The agreement doesn’t prevent FAN from participating in header bidding or other competing auctions. In fact, FAN participates in several similar auctions on rival platforms. The agreement also doesn’t prevent FAN from building a competing product. Our agreement explicitly states that FAN’s participation is not exclusive (and nowhere in our agreement is header bidding even mentioned). And the entire Open Bidding program (of which Facebook is one of 25 participants) accounts for a small fraction of the display ads we place.
    • We do not manipulate the auction: Finally, this agreement does not provide FAN with an advantage in the Open Bidding auction. FAN competes in the auction just like other bidders: FAN must make the highest bid to win a given impression, period. If another eligible network or exchange bids higher, they win the auction. We don’t allocate ad space to FAN, they don’t receive speed advantages, and we don’t guarantee that they win any auctions.

    Our advertising technology helps fund digital content that benefits everyone, and it supports thousands of businesses, from small advertisers to major publishers. Our work in this space is designed to balance and support the needs of publishers, advertisers and consumers.

    We’re confident that this case is wrong on the facts and the law, and should be dismissed. However, if it does move forward, we’ll continue to vigorously defend ourselves.

  • XWP helps publishers get creative using Web Stories Thu, 20 Jan 2022 19:00:00 +0000

    Editor’s note: Today’s guest post is from Amit Sion, Chief Revenue Officer at XWP.

    Content creation is growing at a faster pace than ever before. Digital media has made it easier for niche publishers to reach global audiences. And publishers are now competing for readers’ attention and time not just with each other, but with social media platforms. With seemingly limitless ways to get news, entertainment and other information, publishers need to find ways to stand out.

    Headquartered in Melbourne, Australia, our web agency XWP works with technology, media and publishing companies. Part of what we do is help publishers engage readers and thrive in today’s highly competitive and often mobile-first marketplace. One way we help them do that is through Web Stories.

    A carousel of five Web Stories is featured with topics like tech, music and film.

    The Australian features their top Web Stories in a carousel on their homepage.

    We recently began working with News Corp Australia to use Web Stories across their family of publications. For example, Australia's most prominent newspaper, The Australian, just added a Web Stories carousel to their homepage under the “Visual Stories” heading. They are using Web Stories for a variety of sections, including news, travel, lifestyle, arts and entertainment.

    We are also working with News Corp brands in the U.S., like The Wall Street Journal, and hope to bring Web Stories to even more News Corp publications. In each project, we learn something new and try to share that experience globally.

    A man stands over a bar with three friends laughing and drinking beer. The text on the image reads: “Bored during COVID, Rich Joyce, left, decided to put a television in his garage for a no-frills hang-out spot. Before he knew it, he had spent about $5,000 to convert the garage into a pub, with a 4-foot wooden bar, a pinball machine and a sign dubbing it ‘Joycee’s Bar & Grill.’”

    The “Home garages getting pandemic makeovers” Web Story in The Wall Street Journal shares many garage renovations, including this transformation into a pub.

    "News Corp Australia is producing more Web Stories a week than any other publisher in the world,” says Rod Savage, Partnership Editor of News Corp Australia. ”We could not output such volumes of quality content without a quality publishing system and XWP's plug-in has proven to be robust and intuitive. We're looking forward to continuing to build a mutually beneficial relationship with the common goal of making Web Stories a stunning user experience."

    On the Google app on Android and iOS, News Corp’s Web Stories appear on Discover (currently available in the U.S., India and Brazil). This is a useful tool for reaching new audiences, and our customers are seeing positive results in their web traffic.

    We’re also helping smaller, independent publishers use Web Stories to engage their audiences. For example, a COWGIRL Magazine Web Story promoted a documentary about Wyoming rodeo athlete Amberley Snyder, who rebuilt her life after losing the use of her legs in an automobile accident.

    Amberley is sitting in a wheelchair and wearing a black blouse, blue jeans and a cowboy hat. She is putting a brown saddle on a horse.

    COWGIRL Magazine used the Web Stories format to share how rodeo athlete Amberley Snyder began riding horses again after a car crash that left her paralyzed below the waist.

    “It's a different way of telling a story online, unlike anything that anybody's doing out there,” says COWGIRL Magazine Founder and CEO Ken Amorosano. “[Social media stories are] rapid fire…but they’re not really telling a story. A blog post is telling a story, but it's out of sequence, as a photo doesn’t necessarily link with a paragraph. With Web Stories, every word with that image, with that video, matters. And it matters to the actual flow of the story. It has a beginning, middle and an end. And it's very, very, very powerful.” Check out my interview with Ken to learn more about their experience using Web Stories.

    Over the years, we’ve learned a lot about how to develop, deploy and enhance Web Stories for publishers. We’ve found that you can't just take an article and break it up into pages with text — it has to be more engaging. Web Stories offer the ability to add video, sound and images, and publishers need to find the right balance of using multiple media to tell their stories. When we start with publishers, the first thing we do is look at some existing stories. Then we encourage them to think about how to transform them into immersive Web Stories.

    We can’t wait to see where Web Stories take XWP and our publishers next. That includes working with Google to develop Web Stories for WordPress, and helping even more of our customers experiment with Web Stories to grow their audiences and create new reader experiences.

  • From Lagos to London, this marketer is making an impact Thu, 20 Jan 2022 17:00:00 +0000

    Welcome to the latest edition of “My Path to Google,” where we talk to Googlers, interns and alumni about how they got to Google, what their roles are like and even some tips on how to prepare for interviews.

    Today’s post features Oiza Sadiq, an Associate Product Marketing Manager based in Lagos, Nigeria (soon to be London) who seeks ways to make real-life impact through her work.

    What do you do at Google?
    I’m an Associate Product Marketing Manager (APMM) at Google. The APMM program is a two-year rotational development program for early-career digital marketers. During our rotations, we work on different teams across Google Marketing to get experience and build our skills. In my current role as a Growth Strategist on the Growth Lab team, I work with product marketers to develop strategies and campaigns to help people better understand how to use Google products.

    What have been the driving forces behind your career?

    I’ve always been passionate about helping individuals and businesses grow. I get fulfillment from seeing people impacted by either the knowledge I share or the work I do — through creating campaigns, supporting product marketers, launching new features or learning more about our users. And I know that I can’t give what I don’t have, which is why I’m so driven to find inspiration and success myself.

    Oiza, wearing a Google t-shirt, smiles and holds up the two-finger “peace” sign in front of the Google logo.

    Oiza in our Lagos, Nigeria office

    How would you describe your path to Google?

    When I got to university, I learned about a group of students — the Google Student Ambassadors (GSA) — who shared resources and trained other students on Google products. I was drawn to how helpful and knowledgeable they were, so I joined the program in my second year.

    After building my skills as a Google Student Ambassador, I landed my first job after university as a project and campaign manager at a digital agency. I eventually reached out to a Googler, who led the GSA program at the time, and told her I wanted to take on more challenging projects and someday become a Googler like her. She shared that there was an open contract role at Google for a Strategic Partner Manager, who would help establish partnerships to provide public Wi-Fi in Nigeria. She encouraged me to apply and put my best foot forward.

    So I did, interviewed and got the role. After 16 months in that position, I transferred to the APMM program — and now, here I am.

    What surprised you about the interview process?

    I typically dread interviews, because it feels like you are in a hot seat trying to prove and convince people of your worth. So when I spoke with my Google interviewers, I was surprised that it felt like any other chat. Everyone was friendly and engaging, which really helped me be myself.

    Oiza, with her arms crossed and wearing black glasses and an orange top, smiles at the camera for a headshot image.

    What’s next for you at Google?
    As part of my second rotation with the APMM program, I’m moving to London to join my new team. As a Growth Specialist, I’ll look after markets like Northern Europe, Central and Eastern Europe — and my home, Sub-Saharan Africa (I’m from Kogi State, Nigeria, and started in Google’s Lagos office).

    And what excites you outside of your role?

    Outside of my role, I love working with secondary school students and giving career talks and digital skills training. I also do voice-overs for events, including speaker introductions and program announcements.

    Any tips for anyone hoping to join Google?

    Be your authentic self, put your best foot forward and apply for that role!

  • Your 2022 guide to Google Ad Manager Thu, 20 Jan 2022 13:00:00 +0000

    While 2021 was far from a return to normal, publishers found ways to adapt, innovate and thrive in a rapidly changing environment. To help you keep your business on track in 2022, we're recapping some of last year's biggest tips and resources from Google Ad Manager.

    • Prepare for a privacy-first future
    • Use automation to do more with less
    • Invest in advanced TV video streaming
    • Build a retail media business
    • Learn from other publishers

  • Surfacing women in science with the Smithsonian Wed, 19 Jan 2022 14:00:00 +0000

    Women have always been at the forefront of science. From Ada Lovelace designing the first computer programs, to Rosalind Franklin decoding the structure of DNA, to Katherine Johnson figuring out the physics for mankind to reach the moon, the history of science has been driven by the contributions of women. However, they have often not received proper credit or acknowledgement for their essential work.

    This is why today we are thrilled to announce a new phase in the long-term collaboration between the Smithsonian and Google Arts & Culture. Together, we’ve developed new machine learning tools for curators at the Smithsonian American Women’s History Initiative to use as we dive into the institution’s archives to help uncover and highlight the many roles women have played in science over more than 174 years of history.

    Through this first-of-its-kind collaboration with Google Arts & Culture, our lead partner on the Smithsonian Open Access initiative, it is now easier than ever to surface the work of women in Smithsonian history. This project builds on Google Arts & Culture’s previous work, which made over 2.8 million 2D and 3D images from the museum collections available to the public for the very first time in 2020.

    Powered by machine learning, these new tools enable three types of research in the Smithsonian’s archives: comparing records across history by connecting different “nodes” in the metadata, identifying women’s names even when they are not explicit in a record (for example, when a woman is listed under her husband’s name), and clustering image records by visual similarity to make comparison easier.
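
    Neither Google nor the Smithsonian has published these tools, but the second and third techniques can be sketched in miniature. Here is a minimal, hypothetical Python illustration; the records, field names and threshold are invented, and the image embeddings are assumed to come from some pretrained vision model:

    ```python
    # Hypothetical sketch of two of the techniques described above; the
    # actual Smithsonian/Google tools are not public. Needs numpy, scikit-learn.
    import re
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    records = [
        {"id": "r1", "creator": "Mrs. Chapman Grant", "embedding": np.random.rand(512)},
        {"id": "r2", "creator": "Mary Jane Rathbun",  "embedding": np.random.rand(512)},
        {"id": "r3", "creator": "Mrs. Robert Wilson", "embedding": np.random.rand(512)},
    ]

    # 1) Flag records where a woman appears only under her husband's name
    #    ("Mrs. <his name>") so a curator can research her own identity.
    married_form = re.compile(r"^Mrs\.\s+[A-Z]\w+(?:\s+[A-Z]\w+)+$")
    flagged = [r["id"] for r in records if married_form.match(r["creator"])]
    print("needs identity research:", flagged)  # ['r1', 'r3']

    # 2) Cluster image records by visual similarity without fixing the number
    #    of clusters in advance; the distance threshold here is arbitrary.
    X = np.stack([r["embedding"] for r in records])
    labels = AgglomerativeClustering(n_clusters=None, distance_threshold=10.0).fit_predict(X)
    print("visual clusters:", dict(zip([r["id"] for r in records], labels)))
    ```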

    Results are already promising. For example, by analyzing collections relating to Mary Jane Rathbun, likely the first woman curator at the Smithsonian, we found taxonomy cards in Smithsonian records that detail a collecting trip she took with Serena Katherine “Violet” Dandridge, a scientific illustrator who worked alongside Rathbun in the Department of Marine Invertebrates in 1911. This gave us a better understanding of early collaborations between women. In fact, the taxonomy cards reveal that a third colleague, Dr. Harriet Richardson Searle, identified some of the specimens that Rathbun and Dandridge brought back to the Smithsonian. Though this experiment is only just beginning, it is clear that technology can play a key role in facilitating further research and helping to recover the stories of women in science.

  • It’s time for a new EU-US data transfer framework Wed, 19 Jan 2022 11:00:00 +0000

    If you rely on an open, global internet, you’ll want the European Union and the U.S. government to agree soon on a new data framework to keep the services you use up and running. People increasingly rely on data flows for everything from online shopping, travel, and shipping, to office collaboration, customer management, and security operations. The ability to share information underpins global economies and powers a range of services like high-value manufacturing, media, and information services. And over the next decade, these services will contribute hundreds of billions of euros to Europe’s economy alone.

    But those data flows, that convenience, and those economic benefits are more and more at risk. Last week, Austria’s data protection authority ruled that a local web publisher’s implementation of Google Analytics did not provide an adequate level of protection, on the grounds that U.S. national security agencies have a theoretical ability to access user data. But Google has offered Analytics-related services to global businesses for more than 15 years and in all that time has never once received the type of demand the DPA speculated about. And we don't expect to receive one because such a demand would be unlikely to fall within the narrow scope of the relevant law.

    The European Court of Justice’s July 2020 ruling did not impose an inflexible standard under which the mere possibility of exposure of data to another government required stopping the global movement of data. We are convinced that the extensive supplementary measures we offer to our customers ensure the practical and effective protection of data to any reasonable standard.

    While this decision directly affects only one particular publisher and its specific circumstances, it may portend broader challenges. If a theoretical risk of data access were enough to block data flows, it would pose a risk for the many publishers and small businesses that use the web, and highlight the lack of legal stability for international data flows facing the entire European and American business ecosystem.

    “In 15 years of offering Analytics services, Google has never received the type of demand...speculated about.” – Kent Walker

    Businesses in both Europe and the U.S. are looking to the European Commission and the U.S. Department of Commerce to quickly finalize a successor agreement to the Privacy Shield that will resolve these issues. Both companies and civil society groups have supported evidence-based reforms. The stakes are too high — and international trade between Europe and the U.S. too important to the livelihoods of millions of people — to fail at finding a prompt solution to this imminent problem.

    A durable framework — one that provides stability for companies offering valuable services in Europe — will help everyone, at a critical moment for our economies. A new framework will bolster the transatlantic relationship, ensure the stability of transatlantic commerce, help businesses of all sizes to participate in the global digital economy, and avoid potentially serious disruptions of supply chains and transatlantic trade. And it will assure continued protection of people’s right to privacy on both sides of the Atlantic.

    We strongly support an accord, and have for many years supported reasonable rules governing government access to user data. We have long advocated for government transparency, lawful processes, and surveillance reform. We were the first major company to create a Transparency Report on government requests for user data, were founding members of the Global Network Initiative and the Reform Government Surveillance coalition, and support the OECD’s workstream on government access to data. At this juncture, we urge both governments to take a flexible and aligned approach to resolving this important issue.

    As the governments finalize an agreement, we remain committed to upholding the highest standards of data protection in all our products, and are focused on meeting the needs of our customers as we wait for a revised agreement. But we urge quick action to restore a practical framework that both protects privacy and promotes prosperity.

  • The harmful consequences of Congress’s anti-tech bills Tue, 18 Jan 2022 20:00:00 +0000

    Update: Ahead of the committee vote on the bill, members of the Senate Judiciary Committee have circulated an amendment to the Senate bill. Below is our statement from Kent Walker:

    These changes concede every concern that has been raised about the bill — and solve none of them. For example, the amendment acknowledges the real security flaws in the bill by saying that platforms won’t be forced to share user data with companies on the U.S. sanctions list. But it says nothing about provisions that could require sharing data with countless other bad actors and foreign companies. The bill still covers leading American companies, while giving a free pass to foreign companies. It still includes all the provisions that hamper our ability to offer security by default on our platforms, exposing people to phishing attacks, malware and spammy content. And it still includes the provisions that could prevent us from providing consumers and businesses useful, free services. In fact, the amendment seems to punish free services in favor of services consumers have to pay for, as it seems to exempt "fee for subscription services" (like Microsoft’s subscription-based software). This raises its own set of troubling issues, would hurt consumers who benefit from free services, and doesn’t address the bill’s real problems.


    Every day, millions of Americans use online services like Google Search, Maps and Gmail to find new information and get things done. Research shows these free services provide thousands of dollars a year in value to the average American, and polls show that 90% of Americans like our products and services.

    However, legislation being debated in the House and Senate could break these and other popular online services, making them less helpful and less secure, and damaging American competitiveness. We’re deeply concerned about these unintended consequences.

    Antitrust law is about ensuring that companies are competing hard to build their best products for consumers. But the vague and sweeping provisions of these bills would break popular products that help consumers and small businesses, only to benefit a handful of companies who brought their pleas to Washington.

    Some specifics:

    Harming U.S. technological leadership

    These bills would impose one set of rules on American companies while giving a pass to foreign companies. And they would give the Federal Trade Commission and other government agencies unprecedented power over the design of consumer products. All of this would be a dramatic reversal of the approach that has made the U.S. a global technology leader, and risks ceding America’s technology leadership and threatening our national security, as bipartisan national security experts have warned:

    • Americans might get worse, less relevant, and less helpful versions of products like Google Search and Maps (see below for some examples).
    • An “innovation by permission” requirement could force American technology companies to get approval from government bureaucrats before launching new features or even fixing problems, while foreign companies would be free to innovate. Foreign companies could also routinely access American technology as well as Americans' data.
    • Handicapping America’s technology leaders would threaten our leading sources of research and development spending — just as bipartisan voices in Congress are recognizing the need to increase American R&D investment to stay competitive in the global race for AI, quantum, and other advanced technologies.
    • That’s why national security experts from both parties have aligned in warning that current anti-tech bills could threaten America’s national security.

    Degrading security and privacy

    Google is able to protect billions of people around the world from cyberattacks because we bake security and privacy protections into our services. Every day, Gmail automatically blocks more than 100 million phishing attempts and Google Play Protect runs security scans on 100 billion installed apps around the world.

    These bills could prevent us from securing our products by default, and would introduce new privacy risks for you. For instance:

    • The bills could hamper our ability to integrate automated security features if other companies offer similar features. For example, we might be prevented from automatically including our SafeBrowsing service and spam filters in Chrome and Gmail to block pop-ups, viruses, scams and malware.
    • Breaking apart the connections between Google tools could limit our ability to detect and protect you against security risks that use security signals across our products.
    • These bills may compel us to share the sensitive data you store with us with unknown companies in ways that could compromise your privacy.
    • And when you use Google Search or Google Play, we might have to give equal prominence to a raft of spammy and low-quality services.

    Breaking features that help consumers and small businesses

    When you come to Google Search, you want to get the most helpful results. But these bills could prohibit us from giving you integrated, high-quality results — even when you prefer them — just because some other company might offer competing answers. In short, we’d have to prefer results that help competitors even if they don’t help you.

    • If you search for a place or an address, we may not be able to show you directions from Google Maps in your results. As just one example, if you search for “vaccine near me,” we might not be able to show you a map of vaccine locations in your community.
    • When you have an urgent question — like “stroke symptoms” — Google Search could be barred from giving you immediate and clear information, and instead be required to direct you to a mix of low quality results.
    • When you search for local businesses, Google Search and Maps may be prohibited from highlighting information we gather about hours of operation, contact information, and reviews. That could hurt small businesses and local retailers, as well as their customers.
    • The bills would also harm small businesses if tools like Gmail, Calendar and Docs were not allowed to be integrated or work together seamlessly.

    A boost for competitors, not consumers

    While these bills might help the companies campaigning for them, including some of our major competitors, that would come at a cost to consumers and small businesses. Moreover, the bills wouldn’t curb practices by our competitors that actually harm consumers and customers (they seem to be intentionally gerrymandered to exclude many other major companies). For example, they don’t address the problem of companies forcing governments and small businesses to pay higher prices for enterprise software. And of course, the online services targeted by these bills have reduced prices; these bills say nothing about sectors where prices have actually been rising and contributing to inflation.

    The wrong focus

    There are important discussions taking place about the rules of the road for the modern economy. We believe that updating technology regulations in areas like privacy, AI, and protections for kids and families could provide real benefits. But breaking our products wouldn’t address any of these issues. Instead, it would eliminate helpful features, expose people to new privacy and security risks, and weaken America’s technological leadership. There’s a better way. Congress shouldn’t rush to judgment, and should instead take more time to consider the unintended consequences of these bills.

  • This year's Doodle for Google contest is all about self care Tue, 18 Jan 2022 16:00:00 +0000

    I used to be the type of person who took pride in filling my days up. I loved checking items off my to-do list, saying yes to everything and filling my week with social plans. I took pride in productivity and living a fast-paced life. But the pandemic and the shift to new ways of working and living forced me to re-examine my mindset. I had to be intentional in rethinking how I structured my days and build in time for self-reflection, care and introspection.

    This shift isn’t unique to just me. The past few years have been marked by uncertainty, and students in particular have been profoundly impacted in the way they learn, socialize and approach health.

    So the theme of self-care felt fitting for our 14th annual Doodle for Google student contest. The 2022 contest theme is “I care for myself by…”. We’re asking students to share how they nurture themselves in tough times. What do they do to feel better when they’re feeling down? How do they approach taking a break? What activities make them feel calm or give them energy? What or who brings them joy? Our theme this year invites students to share how they take care of their minds, bodies and spirits as they face the opportunities and challenges every new day brings.

    Meet the judges

    This year’s judges are all passionate about self-care and wellness. The panel will help us determine our 54 state and territory winners and five national finalists, one of whom will go on to be the national grand prize winner.

    Selena Gomez is a Grammy-nominated artist, entrepreneur and philanthropist. One of her personal passions is starting conversations around mental health, and in 2019 she founded the Rare Impact Fund, pledging to raise $100 million for mental health services for individuals in underserved communities. “Art is something that has always been an important part of my life,” she says. “I am thrilled to join this year’s judges panel in the Doodle for Google contest as the theme is ‘I care for myself by,’ which is a topic close to my heart. As a longtime advocate for mental health awareness, the concept that self-care is becoming a part of our everyday conversation makes me hopeful for the future.”

    Our second judge, Elyse Fox, is a director, model and mental health activist. She created Sad Girls Club, a nonprofit committed to destigmatizing mental wellness for millennial and Gen Z women, girls and femmes of color, and she’s a member of the Rare Beauty Mental Health Council. “This year's theme ‘I care for myself by’ is an important prompt we should all be asking ourselves, especially in today's climate,” she says. “I love the theme because sometimes people may think caring for yourself is selfish, but on the contrary it's necessary for us to prioritize to be the best versions of who we want to be.”

    Our final judge, Juliana Urtubey, is the 2021 National Teacher of the Year, and she currently serves as a special education co-teacher at Kermit Booker Elementary in Las Vegas, Nevada. She has spent her career advocating for joyous and just education for all, and community-oriented wellbeing is at the center of her mission. “One of the ways I care for myself is through self-reflection and engaging with my community,” she says. “Knowing yourself and understanding how and why you process certain emotions is influenced by where you come from, and for me, my collective community keeps me grounded and centered. I teach my students how to acknowledge and regulate their emotions and since their relationships and interactions with family, friends and community members can have a major impact on their health and well-being, we always talk about our emotions with a community context.”

    Get started

    The 2022 Doodle for Google contest is open to students based in the United States, Guam, Puerto Rico and the U.S. Virgin Islands through March 4. For details on how to enter the contest, resources for educators and parents, as well as the contest rules, head to our website. The winning artist will see their work on the Google homepage for a day, receive a $30,000 college scholarship, and their school will receive a $50,000 technology grant. We can’t wait to see what students create.

  • The biggest lesson from a local news startup: listen Tue, 18 Jan 2022 14:00:00 +0000

    Editor’s note from Ludovic Blecher, Head of Google News Initiative Innovation: The GNI Innovation Challenge program is designed to stimulate forward-thinking ideas for the news industry. The story below by Kelsey Ryan, founder and publisher of The Beacon, is part of an innovator series sharing inspiring stories and lessons from funded projects.

    When I first started thinking about launching a digital news startup for Kansans and Missourians after getting laid off in 2018, I envisioned an investigative-only outlet. But that’s very different from how The Beacon looks today. Now it’s better, and here’s why.

    After spending most of my career in for-profit newspapers, I thought “audience” was just about clicks and search engine optimization. It felt lacking in meaningful community connection. So when I was in the early phases of designing The Beacon — and building the confidence to do it — I grabbed coffee with a former colleague, who gave me the right advice at the right time: Stop talking to other journalists and start talking with people in the community about what they want in local journalism.

    It sounds so obvious now, but suddenly it clicked. If we were to create a sustainable and public-serving news organization, we needed to talk to our community early and often. So we did. And that person who told me to stop talking to other journalists, Jennifer Hack Wolf, became The Beacon’s first employee, made possible with funding through the Google News Initiative Innovation Challenge.

    Picture shows two women sitting on a sofa. The woman on the left is speaking and gesticulating with her hands while the woman on the right is listening and taking notes.

    A participant in a community news listening event tells The Beacon’s Audience Development Director Jennifer Hack Wolf what’s missing in local news.

    Our work to define and engage our audience for long-term sustainability includes a mix of qualitative and quantitative research, and an experimental approach.

    What we have learned in this process is that you can’t be all things to all people. But you can meet the news and information needs of segments of people in your community who want more than what they’re getting now. Here are some ways to do this work:

    • Tabling at community events: Set up a booth or table with information about who you are and have an interactive activity where people can engage with you or provide feedback on a specific concern or topic. Candies or treats encouraged.
    • Surveys: There are lots of free survey tools, including Google Forms. Try to keep surveys short and to the point, and don’t use leading questions. Always have an open-ended section where people can put their own feedback.
    • Community listening sessions: In-person or virtual events with a third-party facilitator allow people to discuss two or three open-ended questions, spending 15-20 minutes going deep on each one. It’s important to just listen — don’t get defensive or try to pitch people about who you are or what you’re trying to do.
    • Meet people where they’re at: Explore collaborating with other community partners on things like focused private online groups or pop-up text messaging campaigns to connect with new people in new ways and expand the pool of perspectives.

    It turns out when we started talking to people, they told us they didn’t just want investigative reporting, like I had initially envisioned. Investigative journalism was important to them, but not the end-all, be-all. We found a big opportunity in solutions journalism and data journalism when people told us they wanted context: How did we get here? What are the trends? And they wanted to be more civically engaged: How can we get involved? How can we make change? How are other communities solving similar problems?

    We had nearly 1,000 respondents to our initial surveys and found those people who attended our events (in person or virtual) were far more likely to become subscribers to our newsletter. By partnering with or interviewing people from established organizations with their own large audiences, we were able to grow our subscriber base because those organizations shared the event. Twenty-five percent of our current 7,000 subscribers learned about us through this activity. We also found about 10% of our newsletter subscribers in the first year of publishing went on to become paying donors, with either one-time gifts or recurring monthly gifts. For recurring donors, $15 a month is our most popular level of giving.

    After more than a year and a half, we feel like we’ve only just scratched the surface. But we’re already applying these concepts to our second newsroom in Wichita, Kansas, building on what we’ve learned and exploring ways we can take this work even further so we can truly be a sustainable, community-oriented news organization for years to come.

  • Schneider Electric secures its teams through Android Enterprise Tue, 18 Jan 2022 11:00:00 +0000

    Editor's note: Today’s post is by Simon Hardy-Bistagne, Director of Solution Architecture for Schneider Electric. The global company specializes in energy management and automation, with operations in more than 100 countries.

    At Schneider Electric, we are responsible for providing sustainability and energy management systems for a global customer base. As the Director of Solution Architecture for our digital workplace, I lead a team that ensures our employees have access to all of the collaborative tools they need from wherever they’re working.

    Android Enterprise is key to securely and flexibly managing Schneider Electric’s global workforce devices. We support a wide range of device-use scenarios for our employees, from fully-managed devices to personal smartphones securely enrolled with the Android work profile. The extensive, customizable and secure controls available with Android Enterprise ensure we are giving our teams the resources they need no matter where they’re working while protecting critical corporate applications and data.

    Flexibility for every use case

    We manage devices in over 117 countries. Android Enterprise has helped us shift to new working styles and embrace employee choice and work-life balance with powerful controls that meet our security needs. By enrolling personal devices with the Android work profile, we know that we are not only protecting our data and services, but we can prove to our employees that, with the work profile, “What you have here is your work life, what you have here is your personal life.” And that has revolutionized the way our teams use their mobile devices.

    Security is at the core of everything we do, both from the perspective of servicing our customers and protecting our own corporate resources. So when we talk about implementing security and management services through Android Enterprise, it’s fundamental to get those basics right. Through Android Enterprise, we have powerful tools for safeguarding devices — like preventing the installation of unknown applications, disabling debug mode and preventing devices from being rooted. Putting these requirements and other key security configurations in place for both personal and company-owned devices is essential for our global business.

    Thanks to the flexibility of Android Enterprise, we can also support a wide range of device use cases. For some employees, we use fully-managed mode for devices dedicated to specific tasks. Others who only want one phone for work and personal use can use a device with the work profile. And with managed Google Play, we can make both internal and public apps available on devices.

    Ready for a hybrid work reality

    Enrollment choice is important as well. We use devices from a variety of vendors, and we can set up those devices with the method that works best for each situation — like zero-touch enrollment or Samsung Knox Mobile Enrollment. With these options, end users can get the applications they need on their corporate devices and use them right away.

    We also value the flexibility of allowing our end users to purchase their own Android device, or ask our IT team to enroll a personal device they’ve used for a couple of years. They can bring their device and easily enroll it into our managed estate with Android Enterprise.

    Hybrid work is our present and future, and Android Enterprise is helping us navigate that. It gives our employees flexibility in device choice and management mode, and it gives my team comprehensive, effortless management tools that meet the security needs of our global operations.

    To hear more about our mobility strategy, watch my discussion with Android Enterprise Security Specialist Mike Burr from The Art of Control digital event.

  • So you got new gear for the holidays. Now what? Fri, 14 Jan 2022 19:58:00 +0000

    The new year is here, and the holidays are (officially) over. If you were gifted a new Google gadget, that means it’s time to get your new gear out of the box and into your home or pocket.

    We talked to the experts here at Google and asked for a few of their quick setup tips, so you can get straight to using your new…whatever you got…right away.

    So you got a Pixel 6 Pro…

    1. Begin by setting up fingerprint unlock for quick and easy access.
    2. Prepare for future emergencies and turn on the extreme battery saver feature in the settings app. Extreme battery saver can extend your Pixel 6 Pro’s battery life by intelligently pausing apps and slowing processes, and you can preselect when you want to enable the feature — and what your priority apps are.
    3. Create a personal aesthetic with Material You, and express character by customizing wallpaper and interface designs that will give your Pixel 6 Pro’s display a more uniform look.

    So you got a Nest Hub Max…

    1. First, set up Face Match to ensure your Nest Hub Max can quickly identify you as the user and share a more personal experience. Then, when you walk up to the device it can do things like present your daily schedule, play your favorite playlist or suggest recommended videos, news and podcasts.
    2. Set up a Duo account for video calling and messaging with your friends and family. From there, you can ask Nest Hub Max to call anyone in your Google contacts who has Duo — just say, “Hey Google, call (your contact name).” For family members or friends who don't already have Duo, the app is free and available for download on both Android and iOS.
    3. Be sure to connect your Nest Hub Max to any other Google gear, such as the Chromecast and Nest Mini, for a smart home experience.

    The Nest Hub Max in front of a white background.

    The Nest Hub Max.

    So you got the new Nest Thermostat…

    1. Use Quick Schedule to easily and quickly get your thermostat programmed. You can go with its recommended presets or adjust the settings further to create a custom schedule. You can make changes to your schedule anytime from the Home app.
    2. Then you can opt in to Home and Away Routines, which can help you avoid heating or cooling an empty house by using motion sensing and your phone’s location to know when nobody’s home and adjust the temperature accordingly to save energy.
    3. Make sure you’ve enabled notifications so that Savings Finder can proactively suggest small tweaks to your schedule, which you can accept from the Home app. For example, it might suggest a small change to your sleep temperature to save you energy.

    So you got the new Pixel Buds A-Series…

    1. Check out the Pixel Buds A-Series’ latest feature, the bass customization option, to find your perfect sound. This addition doubles the bass range when connected to an Android 6.0 device, and can be adjusted on a scale from -1 to 4 by using the Pixel Buds App.
    2. Here’s a hardware tip: Try out the three different ear tip fit options to find the most comfortable fit for you.
    3. Start listening to your favorite podcasts and music right away by using Fast Pair to immediately connect your Pixel Buds to your phone.

  • This talking Doogler deserves a round of a-paws Fri, 14 Jan 2022 16:00:00 +0000

    Ever wonder what your dog is thinking? You’re not alone. Over the last year, dog “talking” buttons have taken the pet world by storm. With the push — er paw — of a button, dogs are now “telling” their humans what they need, whether that’s water, food or to go outside. Some pups have even become social media famous for their impressive vocabulary, inspiring dog owners everywhere to pick up a set of buttons. Or, in the case of Rutledge Chin Feman, a software engineer for Google Nest, to build his own DIY version.

    “I know I’m biased, but Cosmo is obviously the best dog in the world,” says Rutledge of his pup. When he and his wife adopted Cosmo, a German Shepherd mix and the first dog for both of them, they noticed right away he was skittish. “He was afraid of everything and would do a lot of lunging and barking. It was kind of a forcing function to learn a lot about positive reinforcement training techniques and desensitization…which is how I stumbled on all of this.”

    Cosmo stands on a sidewalk next to a table, chairs and a chalkboard sign. A person is sitting next to him, holding his leash and wearing blue jeans and a brown pair of shoes.

    After Rutledge saw a video of dog-talking buttons that blew his mind, he started to build his own set for Cosmo — the perfect hobby to blend his passions for engineering and animals.

    He used an electronics prototyping board (a “breadboard”) to hold the buttons, and a small computer (a “Raspberry Pi”) to activate them with light, sound and notifications. The first model, made from a wooden wine box, had just three buttons: “food,” “water” and “outside,” which Rutledge recorded with his own voice. Now, Cosmo’s up to seven buttons — with “ball,” “later,” “love you” and “scritches” (otherwise known as “belly rubs”) added to the mix.

    At one point, Rutledge even set the board up so that he received text messages whenever Cosmo pressed a button. “I’ve been in meetings where I’m like, ‘Hold on a sec, I need to silence my phone. My dog is blowing me up right now.’”
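
    Rutledge’s exact build isn’t public, but the pattern is easy to sketch. Below is a minimal, hypothetical one-button version in Python, assuming the standard RPi.GPIO library, a momentary button wired between GPIO 17 and ground, a prerecorded clip on disk, and a stand-in notify_phone helper in place of a real messaging service:

    ```python
    # Hypothetical one-button "talking" board for a Raspberry Pi.
    # Assumes a button between GPIO 17 and ground and a clip at SOUND.
    import subprocess
    import time

    import RPi.GPIO as GPIO

    BUTTON_PIN = 17
    SOUND = "/home/pi/sounds/outside.wav"

    def notify_phone(message: str) -> None:
        # Stand-in for a real notification service (SMS gateway, push API, etc.).
        print("notify:", message)

    def on_press(channel: int) -> None:
        subprocess.run(["aplay", SOUND], check=False)  # play the recorded word
        notify_phone("Cosmo pressed: outside")

    GPIO.setmode(GPIO.BCM)
    # Internal pull-up: the pin reads HIGH until the button shorts it to ground.
    GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    # Debounce so one paw press registers once, not dozens of times.
    GPIO.add_event_detect(BUTTON_PIN, GPIO.FALLING, callback=on_press, bouncetime=500)

    try:
        while True:
            time.sleep(1)  # button callbacks fire on a background thread
    finally:
        GPIO.cleanup()
    ```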

    Rutledge and his wife will add a “baby” button to Cosmo’s board next, now that their newest family member has arrived. And the wheels are already turning for Rutledge: “I think it will only be a few months before a baby could push buttons and say things — ask for more food or whatever. I think that will be a fun experiment.”

    For now, he’s focused on Cosmo and continuing to strengthen their bond using the button board. “I think it’s a really powerful way to see your pet. It really reminds you that they’re intelligent beings who are capable of profound thought, in a way. And they’re constantly observing their world in a way that we don’t usually really give them credit for.”

    “It’s a really simple device, it’s all just for fun,” he adds of the process. “And obviously, Cosmo’s a very good boy.”

    Cosmo is looking up and putting his paw on a rectangular wooden board. He is sitting on a blue patterned rug with a white blanket next to him.

  • Increasing Google’s investment in the UK Fri, 14 Jan 2022 06:00:00 +0000

    Image credit: Pollitt & Partners 2015

    For almost two decades Google has been proud to have a home in the UK. Today, we have more than 6,400 employees and last year we added nearly 700 new people. We also strengthened our commitment to the UK in 2021 with the laying of a new subsea cable — Grace Hopper — which runs between the United States and the UK.

    Building on our long-term commitment to the UK, we are purchasing the Central Saint Giles development — the site many Googlers have long called home — for $1 billion. Based in London’s thriving West End, our investment in this striking Renzo Piano-designed development represents our continued confidence in the office as a place for in-person collaboration and connection.

    Across all our UK sites, Google will have capacity for 10,000 employees, as we continue to commit to the UK’s growth and success. This includes our new King’s Cross development, which is currently under construction.

    Investing in the future flexible workplace

    We believe that the future of work is flexibility. Whilst the majority of our UK employees want to be on-site some of the time, they also want the flexibility of working from home a couple of days a week. Some of our people will want to be fully remote. Our future UK workplace has room for all of those possibilities.

    Over the next few years, we’ll be embarking on a multi-million pound refurbishment of our offices within Central Saint Giles to ensure that they are best equipped to meet the needs of our future workplace.

    We'll be introducing new types of collaboration spaces for in-person teamwork, as well as creating more overall space to improve wellbeing. We’ll introduce team pods, which are flexible new space types that can be reconfigured in multiple ways, supporting focused work, collaboration or both, based on team needs. The new refurbishment will also feature outdoor covered working spaces to enable work in the fresh air.

    Supporting digital growth across the UK

    More than ever, technology is enabling people and businesses across the UK. In 2021, we met our target of helping one million small British businesses stay open by making it easier for customers to find them online.

    It’s important that everyone is able to take advantage of the increasing innovation in the UK and grow the skills needed for the jobs of the present and the future. Since we launched our Digital Garage programme in Leeds in 2015, we have provided free digital skills training to more than 700,000 people across the UK.

    Thousands more UK jobseekers will also be helped to upgrade their digital skills in 2022 thanks to our expanded partnership with the Department for Work and Pensions (DWP). Nearly 10,000 jobseekers will be able to access free scholarships to earn a Google Career Certificate in high-growth, high-demand career fields including IT support, data analysis, project management and UX design.

    We’re optimistic about the potential of digital technology to drive an inclusive and sustainable future in the UK. We’re excited to be making this investment in January as a fitting way to start the new year.

  • Some facts about Google Analytics data privacy Thu, 13 Jan 2022 19:48:00 +0000

    The web has to work for users, advertisers, and publishers of all sizes — but users first. And with good reason: people are using the internet in larger numbers for more daily needs than ever. They don’t want privacy as an afterthought; they want privacy by design.

    Understanding this is core to how we think about building Google Analytics, a set of everyday tools that help organizations in the commercial, public, and nonprofit sectors understand how visitors use their sites and apps — but never by identifying individuals or tracking them across sites or apps.

    Because some of these organizations have lately faced questions about whether an analytics service can be compatible with user privacy and the rules for international transfers of personal data, we want to explain what Google Analytics does and, just as important, what it does not do.

    Fact: Google Analytics is a service used by organizations to understand how their sites and apps are used, so that they can make them work better. It does not track people or profile people across the internet.

    • Google Analytics cannot be used to track people across the web or apps. It does not create user profiles.
    • Google Analytics helps owners of apps and websites understand how their users are engaging with their sites and apps (and only their site or app). For example, it can help them understand which sections of an online newspaper have the most readers, or how often shopping carts are abandoned for an online store. This is what helps them improve the experience for their customers by better understanding what’s working or not working.
    • This kind of information also includes things like the type of device or browser used; how long, on average, visitors spend on their site or app; or roughly where in the world their visitors are coming from. These data points are never used to identify the visitor or anyone else in Google Analytics.

    Google Analytics customers are prohibited from uploading information that could be used by Google to identify a person. We provide our customers with data deletion tools to help them promptly remove data from our servers if they inadvertently do so.

    Fact: Organizations control the data they collect using Google Analytics.

    • Organizations use Google Analytics because they choose to do so. They, not Google, control what data is collected and how it is used.
    • They retain ownership of the data they collect using Google Analytics, and Google only stores and processes this data per their instructions — for example, to provide them with reports about how visitors use their sites and apps.
    • These organizations can, separately, elect to share their Analytics data with Google for one of a few specific purposes, including technical support, benchmarking, and sales support.
    • Organizations must take explicit action to allow Google to use their analytics data to improve or create new products and services. Such settings are entirely optional and require explicit opt-in.

    Fact: Google Analytics helps customers with compliance by providing them with a range of controls and resources.

    Fact: Google Analytics helps put users in control of their data.

    • Google makes products and features that are secure by default, private by design, and put users in control. That’s why we have long offered a browser add-on that enables users to disable measurement by Google Analytics on any site they visit.
    • Along with providing strong default protections, we aim to give people accessible, intuitive and useful controls so they can make choices that are right for them. For example, visitors can choose if and how Analytics cookies are used by websites they visit, or block all cookies on all or some websites.
    • In addition, organizations are required to give visitors proper notice about the Google Analytics implementations and features that they use, and about whether this data can be connected to other data the organization holds about them.
    • These customers must also obtain consent from users for each visit, where applicable laws in their country require it.

    Fact: Google Analytics cannot be used to show advertisements to people based on sensitive information like health, ethnicity, sexual orientation, etc.

    • Google Analytics does not serve ads at all. It is a web and app analytics tool. (You can read all about it here.)
    • Some organizations do use insights they’ve garnered via Google Analytics about their own sites and apps to inform their own advertising campaigns.
    • If a business also uses Google’s advertising platforms, it is strictly required to follow Google’s advertising guidelines, which prevent the use of sensitive information (such as health, race, religion, or sexual orientation) to personalize ads. We never allow sensitive information to be used for personalized advertising. It’s simply off limits.

    Fact: An organization’s Google Analytics data can only be transferred when specific and rigorous privacy conditions are met.

    • Google Analytics operates data centers globally, including in the United States, to maximize service speed and reliability. Before data is transferred to any servers in the United States, it is collected in local servers, where users’ IP addresses are anonymized (when the feature is enabled by customers).
    • The GDPR, as interpreted by the European Court of Justice, allows data to be transferred outside of the European Union for just this sort of reason, provided certain conditions are met.
    • In order to meet those conditions, we apply numerous measures, including:
      • Using data transfer agreements like EU Standard Contractual Clauses, which have been affirmed as a valid mechanism for transferring data to the United States, together with additional safeguards that keep data secure: industry-leading data encryption, physical security in our data centers and robust policies for handling government requests for user information.
      • Maintaining widely recognized, internationally accepted independent security standards like ISO 27001, which provides independent accreditation of our systems, applications, people, technology, processes and data centers.
      • Offering website owners a wide range of controls that they can use to keep their website visitors’ data safe and secure.
    • Our infrastructure and encryption are designed to protect data and safeguard it from any government access.

    And we use robust technical measures (such as Application Layer Transport Security and HTTPS encryption) to protect against interception in transit within Google’s infrastructure, between data centers, and between users and websites, including surveillance attempts by government authorities around the world.

  • Making Open Source software safer and more secure Thu, 13 Jan 2022 18:45:00 +0000

    We welcomed the opportunity to participate in the White House Open Source Software Security Summit today, building on our work with the Administration to strengthen America’s collective cybersecurity through critical areas like open source software.

    Industries and governments have been making strides to tackle the frequent security issues that plague legacy, proprietary software. The recent log4j open source software vulnerability shows that we need the same attention and commitment to safeguarding open source tools, which are just as critical.

    Open source software code is available to the public, free for anyone to use, modify, or inspect. Because it is freely available, open source facilitates collaborative innovation and the development of new technologies to help solve shared problems. That’s why many aspects of critical infrastructure and national security systems incorporate it. But there’s no official resource allocation and few formal requirements or standards for maintaining the security of that critical code. In fact, most of the work to maintain and enhance the security of open source, including fixing known vulnerabilities, is done on an ad hoc, volunteer basis.

    For too long, the software community has taken comfort in the assumption that open source software is generally secure thanks to its transparency, with “many eyes” watching to detect and resolve problems. But in fact, while some projects do have many eyes on them, others have few or none at all.

    At Google, we’ve been working to raise awareness of the state of open source security. We’ve invested millions in developing frameworks and new protective tools. We’ve also contributed financial resources to groups and individuals working on securing foundational open source projects like Linux. Just last year, as part of our $10 billion commitment to advancing cybersecurity, we pledged to expand the application of our Supply-chain Levels for Software Artifacts (SLSA, pronounced “salsa”) framework to protect key open source components. That includes $100 million to support independent organizations, like the Open Source Security Foundation (OpenSSF), that manage open source security priorities and help fix vulnerabilities.

    But we know more work is needed across the ecosystem to create new models for maintaining and securing open source software. During today’s meeting, we shared a series of proposals for how to do this:

    Identifying critical projects

    We need a public-private partnership to identify a list of critical open source projects — with criticality determined based on the influence and importance of a project — to help prioritize and allocate resources for the most essential security assessments and improvements.

    Longer term, we need new ways of identifying software that might pose a systemic risk — based on how it will be integrated into critical projects — so that we can anticipate the level of security required and provide appropriate resourcing.

    Establishing security, maintenance & testing baselines

    Growing reliance on open source means that it’s time for industry and government to come together to establish baseline standards for security, maintenance, provenance, and testing — to ensure national infrastructure and other important systems can rely on open source projects. These standards should be developed through a collaborative process, with an emphasis on frequent updates, continuous testing, and verified integrity.

    Fortunately, the software community is off to a running start. Organizations like the OpenSSF are already working across industry to create these standards (including supporting efforts like our SLSA framework).

    Increasing public and private support

    Many leading companies and organizations don’t recognize how many parts of their critical infrastructure depend on open source. That’s why it’s essential that we see more public and private investment in keeping that ecosystem healthy and secure. In the discussion today, we proposed setting up an organization to serve as a marketplace for open source maintenance, matching volunteers from companies with the critical projects that most need support. Google stands ready to contribute resources to this effort.

    Given the importance of digital infrastructure in our lives, it’s time to start thinking of it in the same way we do our physical infrastructure. Open source software is a connective tissue for much of the online world — it deserves the same focus and funding we give to our roads and bridges. Today’s meeting at the White House was both a recognition of the challenge and an important first step towards addressing it. We applaud the efforts of the National Security Council, the Office of the National Cyber Director, and DHS CISA in leading a concerted response to cybersecurity challenges and we look forward to continuing to do our part to support that work.

  • Advancing genomics to better understand and treat disease Thu, 13 Jan 2022 17:00:00 +0000

    Genome sequencing can help us better understand, diagnose and treat disease. For example, healthcare providers are increasingly using genome sequencing to diagnose rare genetic diseases, such as those that elevate the risk of breast cancer or cause pulmonary arterial hypertension, which are estimated to affect roughly 8% of the population.

    At Google Health, we’re applying our technology and expertise to the field of genomics. Here are recent research and industry developments we’ve made to help quickly identify genetic disease and improve the equity of genomic tests across ancestries. This includes an exciting new partnership with Pacific Biosciences to further advance genomic technologies in research and the clinic.

    Helping identify life-threatening disease when minutes matter

    Genetic diseases can cause critical illness, and in many cases, a timely identification of the underlying issue can allow for life-saving intervention. This is especially true in the case of newborns. Genetic or congenital conditions affect nearly 6% of births, but clinical sequencing tests to identify these conditions typically take days or weeks to complete.

    We recently worked with the University of California Santa Cruz Genomics Institute to build a method – called PEPPER-Margin-DeepVariant – that can analyze data from Oxford Nanopore sequencers, one of the fastest commercial sequencing technologies in use today. This week, the New England Journal of Medicine published a study led by the Stanford University School of Medicine detailing the use of this method to identify suspected disease-causing variants in five critical newborn intensive care unit (NICU) cases.

    In the fastest cases, a likely disease-causing variant was identified less than 8 hours after sequencing began, compared to the prior fastest time of 13.5 hours. In five cases, the method influenced patient care. For example, the team quickly turned around a diagnosis of Poirier–Bienvenu neurodevelopmental disorder for one infant, allowing for timely, disease-specific treatment.

    Time required to sequence and analyze individuals in the pilot study. Disease-causing variants were identified in patient IDs 1, 2, 8, 9, and 11.

    Applying machine learning to maximize the potential in sequencing data

    Looking forward, new sequencing instruments can lead to dramatic breakthroughs in the field. We believe machine learning (ML) can further unlock the potential of these instruments. Our new research partnership with Pacific Biosciences (PacBio), a developer of genomic sequence platforms, is a great example of how Google’s machine learning and algorithm development tools can help researchers unlock more information from sequencing data.

    PacBio’s long-read HiFi sequencing provides the most comprehensive view of genomes, transcriptomes and epigenomes. Using PacBio’s technology in combination with DeepVariant, our award-winning variant detection method, researchers have been able to accurately identify diseases that are otherwise difficult to diagnose with alternative methods.

    Additionally, we developed a new open source method called DeepConsensus that, in combination with PacBio’s sequencing platforms, creates more accurate reads of sequencing data. This boost in accuracy will help researchers apply PacBio’s technology to more challenges, such as the final completion of the Human Genome and assembling the genomes of all vertebrate species.

    Supporting more equitable genomics resources and methods

    Like other areas of health and medicine, the genomics field grapples with health equity issues that, if not addressed, could exclude certain populations. For example, the overwhelming majority of participants in genomic studies have historically been of European ancestry. As a result, the genomics resources that scientists and clinicians use to identify and filter genetic variants and to interpret the significance of these variants are not equally powerful across individuals of all ancestries.

    In the past year, we’ve supported two initiatives aimed at improving methods and genomics resources for under-represented populations. We collaborated with 23andMe to develop an improved resource for individuals of African ancestry, and we worked with the UCSC Genomics Institute to develop pangenome methods, work that was recently published in Science.

    In addition, we recently published two open-source methods that improve genetic discovery by more accurately identifying disease labels and improving the use of health measurements in genetic association studies.

    We hope that our work developing and sharing these methods with those in the field of genomics will improve overall health and the understanding of biology for everyone. Working together with our collaborators, we can apply this work to real-world applications.

  • How dreaming big and daring to fail led Chai to Google Thu, 13 Jan 2022 16:00:00 +0000

    Welcome to the latest edition of “My Path to Google,” where we talk to Googlers, interns and alumni about how they got to Google, what their roles are like and even some tips on how to prepare for interviews.

    Today’s post is all about Chai Madan, a Google Cloud consultant in our Singapore office, who is passionate about making a difference through her work.

    What do you do at Google?

    As a Google Cloud consultant in Singapore, I work on infrastructure and security projects with some of Google Cloud’s customers in Southeast Asia. I love partnering with enthusiastic customers who want to change the world through their business, and seeing the impact of our work on everyday life — from booking a cab here in Singapore to ordering gifts for my parents online. Cloud computing is making this possible, which is why I’m proud to do this work.

    Can you tell us more about yourself?

    I’m Malayalee and was raised in Dubai until I was 17 years old, when I moved to India to enroll in university. When I’m not working, I’m most likely having fun with friends and family, fitness training, listening to podcasts, exploring restaurants or traveling around the world (at least, before the pandemic).


    Chai visiting an exhibit on Google’s ARCore, our platform for building augmented reality experiences

    Why were you interested in this role?

    Throughout my career, I’ve gravitated towards new and exciting areas in the tech industry. This includes the cloud computing space, which is where most businesses around the world are heading. And now, in keeping with my personal mantra of “dream big and dare to fail,” I'm starting a new role on Google Cloud’s Digital Natives team, where I'll help businesses with their digital transformation programs. I can't wait to use my skills and experience to make an impact with those customers, and I’m excited for the challenge.

    What’s your daily source of inspiration?

    I’m inspired by the fact that I enjoy my work. Particularly, I enjoy seeing and experiencing our impact in action. Outside of my core role, I also like participating in our fun work events. Last year, my daughter joined me for Google’s virtual Take Your Child to Work Day and won prizes for designing her own Google Doodle and making a Google-themed snack at home.


    Chai attending a Google Cloud event

    What was your application and interview process like?

    I applied directly on the Google Careers website and heard back from a recruiter shortly afterward, who asked to set up a phone call. I remember thinking “It’s just a first round with the recruiter,” so I didn’t prepare much — gee, was I in for a surprise! My recruiter knew the requirements for the role and conducted a mini interview. I was a little stunned, but she ultimately helped me see that I had what it took to succeed. I had never felt so supported during an interview before. I would encourage anyone interested in exploring roles at Google to apply without hesitation!

    Any advice for aspiring Googlers?

    Have a strategy, but be open to tweaking it along the way. You will make mistakes, but you can learn from them. Once your interview is scheduled, practice, practice, practice. Write things down and do mock interviews. And finally, don't wait for a job description to be a 100% match. As long as you are passionate about the role and feel like you can get the hang of it, apply and make your mark!

  • Celebrating a decade of partnering with Technovation Wed, 12 Jan 2022 18:00:00 +0000

    In 2006, engineering grad student Tara Chklovski looked around at her classroom and realized how few women and people of color were in the room. Determined to change that, Tara launched Technovation, and this year, Google is celebrating over a decade of support.

    In 2010, we brought the first group of 45 girls to Google’s Mountain View cafe to learn from Google mentors how to build and bring apps to market through Technovation Girls, a program that prepares girls for tech entrepreneurship and leadership.

    The first Technovation Challenge season was conducted in-person, with Google mentors helping the group to learn how to build apps using MIT App Inventor. In the decade since, Google has continued to support Technovation, both through groups of dedicated volunteers, as well as through funding. In 2017, Google hosted Technovation's World Summit, and along the way has helped Technovation reach 350,000 people across 100 countries. The collaboration also allowed Technovation’s AI education program to empower 20,000 children and parents to identify problems in their communities and develop AI-based solutions.

    Through Google.org, we support organizations using technology and innovation to help more students, particularly those who have been historically underserved, get a better education. Since 2013, we’ve given more than $80 million to organizations around the globe focused on closing the computer science education access gap. And we recently shared resources to help nonprofits like Technovation that are working to close the gender gap in CS education.

    To date, Google’s investment in Technovation programming totals nearly $2 million, and more than 50 Technovation alumni have worked at Google campuses around the world. Those alumnae include women like Padmapriya in India, Dalia in Palestine, Jenny and Emma in the United States, and Adelina in Moldova, who graciously agreed to share their stories about participating in Technovation.

    The current Technovation Girls season is now open—if you know a girl who's ready to change the world, let her know about Technovation and encourage her to sign up. And if you want to support girls taking their first steps as technology creators and entrepreneurs, learn more about participating as a mentor or a judge. There are thousands of girls like Padmapriya, Dalia, Adelina, Emma, and Jenny who are just getting started and could use your encouragement!

  • Start the year strong with Google Marketing Platform Wed, 12 Jan 2022 16:00:00 +0000

    As 2022 kicks off, it’s a good time to review your digital marketing strategy and ensure you’re ready for the year ahead. Here are five ways Google Marketing Platform can help you better understand your customers and get stronger marketing results.

    • Get insights while respecting user consent choices
    • Close data gaps with conversion modeling
    • Invest in your analytics foundation
    • Deliver consistent experiences across channels
    • Get creative with interactive ads

  • Career certificates for Singapore’s future economy Wed, 12 Jan 2022 04:00:00 +0000

    We’ve all worked with enterprising people like Daniel Singh — a Singaporean human resources professional whose curiosity about technology made him the unofficial “tech support” for many of his coworkers. But Daniel took his interest in technology a step further, studying at night to earn a Google Career Certificate in IT Support. He’s now the technology lead for a local business, managing complex projects while volunteering at his local community centre to share his knowledge with others.

    We know many more Singaporeans want to be able to develop the skills for careers in the fast-growing digital economy. Today we announced we’ll be helping meet that demand by introducing Google Career Certificates as a new pathway under Skills Ignition SG, our digital training partnership with the Singapore government and a coalition of employers.

    We first launched Skills Ignition SG in 2020, to support Singaporeans in a challenging job market. At the same time, we wanted to help make sure Singapore — where Google’s Asia-Pacific headquarters is located — has the skilled workforce it needs for the long-term future. We expanded the program last year, and so far more than 3,200 people have enrolled for training. We’re on course to hit our target of helping 3,800 Singaporeans under the program’s existing pathways: Cloud Technology, Data Engineering with Machine Learning Fundamentals, and Digital Marketing. And hundreds of trainees have gone through work placements with Google and other host companies.

    The addition of Google Career Certificates to Skills Ignition SG will enable us to expand the program again, and extend these benefits to thousands more Singaporeans. The training to earn a certificate is conducted online, tailored to people with no prior experience or degree, and lets learners go at their own pace.

    For Skills Ignition SG, we’re offering certificates in four areas where job openings outnumber skilled candidates: IT Support, Project Management, Data Analytics and User Experience Design. We’ll also be providing scholarships to help up to 5,000 learners earn a Google Career Certificate at no cost — in partnership with all five local polytechnics, Institutes of Technical Education, social service agencies and organizations such as The Codette Project, Singapore Indian Development Association and Yayasan MENDAKI.

    These steps will make digital training more accessible. But it's equally critical that Singaporeans can find jobs which allow them to put their new skills to use — which is why we’ve formed a consortium of employers to consider hiring Skills Ignition graduates in their first roles. So far, 15 companies (in addition to Google) have joined the consortium, ranging from global multinationals to major local businesses — and we expect to welcome more employers soon.

    This spirit of partnership is why Skills Ignition SG has made such an impact over the past two years. We and our government and industry partners are united in our commitment to help Singapore thrive as a technology leader for the region and the world. According to AlphaBeta research, taking full advantage of digital technologies could generate up to S$65.3 billion in economic value annually in Singapore by 2030. We look forward to playing our part in realizing that potential — working to create economic growth, jobs and opportunities for Singaporeans in the decade ahead and beyond.





Google Ads
Many books were created to help people understand how Google works, its corporate culture and how to use its services and products. The following books are available:

• Ultimate Guide to Google Ads
• The Ridiculously Simple Guide to Google Docs: A Practical Guide to Cloud-Based Word Processing
• Mastering Google Adwords: Step-by-Step Instructions for Advertising Your Business (Including Google Analytics)
• Google Classroom: Definitive Guide for Teachers to Learn Everything About Google Classroom and Its Teaching Apps. Tips and Tricks to Improve Lessons’ Quality.
• 3 Months to No.1: The "No-Nonsense" SEO Playbook for Getting Your Website Found on Google
• Google AdSense Made Easy: Monetize Your Website and Blogs Instantly With These Proven Google Adsense Techniques
• Ultimate Guide to Google AdWords: How to Access 100 Million People in 10 Minutes (Ultimate Series)


Google Cloud Blog
Twitter · Facebook · Instagram · YouTube

  • PyTorch/XLA: Performance debugging on Cloud TPU VM: Part III Fri, 21 Jan 2022 23:30:00 -0000

    This article is the final installment in a three-part series exploring the performance debugging ecosystem of PyTorch/XLA on Google Cloud TPU VM. In the first part, we introduced the key concepts for reasoning about training performance using the PyTorch/XLA profiler, and ended with an interesting performance bottleneck we encountered in the Multi-Head-Attention (MHA) implementation in PyTorch 1.8. In the second part, we began with another implementation of MHA, introduced in PyTorch 1.9, which solved the performance bottleneck; we then used the profiler to identify an example of a dynamic graph during evaluation, and studied a possible trade-off between the CompileTime penalty and ‘device-to-host’ transfers.

    In this part, we shift gears to server-side profiling. Recall from the client and server terminology introduced in the first part that server-side functionality refers to all the computations that happen in the TPU runtime (aka the XRT server) and beyond (on the TPU device). Often, the objective of such an analysis is to peek into the performance of a certain op, or a certain section of your model, on TPUs. PyTorch/XLA facilitates this with user annotations, which can be inserted in the source code and visualized as a trace using TensorBoard.

    Environment setup

    We continue to use the mmf (multimodal framework) example used in the first two parts. If you are starting with this part, please refer to the Environment Setup section of part-1 to create a TPU VM, and then continue from the TensorBoard setup described in this post. We recap the commands here for easy access.

    Creation of TPU VM instance (if not already created):
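
    The original snippet is not preserved in this copy; the following is a representative command for creating a v3-8 TPU VM, where the instance name and zone are placeholders to adapt to your project:

      gcloud alpha compute tpus tpu-vm create pytorch-profiling-tpu \
        --zone=us-central1-a \
        --accelerator-type=v3-8 \
        --version=v2-alpha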

    Here is the command to SSH into the instance once the instance is created.
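
    Again a representative form (name and zone as above); note the flags after the bare "--", which set up the ssh tunnel for the TensorBoard port used below:

      gcloud alpha compute tpus tpu-vm ssh pytorch-profiling-tpu \
        --zone=us-central1-a \
        -- -L 9009:localhost:9009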

    TensorBoard setup

    Notice the ssh flag argument for tunneled port forwarding: it allows you to connect to the application server listening on port 9009 via localhost:9009 in your own browser. To set up the TensorBoard version correctly and avoid conflicts with any other locally installed version, please uninstall the existing TensorBoard first:
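
    A representative sequence, assuming the nightly TensorBoard and profiler-plugin packages; the exact pins from the original post are not preserved here:

      pip3 uninstall -y tensorboard tb-nightly
      pip3 install tb-nightly tbp-nightly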

    In the current context the application server is the TensorBoard instance listening to port 9009:
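
    For example (the logdir path is a placeholder):

      tensorboard --logdir /tmp/tb_logs --port 9009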

    This will start the TensorBoard server, accessible via localhost:9009 in your browser. Notice that no profiling data has been collected yet. Follow the instructions from the user guide to open the PROFILE view at localhost:9009.

    Training setup

    We anchor to the PyTorch/XLA 1.9 environment for this case study.

    Update alternative (make python3 default):
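
    A representative command for the Debian-based TPU VM image:

      sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 1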

    Configure environment variables:
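
    The original export list is not preserved; shown here are the two variables this post relies on, namely the XRT runtime config for a single TPU VM host and the HLO debug flag referenced in the trace discussion below:

      export XRT_TPU_CONFIG="localservice;0;localhost:51011"
      export XLA_HLO_DEBUG=1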

    MMF training environment

    The MMF (multimodal framework) library developed by Meta Research is built to help researchers easily experiment with models for multimodal (text/image/audio) learning problems. As described in the roadmap, we will use the UniTransformer model for this case study. We will begin by cloning and installing the mmf library (a specific commit hash is chosen for reproducibility purposes).
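
    A representative clone sequence; the specific commit hash mentioned above is not preserved in this copy, so the placeholder must be replaced:

      git clone https://github.com/facebookresearch/mmf.git
      cd mmf
      git checkout <commit-hash-from-original-post>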

    Before we install the mmf library in developer mode, please make the following modification in requirements.txt, so that the existing PyTorch environment is not overridden when mmf is installed:
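
    The exact edit is not preserved here; the intent is to keep pip from replacing the preinstalled PyTorch/XLA wheels, for example by commenting out the torch pins:

      # comments out any requirement line that begins with "torch"
      sed -i 's/^torch/# torch/' requirements.txt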

    Apply the following patch to the validate_batch_sizes method (specific to the commit selected for this article):

    Install the mmf library in developer mode:
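
    The standard editable install, run from the repository root:

      pip install --editable .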

    Trace API

    The PyTorch/XLA profiler provides two primary APIs (StepTrace and Trace) to help you analyze the different segments of your code through annotations, which can later be visualized using TensorBoard. To use StepTrace or Trace, a profiler server first needs to be started.

    Starting profiler server

    In the following code snippet, we modified the main method (mmf_cli/run.py) to introduce the start_server(<port_number>) call. Notice that the server object returned by this method only persists if captured in a variable. Therefore server = xp.start_server(3294) will start a profiler server that persists throughout the training. However, if we only called xp.start_server(3294) without any assignment, the server object would not persist and you would not be able to interact with the server. (Later, we will ask the server, via TensorBoard, to capture the profile for a given period of time.)
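
    A minimal sketch of the described change; everything else in main is unchanged:

      import torch_xla.debug.profiler as xp

      def main():
          # Keep a reference so the profiler server lives for the whole run.
          server = xp.start_server(3294)
          ...  # the rest of mmf_cli/run.py's main() is unchanged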

    Make a note of the port number used as the argument to start_server call (in this case 3294). We will use this when communicating with the profiler server in the later sections.

    Insert trace annotations

    Once the mmf library is installed in developer mode, we can make the source-code changes shown below to follow this case study.

    In the following code snippet we illustrate the StepTrace and Trace annotations. The former introduces a named context (called Training_Step) within which we introduce a few more nested named contexts, via the latter, to capture forward, backward and update times.
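
    This is a sketch of the annotation pattern rather than the exact mmf diff; model, loader and optimizer are placeholders:

      import torch_xla.core.xla_model as xm
      import torch_xla.debug.profiler as xp

      for step, batch in enumerate(loader):
          with xp.StepTrace('Training_Step', step_num=step):
              with xp.Trace('forward'):
                  loss = model(batch)
              with xp.Trace('backward'):
                  loss.backward()
              with xp.Trace('update'):
                  xm.optimizer_step(optimizer)  # reduce gradients and update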

    Note that xp refers to import torch_xla.debug.profiler as xp as illustrated in earlier code snippet.

    Once the training starts, we can capture these annotations in the trace using either the xp.trace method or the TensorBoard profiler’s Capture Profile feature. In this case study we will use the TensorBoard feature for interactive and iterative profile capture. In development and test settings, xp.trace can be used to capture the profile data into a TensorBoard logdir, but be cautioned that trace data can grow quickly; the trace call should therefore only be active for a reasonable time period.
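
    For reference, a sketch of the programmatic route; the address matches the start_server port above, and the logdir and duration are placeholders:

      import torch_xla.debug.profiler as xp

      # One-shot 10-second capture into the TensorBoard logdir.
      xp.trace('localhost:3294', '/tmp/tb_logs', duration_ms=10000)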

    Start TensorBoard

    Recall the TensorBoard setup instructions shared in the earlier section (TensorBoard setup). You started the TensorBoard server on port 9009, and when you ssh’d into the TPU VM instance, port 9009 of your localhost was forwarded to port 9009 of the TPU VM. Once you open localhost:9009 in your browser, you will see the TensorBoard UI.

    Note: If you do not see Profile options in the drop-down menu, please check the TensorBoard version and ensure that the profiler plugin was installed as instructed in the setup section.

    Start training

    Start the training using the same script that you used earlier, with one modification: we change the distributed world size to 8 (for all-reduce visualization in the trace).

    Once the training is underway, you should begin to see progress reported in the training logs.

    Capture profile

    From the TensorBoard UI which you started earlier, click ‘Capture Profile’ and specify localhost:3294 (the same port number you specified in the start_server call).


    Once the capture is complete, you should see an overview page (provided more than one training step was captured in the trace; otherwise the overview page may be blank, since at least one completed training step is required to provide any input pipeline recommendations).

    In this case study, we will focus on the trace view; for details about the other views (memory profiler, pod viewer, etc.) please refer to the TensorBoard profiler documentation. Some of these functionalities were not fully supported on TPU VM at the time of writing this article.


    Navigate to the annotations in the Trace Viewer

    In the Trace Viewer, traces collected from all the device (i.e., TPU) processes and all the host (i.e., CPU) processes are displayed. You can use the search bar and scroll down to find the annotations you introduced in the source code.


    Understanding Trace


    In the captured trace you can find the annotations that we created in the code snippet example.


    Notice the Training_Step trace and its forward and backward sub-traces. These annotations on the CPU process trace indicate the time spent in the forward and backward pass (IR graph) traversal. If the forward or backward pass forces early execution (an unlowered op or a value fetch), you will notice the forward or backward traces broken up and interspersed multiple times with StepMarker. StepMarker is a built-in annotation which corresponds to an implicit or explicit mark_step() call.

    Notice also the FINISH_UPDATE trace. In the mmf code, this corresponds to the reduce-gradients and update operations. We also notice another host process which starts the TPUExecute function call.


    If we create a window from the TPUExecute to the end of the RunExecutable trace we expect to see graph execution and all-reduce traces on the device (scroll back up to the top).


    Notice the all_reduce xla_op and the corresponding all-reduce cross-replica sum appearing under the TensorFlow ops. The TensorFlow Name Scope and TensorFlow Ops rows show up in the trace when XLA_HLO_DEBUG is enabled (see the environment variables configured in the Training setup section). They show the annotations propagated to the HLO (High Level Operations) graph.

    Notice also the gaps in the device trace. The duty cycle (i.e., the fraction of each cycle spent in annotated traces) is the fraction of time spent on graph execution on the device. To investigate the gaps, it is often good practice to select the gap duration using the trace viewer’s window tool and then examine the CPU traces to understand what other operations are being executed in that period. For example, in the following trace we select the 72.143ms period where no device trace is observed.


    Examining the same period across the CPU traces (zoomed-in view), we notice three main time chunks:

    1. Overlapping with the IR generation (backward pass) of the next step, followed by the StepMarker (mark_step()) call. MarkStep signals to PyTorch/XLA that the IR graph traversed thus far is to be executed now. If it is a new graph, it will be compiled and optimized first (you will notice a longer StepMarker trace, and also separate process traces annotated "xxyy").

    2. Graph/data is transferred to the server (i.e., transferred from host to TPU device).


    In the case of TPU VM, transfer-to-server is expected to be an inexpensive operation.

    3. XRTExecute is initiated, which loads the program (graph) from the cache if found, or loads the input graph, and eventually triggers TPU execution. (Details of XRTExecute, allocation, load program, signatures, etc. are outside the scope of this case study.)

    Notice also that the IR generation for the following step has already begun while the current step is executed on TPU.

    In summary, the gaps in the device trace are good regions to investigate. Apart from the three chunks or scenarios above, gaps can also be due to an inefficient data pipeline. To investigate, you can add annotations to the data-loading parts of the code and then inspect them in the trace with respect to the device execution. For a good pipeline, data loading and transfer-to-server should overlap with device execution. If this is not the case, the device will be idle for some time waiting for data, and this would be an input pipeline bottleneck.

    Annotation propagation

    PyTorch/XLA training involves the IR graph being translated into an HLO graph, which is then further compiled and optimized by the runtime, and those optimizations do not preserve the initial graph structure. As a result, these annotations, when propagated to HLO graphs and beyond (enabled by XLA_HLO_DEBUG=1), do not appear in the same order. For example, consider the following trace (zooming into the trace shown above).


    This window does not correspond to multiple forward and backward passes in the host process (you notice gaps in the trace where neither the forward nor the train-step tags appear), yet we see the Training_Step, Forward and FINISH_UPDATE traces interspersed here. This is because the graph to which the initial annotations were propagated has undergone multiple optimization passes.

    Conclusion

    Annotation propagation into the host and device traces using the profiler API provides a powerful mechanism for analyzing the performance of your training. In this case study we did not derive any optimizations to reduce training time from this analysis; it was introduced for educational purposes. Such optimizations can improve efficiency, TPU utilization, duty cycle, training ingestion throughput and more.

    Next steps

    As an exercise, we recommend that readers repeat the analysis performed in this study with log_interval set to 1. This exercise will accentuate device-to-host transfer costs and the effects of some of the changes discussed to optimize training performance.

    This concludes part-3 of our three-part series on performance debugging. In part-1, we reviewed the basic performance concepts and command-line debugging, and investigated a transformer-based model training that was slow due to too-frequent device-to-host transfers; it ended with an exercise for readers to improve the training performance. In part-2, we started with the solution to the part-1 exercise and examined another common pattern of performance degradation, the dynamic graph. And finally, in this post we took a deeper dive into performance analysis using annotations and the TensorBoard trace viewer. We hope that the concepts presented in this series help readers debug their training runs effectively with the PyTorch/XLA profiler and derive actionable insights to improve training performance. Readers are encouraged to experiment with their own models and apply the concepts presented here.

    Acknowledgements

    The author would like to thank Jordan Totten, Rajesh Thallam, Karl Weinmeister and Jack Cao (Google) for their patience and generosity. Each of you provided numerous rounds of feedback, without which this post would be riddled with annoying errors. Special thanks to Jordan for patiently testing the case studies presented in this series, finding many bugs and providing valuable suggestions. Thanks to Joe Spisak and Geeta Chauhan (Meta), and Shauheen Zahirazami and Zak Stone (Google), for your encouragement, especially for the feedback to split what would have been too long a post for anyone to read. Thanks to Ronghong Hu (Meta AI) for the feedback and questions on the early revisions, which made the trace discussion, especially, much more accessible and helpful. And finally, thanks to Amanpreet Singh (Meta AI), the author of the MMF framework, for your input in various debugging discussions and case study selection, and for your work building TPU support in MMF; without your help this series would not have been possible.


    Related Article

    PyTorch/XLA: Performance debugging on Cloud TPU VM: Part II

    In this blog post, we build upon the concepts introduced in part-1 and apply these to analyze and improve performance for a case study in...

    Read Article
  • Sprinklr and Google Cloud join forces to help enterprises reimagine their customer experience management strategies Fri, 21 Jan 2022 17:00:00 -0000

    Enterprises are increasingly seeking out technologies that help them create unique experiences for customers with speed and at scale. At the same time, customers want flexibility when deciding where to manage their enterprise data, particularly when it comes to business-critical applications.

    That’s why I’m thrilled that Sprinklr, the unified customer experience management (Unified-CXM) platform for modern enterprises, has partnered with Google Cloud to accelerate its go-to-market strategy and grow awareness among our joint customers. Sprinklr will work closely with our global sales force, benefitting from our deep relationships with enterprises that have chosen to build on Google Cloud.

    In line with Google Cloud’s mission to accelerate every organization’s ability to digitally transform its business through data-powered innovation, Sprinklr’s primary objective is to empower the world’s largest and most loved brands to make their customers happier by listening, learning, and taking action through insights. With this strategic partnership now in place, Sprinklr and Google Cloud will go to market together with the end customer as our sole focus.

    Traditionally, brands have adopted point solutions to manage segments of the customer journey. In isolation, these may work — but they rarely work collaboratively, even when vendors build “Frankenstacks” of disconnected products. These solutions can’t deliver a 360° view of the customer, and often reinforce departmental silos. All of which creates point-solution chaos.

    Sprinklr’s approach is fundamentally different and is the way out of the aforementioned point-solution chaos. As the first platform purpose-built for unified customer experience management (Unified-CXM) and trusted by the enterprise, Sprinklr’s industry-leading AI and powerful Care, Marketing, Research, and Engagement solutions enable the world’s top brands to learn about their customers, understand the marketplace, and reach, engage, and serve customers on all channels to drive business growth. 

    Sprinklr was built from the ground up as a platform-first solution, designed to evolve and grow with the rapid expansion of digital channels and applications. The results? Faster innovation. Stronger performance. And a future-proof strategy for customer engagement on an enterprise scale.


    "Sprinklr works with large, global companies that want flexibility when deciding where to manage their enterprise data and consider our platform a business-critical application,” said Doug Balut, Senior Vice President of Global Alliances, Sprinklr. "Giving our customers the opportunity to manage Sprinklr on Google Cloud empowers them to create engaging customer experiences while maintaining the high security, scalability, and performance they need to run their business.”

    To learn more about this exciting partnership and the challenges we jointly solve for customers, check out the recent conversation between Google Cloud’s VP of Marketing, Sarah Kennedy, and Sprinklr’s Chief Experience Officer, Grad Conn. Or read the press release on the partnership.

  • Develop and debug Kubernetes microservice applications fast with Cloud Code and Skaffold modules Fri, 21 Jan 2022 17:00:00 -0000

    Microservice applications are popular, and for good reason. They offer flexibility over monolithic applications, and superior scalability when containerized and deployed to Kubernetes. To be functional, a microservice application must run all its services.

    Issues arise when you want to build your application as a whole for your CI/CD pipeline and still be able to develop a subset of microservices. This is where Skaffold modules help.

    Skaffold modules give microservice developers the ability to build and deploy parts of their applications separately. This results in an efficient development flow that enables:

    • Iterating on and debugging a subset of microservices

    • Cross-boundary microservice debugging

    • Reduced build time and faster iterations when using remote dependencies and prebuilt artifacts

    Skaffold is a command line tool that facilitates continuous development for Cloud Native applications. Skaffold handles the workflow for building, pushing, and deploying your application, and provides building blocks for creating CI/CD pipelines. 

    To support iterative microservice development for Skaffold users working in IDEs, module support was recently added to Cloud Code. Cloud Code is Google Cloud’s IDE plugin for IntelliJ and Visual Studio Code. With Cloud Code, we’re bringing Skaffold to where you are: developing locally in your favorite IDE.

    This post walks through how to develop and debug a sample microservice application with Skaffold modules from your IDE. We’ll also take a look at how to add Skaffold modules support to your own application.

    Develop and debug microservices separately

    Let’s see Skaffold modules in action with the IntelliJ version of Cloud Code. We’ll try Skaffold modules with Cloud Code’s Guestbook sample. The Guestbook sample is a simple microservice application that’s available in Node.js, Java, Python, and Go.

    If you’re a Visual Studio Code user, install Cloud Code and check out our documentation for creating a sample Kubernetes app and configuring modules.

    Before you begin

    1. Install the Cloud Code plugin for JetBrains IDEs.

    2. Create a new Cloud Code project by opening IntelliJ and navigating to File > New > Project… > Cloud Code: Kubernetes > Java: Guestbook.

    3. Click Next and finish creating the project.

    Run the application

    The Java Guestbook sample has a frontend and backend microservice. As you would expect, the frontend microservice provides the UI, while the backend stores and serves records from a database. For more details about the guestbook sample application, open the sample’s README.md file.

    First, edit the “Develop on Kubernetes” configuration. Under the Run tab, either select an existing Kubernetes cluster or “Deploy locally to a minikube cluster” to spin up a minikube cluster for free.


    Now let’s deploy the full Guestbook application by running the “Develop on Kubernetes” configuration.


    Cloud Code now builds the Guestbook sample application, packages it into containers, and deploys it to the current Kubernetes cluster configured on your machine. Once the application is deployed, the Run tool window lists each step of the process; clicking a step narrows the scope of the displayed logs, allowing you to focus on a specific container.


    Open the Service URLs tab to access the port-forwarded application endpoints locally. You can test that the application is working by clicking the java-guestbook-frontend URL.

    Once you’ve confirmed the application works, click the stop button. Cloud Code stops the application and cleans up the deployment in your cluster.

    Run a single microservice

    You’ve probably noticed that it takes a while to deploy this entire application. Now imagine having to deploy a much larger application with over a dozen microservices when you only want to develop one! This is where Cloud Code and Skaffold modules come into play.

    Let’s edit the “Develop on Kubernetes” run configuration and select the Build/Deploy tab.


    Here you can see all the microservices of the application. For a microservice to appear in this list, it needs to be defined as a Skaffold module in your skaffold.yaml file. skaffold.yaml provides specifications for an application’s workflow. This sample has already done that.

    Let’s build and deploy the “frontend” microservice.


    Click OK and run “Develop on Kubernetes” again. This time Cloud Code only builds and deploys the frontend microservice.


    Without the backend microservice, the frontend is just a UI view with no data. Deploying just the frontend allows testing that the frontend fails gracefully when the backend is unavailable. 

    More than likely, your microservice development and debugging happen in a shared development cluster, where a backend team deploys its microservice so that it’s available for your frontend to interact with. This way, the backend and frontend teams can develop a larger application independently while using the power of Cloud Code IDEs.

    With Skaffold modules, backend and frontend code can live in either the same or separate repositories. Microservices can then be built together with a root skaffold.yaml file when all modules are selected. Each team can then work on their part by developing their module. See our documentation for more info on common Skaffold module use cases.

    The larger your application and the more independent microservices it has, the more productive this workflow becomes.

    Debug a microservice

    Debugging your microservices is almost the same process as running them in your remote Kubernetes cluster. Run the “Develop on Kubernetes” configuration in debug mode.


    Cloud Code ensures the container and application inside it run in debug mode and connects them to the IDE. 

    Now set breakpoints as if you are running locally and step through your application running in your remote cluster! You can find more details about debugging here.

    Configure Skaffold modules for your application

    The Guestbook sample shows one way to use Skaffold modules with a very simple microservice application. If you’re working on a small microservice application, reference the Guestbook sample’s root skaffold.yaml. It provides a great starting point on how to structure your Skaffold configuration file.

    This root configuration lists all the microservices of the application. Each microservice has its own Skaffold configuration that defines how to build and deploy it to Kubernetes. This sample has two Skaffold configurations, one for the frontend and one for the backend.
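
    A minimal sketch of such a root config, assuming the frontend and backend configurations live in sibling directories (the apiVersion shown is illustrative):

      apiVersion: skaffold/v2beta26
      kind: Config
      requires:
        - path: frontend
        - path: backend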

    You could provide alternative root configurations, such as a local development root that configures the backend with a local MySQL or PostgreSQL instance, or that activates specific Skaffold profiles.

    To see Skaffold modules implemented in a larger example, see the Bank of Anthos application. It has 5+ larger microservices configured as Skaffold modules and is fully supported by Cloud Code IDEs. This is a great example of a large application that can be developed with Cloud Code and deployed with Cloud Deploy using the same Skaffold configuration.

    Check out the Skaffold documentation to learn more about configuring Skaffold modules. 

    Next steps

    We encourage you to try configuring your microservice project with Skaffold modules to experience the power and productivity of Skaffold and Cloud Code.

    If you need help, join our GCP Community Slack Channel, #cloud-code, to connect with the community and ask questions.

    To learn more about how Skaffold and Cloud Code work together, see Kubernetes development, simplified—Skaffold is now GA.

    Related Article

    Kubernetes development, simplified—Skaffold is now GA

    Skaffold, a build and deploy automation tool for Kubernetes applications, is generally available.

    Read Article
  • Data considerations for early-stage startups Fri, 21 Jan 2022 17:00:00 -0000

    As lead for analytics and AI solutions at Google Cloud, my team works with startups building on Google Cloud. This puts us in the fortunate position to learn from founders and engineers about how early-stage startups’ investments can either constrain them or position them for success, even at the seed level. In this post, I want to share a few of the best practices to keep in mind as you're building. 

    Understand your value proposition before diving into a technology stack

    If you're launching a startup in the cloud, you're no doubt thinking about a technology stack, but it’s important to step back a bit and think carefully about the major value proposition that your startup offers to your customers. That value proposition is going to fundamentally drive the kind of technology that you should pick.

    For example, does your system need processing in real time, or can it be done in a batch mode? Can you rely on once-a-day insights or do the insights have to come in as events happen?

    Additionally, what kind of latency will your customers face? That latency makes your value proposition either usable or unusable. Early on in Google's development, leaders realized that no one was going to wait more than a few hundred milliseconds for a web page to show them their results, and that realization drove the technology decisions that have allowed Google to scale from being a startup in a garage to being a trillion dollar company. Your startup needs to define its value to customers with this level of specificity before it can build a technology stack suited to its needs. 

    Focus on customer interactions

    A few companies have gracefully pulled off big IT pivots that reshaped their value proposition. Netflix, for example, moved from mostly sending DVDs through the mail to becoming a streaming service and major content producer. That’s a huge shift in the user experience and the technology stack necessary to support it, even if the underlying value proposition (i.e., get content to customers) was broadly the same. But it’s also an outlier. If you’re planning for potential changes of this magnitude, rather than focused on getting your value proposition to users, you probably need to sharpen what that value proposition is.

    Specifically, you need a clear vision of how customers will access and interact with your business. Typically, they'll do so over a website or a mobile app, but there are still so many variables. 

    Are customers going to transmit documents? If so, in what format? Is handwriting supported, or is input limited to typing? Can they use images for optical character recognition? Will it mostly be forms? Will the data be structured or unstructured? If all that sounds a little overwhelming, don’t worry, it’ll seem simpler by the end of this article—but also be aware: we’re just getting warmed up.

    Imagine that most of your customers will access your business via voice, so you know you’ll want to prioritize conversational workflows. That’s a start—but dig deeper. Even if we suppose you’re using Dialogflow, a Google Cloud conversational AI platform that lets you build and deploy virtual agents, we’re still not really seeing the value proposition. How will all this work, from the beginning of a typical full customer interaction to its resolution? How many interactions will have to be facilitated over low-bandwidth connections, for example? When it comes to user interactions, make sure you can see an end-to-end use case.

    Another example: you're building a retail website, and one of your end-to-end use cases involves the customer asking if a certain amount of a given product is in stock, whether it’s one unit of the product, ten or hundreds. If the product is not sufficiently stocked, you want your app to offer similar items that are. Will your technology stack support this end-to-end use case?

    These considerations are not an argument for premature optimization. There’s value in moving fast, getting minimum viable products to users, and then iterating. But in the early stages, you only get one chance to start on the right foot—and how you navigate that chance will influence a lot of dollars and effort down the road. You need to make sure you have business use cases, not just an idea, before you can start designing a technology stack.  

    Here’s how to get in the right frame of mind. Pick three use cases: two that are “bread and butter” and one that is technologically complex.  Make sure your proposed technology stack can support all three, end to end. 

    Default toward higher levels of abstraction

    Now that we’re in the right frame of mind, we’re ready to think about the technology stack more directly. 

    As a startup, you’ll need to conserve resources, and to do that, you’ll want to build at the highest level of abstraction possible for your value proposition. For example, you probably don't want your people setting up clusters. You don't want them configuring things if they can use a fully managed service. You want them focused on building your prototype, not managing infrastructure.

[Image: Canonical Data Stack on Google Cloud]

    This focus has definitely informed how we create products at Google Cloud, as our canonical data stack—Pub/Sub, Dataflow, BigQuery, and Vertex AI—consists of auto-scaling and serverless products.

    But management of infrastructure is not the only place where you should err toward a less-is-more philosophy. 

When it comes to architecture, choose no-code over low-code, and low-code over writing custom code. For example, rather than writing ETL pipelines to transform the data you need before you land it in BigQuery, you could use pre-built connectors to land the raw data directly in BigQuery. That's no code right there. Then, transform the data into the form you need using SQL views directly in the data warehouse. This is called ELT, and it is low code. You will be a lot more agile if you choose an ELT approach over an ETL approach.
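For example, here is a sketch of the "T" in ELT once the raw data has landed: a SQL view created with the BigQuery Python client. The project, dataset, table, and column names are illustrative, and the dataset is assumed to already exist:

    # ELT: raw data is already in BigQuery (loaded by a pre-built connector);
    # the transformation lives in the warehouse as a SQL view.
    from google.cloud import bigquery

    client = bigquery.Client()

    view = bigquery.Table("my_project.analytics.daily_orders")
    view.view_query = """
        SELECT DATE(event_time) AS day,
               COUNT(*) AS orders,
               SUM(amount) AS revenue
        FROM `my_project.raw.events`
        WHERE event_type = 'order'
        GROUP BY day
    """
    client.create_table(view)  # the "T" of ELT, expressed as a view

Because the transformation is just SQL in the warehouse, changing it later is a view update, not a pipeline redeploy—that is where the agility comes from.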

Another place is when you choose your ML modeling framework. Don't start with custom TensorFlow models. Start with AutoML. That's no-code. You can invoke AutoML directly from BigQuery, avoiding the need to build complex data and ML pipelines. If necessary, move on to pre-built models from TensorFlow Hub, Hugging Face, and similar sources. That's low-code. Build your own custom ML models only as a last resort.

[Image: No-code, low-code Data Stack on Google Cloud]
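As an illustration of invoking AutoML directly from BigQuery, a single BigQuery ML statement can train a model without a separate pipeline. The sketch below uses placeholder project, dataset, and column names:

    # Train an AutoML regression model directly from BigQuery ML,
    # avoiding a separate data/ML pipeline. Names are placeholders.
    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
    CREATE OR REPLACE MODEL `my_project.analytics.demand_model`
    OPTIONS (
      model_type = 'AUTOML_REGRESSOR',
      input_label_cols = ['units_sold']
    ) AS
    SELECT * FROM `my_project.analytics.training_data`
    """
    client.query(query).result()  # blocks until training completes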

    Focus on getting your vision to market, not chasing technology hype  

The goal is to pick the right technology stack for bringing your vision to market, generating value for customers, conserving resources, and maintaining flexibility for growth. Early IT investments should usually gravitate toward things that preserve flexibility, such as managed services built on standard protocols or open APIs, but they needn't always rush to the flashiest technologies. The answer isn't always ML, for example. The answer might be heuristics to start, with a path to ML once you have collected enough data. You want to make sure that your intelligence layer has enough abstraction that you can implement it with simple rules at first, then replace it with a more robust system as you go along.
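Here is a minimal sketch of that abstraction in Python: callers depend on an interface, so a day-one heuristic can later be swapped for an ML-backed implementation without touching them. The class and function names are illustrative:

    from typing import List, Protocol

    class Recommender(Protocol):
        def recommend(self, user_id: str) -> List[str]: ...

    class RuleBasedRecommender:
        """Day-one heuristic: recommend whatever is trending."""
        def __init__(self, trending: List[str]):
            self.trending = trending

        def recommend(self, user_id: str) -> List[str]:
            return self.trending[:5]

    # Later, a model-backed class with the same interface can replace it,
    # e.g. one that calls a trained Vertex AI endpoint.

    def homepage_recommendations(rec: Recommender, user_id: str) -> List[str]:
        return rec.recommend(user_id)

    print(homepage_recommendations(RuleBasedRecommender(["a", "b", "c"]), "u1"))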

    Launch and iterate fast with these principles 

The preceding discussion is a reminder that your most expensive resource is your people—and that you really want them focused on building your prototype, minimum viable product, or production app. You want to launch fast and iterate fast, and the only way you can do that is by focusing on the things that differentiate you.

    But regardless of the technologies you use, the bottom line is the same: follow these four principles. 

    • Figure out your major value proposition and design your tech stack around it. 

    • Be very careful about user interactions. User experience is super important; you need to make sure you deliver the kind of experience that your customers have grown to expect.

• When you're building, pick the highest level of abstraction possible—the most fully managed tools and no-code/low-code frameworks that give you the functionality you need.

    • Instead of choosing new or flashy technologies, consider if you can build a “good enough” minimum viable product quickly and come back to a better implementation later. 

    To learn more about why startups are choosing Google Cloud, click here.

    Related Article

    Get started, build and grow your Startup on Google Cloud

    Announcing the launch of Google Cloud Technical Guides for Startups, a video series for technical enablement aimed at helping startups to...

    Read Article
  • TELUS accelerates modernization with data science Fri, 21 Jan 2022 17:00:00 -0000

    Editor’s note: TELUS, a Canadian communications and information technology company, has transformed their approach to data science with Google Cloud services. Here’s how they’ve broken down data silos, accelerated data engineering tasks, and democratized data access.  


As a dynamic, world-leading communications and information technology company, TELUS is always at the forefront of innovation. We have made significant progress in our digitization journey over the last few years, modernizing our systems and networks to create new and improved growth opportunities. However, in that process, we have inevitably accumulated vast amounts of important data across various systems, resulting in data access challenges for our teams.

    Our vision was to unite siloed data, democratize it across our organization, and enable our data science team to effectively extract meaningful, high quality insights to help with important business decisions. Partnering with Google Cloud, we’ve approached our cloud transformation in a way that allows us to unlock the true potential of data to create valuable insights and deliver exceptional customer experiences.

    From siloed data assets to a single source of truth

We began this transformation by cleaning up and starting the migration of our siloed data assets to Google Cloud, aggregating all our data points into a common data layer built with BigQuery, Dataflow, Cloud Composer, Cloud Bigtable, and Cloud Storage.

    Data governance has been crucial through this to ensure we have a single reliable source of truth for our data. We created a TELUS Metadata Repository to document information about our data assets (provenance, business description, privacy and security classification) in order to improve our team efficiency and streamline productivity.

    Democratized data unlocks new use cases 

    Our partnership with Google Cloud has also helped us to democratize data across the organization, allowing each business unit to share their data with each other and collaborate more effectively. 

    At TELUS, our data spans beyond just the telecom industry across multiple business units such as  healthcare, security, and agriculture. By bringing all those different datasets together, we’re seeing new use cases that help us improve the lives of Canadians. As an example, our Data for Good program was instrumental in helping track the spread of the COVID-19 virus during the global pandemic. By providing governments, health authorities, and academic researchers a platform to access strongly de-identified and aggregated network mobility data free of charge, the program assisted in initiatives to flatten the curve of COVID-19, reduce its health and economic impacts, and contribute to studies that could prevent or mitigate future phases of COVID-19 or other pandemics.

    Unifying the data and AI lifecycle

Our data science team has made tremendous strides with Google Cloud services to reduce machine learning (ML) model development and deployment time. We have been testing very sophisticated compute instances on Google Cloud, along with Vertex AI, to accelerate our journey by unifying the data and AI lifecycles: from data exploration, aggregation, and cleaning to model building, training, testing, and finally deploying ML models in production. In addition to accelerating ML model development and experimentation, Vertex AI will let our data scientists implement Machine Learning Operations (MLOps) to efficiently build and manage ML projects throughout the development lifecycle.

    Accelerating innovation

    Moving to the cloud has not only accelerated our model development, it has also allowed us to innovate faster. We transitioned from a waterfall to an agile mindset early on, but we needed an even faster framework to trial many ideas in just a few hours. We’re trying to empower the team to rapidly test their ideas, accelerate their iterations, and minimize the impact of their failures. This has enabled us to determine within just a few days—as opposed to months—if a project will be successful or not and therefore, minimize wasted time. 

    Privacy and security remain at the forefront

As we grow our data science practice and use these tools more widely throughout our organization, keeping our data secure remains a top priority. We've established the TELUS Trust Model to reflect our commitment to protecting our customers' personal information. To build trust with our stakeholders, we always use this data with respect and make sure that security and privacy are built into every step of our projects. Using Google Cloud allows us to retain complete control over our data and ensure that any information we use for analysis is always de-identified, so it can't be attributed to any single subscriber. Google Cloud provides its Data Loss Prevention (DLP) service in a way that doesn't slow down our time to retrieve insights. In addition, we leverage Google Cloud locations in Montreal and Toronto to help support data sovereignty requirements and ensure that our customer information never leaves Canada.

    Data champions shape the TELUS culture

Since we've transitioned to Google Cloud, TELUS has also undergone a significant cultural shift. We're driving TELUS to become a next-generation, insights-driven organization that creates valuable analytics to maximize business outcomes and deliver superior experiences to our customers. Moving forward, we are excited to continue leveraging our insights, AI skills, and technology to create meaningful human and social outcomes and help build stronger, healthier, and more sustainable communities.

    Learn more about TELUS’ Data for Good initiatives and overall data cloud use cases here.

    Related Article

    Google joins the O-RAN ALLIANCE to advance telecommunication networks

    Google Cloud joins O-RAN ALLIANCE to drive transformative change in telecommunications.

    Read Article
  • Find products faster with the new All products page Thu, 20 Jan 2022 18:30:00 -0000

    Welcome to a new way of exploring Google Cloud products. Finding your favorite products and discovering new ones requires a user interface that’s easy-to-use, clear, informative, and delightful. Google Cloud users have primarily used our side menu to navigate, but with almost one hundred products and growing, it’s safe to say our product list has outgrown the side menu. Over time, we have listened long and hard to feedback from Google Cloud users, who have highlighted challenges navigating the console to explore our products. We've heard over and over that it's difficult and time consuming to scroll through the long list of products and remember what each one offers. 

That’s why we created a new All products page to help you easily navigate to your favorite Google Cloud products. This page showcases all of the Google Cloud products as well as our key partner products in one easy-to-navigate place. With one click, you can discover the product that is right for your solution.

[Image: All products page]

    Explore all Google Cloud products

The page is organized into different categories, including Management (e.g., IAM, Billing), Compute, Storage, Operations, Security, CI/CD, Artificial Intelligence, Support, and more. Quickly jump to the category of interest through the panel on the left, or scroll the entire page. Then you can click each product name to navigate directly to the product homepage. Each product listing also includes a short and a long description so you can quickly understand what a product does and whether it fits your needs. This lets you compare categories at a glance, saving you the hassle of digging up product overviews elsewhere.

    Under each product, you’ll also find a link to documentation and Quickstarts so you can understand it in more depth and try it out right away, removing the extra step of navigating to documentation in another tab.

[Image: Exploring the All products page]

    Customize your navigation

    To make navigation even easier, you can pin products directly from the All products page, and they will show up at the top of your side menu. You can also customize your navigation by reordering your pins in the side menu. That way, you can quickly access your most-used products directly from the side menu instead of scrolling through the panel or the All products page. 

[Image: Customize your products through the All products page]

    Save time and get more done faster

With the new All products page you can save time scrolling and cut straight to the good stuff: finding your products, discovering new ones, learning, and getting hands-on. Try it out for yourself by heading to the Google Cloud Console. Click the side panel and click “View All products,” or look for the callout to try the All products page on the home dashboard.

[Image: Navigate to the All products page]

If you have any feedback about this new experience, I want to hear it! Reach out to me on Twitter at @stephr_wong or on LinkedIn at stephrwong.

  • Bio-pharma organizations can now leverage the groundbreaking protein folding system, AlphaFold, with Vertex AI Thu, 20 Jan 2022 18:30:00 -0000

    At Google Cloud, we believe the products we bring to market should be strongly informed by our research efforts across Alphabet. For example, Vertex AI was ideated, incubated and developed based on the pioneering research from Google’s research entities. Features like Vertex AI Forecast, Explainable AI, Vertex AI Neural Architecture Search (NAS) and Vertex AI Matching Engine were born out of discoveries by Google’s researchers, internally tested and deployed, and shared with data scientists across the globe as an enterprise-ready solution, each within a matter of a few short years. 

Today, we’re proud to announce another deep integration between Google Cloud and Alphabet’s AI research organizations: the ability in Vertex AI to run DeepMind’s groundbreaking protein structure prediction system, AlphaFold.

    We expect this capability to be a boon for data scientists and organizations of all types in the bio-pharma space, from those developing treatments for diseases to those creating new synthetic biomaterials. We’re thrilled to see Alphabet AI research continue to shape products and contribute to platforms on which Google Cloud customers can build. 

This guide provides a way to easily predict the structure of a protein (or multiple proteins) using a simplified version of AlphaFold running in Vertex AI. For most targets, this method obtains predictions that are near-identical in accuracy to the full version. To learn more about how to correctly interpret these predictions, take a look at the "Using the AlphaFold predictions" section of this blog post below.

    Please refer to the Supplementary Information for a detailed description of the method.

    Solution Overview

    Vertex AI lets you develop the entire data science/machine learning workflow in a single development environment, helping you deploy models faster, with fewer lines of code and fewer distractions.

For running AlphaFold, we chose Vertex AI Workbench user-managed notebooks, which use Jupyter notebooks and offer both various preinstalled suites of deep learning packages and full control over the environment. We also use Google Cloud Storage and Google Cloud Artifact Registry, as shown in the architecture diagram below.

[Figure 1: Solution Overview]

We provide a customized Docker image in Artifact Registry, with preinstalled packages for launching a notebook instance in Vertex AI Workbench and prerequisites for running AlphaFold. For users who want to further customize the Docker image for the notebook instance, we also provide the Dockerfile and a build script you can build upon. You can find the notebook, the Dockerfile, and the build script in the Vertex AI community content.

    Getting Started

    Vertex AI Workbench offers an end-to-end notebook-based production environment that can be preconfigured with the runtime dependencies necessary to run AlphaFold. With user-managed notebooks, you can configure a GPU accelerator to run AlphaFold using JAX, without having to install and manage drivers or JupyterLab instances. The following is a step-by-step walkthrough for launching a demonstration notebook that can predict the structure of a protein using a slightly simplified version of AlphaFold that does not use homologous protein structures or the full-sized BFD sequence database.

    1. If you are new to Google Cloud, we suggest familiarizing yourself with the materials on the Getting Started page, and creating a first project to host the VM Instance that will manage the tutorial notebook. Once you have created a project, proceed to step 2 below.

    2. Navigate to the tutorial notebook, hosted in the vertex-ai-samples repository on GitHub.

    3. Launch the notebook on Vertex Workbench via the “Launch this Notebook in Vertex AI Workbench” link. This will redirect to the Google Cloud Platform Console and open Vertex AI Workbench using the last project that you used.

    4. If needed, select your project using the blue header at the top of the screen, on the left.
    • If you have multiple Google Cloud user accounts, make sure you select the appropriate account using the icon on the right.
    • First-time users will be prompted to take a tutorial titled “Deploy a notebook on AI Platform,” with the start button appearing on the bottom-right corner of the screen.
    • This tutorial is necessary for first-time users; it will help orient you to the Workbench, as well as configure billing and enable the Notebooks API (both required).
    • A full billing account is required for GPU acceleration, and is strongly recommended. Learn more here.

5. Enter a name for the notebook but don’t click “Create” just yet; you still need to configure some “Advanced Options.” If you have used Vertex AI Workbench before, you may first need to select “Create a new notebook.”

    6. GPU acceleration is strongly recommended for this tutorial. When using GPU acceleration, you should ensure that you have sufficient accelerator quota for your project. 
    • Total GPU quota: “GPUs (all regions)”
    • Quota for your specific GPU type: “NVIDIA V100 GPUs per region”

Enter the quota into the “filter” box and ensure Limit > 0. If needed, you can obtain small quota increases in only a few minutes by selecting the checkbox and clicking “Edit Quotas.”

    7. Next, select “Advanced Options,” on the left, which will give you the remaining menus to configure:
    • Under Environment, configure “Custom container” (first in the drop-down menu) 
    • In the “Docker container image” text box, enter (without clicking “select”): us-west1-docker.pkg.dev/cloud-devrel-public-resources/alphafold/alphafold-on-gcp:latest
    • Suggested VM configuration:
      • Machine type: n1-standard-8 (8 CPUs, 30 GB RAM)

      • GPU type: NVIDIA Tesla V100 GPU accelerator (recommended).

• Longer proteins may require a more powerful GPU; check your quota for your specific configuration, and request a quota increase if necessary (as in Step 6).

        • If you don’t see the GPU that you want, you might need to change your Region / Zone settings from Step 5. Learn more here.

      • Number of GPUs: 1

      • Make sure the check box “Install NVIDIA GPU driver automatically for me” is checked.

    • The defaults work for the rest of the menu items. Press Create!

    8. After several minutes, a virtual machine will be created and you will be redirected to a JupyterLab instance. When launching, you may need to confirm the connection to the Jupyter server running on the VM; click Confirm:


9. The notebook is ready to run! From the menu, select Run > Run All Cells to evaluate the notebook top-to-bottom, or run each cell individually by highlighting it and pressing Shift+Return. The notebook has detailed instructions for every step, such as where to add the sequence(s) of a protein you want to fold.

    10. Congratulations, you've just folded a protein using AlphaFold on the Vertex AI Workbench!

    11. When you are done with the tutorial, you should stop the host VM instance in the “Vertex AI” > ”Workbench” menu to avoid any unnecessary billing. 

    Using the AlphaFold predictions

    The protein structure that you just predicted has automatically been saved as ‘selected_prediction.pdb’ to the ‘prediction’ folder of your instance. To download it, use the File Browser on the left side to navigate to the ‘prediction’ folder, then right click on the ‘selected_prediction.pdb’ file and select ‘Download’. You can then use this file in your own viewers and pipelines.

    You can also explore your prediction directly in the notebook by looking at it in the 3D viewer. While many predictions are highly accurate, it should be noted that a small proportion will likely be of lower accuracy. To help you interpret the prediction, take a look at the model confidence (the color of the 3D structure) as well as the Predicted LDDT and Predicted Aligned Error figures in the notebook. You can find out more about these metrics and how to interpret AlphaFold structures on this page and in this FAQ.
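If you want to script that exploration yourself, a short snippet along these lines renders the downloaded file. It assumes the py3Dmol package (pip install py3Dmol), which is our illustrative choice here and not necessarily what the tutorial notebook uses internally:

    # Render the AlphaFold prediction in a Jupyter cell with py3Dmol.
    # Assumes 'selected_prediction.pdb' was downloaded as described above.
    import py3Dmol

    with open("selected_prediction.pdb") as f:
        pdb = f.read()

    view = py3Dmol.view(width=600, height=400)
    view.addModel(pdb, "pdb")
    view.setStyle({"cartoon": {"color": "spectrum"}})
    view.zoomTo()
    view.show()  # renders inline in a Jupyter environment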

If you use AlphaFold (e.g., in publications, services, or products), please cite the AlphaFold paper and, if applicable, the AlphaFold-Multimer paper.

    Looking toward innovation in biology and medicine

    In this guide, we covered how to get started with AlphaFold using Vertex AI, enabling a secure, scalable, and configurable environment for research in the Cloud. If you would like to learn more about AlphaFold, the scientific paper and source code are both openly accessible. We hope that insights you and others in the scientific community make will unlock many exciting future advances in our understanding of biology and medicine.

    Related Article

    Vertex AI NAS: higher accuracy and lower latency for complex ML models

    How Google Cloud’s Vertex AI Neural Architecture Search (NAS) accelerates time-to-value for sophisticated ML workloads.

    Read Article
  • Data Fusion SAP accelerator for Procure 2 Pay Thu, 20 Jan 2022 17:00:00 -0000

Earlier this year, we announced SAP integration with Cloud Data Fusion, Google Cloud’s native data integration platform, to seamlessly move data out of SAP Business Suite, SAP ERP, and S/4HANA. This was followed by the release of our accelerator for SAP Order to Cash, which simplifies integrating SAP Order to Cash data into BigQuery and allows organizations to gain invaluable insights using Looker.

Even with a reliable and scalable integration platform like Cloud Data Fusion, creating pipelines to incorporate mission-critical enterprise data into warehouses can be tedious. To reduce that effort, the accelerator includes:

    • An SAP connector

    • Staging and transformation pipelines in the Data Fusion hub to load the staging schema on BigQuery

    • Target schemas in Google BigQuery for the staging, dimension, and fact datasets

    • A predefined Looker block with a semantic model and operational dashboards

    With these integrations, teams spend less time developing data pipelines, defining key metrics, and building out visualizations and instead spend more time analyzing the information and making informed business decisions. 

    We are now releasing our accelerator for the procure-to-pay process. Let us look into the details of the process and what the accelerator provides.

    What is Procure to Pay?

Procure-to-pay is a critical business process that includes requisition processing, sourcing, purchase order processing, goods receiving, accounts payable management, and reporting. A typical procure-to-pay process involves the creation and maintenance of vendors, direct and indirect materials, purchase requisitions, purchase orders, goods receipts, and billing. Most enterprises depend on these business processes, and they measure success using key performance indicators (KPIs) around purchase orders, suppliers, cost, and more. As such, it becomes critical for key stakeholders to get insight into metrics like procurement cost, cost avoidance, on-contract vs. off-contract spend, supplier scorecards, cost savings, and overall procurement ROI to effectively measure the health of procurement, develop new sources of supply, and shape sourcing strategies.

    SAP Accelerator for Procure to Pay

As a part of the Google Cloud SAP Accelerators, we are excited to announce the SAP accelerator for procure-to-pay. These accelerators are blueprints and defined content that give customers a quick start for procure-to-pay scenarios and can be customized to specific customer needs.

    Google offers the following components as a part of the accelerator:

    • Data integration and transformation using Google Cloud Data Fusion

      • SAP Connector: SAP Table Reader connector that can be used for initial data load for all tables and incremental data load for selective tables

      • Staging Pipelines: These pipelines bring the raw data from SAP along with a mapping of abbreviated column names to useful English column names.

      • Transformation Pipelines: These are simplified BQ Execute pipelines that have embedded SQL statements that can be easily customized for specific customer scenarios. 

[Image: Critical pipelines and key business entities in Cloud Data Fusion for Procure to Pay]
    • Target Schemas in Google BigQuery

• Staging Dataset: The staging dataset is where data extracted from SAP through the staging pipelines lands.

• Dimension Dataset: After the staging dataset has landed, transformation pipelines are used to transform and capture key fields used for reporting. In the procure-to-pay accelerator, two tables are created, supplier_dimension and material_dimension; they contain information about each supplier and material.

• Fact Dataset: In addition to the dimension dataset, a fact dataset is also created, which contains tables that can then be aggregated in a business intelligence tool to calculate KPIs of interest. The procure-to-pay accelerator provides four tables, purchase_order_fact, goods_receipt_fact, invoice_fact, and accounting_fact, which contain metrics around purchase orders, goods receipts, invoices, and accounting postings.

    • Looker Block

• Pre-built Semantic Model: The Looker block can be installed through Looker’s in-app marketplace and comes with a pre-packaged LookML model. Here, we have defined KPIs that can be used directly or further customized based on organizational needs.

• Operational Dashboards: Beyond the LookML model, this block comes with four dashboards that allow users to jump right in and begin analyzing their SAP data. One dashboard presents high-level trends in procure-to-pay, while the others allow analysts to drill in and focus on purchase orders, goods receipts, and suppliers.


    Getting Started with SAP Accelerator for Procure to Pay

For information on how to configure and run the Cloud Data Fusion pipelines, see the SAP Table Reader Guide and the blog. You can find additional information on the SAP Accelerator for Procure to Pay here.

The Looker block will be released for preview in the Looker marketplace. The LookML powering the block is publicly available in this GitHub repository. You can find details on accessing the Looker marketplace and installing the block here. If you do not have access to a Looker instance and want to try out the block, you can sign up for a free trial here.

    Related Article

    Data Fusion SAP Connectors

    Unlock the value of your SAP data on Google Cloud with Data Fusion SAP connectors.

    Read Article
  • Data Fusion SAP Connectors Thu, 20 Jan 2022 17:00:00 -0000

Businesses today have a growing demand for data analysis and insight-based action. More often than not, the valuable data driving these actions is in mission-critical operational systems. Among the applications in the market today, SAP is the leading provider of ERP software, and Google Cloud is introducing integration with SAP to help unlock the value of SAP data quickly and easily.

Google Cloud’s native data integration platform Cloud Data Fusion now offers an additional capability to seamlessly get data out of SAP Business Suite, SAP ERP, and S/4HANA. Cloud Data Fusion is a fully managed, cloud-native data integration and ingestion service that helps ETL developers, data engineers, and business analysts efficiently build and manage ETL/ELT pipelines. These pipelines accelerate the creation of data warehouses, data marts, and data lakes on BigQuery, or operational reporting systems on Cloud SQL, Spanner, or other systems.

To simplify the unlocking of SAP data, today we’re announcing the general availability of SAP ODP, along with SAP SLT and SAP OData in Preview. This gives customers the flexibility to unlock data from SAP ECC and S/4HANA in batch or in real time, in an easy, code-free way. This blog describes three connectors: SAP ODP, SAP SLT, and SAP OData.

SAP ODP (Operational Data Provisioning) Connector GA

Transfer full or delta data from SAP to BigQuery or other systems, using ERP extractors (DataSources).

    In Cloud Data Fusion’s Pipeline Studio, you can add an SAP source Datasource to a data pipeline. If needed, add a Transform option, like Wrangler. Then add a sink like BigQuery or GCS.


There are two SAP connection options: direct, or via a load balancer.

Data can be transferred in two modes: Full (all data) and Sync (automatic selection based on the previous execution).

Sync mode automatically determines whether full, delta (incremental), or recovery (recover data from the last execution) mode should be run, based on the previous execution type and status available in SAP. It extracts full data in the initial pipeline execution (ODP mode F) and changed data in subsequent pipeline executions (ODP modes D, R).

    Clicking Validate will check the properties, connect to SAP and retrieve the schema. 


    SLT (SAP Landscape Transformation) Connector Public Preview 

    SAP SLT (Landscape Transformation Server) is an SAP solution dedicated to replicating data within the SAP ecosystem and to external systems. 

Replicate data from SAP to BigQuery, using SAP SLT.

There are a few configuration prerequisites for SLT:

• deploying the SAP transport,

• setting up the replication configuration, and

• installing the Google ABAP SDK.

In Cloud Data Fusion, the user needs to provide the path to the SAP JCo libraries downloaded from SAP.com.

    Google Cloud Storage is used as the staging area for the data pushed out of SLT, so make sure to configure the capacity/quota appropriately.  

In the Replication studio, you can add an SAP SLT Replicator source to a replication job. Then add the BigQuery target.


    To configure the SLT plugin, provide the connection parameters (SAP host, user, password) and SLT GUID and Mass Transfer ID. Then select the tables to be replicated.


SAP OData Public Preview

Transfer data from SAP to BigQuery or other systems, using SAP OData services.

In Cloud Data Fusion’s Pipeline Studio, you can add an SAP OData Service source to a data pipeline. If needed, add a Transform option, like Wrangler. Then add a sink like BigQuery or GCS.


The connection to SAP is made over HTTP or HTTPS. Authentication is via SAP user and password; an optional X.509 client certificate can be used to improve security.

    The user must specify an OData base URL, a service name and entity to extract. Clicking Validate will check the properties, connect to SAP and retrieve the schema.


    Related Article

    Data Fusion SAP accelerator for Procure 2 Pay

    Google Cloud Data Fusion accelerator for SAP Procure to Pay, consisting of SAP connector, pipeline templates, target BigQuery schemas and...

    Read Article
  • Encrypt Data Fusion data and metadata using Customer Managed Encryption Keys (CMEK) Thu, 20 Jan 2022 17:00:00 -0000

    We are pleased to announce the general availability of Customer Managed Encryption Keys (CMEK) integration for Cloud Data Fusion. CMEK enables encryption of both user data and metadata at rest with a key that you can control through Cloud Key Management Service (KMS). This capability will help meet the security, privacy and compliance requirements of CDF customers (particularly in regulated industries) for mission-critical workloads. 

Data Fusion already supported encrypting all user data generated on popular Google Cloud services such as Cloud Storage, BigQuery, and Cloud Spanner with CMEK. This release takes it a step further by allowing customers to use their own keys for encrypting Data Fusion metadata at rest. In particular, this latest CMEK integration provides users control over encryption keys for the data written to Google internal resources in tenant projects and data written by Cloud Data Fusion pipelines, including:

    • Pipeline logs and metadata

    • Dataproc cluster metadata

    • Various Cloud Storage, BigQuery, Pub/Sub, and Cloud Spanner data sinks, actions, and sources

    Getting started with CMEK for Cloud Data Fusion

    1. Protecting Data Fusion metadata at rest using CMEK 
    When you create, run and manage data pipelines using Data Fusion, various types of metadata such as pipeline specifications, pipeline artifacts, run history, logs and metrics, as well as lineage and discovery metadata are stored in Data Fusion’s metadata repository in a tenant project. This metadata can now be easily encrypted using CMEK by simply providing the full CMEK resource name while creating the Data Fusion instance, as shown in the picture below. Note that the encryption mechanism of an instance cannot be changed after creation. In order to specify the CMEK resource, follow the steps below, while creating a Data Fusion instance:
    • Open the Advanced section of the instance creation form
    • Select the “Use a customer-managed encryption key (CMEK)” option in the Encryption section.
    • Choose from a list of Customer Managed Encryption Keys, or specify a key manually by entering its full resource name (in the format projects/project-name/locations/global/keyRings/my-keyring/cryptoKeys/my-key)

Once you’ve selected or specified a key, you may also need to grant both the Data Fusion service account and the default Compute Engine service account (used by default for running pipelines on Dataproc clusters) permission to encrypt and decrypt using the key. This can be done by granting the cloudkms.cryptoKeyEncrypterDecrypter role to these service accounts, right in the same UI, by clicking the GRANT button.
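If you prefer to script that grant rather than use the UI, a sketch with the Cloud KMS Python client might look like the following. The project, key ring, key, and service account address are placeholders to substitute with your own values:

    # Grant a service account permission to encrypt/decrypt with a CMEK key.
    # All resource names below are placeholders.
    from google.cloud import kms

    client = kms.KeyManagementServiceClient()
    key_name = client.crypto_key_path(
        "my-project", "global", "my-keyring", "my-key"
    )

    policy = client.get_iam_policy(request={"resource": key_name})
    policy.bindings.add(
        role="roles/cloudkms.cryptoKeyEncrypterDecrypter",
        members=[
            "serviceAccount:my-datafusion-sa@my-project.iam.gserviceaccount.com",
        ],
    )
    client.set_iam_policy(request={"resource": key_name, "policy": policy})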

    2. Protecting user data at rest using CMEK in Data Fusion pipelines

    In addition to protecting metadata at rest, you can also protect any newly created resources in supported Google Cloud services such as Cloud Storage, BigQuery, Cloud Spanner, Pub/Sub, and more using CMEK. In order to protect your newly created data using CMEK in Data Fusion pipelines, you have a couple of options:

    a. Specify the full CMEK resource name in the configuration of the respective sink. This is useful when you want to (potentially) protect the data in each sink with a different key. Some examples of CMEK being used to protect data written through Data Fusion sinks are below:
• BigQuery sink

    • Cloud Storage (GCS) sink

    • Cloud Spanner sink

    • Pub/Sub sink
    b. Specify the full CMEK resource name as a preference. This is useful when you want to use the same CMEK to protect newly created data in all sinks in a given pipeline, namespace or instance. In order to do so, specify the full CMEK resource name as the preference key gcp.cmek.key.name at the pipeline, namespace or instance level. 

• Pipeline level: At the pipeline level, the CMEK key can be set either as a runtime argument (if you only want to set it for a particular run) or as a pipeline-level preference (if you want to set it for all pipeline runs).


    • Namespace level: At the namespace level, the CMEK key can be set as a preference on the namespace details page. All CMEK-supported sinks in such a namespace will use this key unless a key is explicitly provided either at the pipeline level or in the specific sink’s plugin configuration.


    • Instance level: At the instance level, the CMEK key can be set as a preference on the System Admin page. All CMEK-supported sinks on the instance will use this key unless a key is explicitly provided either at the namespace level, the pipeline level or in the specific sink’s plugin configuration.


    Priority order for CMEK for user data

    Another key feature to note with CMEK for user data in Data Fusion is the priority order in which the key is chosen. As we have already seen in the previous section, CMEK can be specified at various levels in Data Fusion. These configurations follow the priority order below:

[Image: CMEK configuration priority order in Data Fusion]

    As you can see, CMEK in instance preferences has the lowest precedence, while CMEK in the sink plugin config has the highest precedence. You can use this powerful capability to appropriately set CMEK in your Data Fusion pipelines.

We are excited to roll out this critical feature to Data Fusion customers. For more information about using CMEK with Data Fusion, please refer to the documentation. For a list of Cloud Data Fusion plugins that support CMEK, see the supported plugins. We are committed to providing a secure and compliant cloud data integration service in Cloud Data Fusion. Stay tuned for more updates in this area in the future.

    Related Article

    Understanding data pipeline security in Cloud Data Fusion

    Building more secure ELT and ETL pipelines in the cloud can help protect your data. See how you can easily build integrated pipelines wit...

    Read Article
  • How to publish applications to our users globally with Cloud DNS Routing policies? Thu, 20 Jan 2022 17:00:00 -0000

    When building applications that are critical to your business, one key consideration is always high availability. In Google Cloud, we recommend building your strategic applications on a multi-regional architecture. In this article, we will see how Cloud DNS routing policies can help simplify your multi-regional design.

As an example, let’s take a web application that is internal to our company, such as a knowledge-sharing wiki application. It uses a classic two-tier architecture: front-end servers tasked with serving web requests from our engineers, and back-end servers containing the data for our application.

This application is used by our engineers based in the US (San Francisco), Europe (Paris), and Asia (Tokyo), so we decided to deploy our servers in three Google Cloud regions for better latency, better performance, and lower cost.

[Image: High-level design]

    In each region, the wiki application is exposed via an Internal Load Balancer (ILB), which engineers connect to over an Interconnect or Cloud VPN connection. 

    Now our challenge is determining how to send users to the closest available front-end server. 

Of course, we could use regional hostnames such as <region>.wiki.example.com, where <region> is US, EU, or ASIA. But this puts the onus on the engineers to choose the correct region, exposing unnecessary complexity to our users. Additionally, it means that if the wiki application goes down in a region, the user has to manually change the hostname to another region. Not very user-friendly!

    So how could we design this better? 

Using Cloud DNS routing policies, we could use a single global hostname such as wiki.example.com and use a geo-location policy to resolve this hostname to the endpoint closest to the end user. The geo-location policy will use the GCP region where the Interconnect or VPN lands as the source of the traffic and look for the closest available endpoint.

For example, we would resolve the hostname for US users to the IP address of the US Internal Load Balancer, as in the diagram below:

[Image: DNS resolution based on the location of the user]

    This allows us to have a simple configuration on the client side and to ensure a great user experience.

    The Cloud DNS routing policy configuration would look like this:
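(A sketch using the google-api-python-client against the Cloud DNS API; the project, zone, regions, and load-balancer IP addresses below are illustrative placeholders, not a prescribed setup.)

    # Create an A record whose answer depends on where the query originates.
    # Project, zone, hostname, regions, and IPs are placeholders.
    from googleapiclient import discovery

    dns = discovery.build("dns", "v1")
    record = {
        "name": "wiki.example.com.",
        "type": "A",
        "ttl": 30,
        "routingPolicy": {
            "geo": {
                "items": [
                    {"location": "us-west1", "rrdatas": ["10.128.0.10"]},
                    {"location": "europe-west1", "rrdatas": ["10.132.0.10"]},
                    {"location": "asia-northeast1", "rrdatas": ["10.140.0.10"]},
                ]
            }
        },
    }
    dns.resourceRecordSets().create(
        project="my-project", managedZone="example-zone", body=record
    ).execute()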

    See the official documentation page for more information on how to configure Cloud DNS routing policies.

This configuration also helps us improve the reliability of our wiki application: if we were to lose the application in one region due to an incident, we can update the geo-location policy and remove the affected region from the configuration. New users would then resolve to the next closest region, without requiring any action on the client’s side or the application team’s side.

    We have seen how this geo-location feature is great for sending users to the closest resource, but it can also be useful for machine-to-machine traffic. 

    Expanding on our web application example, we would like to ensure that front-end servers all have the same configuration globally and use the back-end servers in the same region. 

    We would configure front-end servers to connect to the global hostname backend.wiki.example.com. The Cloud DNS geo-location policy will use the front-end servers’ GCP region information to resolve this hostname to the closest available backend tier Internal Load Balancer.

[Image: Front-end to back-end communication (instance to instance)]

    Putting it all together, we now have a multi-regional and multi-tiered application with DNS policies to smartly route users to the closest instance of that application for optimal performance and costs. 

    In the next few months, we will introduce even smarter capabilities to Cloud DNS routing policies, such as health checks to allow automatic failovers. We look forward to sharing all these exciting new features with you in another blog post.

    Related Article

    Simplify traffic steering with Cloud DNS routing policies

    Cloud DNS routing policies (geo-location and weighted round robin) helps you define custom ways to steer private and Internet traffic usi...

    Read Article
  • Developing and securing a platform for healthcare innovation with Google Cloud Thu, 20 Jan 2022 17:00:00 -0000

In an industry as highly regulated as healthcare, building a single secure and compliant application that tracks patient care and appointments at a clinic requires a great deal of planning from development and security teams. So, imagine what it would be like to build a solution that includes almost everything related to a patient’s healthcare, including insurance and billing. That’s what Highmark Health (Highmark), a U.S. health and wellness organization that provides millions of customers with health insurance plans, a physician and hospital network, and a diverse portfolio of businesses, decided to do.

Highmark is developing a solution called Living Health to re-imagine healthcare delivery, and it is using Google Cloud and the Google Cloud Professional Services Organization (PSO) to build and maintain the innovation platform supporting this forward-thinking experience. Considering all the personal information that different parties like insurers, specialists, billers and coders, clinics, and hospitals share, Highmark must build security and compliance into every part of the solution.

    In this blog, we look at how Highmark Health and Google are using a technique called “secure-by-design” to address the security, privacy, and compliance aspects of bringing Living Health to life.

    Secure-by-design: Preventive care for development

    In healthcare, preventing an illness or condition is the ideal outcome. Preventive care often involves early intervention—a course of ideas and actions to ward off illness, permanent injury, and so on. Interestingly, when developing a groundbreaking delivery model like Living Health, it’s a good idea to take the same approach to security, privacy, and compliance. 

    That’s why Highmark’s security and technology teams worked with their Google Cloud PSO team to implement secure-by-design for every step of design, development, and operations. Security is built into the entire development process rather than waiting until after implementation to reactively secure the platform or remediate security gaps. 

    It’s analogous to choosing the right brakes for a car before it rolls off the assembly line instead of having an inspector shut down production because the car failed its safety tests. The key aspect of secure-by-design is an underlying application architecture created from foundational building blocks that sit on top of a secure cloud infrastructure. Secure-by-design works to ensure that these building blocks are secure and compliant before moving on to development.

    The entire approach requires security, development, and cloud teams to work together with other stakeholders. Most importantly, it requires a cloud partner, cloud services, and a cloud infrastructure that can support it. 

    Finding the right cloud and services for secure-by-design 

    Highmark chose Google Cloud because of its leadership in analytics, infrastructure services, and platform as a service. In addition, Google Cloud has made strategic investments in healthcare interoperability and innovation, which was another key reason Highmark decided to work with Google. As a result, Highmark felt that Google Cloud and the Google Cloud PSO were best suited for delivering on the vision of Living Health—its security and its outcomes. 

    “Google takes security more seriously than the other providers we considered, which is very important to an organization like us. Cloud applications and infrastructure for healthcare must be secure and compliant,” explains Highmark Vice President and Chief Information Security Officer, Omar Khawaja. 

    Forming a foundation for security and compliance

How does secure-by-design work in practice? It starts with the creation and securing of the foundational platform, allowing teams to harden and enforce specified security controls. It’s a collaborative process that starts with input from cross-functional teams—not just technology teams—using terms they understand, so that everyone has a stake in the design.

    A strong data governance and protection program classifies and segments workloads based on risk and sensitivity. Teams build multiple layers of defense into the foundational layers to mitigate against key industry risks. Google managed services such as VPC Service Controls help prevent unauthorized access. Automated controls such as those in Data Loss Prevention help teams quickly classify data and identify and respond to potential sources of data risk. Automation capabilities help ensure that security policies are enforced.

    After the foundational work is done, it’s time to assess and apply security controls to the different building blocks, which are Google Cloud services such as Google Kubernetes Engine, Google Compute Engine, and Google Cloud Storage. The goal is to make sure that these and similar building blocks, or any combination of them, do not introduce additional risks and to ensure any identified risks are remediated or mitigated. 

    Enabling use cases, step by step

After the foundational security is established, the secure-by-design program enables the Google Cloud services that developers then use to build the use cases that form Living Health. The service enablement approach allows Highmark to address complexity by providing the controls most relevant for each individual service.

    For each service, the teams begin by determining the risks and the controls that can reduce them. The next step is enforcing preventive and detective controls across various tools. After validation, technical teams can be granted an authorization to operate, also called an ATO. An ATO authorizes the service for development in a use case.

    For use cases with greater data sensitivity, the Highmark teams validate the recommended security controls with an external trust assessor, who uses the HITRUST Common Security Framework, which maps to certifications and compliance such as HIPAA, NIST, GDPR, and more. A certification process follows that can take anywhere from a few weeks to a few months. In addition to certification, there is ongoing monitoring of the environment for events, behavior, control effectiveness, and control lapses or any deviation from the controls.

    The approach simplifies compliance for developers by abstracting compliance requirements away. The process provides developers a set of security requirements written in the language of the cloud, rather than in the language of compliance, providing more prescriptive guidance as they build solutions. Through the secure-by-design program, the Highmark technology and security teams, Google, the business, and the third-party trust assessor all contribute to a secure foundation for any architectural design with enabled Google Cloud services as building blocks. 

    Beating the learning curve 

    Thanks to the Living Health project, the Highmark technology and security teams are trying new methods. They are exploring new tools for building secure applications in the cloud. They are paying close attention to processes and the use case steps and, when necessary, aligning different teams to execute. Because everyone is working together collaboratively toward a shared goal, teams are delivering more things on time and with predictability, which has reduced volatility and surprises. 

    The secrets to success: Bringing everyone to the table early and with humility

    Together, Highmark and Google Cloud PSO have created over 24 secure-by-design building blocks by bringing everyone to the table early and relying on thoughtful, honest communication. Input for the architecture design produced for Highmark came from privacy teams, legal teams, security teams, and the teams that are building the applications. And that degree of collaboration ultimately leads to a much better product because everyone has a shared sense of responsibility and ownership of what was built. 

    Delivering a highly complex solution like Living Health takes significant, more purposeful communication and execution. It is also important to be honest and humble. The security, technology, and Google teams have learned to admit when something isn’t working and to ask for help or ideas for a solution. The teams are also able to accept that they don’t have all the answers, and that they need to figure out solutions by experimenting. Khawaja puts it simply, “That level of humility has been really important and enabled us to have the successes that we've had. And hopefully that'll be something that we continue to retain in our DNA.”

• Retailers now need to "always be pivoting." Here are three moves keeping them going Wed, 19 Jan 2022 17:00:00 -0000

    For years, retailers have been told that they must embrace a litany of new technologies, trends, and imperatives like online shopping, mobile apps, omnichannel, and digital transformation. In search of growth and stability, retailers adopted many of these, only to realize that for every box they ticked, there was another one waiting.

    And then the pandemic hit, along with rising social movements and increasingly harsh weather. Some retailers were more prepared to take on these disruptions than others, which crystallized a new universal truth across the industry: the ability to adapt on the fly became the most important trait to survive and thrive.

Today’s retail landscape has surfaced both existing and new challenges for specialty and department store retailers. Approximately 88% of purchases previously occurred within a store environment. Now, it’s closer to 59%, with the remainder done online or through other omnichannel methods.

    With such constant change and upheaval, it can feel like the mantra now is ABP: always be pivoting. 

    The big question isn’t just how to maintain constant momentum and agility—it’s how to do it without sapping your workforce, your inventory, or your profits in the process. The pivot is now a given. What matters is how you do it.

    Adapting requires a flexible base of technology that allows retailers to shift and scale seamlessly with the needs of the moment. 

They need to be able to leverage real-time insights and enhance customer experiences rapidly, online and in the real world (not to mention the growing hybridization of AR and VR). They need to modernize their stores to power engaging consumer and associate experiences. They need to enhance operations for rapid scaling between full operations and digital-only offerings.

    To help retailers achieve these goals and more, Google Cloud is honing a trio of essential innovations: demand forecasting that harnesses the power of data analytics and artificial intelligence; enhanced product discovery to improve conversion across channels; and the tools to help create the modern store experience.

In other words, here are some of the biggest ways we’re ready to help you pivot.

    Pivot point 1: Harnessing data and AI for demand forecasting with Vertex AI

    One of the greatest challenges for retailers when building organizational flexibility is managing inventory and the supply chain. 

We are in the midst of one of the worst global supply chain crises, stemming from soaring demand and logistics issues brought on by the pandemic. This crisis has only heightened the challenge retailers face when assessing demand and product availability. Even in normal times, mismanaged inventory adds up to a trillion-dollar problem, according to IHL Group: out-of-stocks cost $634 billion in lost sales worldwide each year, while overstocks result in $472 billion in lost revenues due to markdowns.

    On the flipside, optimizing your supply chain can lead to greater profits. For instance, McKinsey predicts that a 10% to 20% improvement in retail supply chain forecasting accuracy is likely to produce a 5% reduction in inventory costs and a 2% to 3% increase in revenues.

    Some of the challenges related to demand forecasting include:

    • Low accuracy leads to excess inventory, missed sales, and pressure on fragile supply chains.

    • Real drivers of product demand are not included, because large datasets are hard to model using traditional methods.

    • Poor accuracy for new product launches and products that have sparse or intermittent demand.

    • Complex models are hard to understand, leading to poor product allocation and low return on investment on promotions.

    • Different departments use different methods, leading to miscommunication and costly reconciliation errors.

    AI-based demand forecasting techniques can help. Vertex AI Forecast supports retailers in maintaining greater inventory flexibility by infusing machine learning into their existing systems. Machine learning and AI-based forecasting models like Vertex AI are able to digest large sets of disparate data, drive analytics and automatically adjust when provided with new information. 

    With these machine learning models, retailers can not only incorporate historical sales data, but also use close to real-time data such as marketing campaigns, web actions like a customer clicking the “add to cart” button on a website, local weather forecasts, and much more. 
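As a sketch of what this looks like in code, the Vertex AI SDK for Python can train a forecasting model along these lines; the dataset resource name, column names, and horizon are illustrative placeholders rather than a prescribed setup:

    # Train a demand-forecasting model on historical sales plus
    # known-in-advance signals. All identifiers are placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    dataset = aiplatform.TimeSeriesDataset(
        "projects/my-project/locations/us-central1/datasets/1234567890"
    )
    job = aiplatform.AutoMLForecastingTrainingJob(
        display_name="demand-forecast",
        optimization_objective="minimize-rmse",
    )
    model = job.run(
        dataset=dataset,
        target_column="units_sold",
        time_column="date",
        time_series_identifier_column="sku",
        available_at_forecast_columns=["date", "promo_flag"],  # known in advance
        unavailable_at_forecast_columns=["units_sold"],        # observed only
        forecast_horizon=28,               # predict four weeks ahead
        data_granularity_unit="day",
        data_granularity_count=1,
    )

Splitting columns into "available at forecast time" (promotions, calendar) and "unavailable" (actual sales) is what lets the model exploit near-real-time signals like the ones described above.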

    Pivot point 2: Enhanced product discovery through AI-powered search and recommendations

    If customers can’t easily find what they are looking for, whether online or at the store, they will turn to someone else. That’s a simple statement, but one with profound impacts.

    In research conducted by The Harris Poll and Google Cloud, we found that over a six month period, 95% of consumers received search results that were not relevant to what they were searching for on a retail website. And roughly 85% of consumers view a brand differently after an unsuccessful search, while 74% say they avoid websites where they’ve experienced search difficulties in the past.

    Each year, retailers lose more than $300 billion to search abandonment, which occurs when a consumer searches for a product on a retailer’s website but does not find what they are looking for. Our product discovery solutions help you surface the right products, to the right customers, at the right time. These solutions include: 

    • Vision Product Search, which is like bringing the augmented reality of Google Lens to a retailer’s own branded mobile app experience. Both shoppers and retail store associates can search for products using an image they’ve photographed or found online and receive a ranked list of similar items.

    • Recommendations AI, which enables retailers to deliver highly personalized recommendations at scale across channels.

    • Retail Search, which provides Google-quality search results on a retailer’s own website and mobile applications.

    All three are powered by Google Cloud, leveraging Google’s advanced understanding of user context and intent, utilizing technology to deliver a seamless experience to every shopper. With these combined capabilities, retailers are able to reduce search abandonment and improve conversions across their digital properties. 
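
    As a rough illustration of what Retail Search looks like from code, the sketch below issues a query through the Retail API’s Python client. The project ID, catalog path, and visitor ID are placeholders; your own serving config (the “placement”) may be named differently.

    # Hypothetical sketch; the placement path and IDs are placeholders.
    from google.cloud import retail_v2

    client = retail_v2.SearchServiceClient()

    request = retail_v2.SearchRequest(
        placement=(
            "projects/my-retail-project/locations/global/"
            "catalogs/default_catalog/placements/default_search"
        ),
        query="waterproof hiking boots",
        visitor_id="visitor-123",  # a stable per-shopper ID
        page_size=10,
    )

    # Results are paged; each result carries the matched product ID.
    for result in client.search(request=request):
        print(result.id)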

    Pivot point 3: Building the modern store

    Stores are no longer places for just browsing and buying. They must be flexible operation centers, ready to pivot to address changing circumstances. The modern store must be multiple things at once: a mini-fulfillment and return center, a recommendation engine, a shopping destination, a fun place to work, and more. 

    Just as retail companies had to embrace omnichannel, stores are now becoming omnichannel centers on their own, mixing the digital and physical into a single location. Retailers can use physical stores as a vehicle to deliver superior customer experiences. This will demand heightened levels of collaboration and cooperation between stores, digital, and tech infrastructure teams, building on the agile ways they have worked together.

    In many ways, it’s about allowing our physical spaces to function more like digital ones. Google Cloud can help by bringing the scalability, security, and reliability of the cloud to the store, allowing physical locations to upgrade infrastructure and modernize their internal and customer-facing applications. 

    Think of it as when a new OS gets released for your phone. It’s the same small, hard box, but the experience can feel radically different. Now, extend that same idea to a digitally enabled store. With the right displays, interfaces, and tools at a given retail location, the team only needs to send an over-the-air update to create radically fresh experiences, ranging from sales displays to fulfillment or employee engagement.

    Such an approach can enable streamlined experiences for both customers and store associates. For instance, when it comes to the modern store’s evolving role as a fulfillment or return center, cloud solutions can help drive efficiency in stores through automation of ordering, replenishment, and fulfillment of omnichannel order selection.  

    Similar tools for personalized product discovery online can be applied to customers in the store, helping them to browse and explore, or even create a tailored shopping experience. 

    The impact of store associates can be maximized by equipping them with technology to provide expertise that drives value-added customer service, and by streamlining operations to increase in-store productivity and lower overhead costs. At the register, customers should be able to enjoy frictionless checkout while retailers ensure reliable, accurate, secure transactions.

    Google Cloud can help retailers transform

    The ability to adapt and pivot to meet today’s changing consumer needs requires that retailers rely on modern tools to obtain operational flexibility. We believe that every company can be a tech company. That every decision is data driven. That every store is physical and digital all at once. That every worker is a tech worker. 

    Google Cloud works with retailers to help them solve their most challenging problems. We have the unique ability to handle massive amounts of unstructured data, in addition to advanced capabilities in AI and ML. Our products and solutions help retailers focus on what’s most important—from improving operations to capturing digital and omnichannel revenue.

    Related Article

    Shopify engineers deliver on peak performance during Black Friday Cyber Monday 2021

    Shopify just experienced a record-breaking Black Friday Cyber Monday. Learn how Shopify works with Google Cloud to handle unprecedented p...

    Read Article
  • New in Google Cloud VMware Engine: Single nodes, certifications and more Wed, 19 Jan 2022 17:00:00 -0000

    We’ve made several updates to Google Cloud VMware Engine in the past few months — today’s post provides a recap of our latest milestones. 

    Google Cloud VMware Engine delivers an enterprise-grade VMware stack running natively in Google Cloud. This fully managed cloud service is one of the fastest paths to the cloud for VMware workloads, requiring no changes to existing applications or operating models, and supports a variety of use cases: rapid data center exit, application lift and shift, disaster recovery, virtual desktop infrastructure, and modernization at your own pace. 

    The service helps our customers save money and time while accelerating their digital transformation journey. In fact, in a study conducted by VMware’s Cloud Economics team, Google Cloud VMware Engine delivered an average of 45% lower TCO compared to on-premises [1]. Further, LIQ, a CRM software company, was able to achieve a 60% reduction in total infrastructure costs compared with two years ago, and a 92% savings rate for storing historical data.

    In June 2021, we announced Autoscale, expansion to Mumbai, and more. 

    Key updates this time around include:

    • Single node private cloud: a time-bound (60-day), single-node, non-production environment for VMware Engine that allows you to run proofs-of-concept.

    • New private clouds will now deploy on vSphere version 7.0 Update 2 and NSX-T version 3.1.2.

    • Preview of NetApp Cloud Volumes Service, enabling independent scaling of datastore storage from compute without adding additional hosts.

    • Service availability in Toronto and expansion into a second zone in Frankfurt and Sydney.

    • Compliance certifications updates: achievement of ISO 27001/27017/27018, SOC 2 Type 2, SOC 3 and PCI-DSS compliance certifications

    • We are also working on the ability to purchase prepay options for 1-year and 3-year commitment terms via the Google Cloud Console.

    Let us look into each of these updates in more depth.

    Single node private cloud: We understand that your cloud transformation decisions do not happen overnight. Often you want to understand the value and benefits of your options by using products through trials and technical validations. To support such scenarios, you can now get started with Google Cloud VMware Engine via a time-bound, 60-day, single-node private cloud. Designed for non-production usage such as pilots and proof-of-concept evaluations, this configuration allows you to explore the capabilities of the service. After 60 days, the single-node private cloud is automatically deleted, along with the workloads and data in it. At any point during those 60 days, you can expand to a production three-node private cloud with a single click. 

    Note: A private cloud must contain at least 3 nodes to be eligible for coverage based on the SLA.

    Upgrades to the core VMware stack: All new VMware Engine private clouds now deploy with VMware vSphere version 7.0 Update 2 and NSX-T version 3.1.2. For existing customers, Google Cloud VMware Engine automatically handles the upgrade of the VMware stack from version 7.0 Update 1 to 7.0 Update 2 and the NSX-T stack from version 3.0 to 3.1.2; customers receive proactive notifications and can select their upgrade window. Read more in our November 2021 service announcement.

    • ESXi: Enhanced administrative capabilities; reduced compute and I/O latency and jitter for latency-sensitive workloads; and more

    • vCenter: Scaled VMware vSphere vMotion operations, security fixes and more. 

    • NSX-T: New events and alarms, support for parallel cluster upgrade, migration from NVDS to VDS and more

    Preview of NetApp Cloud Volumes Service as datastores: This capability will enable you to independently scale your datastore storage without adding additional hosts, thereby saving costs. In October 2021, NetApp announced the integration of NetApp Cloud Volumes Service (CVS) as datastores for Google Cloud VMware Engine. It will enable you to migrate vSphere workloads that require large amounts of VMDK storage to the cloud, addressing the needs of storage-bound workloads and use cases such as disaster recovery. This complements your ability to use NetApp CVS as external storage mounted from within the guest OS of your Google Cloud VMware Engine VMs. 

    Service availability in Toronto: Google Cloud VMware Engine is now available in the Toronto region. This brings the availability of the service to 13 regions globally, enabling our multi-national and regional customers to leverage a VMware-compatible infrastructure-as-a-service platform on Google Cloud.

    Expansion into a second zone in Frankfurt and Sydney: While we provide a four-nines (99.99%) availability SLA in a single zone in each of the 13 regions where the service is available, some customers want even more availability. We are happy to announce that Google Cloud VMware Engine is now available in a second zone in both Frankfurt and Sydney. In addition, we are working on making Google Cloud VMware Engine available in additional zones.

    Compliance certifications updates:

    We enable customers to meet their security and compliance needs for their VMware workloads - with a single operator model. Google manages the Google Cloud VMware Engine infrastructure and the administrative tasks that go with managing the systems, platforms, and VMware stack that supports it. These components run on Google Cloud, which leverages the same secure-by-design infrastructure, built-in protection, and global network that Google uses to protect your information, identities, applications, and devices. 

    One of the areas that we have been working on is adding more compliance certifications to Google Cloud VMware Engine. As you may remember, Google Cloud VMware Engine is covered under the Google Cloud Business Associate Agreement (BAA). Let us take a look at the new certifications we have achieved in the last few months. The certifications below are available for Google Cloud VMware Engine running in Ashburn, Los Angeles, Frankfurt, London, Tokyo, Sydney, Netherlands, Singapore, São Paulo, Montreal, Council Bluffs, and Mumbai. The supported locations are listed in the corresponding audit reports; your Google contact should be able to provide you with those reports.

    ISO Compliance: As of November 4, 2021, Google Cloud VMware Engine is certified as ISO/IEC 27001/27017/27018 compliant. The International Organization for Standardization (ISO) is an independent, non-governmental international organization with a membership of 163 national standards bodies. The ISO/IEC 27000 family of standards helps organizations keep their information assets more secure.

    SOC 2 Type 2 and SOC 3 Compliance: Google Cloud VMware Engine has received SOC 2 Type 2 and SOC 3 reports based on third-party audits. 

    • The SOC 2 is a report based on the Auditing Standards Board of the American Institute of Certified Public Accountants' (AICPA) existing Trust Services Criteria (TSC). The purpose of this report is to evaluate an organization’s information systems relevant to security, availability, processing integrity, confidentiality, and privacy. 

    • Like SOC 2, the SOC 3 report has been developed based on the Auditing Standards Board of the American Institute of Certified Public Accountants’ (AICPA) Trust Service Criteria (TSC). The SOC 3 is a public report of internal controls over security, availability, processing integrity, and confidentiality.

    • Please contact your Google account team if you would like a copy of the report.

    PCI DSS Compliance: Google Cloud VMware Engine has been reviewed by an independent Qualified Security Assessor and determined to be PCI DSS 3.2.1 compliant. This means that the service provides an infrastructure upon which customers may build their own services or applications which store, process, or transmit cardholder data. It is important to note that customers are still responsible for ensuring that their applications are PCI DSS compliant. PCI DSS is a set of network security and business best practices guidelines adopted by the PCI Security Standards Council to establish a “minimum security standard” to protect customers’ payment card information. Google Cloud undergoes at least an annual third-party audit to certify individual products against the PCI DSS.

    Please contact your Google account team if you would like a copy of the reports.

    Prepay via Google Cloud Console: As you are aware, Google Cloud VMware Engine offers monthly as well as prepay options for 1-year and 3-year commitment contracts. Monthly payment options are executable via the Google Cloud Console, but prepay options currently require offline order processing. Prepay options are attractive due to the high discount levels they offer (up to 50%). We are working on enabling the prepay purchasing option directly via the Google Cloud Console. If you are interested in this capability, please contact your Google Sales representative.

    This brings us to the end of our updates this time around. For the latest updates to the service, please bookmark our release notes.


    The authors would like to thank Krishna Chengavalli and Manish Lohani for their contributions to this article.

    1. https://blogs.vmware.com/cloud/2021/07/28/google-cloud-vmware-engine-saves-over-45-on-tco-in-first-study/

  • Keep tabs on your tables: Cloud SQL for MySQL launches database auditing Wed, 19 Jan 2022 17:00:00 -0000

    If you manage sensitive data in your MySQL database, you might be obligated to record and monitor user database activity, especially if you work in a regulated industry. Although you could set up MySQL’s slow query log or general log to create an audit trail of user activity, these logs significantly impact database performance and aren’t formatted optimally for auditing. Purpose-built, open source audit plugins are better, but they lack some of the advanced security features that enterprise users need, such as rule-based auditing and results masking.

    Cloud SQL for MySQL has developed a new audit plugin, the Cloud SQL for MySQL Audit Plugin, that offers enterprise-grade database auditing to help you maintain a strong, compliant security posture. You can now define audit rules that govern which database activity is recorded; that activity is captured in the form of database audit logs. The plugin masks sensitive data, such as user passwords, out of the audit logs, and processed audit logs are then sent to Cloud Logging, where you can view them to understand who performed what operations on which data, and when. You can also route these logs through a user-defined log sink to a Google Cloud Storage bucket or BigQuery for long-term, compliance-driven storage, or to Splunk or another log management tool to detect unusual activity in real time. 

    How to audit a MySQL database

    Say you’re a security engineer at Money Buckets Bank and you’ve been asked by the compliance department to audit activity on the “bank-prod” Cloud SQL instance in the “money-buckets” project. You’re asked to audit two types of activity: 

    1. Any write activity by any user on the sensitive “transactions” table in the “finance” database.

    2. Any activity by the “dba1” and “dba2” superuser accounts. 

    As a security engineer, you want to narrowly define rules that only audit the sensitive activity, ensuring minimal impact to database performance. After enabling MySQL database auditing, you would call MySQL stored procedures to configure the two audit rules:
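
    The exact stored-procedure names and argument order are defined in the MySQL database auditing documentation; as a purely illustrative sketch (the signature below is our assumption, not the documented interface), the two rules could look something like this:

    -- Hypothetical sketch; consult the Cloud SQL for MySQL auditing
    -- documentation for the real stored-procedure signature.
    -- Rule 1: audit write activity by any user on finance.transactions.
    CALL mysql.cloudsql_create_audit_rule('%', 'finance', 'transactions',
        'INSERT,UPDATE,DELETE', '*', 1, @outval, @outmsg);
    -- Rule 2: audit all activity by the dba1 and dba2 superusers.
    CALL mysql.cloudsql_create_audit_rule('dba1,dba2', '*', '*', '*', '*', 1,
        @outval, @outmsg);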

    The plugin stores these audit rules in the “mysql” system database. The plugin monitors database activity from MySQL’s Audit API and, when activity matches the audit rules, records a log to send to Cloud Logging.

    Later that month, you decide to review these audit logs in the Logs Explorer. To isolate all the MySQL database auditing log entries from the “money-buckets” project, you’d enter the following query filter:
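
    A filter along the following lines isolates those entries; the log name is the standard one for data-access audit logs, while the request type on the last line is our assumption of how the plugin tags its entries (check the documentation for the exact filter):

    resource.type="cloudsql_database"
    logName="projects/money-buckets/logs/cloudaudit.googleapis.com%2Fdata_access"
    protoPayload.request.@type="type.googleapis.com/google.cloud.sql.audit.v1.MysqlAuditEntry"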

    You can now use these audit log entries in your audit trail in order to comply with the key finance regulations that govern Money Buckets Bank.

    Learn More

    With MySQL database auditing, you can collect audit records of user database activity for security and compliance purposes. To learn more about database auditing for Cloud SQL for MySQL, see the documentation.

    Related Article

    Improving security and governance in PostgreSQL with Cloud SQL

    Managed cloud databases need security and governance, and Cloud SQL just added pgAudit and Cloud IAM integrations to make security easier.

    Read Article
  • Creating custom notifications with Cloud Monitoring and Cloud Run Wed, 19 Jan 2022 17:00:00 -0000

    The uniqueness of each organization in the enterprise IT space creates interesting challenges in how they need to handle alerts. With many commercial tools on the IT Service Management (ITSM) market, and plenty of custom internal tools in use, we want to equip teams with tools that are both flexible and powerful.

    This post is for Google Cloud customers who want to deliver Cloud Monitoring alert notifications to third-party services that don’t have supported notification channels.

    It provides a working implementation of integrating Cloud Pub/Sub notification channels with the Google Chat service to forward the alert notifications to Google Chat rooms and demonstrates how this is deployed on Google Cloud. Moreover, it outlines steps for continuous integration using Cloud Build, Terraform, and GitHub. All the source code for this project can be found in this GitHub repository.

    It is worth noting that the tutorial provides a generic framework that Google Cloud customers can adapt to deliver alert notifications to any 3rd-party service that provides a Webhook/HTTP API interface. 

    Instructions for modifying the sample code to integrate with other 3rd-party services are explained in the section “Expanding to other 3rd-party services.”

    Objectives

    • Write a service to forward Google Cloud Monitoring alert notifications from Cloud Monitoring Pub/Sub notification channels to a third-party service.

    • Build and deploy the service to Cloud Run using Cloud Build, Terraform, and GitHub.

    Costs

    This tutorial uses billable components of Google Cloud:

    • Cloud Build

    • Compute Engine (GCE)

    • Container Registry

    • Cloud Pub/Sub

    • Cloud Run

    • Cloud Storage

    Use the pricing calculator to generate a cost estimate based on your projected usage.

    Before you begin

    For this tutorial, you need a GCP project. You can create a new project or select a project that you've already created:

    1. Select or create a Google Cloud project.
      Go to the project selector page

    2. Enable billing for your project.
      Enable billing

    When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For details, see the "Cleaning up" section at the end of this tutorial.

    Integration with Google Chat

    This tutorial provides a sample integration to enable Google Cloud customers to forward alert notifications to their Google Chat rooms. The system architecture is as follows:

    (Architecture diagram: custom alert notifications flowing from Cloud Monitoring through Pub/Sub to Cloud Run and on to Google Chat)

    In the example, two monitoring alerting policies are created using Terraform: one based on the GCE instance CPU usage_time metric and the other based on the GCE instance disk read_bytes_count metric. Both alert policies use Cloud Monitoring Pub/Sub notification channels to send alert notifications. A Cloud Pub/Sub push subscription is configured for each Cloud Pub/Sub notification channel, and the push endpoints of these subscriptions point to the Cloud Run service we implement, so that all the alert notifications sent to the Cloud Pub/Sub notification channels are forwarded to the Cloud Run service. The Cloud Run service is a simple HTTP server that transforms the incoming Cloud Pub/Sub messages into Google Chat messages and sends them to the configured Google Chat rooms via their incoming webhook URLs.

    All the infrastructure components are automatically created and configured using Terraform, including:

    • Cloud Pub/Sub topics, push subscriptions, and service account setup.

    • Cloud Pub/Sub notification channels

    • Cloud Monitoring Alerting policies

    • Cloud Run service and service account setup. 

    The Terraform code can be found at ./tf-modules and ./environments.

    Looking at the Cloud Run code

    The Cloud Run service is responsible for delivering the Cloud Pub/Sub alert notifications to the configured Google Chat rooms. The integration code is located in the ./notification_integration folder.

    In this example, a basic Flask HTTP server is set up in main.py to handle incoming Cloud Monitoring alert notifications from Cloud Monitoring Pub/Sub channels. We use Cloud Pub/Sub push subscriptions to forward the Pub/Sub notification messages to the Flask server in real time. More information on Cloud Pub/Sub subscriptions can be found in the Subscriber overview.

    Below is a handler that processes the Pub/Sub message:
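
    The authoritative handler lives in main.py in the repository; the following is a simplified sketch of its shape, with illustrative names, showing how a Pub/Sub push payload is unpacked before being dispatched:

    # Simplified sketch of the push handler; the repository's main.py is
    # authoritative, and names here are illustrative.
    import base64
    import json

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/<config_id>", methods=["POST"])
    def handle_pubsub_message(config_id):
        envelope = request.get_json()
        if not envelope or "message" not in envelope:
            # Malformed push request.
            return "Bad Request: invalid Pub/Sub message format", 400
        # Pub/Sub push bodies carry the message data base64-encoded. In the
        # repository this parsing is done by ExtractNotificationFromPubSubMsg()
        # in utilities/pubsub.py; it is inlined here for brevity.
        payload = base64.b64decode(envelope["message"]["data"]).decode("utf-8")
        notification = json.loads(payload)
        # Look up config_params for this config_id, then forward the
        # notification to the third-party service, e.g.:
        # SendNotification(config_id, notification)
        return "", 204  # a 2xx acknowledges the Pub/Sub message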

    The handler calls the ExtractNotificationFromPubSubMsg() function in utilities/pubsub.py to parse the relevant notification data from the Pub/Sub message, and then loads the notification data into a dictionary. The output is a JSON object with the schema defined here.

    This notification dictionary is then passed to SendNotification(), which sends the notification along with config_params to _SendHttpRequest() in utilities/service_handler.py, which in turn notifies the third-party service about the alert with an API client. There is a URL parameter, config_id: the configuration ID the Cloud Run service uses to retrieve the configuration data config_params. config_params includes all the parameters the Cloud Run service needs (e.g., the HTTP URL and user credentials) to forward the incoming notification to the third-party service. In this example, config_id corresponds to the Pub/Sub topics defined here.

    You can modify this dispatch function to forward alerts to any third-party service.

    Remember to acknowledge the Pub/Sub message on success by returning a success HTTP status code (200 or 204). See Receiving push messages.

    All the logs written in the Cloud Run service can be easily accessed from either the Cloud Logging Logs Explorer or the Cloud Run UI; they are very useful for debugging the service. Moreover, you can create an extra pull subscription on the Pub/Sub topic used by the Cloud Pub/Sub notification channel to simplify triage of notification delivery issues. For example, if some alert notifications were not delivered to your Google Chat room, you could first check whether the pull subscription received the Cloud Pub/Sub messages for the missing alert notifications. If it did, the notifications were lost in the Cloud Run service; otherwise, the problem lies with the Cloud Pub/Sub notification channel.

    Finally, there is a Dockerfile containing instructions to build an image that hosts the Flask server when deployed to Cloud Run:
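
    The repository’s Dockerfile is authoritative; a minimal sketch for a Flask app served by gunicorn on Cloud Run looks like this:

    # Minimal sketch; see the repository for the authoritative Dockerfile.
    FROM python:3.9-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    # Cloud Run injects $PORT; gunicorn serves the Flask app object in main.py.
    CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app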

    Deploying the app

    This section describes how to deploy and set up continuous integration using Cloud Build, Terraform, and GitHub, following the GitOps methodology. The instructions are based on Managing infrastructure as code with Terraform, Cloud Build, and GitOps, which also explains the GitOps methodology and architecture. Sections from the guide are also referenced in the steps below. An important difference is that this document assumes that separate Google Cloud projects are used for the dev and prod environments, whereas the referenced guide configures the environments as virtual private clouds (VPCs). As a result, the following deployment steps (with the exception of “Setting up your GitHub repository”) need to be executed for each of the dev and prod projects.

    Set up your GitHub repository

    To get all the code and understand the repository structure needed to deploy your app, follow the steps in Setting up your GitHub repository.

    Deploy the Google Chat integration

    Setting up webhook URLs

    Hardcoded Webhook URLs

    We provide a config_map variable within main.py to store your webhook URLs. You’ll first need to locate your Google Chat webhook URL and replace the value for the key ‘webhook_url’ within the config_map dictionary.
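
    As a purely illustrative sketch of the shape involved (the authoritative structure is defined in main.py), each config_map entry maps a config_id to the parameters for its chat room:

    # Illustrative only; the authoritative config_map lives in main.py, and
    # the key name here is hypothetical.
    config_map = {
        "tf-topic-cpu": {
            "webhook_url": "https://chat.googleapis.com/v1/spaces/REPLACE_ME",
        },
    }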

    Manual GCS Bucket Webhook URLs

    Alternatively, if you’d like a more secure option, you can store your webhook URLs in a GCS bucket.

    1. Locate your Google Chat webhook URL for each of your chat rooms and store them in a JSON file named config_params.json, in the format:

      1. {"topic": "webhook url", "topic": "webhook url"}

    2. Create a Cloud Storage bucket to store the JSON file, with the name gcs_config_bucket_{PROJECT_ID}.

      1. You can also run this command in Cloud Shell: gsutil mb gs://gcs_config_bucket_{PROJECT_ID}

    3. Grant the read permissions (Storage Legacy Bucket Reader and Storage Legacy Object Reader) to the default Cloud Run service account <PROJECT_NUMBER>-compute@developer.gserviceaccount.com.

    To deploy the notification channel integration sample for the first time automatically, we’ve provided a script, deploy.py, that handles the majority of the required actions for deployment. After completing the webhook URL step above, run the following command:

    python3 deploy.py -p <PROJECT_ID>

    Manual Deployment

    To deploy the notification channel integration manually, you’ll have to complete the following steps:

    1. Set the Cloud Platform Project in Cloud Shell. Replace <PROJECT_ID> with your Cloud Platform project id:
    gcloud config set project <PROJECT_ID>

    2. Enable the Cloud Build Service:
    gcloud services enable cloudbuild.googleapis.com

    3. Enable the Cloud Resource Manager Service:
    gcloud services enable cloudresourcemanager.googleapis.com

    4. Enable the Cloud Service Usage Service:
    gcloud services enable serviceusage.googleapis.com

    5. Grant the required permissions to your Cloud Build service account:
    PROJECT_ID=$(gcloud config get-value project)
    CLOUDBUILD_SA="$(gcloud projects describe $PROJECT_ID --format 'value(projectNumber)')@cloudbuild.gserviceaccount.com"

    gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$CLOUDBUILD_SA --role roles/iam.securityAdmin

    gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$CLOUDBUILD_SA --role roles/run.admin

    gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$CLOUDBUILD_SA --role roles/editor

    6. Create a Cloud Storage bucket to store Terraform states remotely:
    gsutil mb gs://${PROJECT_ID}-tfstate

    7. (Optional) You may enable Object Versioning to keep the history of your deployments:
    gsutil versioning set on gs://${PROJECT_ID}-tfstate

    8. Trigger a build and deploy to Cloud Run:
    If you used the in-memory config server, run the following, replacing <BRANCH> with the current environment branch:
    gcloud builds submit . --config cloudbuild.yaml --substitutions BRANCH_NAME=<BRANCH>,_CONFIG_SERVER_TYPE=in-memory

    If you use the GCS-based config server, run:
    gcloud builds submit . --config cloudbuild.yaml --substitutions BRANCH_NAME=<BRANCH>,_CONFIG_SERVER_TYPE=gcs

    Continuous Deployment setup

    This optional section describes how to set up continuous deployment using Cloud Build triggers. The flow is demonstrated in the following diagram: every time users push a new version to their Git repository, the Cloud Build trigger fires; Cloud Build then runs the YAML file to rebuild the Cloud Run Docker image, update the infrastructure setup, and redeploy the Cloud Run service.

    (Diagram: continuous deployment flow; a push to the Git repository fires a Cloud Build trigger that rebuilds and redeploys the Cloud Run service)

    The instructions are based on Automating builds with Cloud Build.

    Set up a code repository; this could be GitHub, a Google Cloud Source Repository, or any private repository.  

    1. Clone the repository from our GitHub.  

    2. Switch to the new project and push the cloned repository to the remote repository.

    (Screenshot: the repository in Cloud Source Repositories)

    Next, we create a new trigger in Cloud Build. 
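
    For example, assuming the repository was pushed to Cloud Source Repositories as in step 2, a trigger along these lines would run cloudbuild.yaml on every push (the repository name and branch pattern are placeholders):

    gcloud beta builds triggers create cloud-source-repositories \
      --repo=notification_integration \
      --branch-pattern=".*" \
      --build-config=cloudbuild.yaml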

    Cleaning up

    If you created a new project for this tutorial, delete the project. If you used an existing project and wish to keep it without the changes added in this tutorial, delete resources created for the tutorial.

    Delete the project

    The easiest way to eliminate billing is to delete the project you created for the tutorial.

    Deleting a project has the following effects:

    • Everything in the project is deleted. If you used an existing project for this tutorial, when you delete it, you also delete any other work you've done in the project.

    • Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.

    If you plan to explore multiple tutorials and quickstarts, reusing projects can help you avoid exceeding project quota limits.

    To delete a project, do the following:

    1. In the Cloud Console, go to the Manage resources page.

      Go to the Manage resources page

    2. In the project list, select the project that you want to delete and then click Delete.

    3. In the dialog, type the project ID and then click Shut down to delete the project.

    Delete tutorial resources

    1. Delete the Cloud resources provisioned by Terraform:
      terraform destroy

    2. Delete the Cloud Storage bucket called {PROJECT_ID}-tfstate.

    3. Delete permissions that were granted to the Cloud Build service account:
      gcloud projects remove-iam-policy-binding $PROJECT_ID --member serviceAccount:$CLOUDBUILD_SA --role roles/iam.securityAdmin
      gcloud projects remove-iam-policy-binding $PROJECT_ID --member serviceAccount:$CLOUDBUILD_SA --role roles/run.admin
      gcloud projects remove-iam-policy-binding $PROJECT_ID --member serviceAccount:$CLOUDBUILD_SA --role roles/storage.admin

    4. Delete permission for the service account to publish to tf-topic:
      gcloud pubsub topics remove-iam-policy-binding projects/[PROJECT_NUMBER]/topics/tf-topic --role=roles/pubsub.publisher --member=serviceAccount:service-[PROJECT_NUMBER]@gcp-sa-monitoring-notification.iam.gserviceaccount.com

    5. Delete the notification channel that uses tf-topic.

    6. Delete your forked GitHub repository notification_integration.

    7. Disconnect the GitHub repository from Cloud Build by deleting the Cloud Build triggers.

    8. Disable Google Cloud APIs.

    Expanding to other 3rd-party services

    The sample code in the tutorial provides a generic framework that Google Cloud customers can easily customize to deliver alert notifications to any 3rd-party service that provides a Webhook/HTTP API interface.

    To integrate with a new 3rd-party service, create a new derived class of the abstract class HttpRequestBasedHandler defined in ./notification_channel/service_handler.py and update the following member functions (see the sketch after this list):

    • CheckConfigParams(): A function that checks if a given integration configuration is valid, e.g. a required API key is given.

    • _GetHttpUrl(): A function to get the HTTP URL (where to send HTTP requests) from the configuration data.

    • _BuildHttpRequestHeaders(): A function that constructs the Http request header.

    • _BuildHttpRequestBody(): A function that constructs the Http request message body based on the incoming Cloud Pub/Sub message.

    • SendNotification(): You can reuse the one defined in the GchatHandler class. 
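
    Putting those pieces together, a new handler might look like the sketch below. The method names come from the list above, but the class body, field names, and payload shape are hypothetical; the abstract base class in service_handler.py defines the real contract.

    import json

    # Hypothetical sketch; HttpRequestBasedHandler and its contract are
    # defined in the repository's service_handler.py.
    from service_handler import HttpRequestBasedHandler

    class ExampleServiceHandler(HttpRequestBasedHandler):

        def CheckConfigParams(self, config_params):
            # Valid only if the required URL and API key are present.
            return "http_url" in config_params and "api_key" in config_params

        def _GetHttpUrl(self, config_params):
            return config_params["http_url"]

        def _BuildHttpRequestHeaders(self, config_params):
            return {
                "Content-Type": "application/json",
                "Authorization": "Bearer " + config_params["api_key"],
            }

        def _BuildHttpRequestBody(self, notification):
            # Map the parsed alert notification into whatever payload shape
            # the third-party service expects.
            return json.dumps({"text": notification["incident"]["summary"]})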

    There is no need to update the Terraform code unless you want to customize your alert policies. Community feedback is always welcome; please submit pull requests to continue building the GitHub repository together.

  • Webhook, Pub/Sub, and Slack Alerting notification channels launched Wed, 19 Jan 2022 17:00:00 -0000

    When an alert fires from your applications, your team needs to know as soon as possible to mitigate any user-facing issues. Customers with complex operating environments rely on incident management or related services to organize and coordinate their responses to issues. They need the flexibility to route alert notifications to platforms or services in the formats that they can accept. 

    We’re excited to share that Google Cloud Monitoring’s Webhooks, Pub/Sub, and Slack notification channels for alerting are now Generally Available (GA). Along with our existing notification channels of email, SMS, mobile, and PagerDuty (currently in Beta), Google Cloud alerts can now be routed to many widely used services. These new notification channels can be used to integrate alerts with the most popular collaboration, ITSM, and incident management offerings, and virtually any other service or software that supports Webhook or Pub/Sub integration.

    You can configure your Google Cloud alerts to be sent to any vendor or custom-built tool used by your team. For example, your GKE cluster uptime checks can send the alert data to a 3rd-party communication tool via the Pub/Sub notification channel. Or if you’re tracking security concerns such as unexpected IP addresses, you can send a log-based alert to your incident management provider. 

    How to Configure Webhook, Pub/Sub, or Slack Notifications

    For custom integrations, Pub/Sub is the recommended approach for sending notifications to a private network. Webhooks are supported for public endpoints and are available with basic and token authentication. Both of these notification channels can be enabled programmatically through an automation tool like Terraform.  
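
    For instance, a Pub/Sub notification channel can be declared in Terraform roughly as follows; the project and topic names are placeholders, and the topic must also grant the Monitoring service agent the pubsub.publisher role:

    # Sketch of a Pub/Sub notification channel; names are placeholders.
    resource "google_monitoring_notification_channel" "pubsub_alerts" {
      display_name = "Alerting via Pub/Sub"
      type         = "pubsub"
      labels = {
        topic = "projects/my-project/topics/alerting-notifications"
      }
    }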

    If you’re using Slack, you can enable Cloud Monitoring access to your Slack channel/workspace and then create the notification channel. If you'd like to automate Slack channel notification deployments, you'll need to create and install your own Slack app and reuse the OAuth token instead of using the Google Cloud Monitoring app.


    What’s Next 

    If you’d like to learn more, check out our example tutorial blog on how to send Pub/Sub notifications to external vendors using Cloud Run and Cloud Build. Please feel free to share your comments and feedback with us in the Google Cloud Community.

  • What’s new with Google Cloud Tue, 18 Jan 2022 22:00:00 -0000

    Want to know the latest from Google Cloud? Find it here in one handy location. Check back regularly for our newest updates, announcements, resources, events, learning opportunities, and more. 


    Tip: Not sure where to find what you’re looking for on the Google Cloud blog? Start here: Google Cloud blog 101: Full list of topics, links, and resources.


    Week of Jan 17-Jan 21

    • Firestore Key Visualizer is Generally Available (GA): Firestore Key Visualizer is an interactive, performance monitoring tool that helps customers observe and maximize Firestore’s performance. Learn more.
    • Like many organizations, Wayfair faced the challenge of deciding which cloud databases they should migrate to in order to modernize their business and operations. Ultimately, they chose Cloud SQL and Cloud Spanner because of the databases’ clear path for shifting workloads as well as the flexibility they both provide. Learn how Wayfair was able to migrate quickly while still being able to serve production traffic at scale.

    Related Article

    Google Cloud doubles-down on ecosystem in 2022 to meet customer demand

    Google Cloud will double spend in its partner ecosystem over the next few years, including new benefits, incentives, programs, and training.

    Read Article

    Week of Jan 10-Jan 14

    • Start your 2022 New Year’s resolutions by learning at no cost how to use Google Cloud. Read more to find how to take advantage of these training opportunities.
    • 8 megatrends drive cloud adoption—and improve security for all. Google Cloud CISO Phil Venables explains the eight major megatrends powering cloud adoption, and why they’ll continue to make the cloud more secure than on-prem for the foreseeable future. Read more.

    Related Article

    Five do’s and don’ts CSPs should know about going cloud-native

    Expert advice from operators and network provider partners on how to do cloud-native right.

    Read Article

    Week of Jan 3-Jan 7

    • Google Transfer Appliance announces General Availability of online mode. Customers collecting data at edge locations (e.g. cameras, cars, sensors) can offload to Transfer Appliance and stream that data to a Cloud Storage bucket. Online mode can be toggled to send the data to Cloud Storage over the network, or offline by shipping the appliance. Customers can monitor their online transfers for appliances from Cloud Console.

    Related Article

    Why 2021 was an electrifying year for 24/7 carbon-free energy

    Recapping Google’s progress in 2021 toward running on 24/7 carbon-free energy by 2030 — and decarbonizing the electricity system as a whole.

    Read Article

    Week of Dec 27-Dec 31

    • The most-read blogs about Google Cloud compute, networking, storage and physical infrastructure in 2021. Read more.

    • Top Google Cloud managed container blogs of 2021.

    • Four cloud security trends that organizations and practitioners should be planning for in 2022—and what they should do about them. Read more.

    • Google Cloud announces the top data analytics stories from 2021 including the top three trends and lessons they learned from customers this year. Read more.

    • Explore Google Cloud’s Contact Center AI (CCAI) and its momentum in 2021. Read more.

    • An overview of the innovations that Google Workspace delivered in 2021 for Google Meet. Read more.

    • Google Cloud’s top artificial intelligence and machine learning posts from 2021. Read more.

    • How we’ve helped break down silos, unearth the value of data, and apply that data to solve big problems. Read more.

    • A recap of the year’s infrastructure progress, from impressive Tau VMs, to industry-leading storage capabilities, to major networking leaps. Read more.

    • Google Cloud CISO Phil Venables shares his thoughts on the latest security updates from the Google Cybersecurity Action Team. Read more.

    • Google Cloud - A cloud built for developers — 2021 year in review. Read more.

    • API management continued to grow in importance in 2021, and Apigee continued to innovate capabilities for customers, new solutions, and partnerships. Read more.

    • Recapping Google’s progress in 2021 toward running on 24/7 carbon-free energy by 2030 — and decarbonizing the electricity system as a whole. Read more.

    Related Article

    2021 Gartner® Magic Quadrant™ for Cloud Database Management Systems recognizes Google as a Leader

    Unified capabilities for transactional and analytical use cases highlighted, as well as progress in security, elasticity, advanced analyt...

    Read Article

    Week of Dec 20-Dec 24

    • And that’s a wrap! After engaging in countless customer interviews, we’re sharing our top 3 lessons learned from our data customers in 2021. Learn what customer data journeys inspired our top picks and what made the cut here.
    • Cloud SQL now shows you minor version information. For more information, see our documentation.
    • Cloud SQL for MySQL now allows you to select your MySQL 8.0 minor version when creating an instance and upgrade MySQL 8.0 minor version. For more information, see our documentation.
    • Cloud SQL for MySQL now supports database auditing. Database auditing lets you track specific user actions in the database, such as table updates, read queries, user privilege grants, and others. To learn more, see MySQL database auditing.

    Related Article

    Google Cloud recommendations for investigating and responding to the Apache “Log4j 2” vulnerability

    Google Cloud recommendations for investigating and responding to Apache Log4j 2 vulnerability (CVE-2021-44228)

    Read Article

    Week of Dec 12-Dec 17

    • A critical vulnerability in a widely used logging library, Apache’s Log4j, has become a global security incident. Security researchers around the globe warn that this could have serious repercussions. Two Google Cloud Blog posts describe how Cloud Armor and Cloud IDS both help mitigate the threat.
    • Take advantage of these ten no-cost trainings before 2022. Check them out here.
    • Deploy Task Queues alongside your Cloud Application: Cloud Tasks is now available in 23 GCP Regions worldwide. Read more.
    • Managed Anthos Service Mesh support for GKE Autopilot (Preview): GKE Autopilot with Managed ASM provides ease of use and simplified administration capabilities, allowing customers to focus on their application, not the infrastructure. Customers can now let Google handle the upgrade and lifecycle tasks for both the cluster and the service mesh. Configure Managed ASM with the asmcli experiment in a GKE Autopilot cluster.
    • Policy Troubleshooter for BeyondCorp Enterprise is now generally available! Using this feature, admins can triage access failure events and perform the necessary actions to unblock users quickly. Learn more by registering for Google Cloud Security Talks on December 15 and attending the BeyondCorp Enterprise session. The event is free to attend and sessions will be available on-demand.
    • Google Cloud Security Talks, Zero Trust Edition: This week, we hosted our final Google Cloud Security Talks event of the year, focused on all things zero trust. Google pioneered the implementation of zero trust in the enterprise over a decade ago with our BeyondCorp effort, and we continue to lead the way, applying this approach to most aspects of our operations. Check out our digital sessions on-demand to hear the latest updates on Google’s vision for a zero trust future and how you can leverage our capabilities to protect your organization in today’s challenging threat environment.

    Related Article

    The past, present, and future of Kubernetes with Eric Brewer

    Find out what the last decade of building cloud computing at Google was like, including the rise of Kubernetes and importance of open sou...

    Read Article

    Week of Dec 6-Dec 10

    • 5 key metrics to measure cloud FinOps impact in 2022 and beyond - Learn about the 5 key metrics to effectively measure the impact of Cloud FinOps across your organization and leverage the metrics to gain insights, prioritize on strategic goals, and drive enterprise-wide adoption. Learn more
    • We announced that Cloud IDS, our new network security offering, is now generally available. Cloud IDS, built with Palo Alto Networks’ technologies, delivers easy-to-use, cloud-native, managed, network-based threat detection with industry-leading breadth and security efficacy. To learn more, and to request a 30-day trial credit, see the Cloud IDS webpage.

    Related Article

    Expanding our infrastructure with cloud regions around the world

    A Google Cloud region is coming to Santiago, Chile, and additional regions are coming to Germany, Israel, KSA and the United States.

    Read Article

    Week of Nov 29-Dec 3

    • Join Cloud Learn, happening from Dec. 8-9: This interactive learning event will have live technical demos, Q&As, career development workshops, and more covering everything from Google Cloud fundamentals to certification prep. Learn more.

    • Get a deep dive into BigQuery Administrator Hub - With BigQuery Administrator Hub, administrators can better manage BigQuery at scale with Resource Charts and Slot Estimator. Learn more about these tools and just how easy they are to use here.

    • New data and AI in Media blog - How data and AI can help media companies better personalize, and what to watch out for. We interviewed Googlers Gloria Lee, Executive Account Director of Media & Entertainment, and John Abel, Technical Director for the Office of the CTO, to share exclusive insights on how media organizations should think about their data, and how to make the most of it, in the new era of direct-to-consumer. Watch our video interview with Gloria and John and read more.

    • Datastream is now generally available (GA): Datastream, a serverless change data capture (CDC) and replication service, allows you to synchronize data across heterogeneous databases, storage systems, and applications reliably and with minimal latency to support real-time analytics, database replication, and event-driven architectures. Datastream currently supports CDC ingestion from Oracle and MySQL to Cloud Storage, with additional sources and destinations coming in the future. Datastream integrates with Dataflow and Cloud Data Fusion to deliver real time replication to a wide range of destinations, including BigQuery, Cloud Spanner and Cloud SQL. Learn more.

    Related Article

    Illicit coin mining, ransomware, APTs target cloud users in first Google Cybersecurity Action Team Threat Horizons report

    The first threat report from the Google Cybersecurity Action Team finds cloud users are often targeted by illicit coin mining, ransomware...

    Read Article

    Week of Nov 22 - Nov 26

    • Security Command Center (SCC) launches new mute findings capability: We’re excited to announce a new “Mute Findings” capability in SCC that helps you gain operational efficiencies by effectively managing the findings volume based on your organization’s policies and requirements. SCC presents potential security risks in your cloud environment as ‘findings’ across misconfigurations, vulnerabilities, and threats. With the launch of the ‘mute findings’ capability, you gain a way to reduce findings volume and focus on the security issues that are highly relevant to you and your organization. To learn more, read this blog post and watch this short demo video.

    Related Article

    How to develop Global Multiplayer Games using Cloud Spanner

    How Spanner makes multiplayer game development easier.

    Read Article

    Week of Nov 15 - Nov 19

    • Cloud Spanner is our distributed, globally scalable SQL database service that decouples compute from storage, which makes it possible to scale processing resources separately from storage. This means that horizontal upscaling is possible with no downtime for achieving higher performance on dimensions such as operations per second for both reads and writes. The distributed scaling nature of Spanner’s architecture makes it an ideal solution for unpredictable workloads such as online games. Learn how you can get started developing global multiplayer games using Spanner.

    • New Dataflow templates for Elasticsearch released to help customers process and export Google Cloud data into their Elastic Cloud. You can now push data from Pub/Sub, Cloud Storage, or BigQuery into your Elasticsearch deployments in a cloud-native fashion. Read more for a deep dive on how to set up a Dataflow streaming pipeline to collect and export your Cloud Audit logs into Elasticsearch and analyze them in the Kibana UI.

    • We’re excited to announce the public preview of Google Cloud Managed Service for Prometheus, a new monitoring offering designed for scale and ease of use that maintains compatibility with the open-source Prometheus ecosystem. While Prometheus works well for many basic deployments, managing Prometheus can become challenging at enterprise scale. Learn more about the service in our blog and on the website.

    Related Article

    Announcing Spot Pods for GKE Autopilot—save on fault tolerant workloads

    You can save on GKE Autopilot workloads that tolerate interruptions with new Spot Pods.

    Read Article

    Week of Nov 8 - Nov 12

    Related Article

    Live from COP26: A cloud’s eye view

    Google sustainability experts bring their perspective on developments from the UN Climate Change Conference, or COP26.

    Read Article

    Week of Nov 1 - Nov 5

    • Time to live (TTL) reduces storage costs, improves query performance, and simplifies data retention in Cloud Spanner by automatically removing unneeded data based on user-defined policies. Unlike custom scripts or application code, TTL is fully managed and designed for minimal impact on other workloads. TTL is generally available today in Spanner at no additional cost. Read more.
    • New whitepaper available: Migrating to .NET Core/5+ on Google Cloud - This free whitepaper, written for .NET developers and software architects who want to modernize their .NET Framework applications, outlines the benefits and things to consider when migrating .NET Framework apps to .NET Core/5+ running on Google Cloud. It also offers a framework with suggestions to help you build a strategy for migrating to a fully managed Kubernetes offering or to Google serverless. Download the free whitepaper.
    • Export from Google Cloud Storage: Storage Transfer Service now offers Preview support for exporting data from Cloud Storage to any POSIX file system. You can use this bidirectional data movement capability to move data in and out of Cloud Storage, on-premises clusters, and edge locations, including Google Distributed Cloud. The service provides built-in capabilities such as scheduling, bandwidth management, retries, and data integrity checks that simplify the data transfer workflow. For more information, see Download data from Cloud Storage.
    • Document Translation is now GA! Translate documents in real-time in 100+ languages, and retain document formatting. Learn more about new features and see a demo on how Eli Lilly translates content globally.
    • Announcing the general availability of Cloud Asset Inventory console - We’re excited to announce the general availability of the new Cloud Asset Inventory user interface. In addition to all the capabilities announced earlier in Public Preview, the general availability release provides powerful search and easy filtering capabilities. These capabilities enable you to view details of resources and IAM policies, machine type and policy statistics, and insights into your overall cloud footprint. Learn more about these new capabilities by using the searching resources and searching IAM policies guides. You can get more information about Cloud Asset Inventory using our product documentation.

    Related Article

    As email turns 50, the @ symbol continues to fuel collaboration

    As email turns 50, we look at where it’s been and where the @ symbol is headed next.

    Read Article

    Week of Oct 25 - Oct 29

    • BigQuery table snapshots are now generally available. A table snapshot is a low-cost, read-only copy of a table's data as it was at a particular time.
    • By establishing a robust value measurement approach to track and monitor the business value metrics toward business goals, we are bringing technology, finance, and business leaders together through the discipline of Cloud FinOps to show how digital transformation is enabling the organization to create new innovative capabilities and generate top-line revenue. Learn more.
    • We’ve announced BigQuery Omni, a new multicloud analytics service that allows data teams to perform cross-cloud analytics - across AWS, Azure, and Google Cloud - all from one viewpoint. Learn how BigQuery Omni works and what data and business challenges it solves here.

    Related Article

    Here’s what you missed at Next ’21

    Too much to take in at Google Cloud Next 2021? No worries - here’s a breakdown of the biggest announcements at the 3-day event.

    Read Article

    Week of Oct 18 - Oct 22

    • Available now is our newest T2D VM family, based on 3rd Generation AMD EPYC processors. Learn more.
    • In case you missed it — top AI announcements from Google Cloud Next. Catch up on what’s new, see demos, and hear from our customers about how Google Cloud is making AI more accessible, more focused on business outcomes, and fast-tracking the time-to-value.
    • Too much to take in at Google Cloud Next 2021? No worries - here’s a breakdown of the biggest announcements at the 3-day event.
    • Check out the second revision of Architecture Framework, Google Cloud’s collection of canonical best practices.

    Related Article

    Solving for What’s Next

    Exciting announcements, customer stories, and technical deep dives headline this year’s Google Cloud Next. Thomas Kurian reveals the late...

    Read Article

    Week of Oct 4 - Oct 8

    • We’re excited to announce Google Cloud’s new goal of equipping more than 40 million people with Google Cloud skills. To help achieve this goal, we’re offering no-cost access to all our training content this month. Find out more here
    • Support for language repositories in Artifact Registry is now generally available. Artifact Registry allows you to store all your language-specific artifacts in one place. Supported package types include Java, Node and Python. Additionally, support for Linux packages is in public preview. Learn more.
    • Want to know what’s the latest with Google ML-Powered intelligence service Active Assist and how to learn more about it at Next’21? Check out this blog.

    Related Article

    Google Workspace at Next ‘21: 10 sessions you don’t want to miss

    The best sessions, product demos, and announcements for Google Workspace at Google Cloud Next ‘21.

    Read Article

    Week of Sept 27 - Oct 1

    • Announcing the launch of Speaker ID. In 2020, customer preference for voice calls increased by 10 percentage points (to 43%) and was by far the most preferred service channel. But most callers still need to pass through archaic authentication processes, which slow down time to resolution and burn through valuable agent time. Speaker ID, from Google Cloud, brings ML-based speaker identification directly to customers and contact center partners, allowing callers to authenticate over the phone using their own voice. Learn more.
    • Your guide to all things AI & ML at Google Cloud Next. Google Cloud Next is coming October 12–14 and if you’re interested in AI & ML, we’ve got you covered. Tune in to hear about real use cases from companies like Twitter, Eli Lilly, Wayfair, and more. We’re also excited to share exciting product news and hands on AI learning opportunities. Learn more about AI at Next and register for free today!
    • It is now simple to use Terraform to configure Anthos features on your GKE clusters. Check out part two of this series which explores adding Policy Controller audits to our Config Sync managed cluster. Learn more.

    Related Article

    New research from Google Cloud reveals five innovation trends for market data

    New research from Google Cloud reveals five innovation trends for market data

    Read Article

    Week of Sept 20 - Sept 24

    • Announcing the webinar, Powering market data through cloud and AI/ML. We’re sponsoring a Coalition Greenwich webinar on September 23rd where we’ll discuss the findings of our upcoming study on how market data delivery and consumption are being transformed by cloud and AI. Moderated by Coalition Greenwich, the panel will feature Trey Berre from CME Group, Brad Levy from Symphony, and Ulku Rowe representing Google Cloud. Register here.
    • New research from Google Cloud reveals five innovation trends for market data. Together with Coalition Greenwich, we surveyed exchanges, trading systems, data aggregators, data producers, asset managers, hedge funds, and investment banks to examine both the distribution and consumption of market data and trading infrastructure in the cloud. Learn more about our findings here.
    • If you are looking for a more automated way to manage quotas across a large number of projects, we are excited to introduce a Quota Monitoring Solution from Google Cloud Professional Services. This solution benefits customers who have many projects or organizations and want an easy way to monitor quota usage in a single dashboard and use default alerting capabilities across all quotas.

    Related Article

    The new Google Cloud region in Toronto is now open

    Google Cloud now has two regions in Canada: one in Montreal, and another in Toronto, providing customers with enhanced choice and data so...

    Read Article

      Week of Sept 13 - Sept 17

      • New storage features help ensure data is never lost. We are announcing extensions to our popular Cloud Storage offering and introducing two new services: Filestore Enterprise and Backup for Google Kubernetes Engine (GKE). Together, these new capabilities will make it easier for you to protect your data out of the box, across a wide variety of applications and use cases. Read the full article.
      • API management powers sustainable resource management. Veolia, a water, waste, and energy solutions company, uses APIs and the API management platform Apigee to build apps and to help its customers build their own apps, too. Learn from their digital and API-first approach here.
      • To support our expanding customer base in Canada, we’re excited to announce that the new Google Cloud Platform region in Toronto is now open. Toronto is the 28th Google Cloud region connected via our high-performance network, helping customers better serve their users and customers around the globe. In combination with Montreal, customers now benefit from improved business continuity planning with the distributed, secure infrastructure needed to meet IT and business requirements for disaster recovery, while maintaining data sovereignty.
      • Cloud SQL now supports custom formatting controls for CSVs. When performing admin exports and imports, users can now choose custom characters for field delimiters, quotes, and escapes, as sketched below. For more information, see our documentation.
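
      As a rough sketch of what these new controls look like through the Cloud SQL Admin API, here is a hedged Python example using the google-api-python-client discovery client; the project, instance, bucket, and query are placeholders, and the csvExportOptions field names are our reading of the documentation rather than verbatim API guarantees:

```python
# pip install google-api-python-client
import googleapiclient.discovery

sqladmin = googleapiclient.discovery.build("sqladmin", "v1beta4")

# Export query results to CSV with custom delimiter, quote, and escape
# characters. All identifiers below are illustrative placeholders.
body = {
    "exportContext": {
        "kind": "sql#exportContext",
        "fileType": "CSV",
        "uri": "gs://my-bucket/orders.csv",
        "databases": ["orders_db"],
        "csvExportOptions": {
            "selectQuery": "SELECT * FROM orders",
            "fieldsTerminatedBy": "|",   # custom field delimiter
            "quoteCharacter": "'",       # custom quote character
            "escapeCharacter": "\\",     # custom escape character
        },
    }
}

operation = (
    sqladmin.instances()
    .export(project="my-project", instance="my-instance", body=body)
    .execute()
)
print("Started export operation:", operation["name"])
```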

      Related Article

      Sqlcommenter now extending the vision of OpenTelemetry to databases

      Troubleshooting database performance issues just got easier with better observability through Sqlcommenter and OpenTelemetry.

      Read Article

      Week of Sept 6 - Sept 10

      • Hear how Lowe’s SRE team reduced their Mean Time to Recovery (MTTR) by over 80% after adopting Google’s Site Reliability Engineering practices and Google Cloud’s operations suite.

      Related Article

      Google invests 1 billion euros in Germany's digital future

      Google is supporting Germany's transition to a digital and sustainable economy, investing 1 billion euros in digital infrastructure and c...

      Read Article

      Week of Aug 30 - Sept 3

      • A what’s new blog in the what’s new blog? Yes, you read that correctly. Google Cloud data engineers are always hard at work maintaining the hundreds of dataset pipelines that feed into our public datasets repository, but they’re also regularly bringing new ones into the mix. Check out our newest featured datasets and catch a few best practices in our living blog: What are the newest datasets in Google Cloud?
      • Migration success with Operational Health Reviews from Google Cloud’s Professional Service Organization - Learn how Google Cloud’s Professional Services Org is proactively and strategically guiding customers to operate effectively and efficiently in the Cloud, both during and after their migration process.
      • Learn how we simplified monitoring for Google Cloud VMware Engine and Google Cloud operations suite. Read more.

      Related Article

      Celebrating Women’s Equality Day with Google Cloud

      In honor of Women’s Equality Day 2021 Google Cloud celebrates women in cloud and business technology (at Google and beyond).

      Read Article

      Week of Aug 23 - Aug 27

      • Google Transfer Appliance announces preview of online mode. Customers are increasingly collecting data that needs to be transferred to the cloud quickly. Transfer Appliances are used to offload data from sources (e.g., cameras, cars, sensors) and can now stream that data to a Cloud Storage bucket. Online mode can be toggled as data is copied onto the appliance; you can then either send the data offline by shipping the appliance to Google or copy it to Cloud Storage over the network. Read more.
      • Topic retention for Cloud Pub/Sub is now Generally Available. Topic retention is the most comprehensive and flexible way to retain Pub/Sub messages for message replay. Topic retention backs up all subscriptions connected to the topic, and new subscriptions can now be initialized from a timestamp in the past, as shown in the sketch after this list. Learn more about the feature here.
      • Vertex Predictions now supports private endpoints for online prediction. Through VPC Peering, private endpoints provide increased security and lower latency when serving ML models. Read more.
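
      Here is a minimal sketch of that replay workflow with the google-cloud-pubsub Python library; the project, topic, and subscription names are placeholders:

```python
# pip install google-cloud-pubsub
from datetime import datetime, timedelta, timezone

from google.cloud import pubsub_v1

project_id = "my-project"          # placeholder
topic_id = "orders"                # placeholder
subscription_id = "orders-replay"  # placeholder

publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()
topic_path = publisher.topic_path(project_id, topic_id)
sub_path = subscriber.subscription_path(project_id, subscription_id)

# Create a topic that retains all published messages for 7 days,
# independent of any per-subscription retention settings.
publisher.create_topic(
    request={
        "name": topic_path,
        "message_retention_duration": timedelta(days=7),
    }
)

# A new subscription normally sees only messages published after it is
# created; with topic retention it can be seeked back in time to replay
# the topic's backlog, here the last 24 hours.
subscriber.create_subscription(request={"name": sub_path, "topic": topic_path})
subscriber.seek(
    request={
        "subscription": sub_path,
        "time": datetime.now(timezone.utc) - timedelta(hours=24),
    }
)
```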

      Related Article

      New study available: Modernize with AIOps to maximize your impact

      In this commissioned study, Forrester Consulting explores how organizations are using AI Ops in their cloud environments.

      Read Article

      Week of Aug 16 - Aug 20

      • Look for us to take security one step further by adding authorization features for service-to-service communications for gRPC proxyless services, and by supporting other deployment models where proxyless gRPC services run somewhere other than GKE, for example Compute Engine. We hope you'll check out the setup guide and give us feedback.
      • Cloud Run now supports VPC Service Controls. You can now protect your Cloud Run services against data exfiltration by using VPC Service Controls in conjunction with Cloud Run’s ingress and egress settings. Read more.
      • Read how retailers are leveraging Google Cloud VMware Engine to move their on-premises applications to the cloud, where they can achieve the scale, intelligence, and speed required to stay relevant and competitive. Read more.
      • We’re announcing a series of new features for BeyondCorp Enterprise, our zero trust offering. We now offer native support for client certificates for eight types of VPC-SC resources. We are also announcing general availability of the on-prem connector, which allows users to secure HTTP- or HTTPS-based on-premises applications outside of Google Cloud. Additionally, three new BeyondCorp attributes are available in Access Context Manager as part of a public preview. Customers can configure custom access policies based on time and date, credential strength, and/or Chrome browser attributes. Read more about these announcements here.
      • We are excited to announce that Google Cloud, working with its partners NAG and DDN, demonstrated the highest performing Lustre file system on the IO500 ranking of the fastest HPC storage systems — quite a feat considering Lustre is one of the most widely deployed HPC file systems in the world. Read the full article.
      • The Storage Transfer Service for on-premises data API is now available in Preview. Now you can use RESTful APIs to automate your on-prem-to-cloud transfer workflows; see the sketch after this list. Storage Transfer Service is a software service to transfer data over a network, with built-in capabilities such as scheduling, bandwidth management, retries, and data integrity checks that simplify the data transfer workflow.
      • It is now simple to use Terraform to configure Anthos features on your GKE clusters. This is the first part of a three-part series that describes using Terraform to enable Config Sync. For platform administrators, this natural, infrastructure-as-code (IaC) approach improves auditability and transparency and reduces the risk of misconfigurations or security gaps. Read more.
      • In this commissioned study, “Modernize With AIOps To Maximize Your Impact”, Forrester Consulting surveyed organizations worldwide to better understand how they’re approaching artificial intelligence for IT operations (AIOps) in their cloud environments, and what kind of benefits they’re seeing. Read more.
      • If your organization or development environment has strict security policies that don’t allow external IPs, it can be difficult to set up a connection between a private Cloud SQL instance and a private IP VM. This article contains clear instructions on how to set up a connection from a private Compute Engine VM to a private Cloud SQL instance using a private service connection and the mysqlsh command line tool.
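
      As a hedged sketch of automating an on-prem transfer with the new API, here is what creating a recurring transfer job might look like using the google-api-python-client discovery client; the project, directory, and bucket names are placeholders:

```python
# pip install google-api-python-client
import googleapiclient.discovery

client = googleapiclient.discovery.build("storagetransfer", "v1")

# A transfer job that picks up files from a directory exposed to the
# on-premises transfer agents and lands them in Cloud Storage.
transfer_job = {
    "projectId": "my-project",
    "description": "Nightly on-prem to Cloud Storage transfer",
    "status": "ENABLED",
    "schedule": {
        # No end date: the job repeats daily until it is disabled.
        "scheduleStartDate": {"year": 2021, "month": 8, "day": 20},
    },
    "transferSpec": {
        "posixDataSource": {"rootDirectory": "/mnt/exports/nightly"},
        "gcsDataSink": {"bucketName": "my-landing-bucket"},
    },
}

response = client.transferJobs().create(body=transfer_job).execute()
print("Created transfer job:", response["name"])
```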

      Related Article

      New Research: COVID-19 accelerates innovation in healthcare but tech adoption still lags

      Google Cloud and Harris Poll healthcare research reveals COVID-19 impacts on healthcare technology

      Read Article

      Week of Aug 9 - Aug 13

      • Compute Engine users have a new, updated set of VM-level “in-context” metrics, charts, and logs to correlate signals for common troubleshooting scenarios across CPU, disk, memory, networking, and live processes. This brings the best of Google Cloud’s operations suite directly to the Compute Engine UI. Learn more.
      • The Pub/Sub to Splunk Dataflow template has been updated to address multiple enterprise customer asks, from improved compatibility with the Splunk Add-on for Google Cloud Platform, to more extensibility with user-defined functions (UDFs), to general pipeline reliability enhancements that tolerate failures like transient network issues when delivering data to Splunk. Read more to learn how to take advantage of these latest features.
      • Google Cloud and NVIDIA have teamed up to make VR/AR workloads easier and faster to create, and tetherless! Read more.
      • Register for the Google Cloud Startup Summit, September 9, 2021 at goo.gle/StartupSummit for a digital event filled with inspiration, learning, and discussion. This event will bring together our startup and VC community to discuss the latest trends and insights, headlined by a keynote from Astro Teller, Captain of Moonshots at X, the moonshot factory. Additionally, learn from a variety of technical and business sessions to help take your startup to the next level.
      • Google Cloud and Harris Poll healthcare research reveals COVID-19 impacts on healthcare technology. Learn more.
      • Partial SSO is now available for public preview. If you use a third-party identity provider for single sign-on to Google services, Partial SSO allows you to designate a subset of your users to use Google / Cloud Identity as their SAML SSO identity provider (short video and demo).

      Related Article

      Google named a Leader in 2021 Gartner Magic Quadrant for Cloud Infrastructure and Platform Services again

      Gartner named Google Cloud a Leader in the 2021 Magic Quadrant for Cloud Infrastructure and Platform Services, formerly Infrastructure as...

      Read Article

      Week of Aug 2-Aug 6

      • Gartner named Google Cloud a Leader in the 2021 Magic Quadrant for Cloud Infrastructure and Platform Services, formerly Infrastructure as a Service. Learn more.
      • Private Service Connect is now generally available. Private Service Connect lets you create private and secure connections to Google Cloud and third-party services with service endpoints in your VPCs. Read more.
      • 30 migration guides designed to help you identify the best ways to migrate. They cover common organizational goals like minimizing time and risk during your migration, identifying the most enterprise-grade infrastructure for your workloads, picking a cloud that aligns with your organization’s sustainability goals, and more. Read more.

      Related Article

      The new Google Cloud region in Melbourne is now open

      The new Google Cloud region in Melbourne adds a second region to Australia, supporting economic growth in the region.

      Read Article

      Week of Jul 26-Jul 30

      • This week we’re hosting our Retail & Consumer Goods Summit, a digital event dedicated to helping leading retailers and brands digitally transform their business. Read more about our consumer packaged goods strategy and a guide to key summit content for brands in this blog from Giusy Buonfantino, Google Cloud’s Vice President of CPG.


      • See how IKEA uses Recommendations AI to provide customers with more relevant product information. Read more.

      • Google Cloud launches a career program for people with autism, designed to hire and support more talented people with autism in the rapidly growing cloud industry. Learn more.

      • Google Cloud follows new API stability tenets that work to minimize unexpected deprecations to our Enterprise APIs. Read more.

      Related Article

      Registration is open for Google Cloud Next: October 12–14

      Register now for Google Cloud Next on October 12–14, 2021

      Read Article

      Week of Jul 19-Jul 23

      • Register and join us for Google Cloud Next, October 12-14, 2021 at g.co/CloudNext for a fresh approach to digital transformation, as well as a few surprises. Next ‘21 will be a fully customizable digital adventure for a more personalized learning journey. Find the tools and training you need to succeed, from live, interactive Q&As and informative breakout sessions to educational demos and real-life applications of the latest tech from Google Cloud. Get ready to plug into your cloud community, get informed, and be inspired. Together we can tackle today’s greatest business challenges, and start solving for what’s next.
      • “Application innovation” takes a front-row seat this year. To stay ahead of rising customer expectations and the hybrid digital and in-person landscape, enterprises must know what application innovation means and how to deliver it with a small piece of technology that might surprise you. Learn more about the three pillars of app innovation here.
      • We announced Cloud IDS, our new network security offering, which is now available in preview. Cloud IDS delivers easy-to-use, cloud-native, managed, network-based threat detection. With Cloud IDS, customers can enjoy a Google Cloud-integrated experience, built with Palo Alto Networks’ industry-leading threat detection technologies to provide high levels of security efficacy. Learn more.
      • Key Visualizer for Cloud Spanner is now generally available. Key Visualizer is a new interactive monitoring tool that lets developers and administrators analyze usage patterns in Spanner. It reveals trends and outliers in key performance and resource metrics for databases of any size, helping to optimize queries and reduce infrastructure costs. See it in action.
      • The market for healthcare cloud is projected to grow 43%. That growth means a need for better tech infrastructure, digital transformation, and cloud tools. Learn how Google Cloud Partner Advantage partners help customers solve business challenges in healthcare.

      Related Article

      The new Google Cloud region in Delhi NCR is now open

      The Google Cloud region in Delhi NCR is now open for business, ready to host your workloads.

      Read Article

      Week of Jul 12-Jul 16

      • Simplify VM migrations with Migrate for Compute Engine as a Service, a Google-managed cloud service that enables simple, frictionless, large-scale enterprise migrations of virtual machines to Google Compute Engine with minimal downtime and risk. API-driven and integrated into your Google Cloud console for ease of use, this service uses agentless replication to copy data without manual intervention and without VPN requirements. It also enables you to launch non-disruptive validations of your VMs prior to cutover. Rapidly migrate a single application, or confidently execute a sprint of a hundred systems using migration groups. Read more here.
      • The Google Cloud region in Delhi NCR is now open for business, ready to host your workloads. Learn more and watch the region launch event here.
      • Introducing Quilkin: the open-source game server proxy. Developed in collaboration with Embark Studios, Quilkin is an open source UDP proxy, tailor-made for high performance real-time multiplayer games. Read more.
      • We’re making Google Glass on Meet available to a wider network of global customers. Learn more.
      • Transfer Appliance supports Google Managed Encryption Keys — We’re announcing support for Google Managed Encryption Keys with Transfer Appliance, in addition to the existing Customer Managed Encryption Keys feature. Customers have asked for the Transfer Appliance service to create and manage encryption keys for transfer sessions, to improve usability while maintaining security. The Transfer Appliance service can now manage encryption keys for customers who prefer not to handle a key themselves. Learn more about Using Google Managed Encryption Keys.

      • UCLA builds a campus-wide API program– With Google Cloud's API management platform, Apigee, UCLA created a unified and strong API foundation that removes data friction that students, faculty, and administrators alike face. This foundation not only simplifies how various personas connect to data, but also encourages more innovations in the future. Learn their story.

      • An enhanced region picker makes it easy to choose a Google Cloud region with the lowest CO2 output. Learn more.
      • Amwell and Google Cloud explore five ways telehealth can help democratize access to healthcare. Read more.
      • Major League Baseball and Kaggle launch ML competition to learn about fan engagement. Batter up!
      • We’re rolling out general support of Brand Indicators for Message Identification (BIMI) in Gmail within Google Workspace. Learn more.

      • Learn how DeNA Sports Business created an operational status visualization system that helps determine whether live event attendees have correctly installed Japan’s coronavirus contact tracing app COCOA.

      • Google Cloud Certificate Authority Service (CAS) provides a highly scalable and available private CA to address the unprecedented growth in certificates in the digital world. Read more about CAS.

      Related Article

      Closer to the action: Call of Duty League and Google Cloud deliver new feature for esports fans

      Google Cloud and Call of Duty League launch ActivStat to bring fans, players, and commentators the power of competitive statistics in rea...

      Read Article

      Week of Jul 5-Jul 9 2021

      • Google Cloud and Call of Duty League launch ActivStat to bring fans, players, and commentators the power of competitive statistics in real-time. Read more.
      • Building applications can be a heavy lift due to technical complexity, including the backend services needed to manage and store data. Firestore addresses this by having Google Cloud manage your backend for you through a complete backend-as-a-service; see the sketch after this list. Learn more.
      • Google Cloud’s new Native App Development skills challenge lets you earn badges that demonstrate your ability to create cloud-native apps. Read more and sign up.
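
      To give a feel for the backend-as-a-service model, here is a minimal sketch using the google-cloud-firestore Python library; the project, collection, and field names are placeholders:

```python
# pip install google-cloud-firestore
from google.cloud import firestore

# Firestore manages the backend for you: no servers to run and no
# schema migrations to write before the first insert.
db = firestore.Client(project="my-project")  # placeholder project

# Write a document; the collection is created on first write.
db.collection("players").document("alice").set(
    {"level": 12, "guild": "blue", "online": True}
)

# Query with a simple filter; indexing is handled by the service.
for snapshot in db.collection("players").where("online", "==", True).stream():
    print(snapshot.id, snapshot.to_dict())
```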

      Related Article

      AT&T Android customers to have Messages app by default

      Messages by Google is now the default messaging app for all AT&T customers using Android phones in the United States.

      Read Article

      Week of Jun 28-Jul 2 2021

      • Storage Transfer Service now offers preview support for integration with AWS Security Token Service. Security-conscious customers can now use Storage Transfer Service to perform transfers from AWS S3 without passing any security credentials. This release alleviates the security burden associated with passing long-term AWS S3 credentials, which have to be rotated or explicitly revoked when they are no longer needed. Read more.
      • The most popular and surging Google Search terms are now available in BigQuery as a public dataset. View the Top 25 and Top 25 rising queries from Google Trends from the past 30 days, including 5 years of historical data across the 210 Designated Market Areas (DMAs) in the US; see the query sketch after this list. Learn more.
      • A new predictive autoscaling capability lets you add additional Compute Engine VMs in anticipation of forecasted demand. Predictive autoscaling is generally available across all Google Cloud regions. Read more or consult the documentation for more information on how to configure, simulate, and monitor predictive autoscaling.
      • Messages by Google is now the default messaging app for all AT&T customers using Android phones in the United States. Read more.
      • TPU v4 Pods will soon be available on Google Cloud, providing the most powerful publicly available computing platform for machine learning training. Learn more.
      • Cloud SQL for SQL Server has addressed multiple enterprise customer asks with the GA releases of both SQL Server 2019 and Active Directory integration, as well as the Preview release of Cross Region Replicas. This set of releases works in concert to allow customers to set up a more scalable and secure managed SQL Server environment to address their workloads’ needs. Read more.
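
      A small query sketch with the google-cloud-bigquery Python library; the table and column names reflect the public dataset as documented at launch and should be verified against the dataset’s schema:

```python
# pip install google-cloud-bigquery
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# Top rising Google Trends terms for one sample DMA, from the most
# recent refresh of the public dataset.
sql = """
SELECT term, rank, week, dma_name
FROM `bigquery-public-data.google_trends.top_rising_terms`
WHERE refresh_date = (
  SELECT MAX(refresh_date)
  FROM `bigquery-public-data.google_trends.top_rising_terms`
)
  AND dma_name = 'New York NY'
ORDER BY rank
LIMIT 25
"""

for row in client.query(sql).result():
    print(row.rank, row.term, row.week)
```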

      Related Article

      How HBO Max uses reCAPTCHA Enterprise to make its customer experience frictionless

      Balancing product, marketing, customer and security needs without slowing down signups.

      Read Article

      Week of Jun 21-Jun 25 2021

      • Simplified return-to-office with no-code technology—We've just released a solution to your most common return-to-office headaches: build a no-code app customized to solve your business-specific challenges. Learn how to create an automated app where employees can see office room occupancy, check which desks are reserved or open, review disinfection schedules, and more in this blog tutorial.
      • New technical validation whitepaper for running ecommerce applications—Enterprise Strategy Group's analyst outlines the challenges of organizations running ecommerce applications and how Google Cloud helps to mitigate those challenges and handle changing demands with global infrastructure solutions. Download the whitepaper.
      • The full agenda for the Google for Games Developer Summit on July 12th-13th, 2021 is now available. A free digital event with announcements from teams including Stadia, Google Ads, AdMob, Android, Google Play, Firebase, Chrome, YouTube, and Google Cloud. Hear more about how Google Cloud technology creates opportunities for gaming companies to make lasting enhancements for players and creatives. Register at g.co/gamedevsummit
      • BigQuery row-level security is now generally available, giving customers a way to control access to subsets of data in the same table for different groups of users. Row-level security (RLS) extends the principle of least-privilege access and enables fine-grained access control policies in BigQuery tables. BigQuery currently supports access controls at the project, dataset, table, and column level. Adding RLS to the portfolio of access controls now enables customers to filter and define access to specific rows in a table based on qualifying user conditions (see the sketch after this list), providing much-needed peace of mind for data professionals.
      • Transfer from Azure ADLS Gen 2: Storage Transfer Service offers Preview support for transferring data from Azure ADLS Gen 2 to Google Cloud Storage. Take advantage of a scalable, serverless service to handle data transfer. Read more.
      • reCAPTCHA V2 and V3 customers can now migrate site keys to reCAPTCHA Enterprise in under 10 minutes and without making any code changes. Watch our Webinar to learn more.
      • Bot attacks are the biggest threat to your business that you probably haven’t addressed yet. Check out our Forbes article to see what you can do about it.
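
      As a sketch of how a row access policy is defined, here is the DDL issued through the google-cloud-bigquery Python client; the dataset, table, and group names are placeholders:

```python
# pip install google-cloud-bigquery
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# A row access policy filters which rows a given principal can read.
ddl = """
CREATE ROW ACCESS POLICY us_sales_only
ON `my-project.sales.orders`
GRANT TO ("group:us-sales@example.com")
FILTER USING (region = "US")
"""
client.query(ddl).result()

# Members of us-sales@example.com now see only rows WHERE region = "US";
# principals not covered by any policy on the table see no rows at all.
```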

      Related Article

      New Tau VMs deliver leading price-performance for scale-out workloads

      Compute Engine’s new Tau VMs based on AMD EPYC processors provide leading price/performance for scale-out workloads on an x86-based archi...

      Read Article

      Week of Jun 14-Jun 18 2021

      • A new VM family for scale-out workloads—New AMD-based Tau VMs offer 56% higher absolute performance and 42% higher price-performance compared to general-purpose VMs from any of the leading public cloud vendors. Learn more.
      • New whitepaper helps customers plot their cloud migrations—Our new whitepaper distills the conversations we’ve had with CIOs, CTOs, and their technical staff into several frameworks that can help cut through the hype and the technical complexity to help devise the strategy that empowers both the business and IT. Read more or download the whitepaper.
      • Ubuntu Pro lands on Google Cloud—The general availability of Ubuntu Pro images on Google Cloud gives customers an improved Ubuntu experience, expanded security coverage, and integration with critical Google Cloud features. Read more.
      • Navigating hybrid work with a single, connected experience in Google Workspace—New additions to Google Workspace help businesses navigate the challenges of hybrid work, such as Companion Mode for Google Meet calls. Read more.
      • Arab Bank embraces Google Cloud technology—This Middle Eastern bank now offers innovative apps and services to their customers and employees with Apigee and Anthos. In fact, Arab Bank reports over 90% of their new-to-bank customers are using their mobile apps. Learn more.
      • Google Workspace for the Public Sector events—This June, learn about Google Workspace tips and tricks to help you get things done. Join us for one or more of our learning events tailored for government and higher education users. Learn more.

      Related Article

      All about cables: A guide to posts on our infrastructure under the sea

      All our posts on Google’s global subsea cable system in one handy location.

      Read Article

      Week of Jun 7-Jun 11 2021

      • The top cloud capabilities industry leaders want for sustained innovation—Multicloud and hybrid cloud approaches, coupled with open-source technology adoption, enable IT teams to take full advantage of the best cloud has to offer. Our recent study with IDG shows just how much of a priority this has become for business leaders. Read more or download the report.
      • Announcing the Firmina subsea cable—Planned to run from the East Coast of the United States to Las Toninas, Argentina, with additional landings in Praia Grande, Brazil, and Punta del Este, Uruguay, Firmina will be the longest open subsea cable in the world capable of running entirely from a single power source at one end of the cable if its other power source(s) become temporarily unavailable—a resilience boost at a time when reliable connectivity is more important than ever. Read more.
      • New research reveals what’s needed for AI acceleration in manufacturing—According to our data, which polled more than 1,000 senior manufacturing executives across seven countries, 76% have turned to digital enablers and disruptive technologies, such as data and analytics, cloud, and artificial intelligence (AI), due to the pandemic. And 66% of manufacturers who use AI in their day-to-day operations report that their reliance on AI is increasing. Read more or download the report.
      • Cloud SQL offers even faster maintenance—Cloud SQL maintenance is zippier than ever. MySQL and PostgreSQL planned maintenance typically lasts less than 60 seconds and SQL Server maintenance typically lasts less than 120 seconds. You can learn more about maintenance here.
      • Simplifying Transfer Appliance configuration with Cloud Setup Application—We’re announcing the availability of the Transfer Appliance Cloud Setup Application, which uses the information you provide through simple prompts to configure your Google Cloud permissions, preferred Cloud Storage bucket, and Cloud KMS key for your transfer. Several cloud console-based manual steps are now simplified with a command line experience. Read more.
      • Google Cloud VMware Engine is now HIPAA compliant—As of April 1, 2021, Google Cloud VMware Engine is covered under the Google Cloud Business Associate Agreement (BAA), meaning it has achieved HIPAA compliance. Healthcare organizations can now migrate and run their HIPAA-compliant VMware workloads in a fully compatible VMware Cloud Verified stack running natively in Google Cloud with Google Cloud VMware Engine, without changes or re-architecture to tools, processes, or applications. Read more.
      • Introducing container-native Cloud DNS—Kubernetes networking almost always starts with a DNS request. DNS has broad impacts on your application and cluster performance, scalability, and resilience. That is why we are excited to announce the release of container-native Cloud DNS—the native integration of Cloud DNS with Google Kubernetes Engine (GKE) to provide in-cluster Service DNS resolution with Cloud DNS, our scalable and full-featured DNS service. Read more.
      • Welcoming the EU’s new Standard Contractual Clauses for cross-border data transfers—Learn how we’re incorporating the new Standard Contractual Clauses (SCCs) into our contracts to help protect our customers’ data and meet the requirements of European privacy legislation. Read more.
      • Lowe’s meets customer demand with Google SRE practices—Learn how Lowe’s has been able to increase the number of releases they can support by adopting Google’s Site Reliability Engineering (SRE) framework and leveraging their partnership with Google Cloud. Read more.
      • What’s next for SAP on Google Cloud at SAPPHIRE NOW and beyond—As SAP’s SAPPHIRE conference begins this week, we believe businesses have a more significant opportunity than ever to build for their next decade of growth and beyond. Learn more on how we’re working together with our customers, SAP, and our partners to support this transformation. Read more.
      • Support for Node.js, Python, and Java repositories for Artifact Registry is now in Preview–With today’s announcement, you can not only use Artifact Registry to secure and distribute container images, but also manage and secure your other software artifacts. Read more.
      • Google named a Leader in The Forrester Wave: Streaming Analytics, Q2 2021 report–Learn about the criteria where Google Dataflow was rated 5 out of 5 and why this matters for our customers here.
      • Applied ML Summit this Thursday, June 10–Watch our keynote to learn about predictions for machine learning over the next decade. Engage with distinguished researchers, leading practitioners, and Kaggle Grandmasters during our live Ask Me Anything session. Take part in our modeling workshops to learn how you can iterate faster, and deploy and manage your models with confidence–no matter your level of formal computer science training. Learn how to develop and apply your professional skills, grow your abilities at the pace of innovation, and take your career to the next level. Register now.

      Related Article

      Colossus under the hood: a peek into Google’s scalable storage system

      An overview of Colossus, the file system that underpins Google Cloud’s storage offerings.

      Read Article

      Week of May 31-Jun 4 2021

      • Security Command Center now supports CIS 1.1 benchmarks and granular access control–Security Command Center (SCC) now supports CIS benchmarks for Google Cloud Platform Foundation v1.1, enabling you to monitor and address compliance violations against industry best practices in your Google Cloud environment. Additionally, SCC now supports fine-grained access control for administrators that allows you to easily adhere to the principles of least privilege—restricting access based on roles and responsibilities to reduce risk and enabling broader team engagement to address security. Read more.
      • Zero-trust managed security for services with Traffic Director–We created Traffic Director to bring you a fully managed service mesh product that includes load balancing, traffic management, and service discovery. And now, we’re happy to announce the availability of a fully managed zero-trust security solution using Traffic Director with Google Kubernetes Engine (GKE) and Certificate Authority (CA) Service. Read more.
      • How one business modernized their data warehouse for customer success–PedidosYa migrated from their old data warehouse to Google Cloud's BigQuery. Now with BigQuery, the Latin American online food ordering company has reduced the total cost per query by 5x. Learn more.
      • Announcing new Cloud TPU VMs–New Cloud TPU VMs make it easier to use our industry-leading TPU hardware by providing direct access to TPU host machines, offering a new and improved user experience to develop and deploy TensorFlow, PyTorch, and JAX on Cloud TPUs. Read more.
      • Introducing logical replication and decoding for Cloud SQL for PostgreSQL–We’re announcing the public preview of logical replication and decoding for Cloud SQL for PostgreSQL. By releasing these capabilities and enabling change data capture (CDC) from Cloud SQL for PostgreSQL, we strengthen our commitment to building an open database platform that meets critical application requirements and integrates seamlessly with the PostgreSQL ecosystem. A minimal setup sketch follows this list. Read more.
      • How 6 businesses are transforming with SAP on Google Cloud–Thousands of organizations globally rely on SAP for their most mission critical workloads. And for many Google Cloud customers, part of a broader digital transformation journey has included accelerating the migration of these essential SAP workloads to Google Cloud for greater agility, elasticity, and uptime. Read six of their stories.
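
      A minimal setup sketch using psycopg2, assuming an instance with the cloudsql.logical_decoding flag enabled and a user holding the REPLICATION privilege; the connection details, tables, publication, and slot names are placeholders:

```python
# pip install psycopg2-binary
import psycopg2

conn = psycopg2.connect(
    host="10.0.0.3",        # private IP of the Cloud SQL instance
    dbname="appdb",
    user="postgres",
    password="example-password",
)
conn.autocommit = True

with conn.cursor() as cur:
    # Publish changes from the tables downstream consumers should see.
    cur.execute("CREATE PUBLICATION app_changes FOR TABLE orders, customers;")

    # A logical replication slot buffers change records until a CDC
    # consumer confirms it has processed them.
    cur.execute(
        "SELECT pg_create_logical_replication_slot(%s, %s);",
        ("app_changes_slot", "pgoutput"),
    )
```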

      Related Article

      6 businesses transforming with SAP on Google Cloud

      Businesses globally are running SAP on Google Cloud to take advantage of greater agility, uptime, and access to cutting edge smart analyt...

      Read Article

      Week of May 24-May 28 2021

      • Google Cloud for financial services: driving your transformation cloud journey–As we welcome the industry to our Financial Services Summit, we’re sharing more on how Google Cloud accelerates a financial organization’s digital transformation through app and infrastructure modernization, data democratization, people connections, and trusted transactions. Read more or watch the summit on demand.
      • Introducing Datashare solution for financial services–We announced the general availability of Datashare for financial services, a new Google Cloud solution that brings together the entire capital markets ecosystem—data publishers and data consumers—to exchange market data securely and easily. Read more.
      • Announcing Datastream in Preview–Datastream, a serverless change data capture (CDC) and replication service, allows enterprises to synchronize data across heterogeneous databases, storage systems, and applications reliably and with minimal latency to support real-time analytics, database replication, and event-driven architectures. Read more.
      • Introducing Dataplex: An intelligent data fabric for analytics at scale–Dataplex provides a way to centrally manage, monitor, and govern your data across data lakes, data warehouses, and data marts, and make this data securely accessible to a variety of analytics and data science tools. Read more.
      • Announcing Dataflow Prime–Available in Preview in Q3 2021, Dataflow Prime is a new platform based on a serverless, no-ops, auto-tuning architecture built to bring unparalleled resource utilization and radical operational simplicity to big data processing. Dataflow Prime builds on Dataflow and brings new user benefits with innovations in resource utilization and distributed diagnostics. The new capabilities in Dataflow significantly reduce the time spent on infrastructure sizing and tuning tasks, as well as time spent diagnosing data freshness problems. Read more.
      • Secure and scalable sharing for data and analytics with Analytics Hub–With Analytics Hub, available in Preview in Q3, organizations get a rich data ecosystem by publishing and subscribing to analytics-ready datasets; control and monitoring over how their data is being used; a self-service way to access valuable and trusted data assets; and an easy way to monetize their data assets without the overhead of building and managing the infrastructure. Read more.
      • Cloud Spanner trims entry cost by 90%–Coming soon to Preview, granular instance sizing in Spanner lets organizations run workloads at as low as 1/10th the cost of regular instances, equating to approximately $65/month. Read more.
      • Cloud Bigtable lifts SLA and adds new security features for regulated industries–Bigtable instances with a multi-cluster routing policy across 3 or more regions are now covered by a 99.999% monthly uptime percentage under the new SLA. In addition, new Data Access audit logs can help determine whether sensitive customer information has been accessed in the event of a security incident, and if so, when, and by whom. Read more.
      • Build a no-code journaling app–In honor of Mental Health Awareness Month, Google Cloud's no-code application development platform, AppSheet, demonstrates how you can build a journaling app complete with titles, time stamps, mood entries, and more. Learn how with this blog and video here.
      • New features in Security Command Center—On May 24th, Security Command Center Premium launched the general availability of granular access controls at project- and folder-level and Center for Internet Security (CIS) 1.1 benchmarks for Google Cloud Platform Foundation. These new capabilities enable organizations to improve their security posture and efficiently manage risk for their Google Cloud environment. Learn more.
      • Simplified API operations with AI–Google Cloud's API management platform Apigee applies Google's industry leading ML and AI to your API metadata. Understand how it works with anomaly detection here.
      • This week: Data Cloud and Financial Services Summits–Our Google Cloud Summit series begins this week with the Data Cloud Summit on Wednesday May 26 (Global). At this half-day event, you’ll learn how leading companies like PayPal, Workday, Equifax, and many others are driving competitive differentiation using Google Cloud technologies to build their data clouds and transform data into value that drives innovation. The following day, Thursday May 27 (Global & EMEA) at the Financial Services Summit, discover how Google Cloud is helping financial institutions such as PayPal, Global Payments, HSBC, Credit Suisse, AXA Switzerland and more unlock new possibilities and accelerate business through innovation. Read more and explore the entire summit series.
      • Announcing the Google for Games Developer Summit 2021 on July 12th-13th–With a surge of new gamers and an increase in time spent playing games in the last year, it’s more important than ever for game developers to delight and engage players. To help developers with this opportunity, the games teams at Google are back to announce the return of the Google for Games Developer Summit 2021 on July 12th-13th. Hear from experts across Google about new game solutions they’re building to make it easier for you to continue creating great games, connecting with players and scaling your business. Registration is free and open to all game developers. Register for the free online event at g.co/gamedevsummit to get more details in the coming weeks. We can’t wait to share our latest innovations with the developer community. Learn more.

      Related Article

      A handy new Google Cloud, AWS, and Azure product map

      To help developers translate their prior experience with other cloud providers to Google Cloud, we have created a table showing how gener...

      Read Article

      Week of May 17-May 21 2021

      • Best practices to protect your organization against ransomware threats–For more than 20 years Google has been operating securely in the cloud, using our modern technology stack to provide a more defensible environment that we can protect at scale. While the threat of ransomware isn’t new, our responsibility to help protect you from existing or emerging threats never changes. In our recent blog post, we shared guidance on how organizations can increase their resilience to ransomware and how some of our Cloud products and services can help. Read more.

      • Forrester names Google Cloud a Leader in Unstructured Data Security Platforms–Forrester Research has named Google Cloud a Leader in The Forrester Wave: Unstructured Data Security Platforms, Q2 2021 report, and rated Google Cloud highest in the current offering category among the providers evaluated. Read more or download the report.
      • Introducing Vertex AI: One platform, every ML tool you need–Vertex AI is a managed machine learning (ML) platform that allows companies to accelerate the deployment and maintenance of artificial intelligence (AI) models; see the SDK sketch after this list. Read more.
      • Transforming collaboration in Google Workspace–We’re launching smart canvas, a new product experience that delivers the next evolution of collaboration for Google Workspace. Between now and the end of the year, we’re rolling out innovations that make it easier for people to stay connected, focus their time and attention, and transform their ideas into impact. Read more.
      • Developing next-generation geothermal power–At I/O this week, we announced a first-of-its-kind, next-generation geothermal project with clean-energy startup Fervo that will soon begin adding carbon-free energy to the electric grid that serves our data centers and infrastructure throughout Nevada, including our Cloud region in Las Vegas. Read more.
      • Contributing to an environment of trust and transparency in Europe–Google Cloud was one of the first cloud providers to support and adopt the EU GDPR Cloud Code of Conduct (CoC). The CoC is a mechanism for cloud providers to demonstrate how they offer sufficient guarantees to implement appropriate technical and organizational measures as data processors under the GDPR. This week, the Belgian Data Protection Authority, based on a positive opinion by the European Data Protection Board (EDPB), approved the CoC, a product of years of constructive collaboration between the cloud computing community, the European Commission, and European data protection authorities. We are proud to say that Google Cloud Platform and Google Workspace already adhere to these provisions. Learn more.
      • Announcing Google Cloud datasets solutions–We're adding commercial, synthetic, and first-party data to our Google Cloud Public Datasets Program to help organizations increase the value of their analytics and AI initiatives, and we're making available an open source reference architecture for a more streamlined data onboarding process to the program. Read more.
      • Introducing custom samples in Cloud Code–With new custom samples in Cloud Code, developers can quickly access your enterprise’s best code samples via a versioned Git repository directly from their IDEs. Read more.
      • Retention settings for Cloud SQL–Cloud SQL now allows you to configure backup retention settings to protect against data loss. You can retain between 1 and 365 days’ worth of automated backups and between 1 and 7 days’ worth of transaction logs for point-in-time recovery. See the details here.
      • Cloud developer’s guide to Google I/O 2021–Google I/O may look a little different this year, but don’t worry, you’ll still get the same first-hand look at the newest launches and projects coming from Google. Best of all, it’s free and available to all (virtually) on May 18-20. Read more.
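
      As a hedged end-to-end sketch of the unified SDK (google-cloud-aiplatform), here is an AutoML tabular training run followed by deployment; the project, data source, target column, and machine type are illustrative placeholders rather than a prescribed workflow:

```python
# pip install google-cloud-aiplatform
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Create a managed dataset from a BigQuery table (placeholder source).
dataset = aiplatform.TabularDataset.create(
    display_name="churn-data",
    bq_source="bq://my-project.analytics.churn_training",
)

# Train an AutoML classification model on that dataset.
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="churn-model",
    optimization_prediction_type="classification",
)
model = job.run(
    dataset=dataset,
    target_column="churned",
    budget_milli_node_hours=1000,  # 1 node hour of training budget
)

# The same SDK deploys the model to an endpoint for online predictions.
endpoint = model.deploy(machine_type="n1-standard-4")
print("Endpoint:", endpoint.resource_name)
```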

      Related Article

      Anthos 101 learning series: All the videos in one place

      In under an hour, you’ll learn how Anthos lets you develop, run and secure applications across your hybrid and multicloud environments.

      Read Article

      Week of May 10-May 14 2021

      • APIs and Apigee power modern day due diligence–With APIs and Google Cloud's Apigee, business due diligence company DueDil revolutionized the way they harness and share their Big Information Graph (B.I.G.) with partners and customers. Get the full story.
      • Cloud CISO Perspectives: May 2021–It’s been a busy month here at Google Cloud since our inaugural CISO perspectives blog post in April. Here, VP and CISO of Google Cloud Phil Venables recaps our cloud security and industry highlights, gives a sneak peek of what’s ahead from Google at RSA, and more. Read more.
      • 4 new features to secure your Cloud Run services–We announced several new ways to secure Cloud Run environments to make developing and deploying containerized applications easier for developers. Read more.
      • Maximize your Cloud Run investments with new committed use discounts–We’re introducing self-service spend-based committed use discounts for Cloud Run, which let you commit for a year to spending a certain amount on Cloud Run and benefiting from a 17% discount on the amount you committed. Read more.
      • Google Cloud Armor Managed Protection Plus is now generally available–Cloud Armor, our Distributed Denial of Service (DDoS) protection and Web-Application Firewall (WAF) service on Google Cloud, leverages the same infrastructure, network, and technology that has protected Google’s internet-facing properties from some of the largest attacks ever reported. These same tools protect customers’ infrastructure from DDoS attacks, which are increasing in both magnitude and complexity every year. Deployed at the very edge of our network, Cloud Armor absorbs malicious network- and protocol-based volumetric attacks, while mitigating the OWASP Top 10 risks and maintaining the availability of protected services. Read more.
      • Announcing Document Translation for Translation API Advanced in preview–Translation is critical to many developers and localization providers, whether you’re releasing a document, a piece of software, training materials, or a website in multiple languages. With Document Translation, now you can directly translate documents in 100+ languages and formats such as Docx, PPTx, XLSx, and PDF while preserving document formatting; see the sketch after this list. Read more.
      • Introducing BeyondCorp Enterprise protected profiles–Protected profiles enable users to securely access corporate resources from an unmanaged device with the same threat and data protections available in BeyondCorp Enterprise–all from the Chrome Browser. Read more.
      • How reCAPTCHA Enterprise protects unemployment and COVID-19 vaccination portals–With so many people visiting government websites to learn more about the COVID-19 vaccine, make vaccine appointments, or file for unemployment, these web pages have become prime targets for bot attacks and other abusive activities. But reCAPTCHA Enterprise has helped state governments protect COVID-19 vaccine registration portals and unemployment claims portals from abusive activities. Learn more.
      • Day one with Anthos? Here are 6 ideas for how to get started–Once you have your new application platform in place, there are some things you can do to immediately get value and gain momentum. Here are six ideas to get you started. Read more.
      • The era of the transformation cloud is here–Google Cloud’s president Rob Enslin shares how the era of the transformation cloud has seen organizations move beyond data centers to change not only where their business is done but, more importantly, how it is done. Read more.
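
      A sketch of the preview API from Python (google-cloud-translate, v3 client); the project ID, file paths, and response handling are assumptions for illustration:

```python
# pip install google-cloud-translate
from google.cloud import translate_v3 as translate

client = translate.TranslationServiceClient()
parent = "projects/my-project/locations/us-central1"  # placeholder

# Translate a local PDF to Spanish while preserving its formatting.
with open("contract.pdf", "rb") as f:
    content = f.read()

response = client.translate_document(
    request={
        "parent": parent,
        "target_language_code": "es",
        "document_input_config": {
            "content": content,
            "mime_type": "application/pdf",
        },
    }
)

# The translated bytes keep the original layout and formatting.
with open("contract_es.pdf", "wb") as f:
    f.write(response.document_translation.byte_stream_outputs[0])
```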

      Related Article

      SRE at Google: Our complete list of CRE life lessons

      Find links to blog posts that share Google’s SRE best practices in one handy location.

      Read Article

      Week of May 3-May 7 2021

      • Transforming hard-disk drive maintenance with predictive ML–In collaboration with Seagate, we developed a machine learning system that can forecast the probability of a recurring failing disk—a disk that fails or has experienced three or more problems in 30 days. Learn how we did it.
      • Agent Assist for Chat is now in public preview–Agent Assist provides your human agents with continuous support during their calls, and now chats, by identifying the customers’ intent and providing them with real-time recommendations such as articles and FAQs, as well as responses to customer messages, to more effectively resolve the conversation. Read more.
      • New Google Cloud, AWS, and Azure product map–Our updated product map helps you understand similar offerings from Google Cloud, AWS, and Azure, and you can easily filter the list by product name or other common keywords. Read more or view the map.
      • Join our Google Cloud Security Talks on May 12th–We’ll share expert insights into how we’re working to be your most trusted cloud. Find the list of topics we’ll cover here.
      • Databricks is now GA on Google Cloud–Deploy or migrate Databricks Lakehouse to Google Cloud to combine the benefits of an open data cloud platform with greater analytics flexibility, unified infrastructure management, and optimized performance. Read more.
      • HPC VM image is now GA–The CentOS-based HPC VM image makes it quick and easy to create HPC-ready VMs on Google Cloud that are pre-tuned for optimal performance. Check out our documentation and quickstart guide to start creating instances using the HPC VM image today.
      • Take the 2021 State of DevOps survey–Help us shape the future of DevOps and make your voice heard by completing the 2021 State of DevOps survey before June 11, 2021. Read more or take the survey.
      • OpenTelemetry Trace 1.0 is now available–OpenTelemetry has reached a key milestone: the OpenTelemetry Tracing Specification has reached version 1.0. API and SDK release candidates are available for Java, Erlang, Python, Go, Node.js, and .NET, with additional languages to follow over the next few weeks; a minimal Python sketch follows this list. Read more.
      • New blueprint helps secure confidential data in AI Platform Notebooks–We’re adding to our portfolio of blueprints with the publication of our Protecting confidential data in AI Platform Notebooks blueprint guide and deployable blueprint, which can help you apply data governance and security policies that protect your AI Platform Notebooks containing confidential data. Read more.
      • The Liquibase Cloud Spanner extension is now GA–Liquibase, an open-source library that works with a wide variety of databases, can be used for tracking, managing, and automating database schema changes. By providing the ability to integrate databases into your CI/CD process, Liquibase helps you more fully adopt DevOps practices. The Liquibase Cloud Spanner extension allows developers to use Liquibase's open-source database library to manage and automate schema changes in Cloud Spanner. Read more.
      • Cloud computing 101: Frequently asked questions–There are a number of terms and concepts in cloud computing, and not everyone is familiar with all of them. To help, we’ve put together a list of common questions, and the meanings of a few of those acronyms. Read more.
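
      Here is a minimal Python sketch against the 1.0 tracing API, exporting spans to the console; the span and attribute names are arbitrary:

```python
# pip install opentelemetry-api opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    ConsoleSpanExporter,
    SimpleSpanProcessor,
)

# Configure a tracer provider that prints finished spans to stdout.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example.instrumentation")

# Spans nest automatically via context propagation.
with tracer.start_as_current_span("checkout") as span:
    span.set_attribute("cart.items", 3)
    with tracer.start_as_current_span("charge-card"):
        pass  # child span of "checkout"
```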

      Related Article

      API design 101: Links to our most popular posts

      Find our most requested blog posts on API design in one location to read now or bookmark for later.

      Read Article

      Week of Apr 26-Apr 30 2021

      • Announcing the GKE Gateway controller, in Preview–GKE Gateway controller, Google Cloud’s implementation of the Gateway API, manages internal and external HTTP/S load balancing for a GKE cluster or a fleet of GKE clusters and provides multi-tenant sharing of load balancer infrastructure with centralized admin policy and control. Read more.
      • See Network Performance for Google Cloud in Performance Dashboard–The Google Cloud performance view, part of the Network Intelligence Center, provides packet loss and latency metrics for traffic on Google Cloud. It allows users to do informed planning of their deployment architecture, as well as determine in real time the answer to the most common troubleshooting question: "Is it Google or is it me?" The Google Cloud performance view is now open for all Google Cloud customers as a public preview. Check it out.
      • Optimizing data in Google Sheets allows users to create no-code apps–Format columns and tables in Google Sheets to best position your data to transform into a fully customized, successful app–no coding necessary. Read our four best Google Sheets tips.
      • Automation bots with AppSheet Automation–AppSheet recently released AppSheet Automation, infusing Google AI capabilities into AppSheet's trusted no-code app development platform. Learn step by step how to build your first automation bot on AppSheet here.
      • Google Cloud announces a new region in Israel–Our new region in Israel will make it easier for customers to serve their own users faster, more reliably and securely. Read more.
      • New multi-instance NVIDIA GPUs on GKE–We’re launching support for multi-instance GPUs in GKE (currently in Preview), which will help you drive better value from your GPU investments. Read more.
      • Partnering with NSF to advance networking innovation–We announced our partnership with the U.S. National Science Foundation (NSF), joining other industry partners and federal agencies, as part of a combined $40 million investment in academic research for Resilient and Intelligent Next-Generation (NextG) Systems, or RINGS. Read more.
      • Creating a policy contract with Configuration as Data–Configuration as Data is an emerging cloud infrastructure management paradigm that allows developers to declare the desired state of their applications and infrastructure, without specifying the precise actions or steps for how to achieve it. However, declaring a configuration is only half the battle: you also want policy that defines how a configuration is to be used. This post shows you how.
      • Google Cloud products deliver real-time data solutions–Seven-Eleven Japan built Seven Central, its new platform for digital transformation, on Google Cloud. Powered by BigQuery, Cloud Spanner, and Apigee API management, Seven Central presents easy-to-understand data, ultimately allowing for quickly informed decisions. Read their story here.

      Related Article

      In case you missed it: All our free Google Cloud training opportunities from Q1

      Since January, we’ve introduced a number of no-cost training opportunities to help you grow your cloud skills. We've brought them togethe...

      Read Article

      Week of Apr 19-Apr 23 2021

      • Extreme PD is now GA–On April 20th, Google Cloud launched general availability of Extreme PD, a high-performance block storage volume with provisioned IOPS and up to 2.2 GB/s of throughput. Learn more.

      • Research: How data analytics and intelligence tools will play a key role post-COVID-19–A recent Google-commissioned study by IDG highlighted the role of data analytics and intelligent solutions when it comes to helping businesses separate from their competition. The survey of 2,000 IT leaders across the globe reinforced the notion that the ability to derive insights from data will go a long way toward determining which companies win in this new era. Learn more or download the study.

      • Introducing PHP on Cloud Functions–We’re bringing support for PHP, a popular general-purpose programming language, to Cloud Functions. With the Functions Framework for PHP, you can write idiomatic PHP functions to build business-critical applications and integration layers. And with Cloud Functions for PHP, now available in Preview, you can deploy functions in a fully managed PHP 7.4 environment, complete with access to resources in a private VPC network. Learn more.

      • Delivering our 2020 CCAG pooled audit–As our customers increased their use of cloud services to meet the demands of teleworking and aid in COVID-19 recovery, we’ve worked hard to meet our commitment to being the industry’s most trusted cloud, despite the global pandemic. We’re proud to announce that Google Cloud completed an annual pooled audit with the CCAG in a completely remote setting, and was the only cloud service provider to do so in 2020. Learn more.

      • Anthos 1.7 now available–We recently released Anthos 1.7, our run-anywhere Kubernetes platform that’s connected to Google Cloud, delivering an array of capabilities that make multicloud more accessible and sustainable. Learn more.

      • New Redis Enterprise for Anthos and GKE–We’re making Redis Enterprise for Anthos and Google Kubernetes Engine (GKE) available in the Google Cloud Marketplace in private preview. Learn more.

      • Updates to Google Meet–We introduced a refreshed user interface (UI), enhanced reliability features powered by the latest Google AI, and tools that make meetings more engaging—even fun—for everyone involved. Learn more.

      • DocAI solutions now generally available–The Document AI (DocAI) platform, Lending DocAI, and Procurement DocAI, built on decades of AI innovation at Google, bring powerful and useful solutions across lending, insurance, government and other industries. Learn more.

      • Four consecutive years of 100% renewable energy–In 2020, Google again matched 100 percent of its global electricity use with purchases of renewable energy. All told, we’ve signed agreements to buy power from more than 50 renewable energy projects, with a combined capacity of 5.5 gigawatts–about the same as a million solar rooftops. Learn more.

      • Announcing the Google Cloud region picker–The Google Cloud region picker lets you assess key inputs like price, latency to your end users, and carbon footprint to help you choose which Google Cloud region to run on. Learn more.

      • Google Cloud launches new security solution WAAP–Web App and API Protection (WAAP) combines Google Cloud Armor, Apigee, and reCAPTCHA Enterprise to deliver improved threat protection, consolidated visibility, and greater operational efficiencies across clouds and on-premises environments. Learn more about WAAP here.
      • New in no-code–As discussed in our recent article, no-code hackathons are trending among innovative organizations. Since then, we've outlined how you can host one yourself specifically designed for your unique business innovation outcomes. Learn how here.
      • Google Cloud Referral Program now available—Now you can share the power of Google Cloud and earn product credit for every new paying customer you refer. Once you join the program, you’ll get a unique referral link that you can share with friends, clients, or others. Whenever someone signs up with your link, they’ll get a $350 product credit—that’s $50 more than the standard trial credit. When they become a paying customer, we’ll reward you with a $100 product credit in your Google Cloud account. Available in the United States, Canada, Brazil, and Japan. Apply for the Google Cloud Referral Program.


      Week of Apr 12-Apr 16 2021

      • Announcing the Data Cloud Summit, May 26, 2021–At this half-day event, you’ll learn how leading companies like PayPal, Workday, Equifax, Zebra Technologies, Commonwealth Care Alliance and many others are driving competitive differentiation using Google Cloud technologies to build their data clouds and transform data into value that drives innovation. Learn more and register at no cost.
      • Announcing the Financial Services Summit, May 27, 2021–In this two-hour event, you’ll learn how Google Cloud is helping financial institutions including PayPal, Global Payments, HSBC, Credit Suisse, and more unlock new possibilities and accelerate business through innovation and better customer experiences. Learn more and register for free: Global & EMEA.
      • How Google Cloud is enabling vaccine equity–In our latest update, we share more on how we’re working with US state governments to help produce equitable vaccination strategies at scale. Learn more.
      • The new Google Cloud region in Warsaw is open–The Google Cloud region in Warsaw is now ready for business, opening doors for organizations in Central and Eastern Europe. Learn more.
      • AppSheet Automation is now GA–Google Cloud’s AppSheet launches general availability of AppSheet Automation, a unified development experience for citizen and professional developers alike to build custom applications with automated processes, all without coding. Learn how companies and employees are reclaiming their time and talent with AppSheet Automation here.
      • Introducing SAP Integration with Cloud Data Fusion–Google Cloud’s native data integration platform, Cloud Data Fusion, now offers the capability to seamlessly get data out of SAP Business Suite, SAP ERP and S/4HANA. Learn more.


      Week of Apr 5-Apr 9 2021

      • New Certificate Authority Service (CAS) whitepaper–“How to deploy a secure and reliable public key infrastructure with Google Cloud Certificate Authority Service” (written by Mark Cooper of PKI Solutions and Anoosh Saboori of Google Cloud) covers security and architectural recommendations for the use of the Google Cloud CAS by organizations, and describes critical concepts for securing and deploying a PKI based on CAS. Learn more or read the whitepaper.
      • Active Assist’s new feature, predictive autoscaling, helps improve response times for your applications–When you enable predictive autoscaling, Compute Engine forecasts future load based on your Managed Instance Group’s (MIG) history and scales it out in advance of predicted load, so that new instances are ready to serve when the load arrives. Without predictive autoscaling, an autoscaler can only scale a group reactively, based on observed changes in load in real time. With predictive autoscaling enabled, the autoscaler works with real-time data as well as with historical data to cover both the current and forecasted load. That makes predictive autoscaling ideal for those apps with long initialization times and whose workloads vary predictably with daily or weekly cycles. For more information, see How predictive autoscaling works or check if predictive autoscaling is suitable for your workload, and to learn more about other intelligent features, check out Active Assist.
      • Introducing Dataprep BigQuery pushdown–BigQuery pushdown gives you the flexibility to run jobs using either BigQuery or Dataflow. If you select BigQuery, then Dataprep can automatically determine if data pipelines can be partially or fully translated into a BigQuery SQL statement. Any portions of the pipeline that cannot be run in BigQuery are executed in Dataflow. Utilizing the power of BigQuery results in highly efficient data transformations, especially for manipulations such as filters, joins, unions, and aggregations. This leads to better performance, optimized costs, and increased security with IAM and OAuth support. Learn more.
      • Announcing the Google Cloud Retail & Consumer Goods Summit–The Google Cloud Retail & Consumer Goods Summit brings together technology and business insights, the key ingredients for any transformation. Whether you're responsible for IT, data analytics, supply chains, or marketing, please join! Building connections and sharing perspectives cross-functionally is important to reimagining yourself, your organization, or the world. Learn more or register for free.
      • New IDC whitepaper assesses multicloud as a risk mitigation strategy–To better understand the benefits and challenges associated with a multicloud approach, we supported IDC’s new whitepaper that investigates how multicloud can help regulated organizations mitigate the risks of using a single cloud vendor. The whitepaper looks at different approaches to multi-vendor and hybrid clouds taken by European organizations and how these strategies can help organizations address concentration risk and vendor lock-in, improve their compliance posture, and demonstrate an exit strategy. Learn more or download the paper.
      • Introducing request priorities for Cloud Spanner APIs–You can now specify request priorities for some Cloud Spanner APIs. By assigning a HIGH, MEDIUM, or LOW priority to a specific request, you can now convey the relative importance of workloads, to better align resource usage with performance objectives. Learn more.
      • How we’re working with governments on climate goals–Google Chief Sustainability Officer Kate Brandt shares more on how we’re partnering with governments around the world to provide our technology and insights to drive progress in sustainability efforts. Learn more.


      Week of Mar 29-Apr 2 2021

      • Why Google Cloud is the ideal platform for Block.one and other DLT companies–Late last year, Google Cloud joined the EOS community, a leading open-source platform for blockchain innovation and performance, and is taking steps to support the EOS Public Blockchain by becoming a block producer (BP). At the time, we outlined how our planned participation underscores the importance of blockchain to the future of business, government, and society. We're sharing more on why Google Cloud is uniquely positioned to be an excellent partner for Block.one and other distributed ledger technology (DLT) companies. Learn more.
      • New whitepaper: Scaling certificate management with Certificate Authority Service–As Google Cloud’s Certificate Authority Service (CAS) approaches general availability, we want to help customers understand the service better. Customers have asked us how CAS fits into our larger security story and how CAS works for various use cases. Our new whitepaper answers these questions and more. Learn more and download the paper.
      • Build a consistent approach for API consumers–Learn the differences between REST and GraphQL, as well as how to apply REST-based practices to GraphQL. No matter the approach, discover how to manage and treat both options as API products here.

      • Apigee X makes it simple to apply Cloud CDN to APIs–With Apigee X and Cloud CDN, organizations can expand their API programs' global reach. Learn how to deploy APIs across 24 regions and 73 zones here.

      • Enabling data migration with Transfer Appliances in APAC—We’re announcing the general availability of Transfer Appliances TA40/TA300 in Singapore. Customers are looking for fast, secure, and easy-to-use options to migrate their workloads to Google Cloud, and we are addressing their needs with Transfer Appliances globally in the US, EU and APAC. Learn more about Transfer Appliances TA40 and TA300.

      • Windows Authentication is now supported on Cloud SQL for SQL Server in public preview—We’ve launched seamless integration with Google Cloud’s Managed Service for Microsoft Active Directory (AD). This capability is a critical requirement to simplify identity management and streamline the migration of existing SQL Server workloads that rely on AD for access control. Learn more or get started.

      • Using Cloud AI to whip up new treats with Mars Maltesers—Maltesers, a popular British candy made by Mars, teamed up with our own AI baker and ML engineer extraordinaire, Sara Robinson, to create a brand new dessert recipe with Google Cloud AI. Find out what happened (recipe included).

      • Simplifying data lake management with Dataproc Metastore, now GA–Dataproc Metastore, a fully managed, serverless technical metadata repository based on the Apache Hive metastore, is now generally available. Enterprises building and migrating open source data lakes to Google Cloud now have a central and persistent metastore for their open source data analytics frameworks. Learn more.

      • Introducing the Echo subsea cable—We announced our investment in Echo, the first-ever cable to directly connect the U.S. to Singapore with direct fiber pairs over an express route. Echo will run from Eureka, California to Singapore, with a stop-over in Guam, and plans to also land in Indonesia. Additional landings are possible in the future. Learn more.


      Week of Mar 22-Mar 26 2021

      • 10 new videos bring Google Cloud to life—The Google Cloud Tech YouTube channel’s latest video series explains cloud tools for technical practitioners in about 5 minutes each. Learn more.
      • BigQuery named a Leader in the 2021 Forrester Wave: Cloud Data Warehouse, Q1 2021 report—Forrester gave BigQuery a score of 5 out of 5 across 19 different criteria. Learn more in our blog post, or download the report.
      • Charting the future of custom compute at Google—To meet users’ performance needs at low power, we’re doubling down on custom chips that use System on a Chip (SoC) designs. Learn more.
      • Introducing Network Connectivity Center—We announced Network Connectivity Center, which provides a single management experience to easily create, connect, and manage heterogeneous on-prem and cloud networks leveraging Google’s global infrastructure. Network Connectivity Center serves as a vantage point to seamlessly connect VPNs, partner and dedicated interconnects, as well as third-party routers and Software-Defined WANs, helping you optimize connectivity, reduce operational burden and lower costs—wherever your applications or users may be. Learn more.
      • Making it easier to get Compute Engine resources for batch processing—We announced a new method of obtaining Compute Engine instances for batch processing that accounts for availability of resources in zones of a region. Now available in preview for regional managed instance groups, this method can be used simply by specifying the ANY value in the API. Learn more.
      • Next-gen virtual automotive showrooms are here, thanks to Google Cloud, Unreal Engine, and NVIDIA—We teamed up with Unreal Engine, the open and advanced real-time 3D creation game engine, and NVIDIA, inventor of the GPU, to launch new virtual showroom experiences for automakers. Taking advantage of the NVIDIA RTX platform on Google Cloud, these showrooms provide interactive 3D experiences, photorealistic materials and environments, and up to 4K cloud streaming on mobile and connected devices. Today, in collaboration with MHP, the Porsche IT consulting firm, and MONKEYWAY, a real-time 3D streaming solution provider, you can see our first virtual showroom, the Pagani Immersive Experience Platform. Learn more.
      • Troubleshoot network connectivity with Dynamic Verification (public preview)—You can now check packet loss rate and one-way network latency between two VMs on GCP. This capability is an addition to existing Network Intelligence Center Connectivity Tests which verify reachability by analyzing network configuration in your VPCs. See more in our documentation.
      • Helping U.S. states get the COVID-19 vaccine to more people—In February, we announced our Intelligent Vaccine Impact solution (IVIs) to help communities rise to the challenge of getting vaccines to more people quickly and effectively. Many states have deployed IVIs, and have found it able to meet demand and easily integrate with their existing technology infrastructures. Google Cloud is proud to partner with a number of states across the U.S., including Arizona, the Commonwealth of Massachusetts, North Carolina, Oregon, and the Commonwealth of Virginia to support vaccination efforts at scale. Learn more.


      Week of Mar 15-Mar 19 2021

      • A2 VMs now GA: The largest GPU cloud instances with NVIDIA A100 GPUs—We’re announcing the general availability of A2 VMs based on the NVIDIA Ampere A100 Tensor Core GPUs in Compute Engine. This means customers around the world can now run their NVIDIA CUDA-enabled machine learning (ML) and high performance computing (HPC) scale-out and scale-up workloads more efficiently and at a lower cost. Learn more.
      • Earn the new Google Kubernetes Engine skill badge for free—We’ve added a new skill badge this month, Optimize Costs for Google Kubernetes Engine (GKE), which you can earn for free when you sign up for the Kubernetes track of the skills challenge. The skills challenge provides 30 days free access to Google Cloud labs and gives you the opportunity to earn skill badges to showcase different cloud competencies to employers. Learn more.
      • Now available: carbon-free energy percentages for our Google Cloud regions—Google first achieved carbon neutrality in 2007, and since 2017 we’ve purchased enough solar and wind energy to match 100% of our global electricity consumption. Now we’re building on that progress to target a new sustainability goal: running our business on carbon-free energy 24/7, everywhere, by 2030. Beginning this week, we’re sharing data about how we are performing against that objective so our customers can select Google Cloud regions based on the carbon-free energy supplying them. Learn more.
      • Increasing bandwidth to C2 and N2 VMs—We announced the public preview of 100, 75, and 50 Gbps high-bandwidth network configurations for General Purpose N2 and Compute Optimized C2 Compute Engine VM families as part of continuous efforts to optimize our Andromeda host networking stack. This means we can now offer higher-bandwidth options on existing VM families when using the Google Virtual NIC (gVNIC). These VMs were previously limited to 32 Gbps. Learn more.
      • New research on how COVID-19 changed the nature of IT—To learn more about the impact of COVID-19 and the resulting implications to IT, Google commissioned a study by IDG to better understand how organizations are shifting their priorities in the wake of the pandemic. Learn more and download the report.

      • New in API security—The latest release of Google Cloud’s Apigee API management platform, Apigee X, works with Cloud Armor to protect your APIs with advanced security technology including DDoS protection, geo-fencing, OAuth, and API keys. Learn more about our integrated security enhancements here.

      • Troubleshoot errors more quickly with Cloud Logging—The Logs Explorer now automatically breaks down your log results by severity, making it easy to spot spikes in errors at specific times. Learn more about our new histogram functionality here.

      [Image: the Logs Explorer histogram, showing log results broken down by severity]

      Week of Mar 8-Mar 12 2021

      • Introducing #AskGoogleCloud on Twitter and YouTube—Our first segment on March 12th features Developer Advocates Stephanie Wong, Martin Omander and James Ward to answer questions on the best workloads for serverless, the differences between “serverless” and “cloud native,” how to accurately estimate costs for using Cloud Run, and much more. Learn more.
      • Learn about the value of no-code hackathons—Google Cloud’s no-code application development platform, AppSheet, helps facilitate hackathons where “non-technical” employees can compete without writing any code. Learn about Globe Telecom’s no-code hackathon as well as their winning AppSheet app here.
      • Introducing Cloud Code Secret Manager Integration—Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud. Integrating Cloud Code with Secret Manager brings the powerful capabilities of both these tools together so you can create and manage your secrets right from within your preferred IDE, whether that be VS Code, IntelliJ, or Cloud Shell Editor. Learn more.
      • Flexible instance configurations in Cloud SQL—Cloud SQL for MySQL now supports flexible instance configurations which offer you the extra freedom to configure your instance with the specific number of vCPUs and GB of RAM that fits your workload. To set up a new instance with a flexible instance configuration, see our documentation here.
      • The Cloud Healthcare Consent Management API is now generally available—The Healthcare Consent Management API is now GA, giving customers the ability to greatly scale the management of consents to meet increasing need, particularly amidst the emerging task of managing health data for new care and research scenarios. Learn more.


      Week of Mar 1-Mar 5 2021

      • Cloud Run is now available in all Google Cloud regions. Learn more.
      • Introducing Apache Spark Structured Streaming connector for Pub/Sub Lite—We’re announcing the release of an open source connector to read streams of messages from Pub/Sub Lite into Apache Spark. The connector works in all Apache Spark 2.4.X distributions, including Dataproc, Databricks, and manual Spark installations. Learn more.
      • Google Cloud Next ‘21 is October 12-14, 2021—Join us and learn how the most successful companies have transformed their businesses with Google Cloud. Sign up at g.co/cloudnext for updates. Learn more.
      • Hierarchical firewall policies now GA—Hierarchical firewalls provide a means to enforce firewall rules at the organization and folder levels in the GCP Resource Hierarchy. This allows security administrators at different levels in the hierarchy to define and deploy consistent firewall rules across a number of projects so they're applied to all VMs in currently existing and yet-to-be-created projects. Learn more.
      • Announcing the Google Cloud Born-Digital Summit—Over this half-day event, we’ll highlight proven best-practice approaches to data, architecture, diversity & inclusion, and growth with Google Cloud solutions. Learn more and register for free.
      • Google Cloud products in 4 words or less (2021 edition)—Our popular “4 words or less Google Cloud developer’s cheat sheet” is back and updated for 2021. Learn more.
      • Gartner names Google a leader in its 2021 Magic Quadrant for Cloud AI Developer Services report—We believe this recognition is based on Gartner’s evaluation of Google Cloud’s language, vision, conversational, and structured data services and solutions for developers. Learn more.
      • Announcing the Risk Protection Program—The Risk Protection Program offers customers peace of mind through the technology to secure their data, the tools to monitor the security of that data, and an industry-first cyber policy offered by leading insurers. Learn more.
      • Building the future of work—We’re introducing new innovations in Google Workspace to help people collaborate and find more time and focus, wherever and however they work. Learn more.

      • Assured Controls and expanded Data Regions—We’ve added new information governance features in Google Workspace to help customers control their data based on their business goals. Learn more.


      Week of Feb 22-Feb 26 2021

      • 21 Google Cloud tools explained in 2 minutes—Need a quick overview of Google Cloud core technologies? Quickly learn these 21 Google Cloud products—each explained in under two minutes. Learn more.

      • BigQuery materialized views now GA—Materialized views (MVs) are precomputed views that periodically cache the results of a query to provide customers increased performance and efficiency (see the sketch at the end of this week’s updates). Learn more.

      • New in BigQuery BI Engine—We’re extending BigQuery BI Engine to work with any BI or custom dashboarding applications that require sub-second query response times. In this preview, BI Engine will work seamlessly with Looker and other popular BI tools such as Tableau and Power BI without requiring any change to the BI tools. Learn more.

      • Dataproc now supports Shielded VMs—All Dataproc clusters created using Debian 10 or Ubuntu 18.04 operating systems now use Shielded VMs by default, and customers can provide their own configurations for secure boot, vTPM, and Integrity Monitoring. This feature is just one of the many ways that customers who have migrated their Hadoop and Spark clusters to GCP see continued improvements to their security posture at no additional cost.

      • New Cloud Security Podcast by Google—Our new podcast brings you stories and insights on security in the cloud, delivering security from the cloud, and, of course, on what we’re doing at Google Cloud to help keep customer data safe and workloads secure. Learn more.

      • New in Conversational AI and Apigee technology—Australian retailer Woolworths provides seamless customer experiences with their virtual agent, Olive. Apigee API Management and Dialogflow technology allows customers to talk to Olive through voice and chat. Learn more.

      • Introducing GKE Autopilot—GKE already offers an industry-leading level of automation that makes setting up and operating a Kubernetes cluster easier and more cost effective than do-it-yourself and other managed offerings. Autopilot represents a significant leap forward. In addition to the fully managed control plane that GKE has always provided, using the Autopilot mode of operation automatically applies industry best practices and can eliminate all node management operations, maximizing your cluster efficiency and helping to provide a stronger security posture. Learn more.

      • Partnering with Intel to accelerate cloud-native 5G—As we continue to grow cloud-native services for the telecommunications industry, we’re excited to announce a collaboration with Intel to develop reference architectures and integrated solutions for communications service providers to accelerate their deployment of 5G and edge network solutions. Learn more.

      • Veeam Backup for Google Cloud now available—Veeam Backup for Google Cloud automates Google-native snapshots to securely protect VMs across projects and regions with ultra-low RPOs and RTOs, and store backups in Google Object Storage to enhance data protection while ensuring lower costs for long-term retention.

      • Migrate for Anthos 1.6 GA—With Migrate for Anthos, customers and partners can automatically migrate and modernize traditional application workloads running in VMs into containers running on Anthos or GKE. Included in this new release: 

        • In-place modernization for Anthos on AWS (Public Preview) to help customers accelerate on-boarding to Anthos AWS while leveraging their existing investment in AWS data sources, projects, VPCs, and IAM controls.

        • Additional Docker registries and artifact repositories support (GA) including AWS ECR, basic-auth docker registries, and AWS S3 storage to provide further flexibility for customers using Anthos Anywhere (on-prem, AWS, etc.).

        • HTTPS Proxy support (GA) to enable M4A functionality (access to external image repos and other services) where a proxy is used to control external access.
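
      To illustrate the materialized views item above, here is a minimal sketch; the table and view names (mydataset.orders, mydataset.daily_order_totals) and columns are hypothetical placeholders, not from the announcement:

        -- Sketch: a materialized view that precomputes a daily aggregate;
        -- BigQuery periodically refreshes its cached results automatically.
        CREATE MATERIALIZED VIEW `mydataset.daily_order_totals` AS
        SELECT
          DATE(order_timestamp) AS order_date,
          SUM(amount) AS total_amount
        FROM `mydataset.orders`
        GROUP BY order_date;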


      Week of Feb 15-Feb 19 2021

      • Introducing Cloud Domains in preview—Cloud Domains simplify domain registration and management within Google Cloud, improve the custom domain experience for developers, increase security, and support stronger integrations around DNS and SSL. Learn more.

      • Announcing Databricks on Google Cloud—Our partnership with Databricks enables customers to accelerate Databricks implementations by simplifying their data access, by jointly giving them powerful ways to analyze their data, and by leveraging our combined AI and ML capabilities to impact business outcomes. Learn more.

      • Service Directory is GA—As the number and diversity of services grows, it becomes increasingly challenging to maintain an inventory of all of the services across an organization. Last year, we launched Service Directory to help simplify the problem of service management. Today, it’s generally available. Learn more.


      Week of Feb 8-Feb 12 2021

      • Introducing Bare Metal Solution for SAP workloads—We’ve expanded our Bare Metal Solution—dedicated, single-tenant systems designed specifically to run workloads that are too large or otherwise unsuitable for standard, virtualized environments—to include SAP-certified hardware options, giving SAP customers great options for modernizing their biggest and most challenging workloads. Learn more.

      • 9TB SSDs bring ultimate IOPS/$ to Compute Engine VMs—You can now attach 6TB and 9TB Local SSD to second-generation general-purpose N2 Compute Engine VMs, for great IOPS per dollar. Learn more.

      • Supporting the Python ecosystem—As part of our longstanding support for the Python ecosystem, we are happy to increase our support for the Python Software Foundation, the non-profit behind the Python programming language, ecosystem and community. Learn more.

      • Migrate to regional backend services for Network Load Balancing—We now support backend services with Network Load Balancing—a significant enhancement over the prior approach, target pools, providing a common unified data model for all our load-balancing family members and accelerating the delivery of exciting features on Network Load Balancing. Learn more.


      Week of Feb 1-Feb 4 2021

      • Apigee launches Apigee X—Apigee celebrates its 10-year anniversary with Apigee X, a new release of the Apigee API management platform. Apigee X harnesses the best of Google technologies to accelerate and globalize your API-powered digital initiatives. Learn more about Apigee X and digital excellence here.
      • Celebrating the success of Black founders with Google Cloud during Black History Month—February is Black History Month, a time for us to come together to celebrate and remember the important people and history of African heritage. Over the next four weeks, we will highlight four Black-led startups and how they use Google Cloud to grow their businesses. Our first feature highlights TQIntelligence and its founder, Yared.


      Week of Jan 25-Jan 29 2021

      • BeyondCorp Enterprise now generally available—BeyondCorp Enterprise is a zero trust solution, built on Google’s global network, which provides customers with simple and secure access to applications and cloud resources and offers integrated threat and data protection. To learn more, read the blog post, visit our product homepage, and register for our upcoming webinar.


      Week of Jan 18-Jan 22 2021

      • Cloud Operations Sandbox now available—Cloud Operations Sandbox is an open-source tool that helps you learn SRE practices from Google and apply them on cloud services using Google Cloud’s operations suite (formerly Stackdriver), with everything you need to get started in one click. You can read our blog post, or get started by visiting cloud-ops-sandbox.dev, exploring the project repo, and following along in the user guide.

      • New data security strategy whitepaper—Our new whitepaper shares our best practices for how to deploy a modern and effective data security program in the cloud. Read the blog post or download the paper.   

      • WebSockets, HTTP/2 and gRPC bidirectional streams come to Cloud Run—With these capabilities, you can deploy new kinds of applications to Cloud Run that were not previously supported, while taking advantage of serverless infrastructure. These features are now available in public preview for all Cloud Run locations. Read the blog post or check out the WebSockets demo app or the sample h2c server app.

      • New tutorial: Build a no-code workout app in 5 steps—Looking to crush your new year’s resolutions? Using AppSheet, Google Cloud’s no-code app development platform, you can build a custom fitness app that can do things like record your sets, reps and weights, log your workouts, and show you how you’re progressing. Learn how.


      Week of Jan 11-Jan 15 2021

      • State of API Economy 2021 Report now available—Google Cloud details the changing role of APIs in 2020 amidst the COVID-19 pandemic, informed by a comprehensive study of Apigee API usage behavior across industry, geography, enterprise size, and more. Discover these 2020 trends along with a projection of what to expect from APIs in 2021. Read our blog post here or download and read the report here.
      • New in the state of no-code—Google Cloud's AppSheet looks back at the key no-code application development themes of 2020. AppSheet contends the rising number of citizen developer app creators will ultimately change the state of no-code in 2021. Read more here.


      Week of Jan 4-Jan 8 2021

      • Last year's most popular API posts—In an arduous year, thoughtful API design and strategy proved critical to empowering developers and companies to use technology for global good. Google Cloud looks back at the must-read API posts of 2020. Read it here.


      Week of Dec 21-Dec 25 2020


      Week of Dec 14-Dec 18 2020

      • Memorystore for Redis enables TLS encryption support (Preview)—With this release, you can now use Memorystore for applications requiring sensitive data to be encrypted between the client and the Memorystore instance. Read more here.
      • Monitoring Query Language (MQL) for Cloud Monitoring is now generally available—Monitoring Query Language provides developers and operators on IT and development teams powerful metric querying, analysis, charting, and alerting capabilities. This functionality is needed for Monitoring use cases that include troubleshooting outages, root cause analysis, custom SLI/SLO creation, reporting and analytics, complex alert logic, and more. Learn more.


      Week of Dec 7-Dec 11 2020

      • Memorystore for Redis now supports Redis AUTH—With this release you can now use the OSS Redis AUTH feature with Memorystore for Redis instances. Read more here.
      • New in serverless computing—Google Cloud API Gateway and its service-first approach to developing serverless APIs helps organizations accelerate innovation by eliminating scalability and security bottlenecks for their APIs. Discover more benefits here.
      • Environmental Dynamics, Inc. makes a big move to no-code—The environmental consulting company EDI built and deployed 35+ business apps with no coding skills necessary with Google Cloud’s AppSheet. This no-code effort not only empowered field workers, but also saved employees over 2,550 hours a year. Get the full story here.
      • Introducing Google Workspace for Government—Google Workspace for Government is an offering that brings the best of Google Cloud’s collaboration and communication tools to the government with pricing that meets the needs of the public sector. Whether it’s powering social care visits, employment support, or virtual courts, Google Workspace helps governments meet the unique challenges they face as they work to provide better services in an increasingly virtual world. Learn more.


      Week of Nov 30-Dec 4 2020

      • Google enters agreement to acquire Actifio—Actifio, a leader in backup and disaster recovery (DR), offers customers the opportunity to protect virtual copies of data in their native format, manage these copies throughout their entire lifecycle, and use these copies for scenarios like development and test. This planned acquisition further demonstrates Google Cloud’s commitment to helping enterprises protect workloads on-premises and in the cloud. Learn more.
      • Traffic Director can now send traffic to services and gateways hosted outside of Google Cloud—Traffic Director support for Hybrid Connectivity Network Endpoint Groups (NEGs), now generally available, enables services in your VPC network to interoperate more seamlessly with services in other environments. It also enables you to build advanced solutions based on Google Cloud's portfolio of networking products, such as Cloud Armor protection for your private on-prem services. Learn more.
      • Google Cloud launches the Healthcare Interoperability Readiness Program—This program, powered by APIs and Google Cloud’s Apigee, helps patients, doctors, researchers, and healthcare technologists alike by making patient data and healthcare data more accessible and secure. Learn more here.
      • Container Threat Detection in Security Command Center—We announced the general availability of Container Threat Detection, a built-in service in Security Command Center. This release includes multiple detection capabilities to help you monitor and secure your container deployments in Google Cloud. Read more here.
      • Anthos on bare metal now GA—Anthos on bare metal opens up new possibilities for how you run your workloads, and where. You can run Anthos on your existing virtualized infrastructure, or eliminate the dependency on a hypervisor layer to modernize applications while reducing costs. Learn more.


      Week of Nov 23-27 2020

      • Tuning control support in Cloud SQL for MySQL—We’ve made all 80 flags that were previously in preview now generally available (GA), empowering you with the controls you need to optimize your databases. See the full list here.
      • New in BigQuery ML—We announced the general availability of boosted trees using XGBoost, deep neural networks (DNNs) using TensorFlow, and model export for online prediction. Learn more.
      • New AI/ML in retail report—We recently commissioned a survey of global retail executives to better understand which AI/ML use cases across the retail value chain drive the highest value and returns in retail, and what retailers need to keep in mind when going after these opportunities. Learn more or read the report.


      Week of Nov 16-20 2020

      • New whitepaper on how AI helps the patent industry—Our new paper outlines a methodology to train a BERT (bidirectional encoder representation from transformers) model on over 100 million patent publications from the U.S. and other countries using open-source tooling. Learn more or read the whitepaper.
      • Google Cloud support for .NET 5.0—Learn more about our support of .NET 5.0, as well as how to deploy it to Cloud Run.
      • .NET Core 3.1 now on Cloud Functions—With this integration you can write cloud functions using your favorite .NET Core 3.1 runtime with our Functions Framework for .NET for an idiomatic developer experience. Learn more.
      • Filestore Backups in preview—We announced the availability of the Filestore Backups preview in all regions, making it easier to migrate your business continuity, disaster recovery and backup strategy for your file systems in Google Cloud. Learn more.
      • Introducing Voucher, a service to help secure the container supply chain—Developed by the Software Supply Chain Security team at Shopify to work with Google Cloud tools, Voucher evaluates container images created by CI/CD pipelines and signs those images if they meet certain predefined security criteria. Binary Authorization then validates these signatures at deploy time, ensuring that only explicitly authorized code that meets your organizational policy and compliance requirements can be deployed to production. Learn more.
      • 10 most watched from Google Cloud Next ‘20: OnAir—Take a stroll through the 10 sessions that were most popular from Next OnAir, covering everything from data analytics to cloud migration to no-code development. Read the blog.
      • Artifact Registry is now GA—With support for container images, Maven, npm packages, and additional formats coming soon, Artifact Registry helps your organization benefit from scale, security, and standardization across your software supply chain. Read the blog.


      Week of Nov 9-13 2020

      • Introducing the Anthos Developer Sandbox—The Anthos Developer Sandbox gives you an easy way to learn to develop on Anthos at no cost, available to anyone with a Google account. Read the blog.
      • Database Migration Service now available in preview—Database Migration Service (DMS) makes migrations to Cloud SQL simple and reliable. DMS supports migrations of self-hosted MySQL databases—either on-premises or in the cloud, as well as managed databases from other clouds—to Cloud SQL for MySQL. Support for PostgreSQL is currently available for limited customers in preview, with SQL Server coming soon. Learn more.
      • Troubleshoot deployments or production issues more quickly with new logs tailing—We’ve added support for a new API to tail logs with low latency. Using gcloud, it allows you the convenience of tail -f with the powerful query language and centralized logging solution of Cloud Logging. Learn more about this preview feature.
      • Regionalized log storage now available in 5 new regions in preview—You can now select where your logs are stored from one of five regions in addition to global—asia-east1, europe-west1, us-central1, us-east1, and us-west1. When you create a logs bucket, you can set the region in which you want to store your logs data. Get started with this guide.


      Week of Nov 2-6 2020

      • Cloud SQL adds support for PostgreSQL 13—Shortly after its community GA, Cloud SQL has added support for PostgreSQL 13. You get access to the latest features of PostgreSQL while Cloud SQL handles the heavy operational lifting, so your team can focus on accelerating application delivery. Read more here.
      • Apigee creates value for businesses running on SAP—Google Cloud’s API Management platform Apigee is optimized for data insights and data monetization, helping businesses running on SAP innovate faster without fear of SAP-specific challenges to modernization. Read more here.
      • Document AI platform is live—The new Document AI (DocAI) platform, a unified console for document processing, is now available in preview. You can quickly access all parsers, tools and solutions (e.g. Lending DocAI, Procurement DocAI) with a unified API, enabling an end-to-end document solution from evaluation to deployment. Read the full story here or check it out in your Google Cloud console.
      • Accelerating data migration with Transfer Appliances TA40 and TA300—We’re announcing the general availability of new Transfer Appliances. Customers are looking for fast, secure, and easy-to-use options to migrate their workloads to Google Cloud, and we are addressing their needs with next-generation Transfer Appliances. Learn more about Transfer Appliances TA40 and TA300.


      Week of Oct 26-30 2020

      • B.H., Inc. accelerates digital transformation—The Utah-based contracting and construction company BHI eliminated its IT backlog when non-technical employees were empowered to build equipment inspection, productivity, and other custom apps by choosing Google Workspace and the no-code app development platform, AppSheet. Read the full story here.
      • Globe Telecom embraces no-code development—Google Cloud’s AppSheet empowers Globe Telecom employees to do more innovating with less code. The global communications company kickstarted their no-code journey by combining the power of AppSheet with a unique adoption strategy. As a result, AppSheet helped Globe Telecom employees build 59 business apps in just 8 weeks. Get the full story.
      • Cloud Logging now allows you to control access to logs via Log Views—Building on the control offered via Log Buckets (blog post), you can now configure who has access to logs based on the source project, resource type, or log name, all using standard IAM controls. Log Views, currently in Preview, can help you build a system using the principle of least privilege, limiting sensitive logs to only users who need this information. Learn more about Log Views.
      • Document AI is HIPAA compliant—Document AI now enables HIPAA compliance. Healthcare and life science customers such as health care providers, health plans, and life science organizations can now unlock insights by quickly extracting structured data from medical documents while safeguarding individuals’ protected health information (PHI). Learn more about Google Cloud’s nearly 100 products that support HIPAA compliance.


      Week of Oct 19-23 2020

      • Improved security and governance in Cloud SQL for PostgreSQL—Cloud SQL for PostgreSQL now integrates with Cloud IAM (preview) to provide simplified and consistent authentication and authorization. Cloud SQL has also enabled PostgreSQL Audit Extension (preview) for more granular audit logging. Read the blog.
      • Announcing the AI in Financial Crime Compliance webinar—Our executive digital forum will feature industry executives, academics, and former regulators who will discuss how AI is transforming financial crime compliance on November 17. Register now.
      • Transforming retail with AI/ML—New research provides insights on high value AI/ML use cases for food, drug, mass merchant and speciality retail that can drive significant value and build resilience for your business. Learn what the top use cases are for your sub-segment and read real world success stories. Download the ebook here and view this companion webinar which also features insights from Zulily.
      • New release of Migrate for Anthos—We’re introducing two important new capabilities in the 1.5 release of Migrate for Anthos, Google Cloud's solution to easily migrate and modernize applications currently running on VMs so that they instead run on containers in Google Kubernetes Engine or Anthos. The first is GA support for modernizing IIS apps running on Windows Server VMs. The second is a new utility that helps you identify which VMs in your existing environment are the best targets for modernization to containers. Start migrating or check out the assessment tool documentation (Linux | Windows).
      • New Compute Engine autoscaler controls—New scale-in controls in Compute Engine let you limit the VM deletion rate by preventing the autoscaler from reducing a MIG's size by more VM instances than your workload can tolerate losing. Read the blog.
      • Lending DocAI in preview—Lending DocAI is a specialized solution in our Document AI portfolio for the mortgage industry that processes borrowers’ income and asset documents to speed up loan applications. Read the blog, or check out the product demo.


      Week of Oct 12-16 2020

      • New maintenance controls for Cloud SQL—Cloud SQL now offers maintenance deny period controls, which allow you to prevent automatic maintenance from occurring during a 90-day time period. Read the blog.
      • Trends in volumetric DDoS attacks—This week we published a deep dive into DDoS threats, detailing the trends we’re seeing and giving you a closer look at how we prepare for multi-terabit attacks so your sites stay up and running. Read the blog.
      • New in BigQuery—We shared a number of updates this week, including new SQL capabilities, more granular control over your partitions with time unit partitioning (see the sketch at the end of this week’s updates), the general availability of Table ACLs, and BigQuery System Tables Reports, a solution that aims to help you monitor BigQuery flat-rate slot and reservation utilization by leveraging BigQuery’s underlying INFORMATION_SCHEMA views. Read the blog.
      • Cloud Code makes YAML easy for hundreds of popular Kubernetes CRDs—We announced authoring support for more than 400 popular Kubernetes CRDs out of the box, any existing CRDs in your Kubernetes cluster, and any CRDs you add from your local machine or a URL. Read the blog.
      • Google Cloud’s data privacy commitments for the AI era—We’ve outlined how our AI/ML Privacy Commitment reflects our belief that customers should have both the highest level of security and the highest level of control over data stored in the cloud. Read the blog.

      • New, lower pricing for Cloud CDN—We’ve reduced the price of cache fill charges (for content fetched from your origin) across the board by up to 80%. Along with our recent introduction of a new set of flexible caching capabilities, this makes it even easier to use Cloud CDN to optimize the performance of your applications. Read the blog.

      • Expanding the BeyondCorp Alliance—Last year, we announced our BeyondCorp Alliance with partners that share our Zero Trust vision. Today, we’re announcing new partners to this alliance. Read the blog.

      • New data analytics training opportunities—Throughout October and November, we’re offering a number of no-cost ways to learn data analytics, with trainings for beginners to advanced users. Learn more.

      • New BigQuery blog series—BigQuery Explained provides overviews on storage, data ingestion, queries, joins, and more. Read the series.
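
      To illustrate the time unit partitioning mentioned above, here is a minimal sketch; the table name (mydataset.events) and its columns are hypothetical placeholders:

        -- Sketch: a table partitioned by hour using time unit partitioning.
        CREATE TABLE `mydataset.events` (
          event_id STRING,
          event_ts TIMESTAMP
        )
        PARTITION BY TIMESTAMP_TRUNC(event_ts, HOUR);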


      Week of Oct 5-9 2020

      • Introducing the Google Cloud Healthcare Consent Management API—This API gives healthcare application developers and clinical researchers a simple way to manage individuals’ consent of their health data, particularly important given the new and emerging virtual care and research scenarios related to COVID-19. Read the blog.

      • Announcing Google Cloud buildpacks—Based on the CNCF buildpacks v3 specification, these buildpacks produce container images that follow best practices and are suitable for running on all of our container platforms: Cloud Run (fully managed), Anthos, and Google Kubernetes Engine (GKE). Read the blog.

      • Providing open access to the Genome Aggregation Database (gnomAD)—Our collaboration with Broad Institute of MIT and Harvard provides free access to one of the world's most comprehensive public genomic datasets. Read the blog.

      • Introducing HTTP/gRPC server streaming for Cloud Run—Server-side HTTP streaming for your serverless applications running on Cloud Run (fully managed) is now available. This means your Cloud Run services can serve larger responses or stream partial responses to clients during the span of a single request, enabling quicker server response times for your applications. Read the blog.

      • New security and privacy features in Google Workspace—Alongside the announcement of Google Workspace we also shared more information on new security features that help facilitate safe communication and give admins increased visibility and control for their organizations. Read the blog.

      • Introducing Google Workspace—Google Workspace includes all of the productivity apps you know and use at home, at work, or in the classroom—Gmail, Calendar, Drive, Docs, Sheets, Slides, Meet, Chat and more—now more thoughtfully connected. Read the blog.

      • New in Cloud Functions: languages, availability, portability, and more—We extended Cloud Functions—our scalable pay-as-you-go Functions-as-a-Service (FaaS) platform that runs your code with zero server management—so you can now use it to build end-to-end solutions for several key use cases. Read the blog.

      • Announcing the Google Cloud Public Sector Summit, Dec 8-9—Our upcoming two-day virtual event will offer thought-provoking panels, keynotes, customer stories and more on the future of digital service in the public sector. Register at no cost.

    • BigQuery Explainable AI now in GA to help you interpret your machine learning models Tue, 18 Jan 2022 21:00:00 -0000

      Explainable AI (XAI) helps you understand and interpret how your machine learning models make decisions. We're excited to announce that BigQuery Explainable AI is now generally available (GA). BigQuery is the data warehouse that supports explainable AI most comprehensively, with respect to both XAI methodology and model types. And it does this at BigQuery scale, enabling millions of explanations within seconds with a single SQL query.

      Why is Explainable AI so important? By demystifying the inner workings of machine learning models, Explainable AI is quickly becoming an essential need for businesses as they continue to invest in AI and ML. 76% of enterprises now prioritize artificial intelligence (AI) and machine learning (ML) over other initiatives in their 2021 IT budgets, and according to a PwC survey, the majority of CEOs (82%) believe that AI-based decisions must be explainable to be trusted.

      While the focus of this blog post is on BigQuery Explainable AI, Google Cloud provides a variety of tools and frameworks to help you interpret models outside of BigQuery, such as Vertex Explainable AI, which covers AutoML Tables, AutoML Vision, and custom-trained models.

      So how does Explainable AI in BigQuery work exactly? And how might you use it in practice? 

      Two types of Explainable AI: global and local explainability

      When it comes to Explainable AI, the first thing to note is that there are two main types of explainability as they relate to the features used to train the ML model: global explainability and local explainability.

      Imagine that you have an ML model that predicts housing price (as a dollar amount) based on three features: (1) number of bedrooms, (2) distance to the nearest city center, and (3) construction date.

      Global explainability (a.k.a. global feature importance) describes the features' overall influence on the model and helps you understand whether a feature had a greater influence than other features over the model's predictions. For example, global explainability can reveal that the number of bedrooms and the distance to the city center typically have a much stronger influence on predicted housing prices than the construction date. Global explainability is especially useful if you have hundreds or thousands of features and you want to determine which features are the most important contributors to your model. You may also consider using global explainability as a way to identify and prune less important features to improve the generalizability of your models.

      Local explainability (a.k.a. feature attributions) describes the breakdown of how each feature contributes towards a specific prediction. For example, if the model predicts that house ID#1001 has a predicted price of $230,000, local explainability would describe a baseline amount (e.g. $50,000) and how each of the features contributes on top of the baseline towards the predicted price. For example, the model may say that on top of the baseline of $50,000, having 3 bedrooms contributed an additional $50,000, close proximity to the city center added $100,000, and construction date of 2010 added $30,000, for a total predicted price of $230,000. In essence, understanding the exact contribution of each feature used by the model to make each prediction is the main purpose of local explainability.

      What ML models does BigQuery Explainable AI apply to?

      BigQuery Explainable AI applies to a variety of models, including supervised learning models for IID data and time series models. The documentation for BigQuery Explainable AI provides an overview of the different ways of applying explainability per model. Note that each explainability method has its own calculation (e.g., Shapley values), covered in more depth in the documentation.

      Examples with BigQuery Explainable AI

      In this next section, we will show three examples of how to use BigQuery Explainable AI in different ML applications: regression, classification, and time-series forecasting.

      Regression models with BigQuery Explainable AI

      Let's use a boosted tree regression model to predict how much a taxi cab driver will receive in tips for a taxi ride, based on features such as number of passengers, payment type, total payment and trip distance. Then let's use BigQuery Explainable AI to help us understand how the model made the predictions in terms of global explainability (which features were most important?) and local explainability (how did the model arrive at each prediction?).

      The taxi trips dataset comes from the BigQuery public datasets and is publicly available in the table: bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2018

      First, you can train a boosted tree regression model.
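
      The post's training query isn't reproduced here, so the following is a minimal sketch of what it could look like. The dataset and model names (mydataset.taxi_tip_model), the feature list, and the row filter are illustrative assumptions, and enable_global_explain is set so that global explanations can be queried later:

      -- Train a boosted tree regression model on the public taxi trips table.
      -- Names and feature choices below are illustrative, not from the original post.
      CREATE OR REPLACE MODEL `mydataset.taxi_tip_model`
      OPTIONS (
        model_type = 'BOOSTED_TREE_REGRESSOR',
        input_label_cols = ['tip_amount'],
        enable_global_explain = TRUE
      ) AS
      SELECT
        passenger_count,
        trip_distance,
        payment_type,
        total_amount,
        tip_amount
      FROM `bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2018`
      WHERE tip_amount >= 0;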

      Now let's do a prediction using ML.PREDICT, which is the standard way in BigQuery ML to make predictions without explainability.
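
      A minimal sketch of such a call, reusing the hypothetical model above with a single illustrative input row:

      -- Standard prediction; outputs a predicted_tip_amount column.
      SELECT predicted_tip_amount
      FROM ML.PREDICT(
        MODEL `mydataset.taxi_tip_model`,  -- hypothetical model from the sketch above
        (SELECT 2 AS passenger_count, 5.0 AS trip_distance,
                '1' AS payment_type, 22.0 AS total_amount)
      );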

      [Screenshot: Regression ML.PREDICT output]

      But you might wonder—how did the model generate this prediction of ~11.077?

      BigQuery Explainable AI can help us answer this question. Instead of using ML.PREDICT, you use ML.EXPLAIN_PREDICT with an additional optional parameter top_k_features. ML.EXPLAIN_PREDICT extends the capabilities of ML.PREDICT by outputting several additional columns that explain how each feature contributes to the predicted value. In fact, since ML.EXPLAIN_PREDICT includes all the output from ML.PREDICT anyway, you may want to consider using ML.EXPLAIN_PREDICT every time instead.
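
      A sketch of the call, with the same hypothetical names as above; top_k_features caps how many feature attributions are returned per row:

      -- Prediction plus per-feature attributions for the top 3 features.
      SELECT *
      FROM ML.EXPLAIN_PREDICT(
        MODEL `mydataset.taxi_tip_model`,
        (SELECT 2 AS passenger_count, 5.0 AS trip_distance,
                '1' AS payment_type, 22.0 AS total_amount),
        STRUCT(3 AS top_k_features)
      );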

      [Screenshot: Regression ML.EXPLAIN_PREDICT output]

      The way to interpret these columns is:

      Σfeature_attributions + baseline_prediction_value = prediction_value

      Let's break this down. The prediction_value is ~11.077, which is simply the predicted_tip_amount. The baseline_prediction_value is ~6.184, which is the tip amount for an average instance. top_feature_attributions indicates how much each of the features contributes towards the prediction value. For example, total_amount contributes ~2.540 to the predicted_tip_amount.

      ML.EXPLAIN_PREDICT provides local feature explainability for regression models. For global feature importance, see the documentation for ML.GLOBAL_EXPLAIN.
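
      A sketch of a global-explanation query for the same hypothetical model (ML.GLOBAL_EXPLAIN requires the model to have been trained with enable_global_explain = TRUE, as in the training sketch above):

      -- One importance value per feature, aggregated across the whole model.
      SELECT *
      FROM ML.GLOBAL_EXPLAIN(MODEL `mydataset.taxi_tip_model`);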

      Classification models with BigQuery Explainable AI

      Let's use a logistic regression model to show you an example of BigQuery Explainable AI with classification models. We can use the same public dataset as before: bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2018.

      Train a logistic regression model to predict which bracket the tip falls into, measured as a percentage of the total taxi bill.
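
      The post's training query isn't reproduced here either; a minimal sketch follows, assuming a hypothetical model name and an illustrative three-way bucketing of the tip percentage (the original bracket definition may differ):

      -- Train a logistic regression classifier over an illustrative tip bracket.
      CREATE OR REPLACE MODEL `mydataset.taxi_tip_bracket_model`
      OPTIONS (
        model_type = 'LOGISTIC_REG',
        input_label_cols = ['tip_bracket']
      ) AS
      SELECT
        vendor_id,
        rate_code,
        payment_type,
        passenger_count,
        trip_distance,
        total_amount,
        CASE  -- illustrative bracket definition
          WHEN SAFE_DIVIDE(tip_amount, total_amount) < 0.1 THEN 'low'
          WHEN SAFE_DIVIDE(tip_amount, total_amount) < 0.2 THEN 'medium'
          ELSE 'high'
        END AS tip_bracket
      FROM `bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2018`
      WHERE total_amount > 0;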

      Next, you can run ML.EXPLAIN_PREDICT to get both the classification results and the additional information for local feature explainability. For global explainability, you can use ML.GLOBAL_EXPLAIN. Again, since ML.EXPLAIN_PREDICT includes all the output from ML.PREDICT anyway, you may want to consider using ML.EXPLAIN_PREDICT every time instead.

      [Screenshot: Classification ML.EXPLAIN_PREDICT output]

      Similar to the regression example earlier, the same formula derives the prediction_value:

      Σfeature_attributions + baseline_prediction_value = prediction_value

      As you can see in the screenshot above, the baseline_prediction_value is ~0.296. total_amount is the most important feature in making this specific prediction, contributing ~0.067 to the prediction_value, followed by trip_distance. The feature passenger_count contributes negatively to the prediction_value (~-0.0015). The features vendor_id, rate_code, and payment_type did not contribute much to the prediction_value.

      You may wonder why the prediction_value of ~0.389 doesn't equal the probability value of ~0.359. The reason is that, unlike for regression models, for classification models the prediction_value is not a probability score. Instead, prediction_value is the logit value (i.e., log-odds) for the predicted class, which you can separately convert to probabilities by applying the softmax transformation to the logit values. For example, a three-class classification might have a log-odds output of [2.446, -2.021, -2.190]; applying the softmax transformation gives class probabilities of approximately [0.979, 0.011, 0.009].
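
      If you want to reproduce that conversion in SQL, here is a minimal sketch (the logits array is just the example above):

      -- Softmax over the example logits; yields approximately [0.979, 0.011, 0.009].
      SELECT ARRAY(
        SELECT EXP(logit) / (SELECT SUM(EXP(l)) FROM UNNEST(logits) AS l)
        FROM UNNEST(logits) AS logit WITH OFFSET AS pos
        ORDER BY pos
      ) AS probabilities
      FROM (SELECT [2.446, -2.021, -2.190] AS logits);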

      Time-series forecasting models with BigQuery Explainable AI

      [Figure: historical daily number of bike trips in NYC]

      Explainable AI for forecasting provides more interpretability into how the forecasting model came to its predictions. Let's go through an example of forecasting the number of bike trips in NYC using the new_york.citibike_trips public data in BigQuery.

      You can train an ARIMA_PLUS time-series model:
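
      A minimal training sketch, assuming a hypothetical model name and a daily aggregation of the public table; holiday_region is included because holiday effects are part of the decomposition:

      -- Train an ARIMA_PLUS model on daily trip counts.
      CREATE OR REPLACE MODEL `mydataset.nyc_citibike_arima_model`  -- hypothetical name
      OPTIONS (
        model_type = 'ARIMA_PLUS',
        time_series_timestamp_col = 'trip_date',
        time_series_data_col = 'num_trips',
        holiday_region = 'US'
      ) AS
      SELECT
        DATE(starttime) AS trip_date,
        COUNT(*) AS num_trips
      FROM `bigquery-public-data.new_york.citibike_trips`
      GROUP BY trip_date;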

      Forecasts then come from ML.FORECAST, which outputs the forecasted values and the prediction interval; a minimal call is sketched below. Plotting that output alongside the input time series gives the following figure.
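
      -- Forecast 30 days ahead with a 90% prediction interval
      -- (hypothetical model name and illustrative horizon).
      SELECT *
      FROM ML.FORECAST(
        MODEL `mydataset.nyc_citibike_arima_model`,
        STRUCT(30 AS horizon, 0.9 AS confidence_level)
      );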

      [Figure: historical daily bike trips with forecasts and prediction intervals, using ML.FORECAST]

      But how does the forecasting model arrive at its predictions? Explainability is especially important if the model ever generates unexpected results.

      With ML.EXPLAIN_FORECAST, BigQuery Explainable AI provides extra transparency into the seasonality, trend, holiday effects, level (step) changes, and spikes and dips outlier removal. In fact, since ML.EXPLAIN_FORECAST includes all the output from ML.FORECAST anyway, you may want to consider using ML.EXPLAIN_FORECAST every time instead.
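
      The call mirrors ML.FORECAST; a sketch with the same hypothetical model:

      -- Forecasts plus the per-timestamp component breakdown.
      SELECT *
      FROM ML.EXPLAIN_FORECAST(
        MODEL `mydataset.nyc_citibike_arima_model`,
        STRUCT(30 AS horizon, 0.9 AS confidence_level)
      );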

      [Figure: historical daily bike trips with forecasts, prediction intervals, and the time series component breakdown, using ML.EXPLAIN_FORECAST]

      Compared to the previous figure which only shows the forecasting results, this figure shows much richer information to explain how the forecast is made.  

      First, it shows how the input time series is adjusted by removing spike and dip anomalies and by compensating for level changes. That is:

      time_series_adjusted_data = time_series_data - spikes_and_dips - step_changes

      Second, it shows how the adjusted input time series is decomposed into components: a trend component, yearly and weekly seasonal components, and a holiday-effect component. That is:

      time_series_adjusted_data = trend + seasonal_period_yearly + seasonal_period_weekly + holiday_effect + residual

      Finally, it shows how these components are forecasted separately to compose the final forecasting results. That is:

      time_series_data = trend + seasonal_period_yearly + seasonal_period_weekly + holiday_effect

      For more information on these time series components, see the BigQuery ML documentation.

      Conclusion

      With the GA of BigQuery Explainable AI, we hope you will now be able to interpret your machine learning models with ease. 

      Thanks to the BigQuery ML team, especially Lisa Yin, Jiashang Liu, Amir Hormati, Mingge Deng, Jerry Ye and Abhinav Khushraj. Also thanks to the Vertex Explainable AI team, especially David Pitman and Besim Avci.

      Related Article

      How to build demand forecasting models with BigQuery ML

      With BigQuery ML, you can train and deploy machine learning models using SQL. With the fully managed, scalable infrastructure of BigQuery...

      Read Article
    • The future of work requires a more human approach to security Tue, 18 Jan 2022 17:00:00 -0000

      Most businesses that have adopted off-site or hybrid working models over the last two years made the change under immense pressure. The need was incredibly urgent and timing was a major factor. Now that they’ve had a chance to adapt and settle in, leaders are revisiting how their teams work in a more proactive way. They’re updating strategies and policies with a focus on what will be best for both the company and the employees long term. 

      This is especially true when it comes to data integrity and security. 

      Hybrid/flexible work will be a “standard practice” within three years, say more than 75 percent of respondents to a survey conducted by Economist Impact and commissioned by Google Workspace. And while the security challenges related to flexible work certainly aren’t new, the last 18 months have highlighted many vulnerabilities at scale.

      We’re in a new era of data security where business leaders must abandon traditional ideas of what a workplace looks like. Work is no longer a physical space, but rather a series of interconnected policies on how to get things done. Where the work happens, whether that’s at home, in a traditional office, or any number of locations between them, simply isn’t as important as it used to be. 

      With this thinking, security requires a new approach. It’s no longer just about protecting information or restricting how that information is accessed—it’s about building safe, efficient, and effective ways to facilitate seamless collaboration and information-sharing. 

      Take employee-owned laptops, for example. Where business leaders didn't provide workers with all of the hardware and devices needed to thrive when work shifted off-site, many employees turned to their own personal devices to complete job-related tasks. Those personal devices may not be equipped with the same security protections as in-office devices, so sensitive data loss, leakage, and theft are far more likely than they were when everyone worked in a controlled office environment.

      The same is true for the opposite scenario, in which employees are using company laptops on personal Wi-Fi. Leaking sensitive company data is among the top security challenges, say 20 percent of business leaders surveyed in 2021 by Entrust. In the same survey, 21 percent of business leaders say they are worried about security risks from unmanaged home networks. 

      So what’s a security-minded business leader to do? 

      Cloud-based security

      On-premises business systems have relied on hyper-controlled environments, most often through in-office network security or Virtual Private Networks (VPNs). Cloud-based platforms, on the other hand, promote data sharing and collaboration regardless of physical location. Of the many upsides to moving information to a cloud-based platform, anywhere, anytime access is the most crucial. And these days, almost all business-critical programs and apps can be accessed through browsers such as Chrome, which means employees don't need additional device drivers in order to access the information they need to be successful.

      Zero-trust policies

      Zero-trust models shift the focus to the individual user without a need for VPN technology, so access controls are enforced no matter where the user is or what device they’re using. Any user or device attempting to access a network or its resources requires authorization, which creates higher security limits on file-sharing, application downloads, and data usage. It also extends to employees using their personal devices, which can alleviate some of the worry that well-meaning employees could cause an unintentional breach.

      Secure by design

      The last thing an employer wants to do is create barriers to collaboration, and requiring an excessive number of checks and balances to access sensitive information can do just that. When tools are secure by design, however, employees can work together seamlessly. Rather than avoiding risk completely, businesses can monitor and maintain security risk governance to open up the lines of communication and foster a more collaborative and innovative culture. 

      When implemented well, this holistic approach prioritizes security while making systems virtually invisible to employees. Aside from the occasional nudge to the end user that their activity may be unsafe, everything happens behind the scenes. 

      Building a culture of security

      Beyond secure infrastructure, creating a company culture that prioritizes security can help minimize risk among a dispersed workforce. But remember, security and privacy policies are only as strong as their latest update. A 2020 report stated that nearly 25 percent of organizations hadn’t updated their security protocols in over a year. When updating policies and protocols, business leaders have the opportunity to meet employees where they are. This not only builds a culture of trust, but one of holistic security. 

      One way leaders can embed security culture into their organization is to collaborate with IT leaders on best practices and share them in actionable bites. Developing security training for employees and holding dedicated “office hours” to answer questions as they arise are two additional approaches to security culture. 

      Employee partnership

      Perspective is important and organizations have the opportunity to view employees as both partners and a line of defense, rather than seeing them as potential liabilities. It’s true that the way people work—and the way they access sensitive information—won’t always be perfectly secure, but letting workers know that they’re inherently trusted improves productivity and employee experience. When organizations block access to things like news, music, and email for employees, it can create tension. The best approach is to create checks and balances that allow for efficient response if and when problems do arise, instead of monitoring every click and download. 

      Looking ahead

      The shift to hybrid work compels business leaders to reflect on their practices and adopt new security solutions. And because these work models aren’t going anywhere, it’s important to address potential risks in a holistic manner. With an employee-centered approach, organizations can navigate today’s complex threat landscape with more confidence and better results. 

      Learn more about how to protect your organization.

      Related Article

      Google Workspace delivers new levels of trusted collaboration for a hybrid work world

      We’re adding new security features in Google Workspace, including client-side encryption and trust rules for Drive.

      Read Article



    Google has many products and the following is a list of them: Android Auto, Android OS, Android TV, Calendar, Cardboard, Chrome, Chrome Enterprise, Chromebook, Chromecast, Connected Home, Contacts, Digital Wellbeing, Docs, Drive, Earth, Finance, Forms, Gboard, Gmail, Google Alerts, Google Analytics, Google Arts & Culture, Google Assistant, Google Authenticator, Google Chat, Google Classroom, Google Duo, Google Expeditions, Google Family Link, Google Fi, Google Files, Google Find My Device, Google Fit, Google Flights, Google Fonts, Google Groups, Google Home App, Google Input Tools, Google Lens, Google Meet, Google One, Google Pay, Google Photos, Google Play, Google Play Books, Google Play Games, Google Play Pass, Google Play Protect, Google Podcasts, Google Shopping, Google Street View, Google TV, Google Tasks, Hangouts, Keep, Maps, Measure, Messages, News, PhotoScan, Pixel, Pixel Buds, Pixelbook, Scholar, Search, Sheets, Sites, Slides, Snapseed, Stadia, Tilt Brush, Translate, Travel, Trusted Contacts, Voice, Waze, Wear OS by Google, YouTube, YouTube Kids, YouTube Music, YouTube TV, YouTube VR


    Google News



    Think with Google

    Google AI Blog, Android Developers Blog, Google Developers Blog


    Nightmare Scenario: Inside the Trump Administration’s Response to the Pandemic That Changed History. From Washington Post journalists Yasmeen Abutaleb and Damian Paletta comes the definitive account of the Trump administration’s tragic mismanagement of the COVID-19 pandemic, and the chaos, incompetence, and craven politicization that have led to more than a half million American deaths and counting.

    Since the day Donald Trump was elected, his critics warned that an unexpected crisis would test the former reality-television host - and they predicted that the president would prove unable to meet the moment. In 2020, that crisis came to pass, with the outcomes more devastating and consequential than anyone dared to imagine. Nightmare Scenario is the complete story of Donald Trump’s handling - and mishandling - of the COVID-19 catastrophe, during the period of January 2020 up to Election Day that year. Yasmeen Abutaleb and Damian Paletta take us deep inside the White House, from the Situation Room to the Oval Office, to show how the members of the administration launched an all-out war against the health agencies, doctors, and scientific communities, all in their futile attempts to wish away the worst global pandemic in a century...


    GoogBlogs.com



    ZDNet » Google



    9to5Google » Google



    Computerworld » Google

    • Can Google be trusted? Fri, 21 Jan 2022 03:00:00 -0800

      For years, it seemed, Google lived up to its old motto, “Don’t be evil.” It also seemed to do no wrong in terms of product superiority.

      Google built its reputation as an ethical company that outperformed competitors. But is that reputation still deserved?

      One thing is true: It’s been a bad year for Google’s reputation.

      Does Google engage in unethical business practices?

      An antitrust lawsuit brought by a coalition of US states in 2020 and published in unredacted form last week alleges that Google suppressed competition by manipulating advertising auctions.

      Google used what are called “second price” auctions, where the highest bidder wins the auction but pays the publisher an amount equal to the second-highest bid. Suppose one company bids $10 per click, another bids $8, and a third bids $6: the $10 bidder wins, but pays $8 per click to the publisher.
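
      As a quick illustration of the mechanics (the bids are the example figures above, not data from the lawsuit), the winner and clearing price of a second-price auction can be computed in a few lines of SQL:

      -- Second-price auction: highest bid wins, pays the second-highest bid.
      WITH bids AS (
        SELECT 'A' AS bidder, 10.0 AS bid
        UNION ALL SELECT 'B', 8.0
        UNION ALL SELECT 'C', 6.0
      )
      SELECT
        (SELECT bidder FROM bids ORDER BY bid DESC LIMIT 1) AS winner,       -- A
        (SELECT bid FROM bids ORDER BY bid DESC LIMIT 1 OFFSET 1) AS price;  -- 8.0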


    • The cold, bitter truth about the Android-iOS messaging mess Fri, 14 Jan 2022 02:55:00 -0800

      My goodness, my fellow Google-observers: We've got quite the bit of virtual geek theater playing out in front of our googly eyes right now.

      Have you caught wind of this whole debacle yet? Following a report in The Wall Street Journal that cited the "dreaded green text bubble" as the core reason for The Youths™ supposedly veering toward iPhones over Android devices these days, Google's chief Android exec fired off a feisty series of tweets attacking Apple over its refusal to support contemporary cross-platform messaging standards.


    • 5 new Chrome OS features you should find right now Wed, 12 Jan 2022 03:00:00 -0800

      Android may be Google's highest profile platform, but Chrome OS is arguably the place where the fastest and most exciting progress tends to go down these days.

      Every year as of late, we see leaps and bounds being made in how Chromebooks work and what they're capable of doing — the types of programs they're able to run, the kinds of advantages they're able to offer, and the interesting ways they're able to interact with Android to create a more connected and cohesive-feeling Google ecosystem experience.


    • Privacy-centric DuckDuckGo to release Mac desktop browser Thu, 23 Dec 2021 04:11:00 -0800

      Popular, privacy-centered search engine DuckDuckGo plans to launch a desktop browser for macOS laptops and desktops.

      The browser is designed from the ground up to maintain privacy; that means it will not collect information about users and will not install cookies or tracking codes on devices. The company also claims it can block “hidden trackers” before they load.

      DuckDuckGo’s browser is already available as a download for mobile devices. In 2019, DuckDuckGo added Apple Maps support and has since added other improvements to how it works on Apple devices.  


    • The best Android apps for business in 2022 Thu, 23 Dec 2021 03:00:00 -0800

      Trying to find the right app for any given area on Android is a lot like trying to order dinner at a restaurant with way too many options on the menu. How can you possibly find the right choice in such a crowded lineup? With the Google Play Store now boasting somewhere in the neighborhood of 70 gazillion titles (last I checked), it's no simple task to figure out which apps rise above the rest and provide the best possible experiences.

      That's why I decided to step in and help. I've been covering Android from the start and have seen more than my fair share of incredible and not so incredible apps. From interface design to practical value, I know what to look for and how to separate the ordinary from the extraordinary. And taking the time to truly explore the full menu of options and find the cream of the crop is quite literally my job.


    • 6 easy fixes for Android 12 annoyances Fri, 10 Dec 2021 02:55:00 -0800

      By and large, Android 12 is a true treat to use.

      Google's latest and greatest Android effort is without a doubt the most outwardly significant Android update since 2014's Android 5.0 Lollipop release — at least, if you're using a Pixel phone, where the software's most noticeable interface enhancements and feature additions are fully present.

      But just like Lollipop — and most any new Android version, really — Android 12 also comes with its share of quirks and controversial decisions. For all of the positive progress, you're bound to run into a few things that rub you the wrong way and perhaps even make you less productive than you felt before.


    • Give your Chrome OS interface an instant upgrade Wed, 08 Dec 2021 03:00:00 -0800

      Google's Chrome OS software is in a constant state of evolution. And for the tinkering-lovin' tech nerds among us, that means there's always an opportunity to find and embrace something new — often long before it's officially released and available to the masses.

      Well, gang, we've got quite the tasty treat to tinker with today. It's a massive update Google's been cookin' up for its Chromebook app launcher for a while now, and it'll bring a significant change not only to how your device looks but also to what it's like to use.

      The new Chrome OS launcher design has actually been under development since this summer. Thus far, though, the work has mostly been taking place in the higher-tier, less stable Chrome OS channels, where regular Chromebook-ownin' folk rarely dare venture.


    • New Goldman Sachs-AWS data service points to a larger banking trend Tue, 07 Dec 2021 03:00:00 -0800

      Goldman Sachs, through a partnership with Amazon Web Services (AWS), has launched a financial data management and analytics service to help clients crunch data to extract business value.

      The announcement is part of a larger trend where key players in vertical industries — in this case, financial services — partner with hyperscalers to offer cloud services.

      Goldman Sachs’ new Financial Cloud for Data targets hedge funds, asset managers, and other institutional clients who face growing amounts of market data in a digital-first age.


    • The Pixel-exclusive rebirth of a beloved Android feature Wed, 01 Dec 2021 06:58:00 -0800

      Friends, rabbits, internet-persons, lend me your ears (bunny-shaped or otherwise). Today, we need to take a titillating trip back in time — 'cause a pivotal part of our Android-flavored past is about to poke its way into the present.

      So rewind with me for a sec, won't ya? The year was 2012 — the same exact numbers as our current moment on this earth, only with a flickety-flick of those final two digits. The Android version of the era was Android 4.1, better known as sweet, juicy Jelly Bean. Google's Pixel phones didn't yet exist; rather, the Samsung-made Galaxy Nexus served as the flagship of the platform that summer, while the LG-birthed Nexus 4 was on its way out of the virtual womb and into the world.


    • 13 hidden Pixel phone superpowers Tue, 23 Nov 2021 03:00:00 -0800

      One of the best parts of using a Pixel is the way tasty little specks of Google intelligence get sprinkled all throughout the experience. Those small but significant morsels show off the value of having Google's greatest ingredients integrated right into your phone's operating system, without any competing forces or awkwardly conflicting priorities at play.

      And Goog almighty, does that make a world of difference. The features in question may not always be the most eye-catching or marketing-friendly advantages, but they're incredibly practical touches that can make your life easier in some pretty powerful ways.


    • The Android 12 Quick Settings trick you've been missing Fri, 19 Nov 2021 03:00:00 -0800

      We've seen lots of significant changes to Android over the past decade. For the first time in a long time, though, Android 12 actually feels like a whole new smartphone experience.

      That's because Android 12 is the first Android version in years to introduce sweeping changes to the software's front-facing appearance. The new Material You design standard represents a gigantic evolution for the way Android looks and what a device running the operating system is like to use.

      By and large, that evolution is a good thing. But with any progression comes certain quirks that don't always jibe with your day-to-day desires.


    • A 20-second tweak for smarter, simpler Android security Wed, 17 Nov 2021 06:18:00 -0800

      Security is important. That much is obvious, right?

      And despite all the over-the-top, hilariously sensational headlines suggesting the contrary, the most realistic security threats on Android aren't from the big, bad malware monster lurking in the shadows and waiting to steal your darkest secrets whilst drinking all of your cocoa.

      Nope — the biggest risk to your security on Android is (drumroll, please...) you. The likelihood that you'll at some point provide personal information to an ill-intending person or fail to properly secure an account in some way is without a doubt the most realistic threat to your virtual wellbeing. Malware? Meh. That's rarely scary in anything more than a theoretical sense.


    • Store your corporate card on an iPhone? Uh-oh Mon, 15 Nov 2021 06:58:00 -0800

      Apple and Google (and especially Visa) last week gave us yet another example of how security and convenience are often at odds with each other. And it looks like they opted for convenience.

      The latest issue speaks to only a subset of iPhone and Android users — specifically, those who use their phones for mass transit payments. If you think of how subways work in a major city (I’ll use New York City as an example), they require extreme speed. Using facial recognition or entering a PIN right before paying to get on the subway would dramatically slow down the line. 

      Instead of allowing authentication to happen earlier — say, perhaps within five minutes of a transaction — or by accelerating the process to a split second, Apple, Google, and Visa apparently chose to forego any meaningful authentication. (Note: I am focusing on Visa because the hole still exists for it. MasterCard and others have already patched the flaw.)


    • A handy hack for the Pixel's new shortcut system Fri, 12 Nov 2021 03:00:00 -0800
    • 11 advanced Assistant tricks you should really remember on Android Wed, 10 Nov 2021 03:00:00 -0800

      Fancy new features are fan-frickin'-tastic. But let's face it: We aren't all carrying Google's shiny new Pixel phones. And we don't all have Android 12 in front of our shiny faces just yet.

      With all of that in mind, I thought now would be a fine time to turn our attention to some of Android's many buried treasures — phenomenal time-saving and productivity-boosting possibilities built right into the software on our existing phones, no matter who made 'em or how old they may be (within the realm of reason, anyway; if you're still totin' around a phone with Froyo, sorry pal, but you're on your own).

      Specifically, I want to think our way through some incredibly useful advanced features connected to Google Assistant — the friendly if sometimes slightly sassy virtual companion that's always standing by and ready to lend a helping hand (and/or voice).


    • 5 Android 12 features you can bring to any phone today Fri, 05 Nov 2021 03:00:00 -0700

      Google's Android 12 software is packed with interesting treasures — but unless you're using one of Google's own Pixel phones, it's still a ways off from actually landing in your hands.

      The tortoise-like pace of most Android updates is another subject for another day (as is the tortoise named Rupert who I'm pretty sure is responsible — that slimy-shelled rascal). Today, I want to explore some creative solutions for bringing a small but significant smidgeon of Android 12's goodness onto any device this minute.


    • 7 new hidden Pixel treasures to find in Android 12 Fri, 29 Oct 2021 03:00:00 -0700

      Android 12 may seem like old news to those of us in the land o' Pixels at this point, but hold the phone: Google's latest software has some pretty phenomenal features that are lurking beneath the surface and all too easy to overlook.

      We explored a dozen such treasures the other day, but there's even more juicy goodness where that came from. So here now are seven more spectacular hidden gems you'll absolutely want to dig up in Android 12 on your Pixel phone — regardless of whether you're packin' the new Pixel 6 or Pixel 6 Pro or one of the older Pixel models.


    • Pixel 6 or Pixel 6 Pro? Some real-world guidance that might surprise you Tue, 26 Oct 2021 07:15:00 -0700

      To Pixel 6 or to Pixel 6 Pro? That is the question.

      By now, you've no doubt heard plenty about Google's latest and greatest Pixels. So rather than repeat what you've already seen in several dozen other places, I thought we'd focus on a purely practical, real-world comparison for anyone torn between the two Pixel 6 models. I've been carrying around both the regular Pixel 6 and the Pixel 6 Pro and living with 'em for nearly two weeks now, and I've collected some pretty telling observations that might be a bit different from the more spec-oriented comparisons you've pored over elsewhere.


    • Got a Pixel? Don't miss these 8 buried Android 12 treasures Fri, 22 Oct 2021 03:00:00 -0700

      Well, my fellow Pixel pals, the time has finally arrived: After months of development, mountains of beta releases, and a weirdly empty-feeling tease of a code release, Google's Android 12 update is finally on its way into our moist, sweaty paws.

      Android 12 represents one of the most significant evolutions to the operating system in ages — arguably since all the way back with 2014's Android 5.0 Lollipop release. And much of that comes down to how the software operates on Google's own Pixel devices.


    • The two Pixel 6 numbers that could change everything Thu, 21 Oct 2021 05:53:00 -0700

      They're here. They're really, truly, officially here.

      After what's felt like 47 years of waiting and approximately 994 gazillion unofficial leaks, Google's Pixel 6 and Pixel 6 Pro phones are out of hiding and on their way into the world.

      Well, okay: To be fair, they've technically been in the world for a handful of days now — at least, for those of us lucky enough to receive loaner review units for evaluation. I've been totin' the Pixel 6 and its plus-sized sibling around in my dusty ol' dungarees for nearly a week at this point, and lemme tell ya: Based on these first several days, the devices are every bit as impressive as we'd been hoping they would be.



