
NextBillion AI Partners with Google Cloud To Deploy Data Pipelines

NextBillion AI will explore Google Cloud AI and ML tools such as Vision AI as it moves into new business areas such as content and facial recognition


NextBillion AI, an industry-leading mapping-platform startup that provides software-as-a-service (SaaS) for enterprises, has partnered with Google Cloud to improve time-to-market for its hyperlocal AI solutions. The company runs its datasets and algorithms on Cloud Storage and Cloud SQL, and minimizes operational overhead with Google Kubernetes Engine.

A lack of local-language content can further isolate customers in remote areas. Rather than offering a homogeneous mobile app developed for urban cities, companies can attract rural customers with a location-based experience that adapts to local needs, from last-mile delivery to native-language support.

“We built a nimble AI platform on Google Cloud that allows clients to choose plug-and-play modules depending on their use cases,” stated Ajay Bulusu, Co-founder of NextBillion AI. “While some clients want to detect road names from street-level imagery to improve delivery accuracy, others may want to annotate a politician’s name to uncover public sentiment from news articles and forums.”

With the help of Google Cloud partner Searce, NextBillion AI ran a two-week proof of concept to evaluate cloud service providers on feature richness, support effectiveness, and the ability to keep pace with the latest updates. Google Kubernetes Engine (GKE) now provides native Kubernetes support, keeping clusters up to date with the latest versions.

Results seen using Google Cloud:

- Improves speed to market for features released with CI/CD pipelines on Google Kubernetes Engine.
- Delivers 99% uptime on highly available Google Cloud to minimize disruptions to customers.
- Simplifies on-demand cluster creation with Dataproc to accelerate compute-intensive workloads.

NextBillion AI deploys its data pipelines on Google Kubernetes Engine to ensure high uptime for clients and minimize maintenance with auto-updates. The company stores client datasets in Cloud Storage and uses Cloud SQL to run real-time queries.
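As a rough illustration of this setup, the sketch below uploads a dataset to Cloud Storage and runs a query against Cloud SQL (PostgreSQL) through the Cloud SQL Python Connector. The bucket, instance, table, and query are placeholders, not NextBillion AI's actual resources.

```python
# Sketch: store a client dataset in Cloud Storage, then query Cloud SQL.
# Requires: pip install google-cloud-storage "cloud-sql-python-connector[pg8000]"
# All resource names below are illustrative placeholders.
from google.cloud import storage
from google.cloud.sql.connector import Connector


def upload_dataset(bucket_name: str, local_path: str, dest_path: str) -> None:
    """Upload a raw dataset file into a client's bucket."""
    client = storage.Client()
    client.bucket(bucket_name).blob(dest_path).upload_from_filename(local_path)


def query_road_names(instance: str, db_user: str, db_pass: str, db_name: str):
    """Run a real-time query against Cloud SQL over the connector."""
    connector = Connector()
    conn = connector.connect(
        instance,   # e.g. "my-project:asia-southeast1:maps-db" (placeholder)
        "pg8000",   # PostgreSQL driver
        user=db_user,
        password=db_pass,
        db=db_name,
    )
    try:
        cur = conn.cursor()
        # Placeholder schema: a table of road names with geometry.
        cur.execute("SELECT name, geom FROM roads WHERE city = %s", ("Jakarta",))
        return cur.fetchall()
    finally:
        conn.close()
        connector.close()
```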

“Building an AI solution is a big commitment, from recruiting AI talent to investing in the infrastructure. Some companies may develop AI in-house for one or two critical business functions, but it’s hard to invest at the same level for non-core tasks,” states Gaurav Bubna, Co-founder of NextBillion AI. “That’s where we add value as a third party with the people, process, and AI capabilities on Google Cloud to take on non-core tasks such as mapping in a cost-effective manner.”

NextBillion AI helps companies connect with users through hyperlocal AI solutions. NextBillion AI Maps offers custom map solutions through a range of APIs, such as routing and navigation, that integrate with the customer’s mobile or web app. NextBillion AI Tasks combines artificial intelligence (AI) and human intelligence to carry out tasks such as decoding and simplifying multilingual text, classifying images, and annotating videos.
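The article does not document NextBillion AI's API surface, so the following is purely illustrative: a generic HTTP call to a hypothetical routing endpoint, showing how such an API might be integrated from a customer's backend. The URL, parameters, and key handling are invented for illustration.

```python
# Purely illustrative: calling a hypothetical routing API from a backend.
# The endpoint, parameters, and API key handling are assumptions, not
# NextBillion AI's documented interface.
import os

import requests


def get_route(origin: str, destination: str) -> dict:
    """Fetch a driving route between two 'lat,lng' points (hypothetical API)."""
    resp = requests.get(
        "https://api.example-maps.com/v1/directions",  # placeholder URL
        params={
            "origin": origin,            # e.g. "1.3521,103.8198"
            "destination": destination,  # e.g. "1.2966,103.7764"
            "mode": "car",
            "key": os.environ["MAPS_API_KEY"],
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```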

Chang Zhao, Head of Engineering at NextBillion AI, explains, “Tasks such as scaling up and down, rollouts, and rollbacks previously required a lot of scripting work on our legacy platform. Google Kubernetes Engine simplifies DevOps processes, so we don’t need a large team to maintain the infrastructure and pipelines.”
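As a flavor of the kind of task GKE takes off the team's plate, here is a minimal sketch that scales a Deployment with the official Kubernetes Python client; the deployment and namespace names are placeholders.

```python
# Sketch: scale a Deployment running on GKE with the Kubernetes Python client.
# Requires: pip install kubernetes; assumes kubeconfig points at the cluster
# (e.g. via `gcloud container clusters get-credentials`).
from kubernetes import client, config


def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the replica count of a Deployment (placeholder names)."""
    config.load_kube_config()
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


# Usage with a hypothetical pipeline deployment:
# scale_deployment("data-pipeline", "default", replicas=5)
```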

As a service provider, NextBillion AI focuses on service-level agreement (SLA) commitments such as response and resolution times. Using Cloud Logging to store logs from its VMs, administrators can view error messages in real time and dive deep into production issues.
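A minimal sketch of that monitoring flow, assuming the standard google-cloud-logging client; the filter and resource type are illustrative:

```python
# Sketch: pull recent error-level log entries for a deep dive.
# Requires: pip install google-cloud-logging
from google.cloud import logging


def recent_errors(project_id: str, limit: int = 20) -> None:
    """Print the latest ERROR-or-worse entries (filter is illustrative)."""
    client = logging.Client(project=project_id)
    entries = client.list_entries(
        filter_='severity>=ERROR AND resource.type="gce_instance"',
        order_by=logging.DESCENDING,
        max_results=limit,
    )
    for entry in entries:
        print(entry.timestamp, entry.severity, entry.payload)
```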

Data scientists at NextBillion AI also use Dataproc to manage compute-intensive AI workloads without relying on DevOps to set up the environment. Using programmer-friendly command-line tools, they can start, scale, or shut down a cluster in seconds on Dataproc. After processing, the data moves to Cloud Storage and the cluster is shut down entirely, avoiding extra costs from idle clusters.

“Dataproc matches our business needs. We’re not running one job that streams petabytes of data into a cluster at all times. Instead, we run multiple AI pipelines to process data or train models for different customers in parallel,” Chang adds.
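The start-process-shut-down pattern Chang describes might look roughly like this with the Dataproc Python client; the project, region, machine types, and cluster name are placeholders:

```python
# Sketch: create an ephemeral Dataproc cluster, run work, then delete it
# so no cost accrues on idle clusters. All names are placeholders.
# Requires: pip install google-cloud-dataproc
from google.cloud import dataproc_v1


def run_ephemeral_cluster(project_id: str, region: str, name: str) -> None:
    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )
    cluster = {
        "project_id": project_id,
        "cluster_name": name,
        "config": {
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
        },
    }
    # Wait for the cluster to come up.
    client.create_cluster(
        request={"project_id": project_id, "region": region, "cluster": cluster}
    ).result()

    # ... submit AI pipeline jobs here; outputs land in Cloud Storage ...

    # Tear the cluster down so nothing sits idle.
    client.delete_cluster(
        request={"project_id": project_id, "region": region, "cluster_name": name}
    ).result()
```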

As part of the onboarding process, NextBillion AI gives clients access to a portal with dedicated folders for uploading raw data to Cloud Storage for ETL. The features needed for model training are extracted, and the processed data is returned to the customer through APIs for NextBillion AI Maps or as CSV files for NextBillion AI Tasks.
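One plausible way to hand processed CSVs back, sketched here with time-limited signed URLs over the per-client folder layout; the bucket name, prefix scheme, and expiry are assumptions rather than NextBillion AI's documented flow:

```python
# Sketch: return a processed CSV to a client via a time-limited signed URL.
# Bucket name, folder layout, and expiry are assumptions, not documented.
# Requires: pip install google-cloud-storage
from datetime import timedelta

from google.cloud import storage


def processed_csv_url(bucket_name: str, client_id: str, filename: str) -> str:
    """Generate a one-hour download link to a file in the client's folder."""
    bucket = storage.Client().bucket(bucket_name)
    blob = bucket.blob(f"{client_id}/processed/{filename}")
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(hours=1),
        method="GET",
    )
```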

“We run a secure pipeline in Google Cloud. Clients have fine-grained control over who has what access to their folder or bucket in Cloud Storage,” says Ajay. “Client data is kept in different folders so no action by one client can affect another client.”
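That kind of fine-grained control is typically expressed with IAM Conditions on a Cloud Storage bucket. The sketch below is an assumption about how such a policy could be set, with placeholder principal, bucket, and folder prefix:

```python
# Sketch: grant a client read access only to their own folder using an
# IAM Condition on the bucket. Principal/bucket/prefix are placeholders.
# Requires: pip install google-cloud-storage; the bucket must use uniform
# bucket-level access for conditions to apply.
from google.cloud import storage


def grant_folder_read(bucket_name: str, member: str, prefix: str) -> None:
    bucket = storage.Client().bucket(bucket_name)
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.version = 3
    policy.bindings.append({
        "role": "roles/storage.objectViewer",
        "members": {member},  # e.g. "user:analyst@client-a.com" (placeholder)
        "condition": {
            "title": "client-folder-only",
            "expression": (
                f'resource.name.startsWith('
                f'"projects/_/buckets/{bucket_name}/objects/{prefix}")'
            ),
        },
    })
    bucket.set_iam_policy(policy)
```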

NextBillion AI will explore Google Cloud AI and ML tools such as Vision AI as it moves into new business areas such as content and facial recognition: tackling content-related challenges in security and monetization, and using facial recognition AI to address identity verification problems in FinTech.
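For a taste of what Vision AI offers in this space, the sketch below calls its face detection API. Note that Vision AI detects faces rather than matching identities, so it would be one building block in a verification system; the image URI is a placeholder.

```python
# Sketch: detect faces in an image with Cloud Vision. Vision AI performs
# face *detection* (bounding boxes, likelihood attributes), not identity
# recognition, so a verification system would build on top of it.
# Requires: pip install google-cloud-vision
from google.cloud import vision


def detect_faces(gcs_uri: str):
    """Return face annotations for an image in Cloud Storage (placeholder URI)."""
    client = vision.ImageAnnotatorClient()
    image = vision.Image(source=vision.ImageSource(image_uri=gcs_uri))
    response = client.face_detection(image=image)
    return response.face_annotations


# Usage: detect_faces("gs://example-bucket/id-photo.jpg")
```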

View the full Google Cloud case study: https://cloud.google.com/customers/nextbillion-ai
