Google Wants to Apply AI & Machine Learning to All Its Products
A Platform Shift
It’s that time of year again for Google I/O, the company’s annual developer conference. On Wednesday, the showcase started with a keynote from CEO Sundar Pichai, who reiterated the company’s latest approach to all that it does: artificial intelligence (AI). Google does a lot, from its original search engine function, to email, video services, and mobile software, and the company wants AI at the helm of it all.
Pichai highlighted this “AI-first” focus, something he first mentioned at last year’s I/O. Now, Google has given the rest of the world a glimpse of how machine learning will work behind every platform it has.
An AI Chip in the Cloud
Of course, no amount of AI and machine learning tech can work without specialized processors to run it. That’s why Google launched the second generation of its tensor processing unit chips, called Cloud TPUs. These new TPUs will be available for anyone to use to train and run artificial neural networks through Google’s cloud computing platform.
Much of Google’s research on AI is conducted by DeepMind, a company under its parent Alphabet. But with all the advances being made in the discipline, Google still needs a platform to bring its resources together, including research, tools, and applied AI. Google.ai is precisely that: a way to democratize its AI research.
Machine Learning for Images
Image recognition is one of the first beneficiaries of machine learning development. There isn’t a shortage of algorithms designed to perform visual recognition tasks. Now, Google wants to take this to a whole new level by bringing its search engine expertise to your camera.
Pichai introduced Google Lens, which is essentially a way to search the internet using your smartphone’s camera. You take a picture and Lens tells you what it is. “[I]t’s a set of vision-based computing capabilities that can understand what you’re looking at,” Pichai explained, “and help you take action based on that information.”
It’ll be available initially as part of Google Photos, where it can analyze your existing photos for useful information, and Google Assistant, which will serve as your primary way to interact with Lens.
Google Home Gets an Upgrade
As the future of smart home devices begins to take shape, thanks in large part to devices like the Amazon Echo, Google doesn’t want to get left behind. So it launched the second iteration of Google Home, which is no longer just a little smart speaker that can play music. Now, it also offers proactive assistance, hands-free calling, and visual responses, among other features.
A Search Engine for Jobs
Putting its search engine prowess to further use, Google is bringing its power to people in the U.S. looking for jobs that will suit them, and helping employers find the employees they need. “46% of U.S. employers say they face talent shortages and have issues filling open job positions,” Pichai explained. “While job seekers may be looking for openings right next door – there’s a big disconnect here. […] We want to better connect employers and job seekers through a new initiative, Google for Jobs.”
The post Google Wants to Apply AI & Machine Learning to All Its Products appeared first on Futurism.