Open sourcing – freely sharing a program's source code with the rest of the Internet – has become one of the most powerful practices driving the software industry forward. Google has joined the movement by open sourcing its artificial intelligence engine, showing the world how far its software has advanced.
- Google joined the “open-sourcing” movement with TensorFlow
- TensorFlow is Google’s AI engine, now available for any developer to inspect and use
- Artificial intelligence workloads rely heavily on graphics processing units (GPUs)
- Google has taken a step forward by using GPUs both for deep learning and for delivering services
While opening its TensorFlow AI engine to outside engineers will certainly feed all sorts of research beyond the company, Google also stands to profit when those outside advances feed back into its own products.
Google’s AI engine is also a perfect mirror of how computer hardware has evolved. For tasks such as language translation, image recognition and speech recognition, TensorFlow depends heavily on graphics processing units (GPUs). These chips were originally designed for rendering game graphics, but engineers found them well suited to other kinds of work.
As Google AI engineer Jeff Dean explained, Google uses GPUs not only to train its artificial intelligence services, but also to deliver them to smartphones on the market. That second use represents a significant shift in how the tech industry operates.
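Google hasn’t published the details of its serving setup, but in TensorFlow’s current public API, explicitly placing a computation on a GPU looks roughly like the following minimal sketch; the tensor shapes here are arbitrary illustrations, not anything Google has described:

```python
import tensorflow as tf

# Minimal sketch: pin a computation to the first GPU.
# Assumes a CUDA-capable GPU is visible to TensorFlow; soft placement
# lets the code fall back to the CPU on machines without one.
tf.config.set_soft_device_placement(True)

with tf.device("/GPU:0"):
    a = tf.random.normal([1024, 1024])
    b = tf.random.normal([1024, 1024])
    c = tf.matmul(a, b)  # a large matrix multiply, the kind of op GPUs excel at

print(c.device)  # shows which device actually ran the op
```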
Facebook, for example, uses GPUs to train its face recognition services inside its massive data centers, but it still relies on central processing units (CPUs) to deliver those services to users – that is, to actually identify faces on its social platform.
But when it comes to seeing GPUs at work, companies like Google, Facebook, and Microsoft are the best places to look. One of the most interesting branches of artificial intelligence is so-called “deep learning”, and GPUs have proven extremely effective at processing lots of little bits of data in parallel.
Teaching machines to “think” rests on neural networks – systems that mimic the web of neurons in the human brain – and these networks need a constant flow of data in order to analyze it at speed and learn from it. This is exactly what GPUs are good at, and they consume less power than CPUs while doing it.
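As a rough illustration of the kind of network described above, here is a minimal sketch in TensorFlow’s high-level Keras API. The layer widths, input size, and optimizer are arbitrary choices for the example, not details Google has disclosed:

```python
import tensorflow as tf

# Illustrative only: a tiny feed-forward neural network. The input size
# (784, e.g. a flattened 28x28 image) and layer widths are placeholders.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g. 10 image classes
])

# Training streams batch after batch of data through the network – the
# "constant flow" that GPUs handle well by processing many inputs in parallel.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```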
Google hasn’t yet revealed how it got TensorFlow to run on GPUs during the execution stage. The company already uses deep learning to recognize spoken words, identify photos, translate text, and improve search results, while other companies have been making headway with the same technology in computer security and ad targeting.
Image Source: SlashGear