Intel Dives into AI, Facebook Brings Neural Networks to Your Phones, and Dr. Fei-Fei Li Joins Google

Excerpted from our Last Week in the Future newsletter

Yitaek Hwang

Intel Dives into AI

The rise of deep learning has been a boon for Nvidia and its GPU business. Most applications of neural networks, such as computer vision and natural language processing, currently run on Nvidia’s graphics processors. Intel is poised to flip the script as it plans to roll out several AI products and partnerships, building on the momentum gained from acquiring Nervana Systems. Intel is still waiting to close the acquisition of Movidius to fully launch its AI suite, but its recent moves adding Altera, Nervana, Phi, and Xeon suggest that it may even embrace non-x86 architectures to compete for the emerging AI market.

Image Credit: Intel AI

Summary:

  • Movidius creates programmable, low-power computer vision chips for embedded visual computing. Intel’s impending acquisition of Movidius would round out its full range of AI offerings.
  • Naveen Rao, the CEO and co-founder of Nervana, introduced a new accelerator named Lake Crest. The architecture is said to use lower-precision arithmetic than traditional floating-point operations, cutting the amount of data moved and providing a claimed 10x boost on many calculations and ultimately better performance on neural network tasks.
  • Intel plans to integrate its ML-friendly Xeon processor with the Nervana accelerator to create Knights Crest. The integration should deliver higher performance with reduced programming complexity, since memory management is simplified.
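The trade-off behind lower-precision arithmetic is easy to see in plain Python. The sketch below is an illustration only (Nervana’s actual number format is not public and is not shown here); it round-trips a weight through 64-, 32-, and 16-bit IEEE floats to show how shrinking precision halves storage at the cost of rounding error:

```python
import struct

def roundtrip(value, fmt):
    """Pack a Python float into the given struct format and unpack it back."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

weight = 0.123456789
# "d" = float64 (8 bytes), "f" = float32 (4 bytes), "e" = float16 (2 bytes)
for name, fmt in [("float64", "d"), ("float32", "f"), ("float16", "e")]:
    approx = roundtrip(weight, fmt)
    print(f"{name}: {struct.calcsize(fmt)} bytes, "
          f"value {approx!r}, error {abs(approx - weight):.2e}")
```

Halving the bytes per value means twice as many weights move through the same memory bandwidth, which is where much of the claimed speedup on neural network workloads comes from.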

Takeaway: Nvidia maintains its lead in the AI-hardware market with advanced GPUs and the newly revealed Tegra Xavier, a system-on-chip billed as an AI supercomputer for autonomous cars. However, Intel’s move into this market signals bigger things to come. Along with hardware advances, Intel plans to open-source a graph compiler for Nervana, optimize the TensorFlow framework to work with its x86 architecture, and release an SDK for deep learning next spring. It will be worth watching how quickly and smoothly Intel integrates these big-time acquisitions with its core chip business.

Bringing Neural Networks to Your Phones

Sometimes it’s easy to forget that today’s smartphones pack computing power comparable to the bulky, expensive supercomputers of the past. Facebook wants to drive a similar leap in AI. With its lightweight AI framework, Caffe2Go, Facebook is bringing the power of AI, specifically image processing and style transfer, to mobile. Like it or not, AI is beginning to affect our daily lives, and it is coming to us, running in real time, on our mobile devices.

Image Credit: Facebook Code

Summary:

  • Facebook’s new creative-AI tool uses a technique called style transfer: it applies artistic styles (say, Van Gogh’s) to other images and videos. The novelty isn’t the technique itself, which was introduced last year, but its real-time application on mobile phones.
  • Many AI tools demand huge computing power, which has confined them to virtual machines hosted on big cloud servers. Caffe2Go brings those capabilities directly to mobile devices.
  • Caffe2Go is extremely lightweight and modular, representing computation as a directed acyclic graph (DAG) of operations. By exploiting NEON, the SIMD instruction set on ARM CPUs, Caffe2Go speeds up computation enough to run style transfer on smartphones.
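To make the DAG idea concrete, here is a minimal sketch of a compute graph executed in dependency order. It is illustrative only, not Caffe2Go’s actual API; the node names and Python’s standard-library `graphlib` (3.9+) are stand-ins:

```python
from graphlib import TopologicalSorter  # stdlib topological sort, Python 3.9+

# Each node maps to (function, list of input node names).
graph = {
    "x":   (lambda: 3.0, []),
    "y":   (lambda: 4.0, []),
    "mul": (lambda a, b: a * b, ["x", "y"]),
    "add": (lambda a, b: a + b, ["mul", "x"]),
}

def run(graph):
    """Evaluate every node after all of its inputs are ready."""
    deps = {name: set(inputs) for name, (_, inputs) in graph.items()}
    results = {}
    for name in TopologicalSorter(deps).static_order():
        fn, inputs = graph[name]
        results[name] = fn(*(results[i] for i in inputs))
    return results

print(run(graph)["add"])  # 3*4 + 3 = 15.0
```

Because each node declares only its inputs, a runtime can schedule independent nodes in parallel and free intermediate results as soon as their consumers finish, which is one reason the representation suits memory-constrained devices.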

Takeaway: Caffe2Go by itself may not seem groundbreaking. However, Facebook’s move signifies a transition of industrial-strength deep learning platforms to mobile devices. Imagine AI frameworks that currently seem esoteric and out-of-reach for most people, even developers and researchers, due to computing power limitations, suddenly available for millions to use on their phone. This will only accelerate the widespread use and adoption of machine learning and artificial intelligence.

Quote of the Week

“In the past universities employed the world’s best AI experts. Now tech firms are plundering departments of robotics and machine learning (where computers learn from data themselves) for the highest-flying faculty and students, luring them with big salaries similar to those fetched by professional athletes.

All that is to the good, but the hiring spree could also impose costs. One is that universities, unable to offer competitive salaries, will be damaged if too many bright minds are either lured away permanently or distracted from the lecture hall by commitments to tech firms.

Another risk is if expertise in AI is concentrated disproportionately in a few firms. Tech companies make public some of their research through open sourcing. They also promise employees that they can write papers. In practice, however, many profitable findings are not shared. Some worry that Google, the leading firm in the field, could establish something close to an intellectual monopoly.”

– “Million-dollar babies” via The Economist

With Dr. Fei-Fei Li (a Stanford professor who pioneered deep-learning research and led the creation of ImageNet) joining Google full-time, the exodus of AI researchers from academia continues. According to Quid, Google, Facebook, Microsoft, and Baidu have spent $8.5 billion on hiring AI talent alone. On one hand, tech firms give researchers access to huge data sets and computing power, eliminating the time lost securing grants and data. On the other, an unequal distribution of talent may leave the world at the mercy of tech giants to share their findings.

There is no reason to believe the hiring spree will slow down. Applications for AI continue to grow, and the tech giants will pour in more money to stay on top. But others are out to reverse the trend: Elon Musk pledged last December to spend over $1 billion on OpenAI, a nonprofit that will make its findings public to counteract an AI-research monopoly. Will talent distribution normalize as AI tools become more commonplace, or will the gap continue to grow?

The Rundown

  • Hierarchical Object Detection with Reinforcement Learning — NIPS 2016
  • Introduction to Deep Learning — Algorithmia
  • Hiring Your First Chief AI Officer — HBR
  • Manhattan Project Fallacy — alkeus.github.io

Resources

  • Quiver: Interactive deep convolutional networks features visualization
  • RAISR: Sharpen and enhance your images using RAISR
  • BITAG: IoT Security and Privacy Recommendations
  • CSSReference: A free visual guide to CSS
  • Dply: Create a free cloud server for 2 hours
Author
Yitaek Hwang
Yitaek Hwang - Senior Writer, IoT For All
Yitaek is a Senior Writer at IoT For All who loves learning about IoT, machine learning, and artificial intelligence. He graduated from Duke University with a dual degree in electrical/computer and biomedical engineering and is a huge Cameron Crazie.