The second day of F8 focused on the long-term technology investments Facebook is making in three areas: connectivity, AI, and AR/VR.
Chief Technology Officer Mike Schroepfer kicked off the keynote, followed by Engineering Director Srinivas Narayanan, Research Scientist Isabel Kloumann, and Head of Core Tech Product Management Maria Fernandez Guajardo.
From bringing connectivity to more people around the world, to research breakthroughs in AI, to entirely new AR/VR experiences, Facebook continues to build technologies that bring people closer together and help keep them safe.
Read the official blog post from Facebook below:
We view AI as a foundational technology, and we’ve made deep investments in advancing the state of the art through scientist-directed research. Today at F8, our artificial intelligence research and engineering teams shared a recent breakthrough: the teams successfully trained an image recognition system on a data set of 3.5 billion publicly available photos, using the hashtags on those photos in place of human annotations. This new technique will allow our researchers to scale their work much more quickly, and they’ve already used it to score a record-high 85.4% accuracy on the widely used ImageNet benchmark. We’ve already been able to leverage this work in production to improve our ability to identify content that violates our policies.
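The core idea of using hashtags in place of human annotations can be framed as multi-label classification, where each photo's hashtags become a weak supervision signal. A minimal sketch of that labeling step (the vocabulary and photo are hypothetical; Facebook's actual pipeline operates on 3.5 billion photos and is far more sophisticated):

```python
# Sketch: hashtags as weak labels for multi-label image classification.
# HASHTAG_VOCAB is a made-up example vocabulary, not Facebook's.
HASHTAG_VOCAB = {"#dog": 0, "#beach": 1, "#sunset": 2, "#food": 3}

def hashtags_to_multihot(hashtags):
    """Convert a photo's hashtags into a multi-hot target vector.

    Unknown hashtags are simply ignored; each in-vocabulary tag sets
    one label to 1.0, so a single photo can supervise several classes
    at once without any human annotation.
    """
    target = [0.0] * len(HASHTAG_VOCAB)
    for tag in hashtags:
        idx = HASHTAG_VOCAB.get(tag.lower())
        if idx is not None:
            target[idx] = 1.0
    return target

# One photo tagged "#Dog #beach #vacation" ("#vacation" is out of
# vocabulary, so it is dropped from the target).
print(hashtags_to_multihot(["#Dog", "#beach", "#vacation"]))
# → [1.0, 1.0, 0.0, 0.0]
```

In a full training loop, vectors like this would serve as the targets for a sigmoid-output image classifier, which is what makes the approach scale: the labels come for free with the photos.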
This image recognition work is powered by our AI research and production tools: PyTorch, Caffe2, and ONNX. Today, we announced the next version of our open source AI framework, PyTorch 1.0, which combines the capabilities of all these tools to provide everyone in the AI research community with a fast, seamless path for building a broad range of AI projects. The technology in PyTorch 1.0 is already being used at scale, including performing nearly 6 billion text translations per day for the 48 most commonly used languages on Facebook. In VR, these tools have helped in deploying new research into production to make avatars move more realistically.
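To put the translation figure in perspective, a quick back-of-the-envelope calculation from the numbers above shows what "nearly 6 billion per day" means in sustained throughput:

```python
# Rough scale of Facebook's translation workload, from the figures
# above: ~6 billion text translations per day across 48 languages.
translations_per_day = 6_000_000_000
seconds_per_day = 24 * 60 * 60  # 86,400

per_second = translations_per_day / seconds_per_day
print(f"{per_second:,.0f} translations per second")  # ≈ 69,444
```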
The PyTorch 1.0 toolkit will be available in beta within the next few months, making Facebook’s state-of-the-art AI research tools available to everyone. With it, developers can take advantage of computer vision advances like DensePose, which can put a full polygonal mesh overlay on people as they move through a scene — something that will help make AR camera applications more compelling.
A new tool we call DensePose generates full 3D surfaces that can be applied, in real time, to footage of human bodies in motion.
Posted by Facebook Engineering on Monday, 30 April 2018
For a deeper dive on all of today’s AI updates and advancements, including our open source work on ELF OpenGo, check out the posts on our Engineering Blog or visit facebook.ai/developers, where you can get tools and code to build your own applications.
Facebook’s advancements in AR and VR draw from an array of research areas to help us create better shared experiences, regardless of physical distance. From capturing realistic-looking surroundings to producing next-generation avatars, we’re closer to making AR/VR experiences feel like reality.
Our research scientists have created a prototype system that can generate 3D reconstructions of physical spaces with surprisingly convincing results. The video below shows a side-by-side comparison between normal footage and a 3D reconstruction. It’s hard to tell the difference. (Hint: Look for the camera operator’s foot, which appears only in the regular video.)
Posted by Facebook Engineering on Wednesday, 2 May 2018
Realistic surroundings are important for creating more immersive AR/VR, but so are realistic avatars. Our teams have been working on state-of-the-art research to help computers generate photorealistic avatars, seen below.
Our teams have been working on state-of-the-art research to help computers generate lifelike avatars. This example shows two researchers testing the tool in real time.
Posted by Facebook Engineering on Wednesday, 2 May 2018
These advances in AI and AR/VR are relevant only if you have access to a strong internet connection — and there are currently 3.8 billion people around the world who don’t have internet access. To increase connectivity around the world, we’re focused on developing next-generation technologies that can help bring the cost of connectivity down to reach the unconnected and increase capacity and performance for everyone else. In Uganda, we partnered with local operators to bring new fiber to the region that, when completed, will provide backhaul connectivity covering more than 3 million people and enable future cross-border connectivity to neighboring countries. Meanwhile, Facebook and City of San Jose employees have begun testing an advanced Wi-Fi network supported by Terragraph. Trials of Terragraph are also planned for Hungary and Malaysia. We are also working with hundreds of partners in the Telecom Infra Project to build and launch a variety of innovative, efficient network infrastructure solutions. And, as with our work in AI and other areas, we are sharing what we learn about connectivity so that others can benefit from it.
We worked with operators to design and build an open-access, shared backhaul network that will improve connectivity services in a region covering 3 million people in Northwest Uganda.
Posted by Facebook Engineering on Tuesday, 1 May 2018