Tech Talk


What's Next?

Author: Michael E. Duffy
May, 2016 Issue

The Gartner Group recently published its annual list of the Top Ten Technology Trends. One of those trends is “The Device Mesh,” which, when stripped of Gartner-speak, boils down to the idea that all our digital information will be connected to (and updated by) all our digital devices, including new kinds of devices like wearables and augmented-reality (AR) devices. Augmented reality adds information to the reality we already experience. Google Glass was an early attempt at a practical AR device, and there are also iPhone apps that overlay the view from your camera with useful information. This differs from virtual-reality (VR) devices, which immerse you in a virtual environment by shutting out the world around you.
 
Palmer Luckey, founder of Oculus (which is now owned by Facebook), describes AR and VR as a continuum: on the AR end of the scale, “telepresence” lets you meet with people by projecting them into your environment and vice versa; at the other end, VR lets you meet with people in a completely virtual environment, where they are represented by a real-time video or a detailed 3-D model of themselves.
 
There’s a lot of hype around VR right now. In addition to the $600 Oculus Rift (which requires a high-end Windows PC to run it), there are also offerings from Samsung (the $100 Gear VR headset, which works with its Galaxy S7 Edge phone), HTC (the $800 HTC Vive, coming in April), Sony (the $500 PlayStation VR, due this fall) and, last but not least, Google (a $15 cardboard viewer that uses your smartphone as the display). As you might expect, the more you pay, the better the VR experience. There were several VR exhibitors at Silicon Valley Comic Con in March, and the lines to experience the devices were long. It’ll be interesting to watch VR demos this year, but it’s still a market of early adopters. If you’re interested, get a Google viewer and try VR for yourself inexpensively. And yes, it looks like a GAF View-Master from the 1960s.
 
Another Gartner theme is “Autonomous Agents and Things.” An obvious example is the self-driving car. Again, lots of hype surrounds autonomous vehicles. Tesla has already provided Autopilot software for its Model S, which will do the driving for you, although Tesla recommends keeping your hands on the wheel (the car will remind you if your hands are off the wheel for more than a few seconds). But there are still bugs to be worked out. A Google self-driving car caused its first accident in February by pulling into the path of a bus while avoiding an obstacle created by sandbags in its own lane (the anticlimactic video, taken by cameras aboard the bus, is available on YouTube). And it’s been reported to have trouble figuring out what pedestrians and bicyclists will do at stop signs. Google has talked about 2020 as a target for fully functioning self-driving cars, and that’s probably a realistic expectation, despite Tesla’s aggressive rollout of features.
 
The Holy Grail of smart machines, however, is machines that can learn for themselves (Gartner’s “Advanced Machine Learning” trend). Again, Google is doing a lot of work in this area. A recent technical report showed how it developed a robot arm that learned how to pick up items of all different shapes and sizes, basically by trial and error. The trick? It ran 14 robot arms in parallel and shared the acquired knowledge among all of them, speeding up the process. To quote the Google Research blog (tinyurl.com/jc6ymp8):
 
A human child is able to reliably grasp objects after one year and takes around four years to acquire more sophisticated precision grasps. However, networked robots can instantaneously share their experience with one another, so if we dedicate 14 separate robots to the job of learning grasping in parallel, we can acquire the necessary experience much faster.
 
Long story short: it took the arms 800,000 separate attempts, roughly 3,000 robot-hours, to learn how to pick up small objects reliably.
 
The robot arms learn using software called a neural network. Starting from nothing, a neural network gets better and better at predicting the outcome of a grasping attempt based on a camera image and a potential arm command, basically developing hand-eye coordination. Think of this robotic arm training as sifting useful information (how to pick up an object) from a stream of data (which images and motions lead to failure and success). Given that we live in a world where there’s more and more data (what Gartner has termed the “Information of Everything” trend), this ability of machines to learn by observation and prediction (rather than being programmed to perform a particular task) is important. It will take smart machines to sift through all the data that other machines now generate and turn it into useful information for businesses to act on.
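To make that concrete, here’s a tiny sketch in Python, written as an illustration for this column rather than as Google’s actual code. It mimics the setup described above under invented assumptions: 14 simulated “arms” make random grasp attempts, all of their experience is pooled into one shared dataset, and a small neural network learns to predict whether a proposed grasp will succeed. The feature vectors, the network size and the stand-in “physics” are all made up for the example.

    # Toy illustration only (my sketch, not Google's system): pooled trial-and-error
    # data from 14 simulated arms trains a small network to predict grasp success.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_attempts(n_arms=14, attempts_per_arm=500, n_features=8):
        """Each simulated arm tries random grasps; all results go into one pool."""
        hidden_rule = rng.normal(size=n_features)        # stand-in for real physics
        X = rng.normal(size=(n_arms * attempts_per_arm, n_features))
        y = (X @ hidden_rule > 0).astype(float)          # 1 = grasp succeeded
        return X, y

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_predictor(X, y, hidden=16, lr=0.5, epochs=500):
        """One-hidden-layer network learns to predict grasp success probability."""
        n, d = X.shape
        W1 = rng.normal(scale=0.1, size=(d, hidden))
        W2 = rng.normal(scale=0.1, size=hidden)
        for _ in range(epochs):
            h = np.tanh(X @ W1)                          # hidden-layer activations
            p = sigmoid(h @ W2)                          # predicted success chance
            err = p - y                                  # cross-entropy gradient
            W2 -= lr * (h.T @ err) / n
            W1 -= lr * (X.T @ (np.outer(err, W2) * (1 - h**2))) / n
        return W1, W2

    X, y = simulate_attempts()
    W1, W2 = train_predictor(X, y)
    accuracy = np.mean((sigmoid(np.tanh(X @ W1) @ W2) > 0.5) == y)
    print(f"predicts grasp success correctly {accuracy:.0%} of the time")

The point isn’t the details. Nothing in the program spells out how to grasp; the predictions improve only because the pooled trial-and-error data does the teaching, which is exactly the shortcut the 14 networked arms exploit.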
 
Neural networks are fascinating, and I’ll use my next column to explain how they work. In the meantime, send your questions to mduffy@northbaybiz.com.

 

 
