Reality Computing and the State of Image Sensing

The tools required to measure and sense remote places and objects, in three dimensions, are at your disposal.
By Mary Catherine O'Connor
Mar 04, 2015

Last week, I attended REAL 2015, a summit of users of Autodesk's software that enables what the company calls "reality capture." For example, with Autodesk's ReCap, one can upload high-resolution scans collected with a laser scanner and create 3D models that can be used in building construction, product design and, of course, 3D printing.

Software from Autodesk and other companies can also create 3D images from other types of imaging devices—even the camera in your smartphone. In fact, Autodesk announced at the summit that it has released a beta version of Memento, a cloud-based software program that uses a process called photogrammetry to enable anyone to create 3D images by uploading multiple photographs of an item, taken from several perspectives. The software stitches a 3D model together from many overlapping 2D images of a subject: it extracts the camera's position and orientation for each image, then plots matched pixels as X, Y and Z coordinates, producing a point cloud that can be converted into a polygon mesh.
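To make that process a bit more concrete, here is a minimal sketch (in Python, and not Autodesk's actual pipeline) of the core geometric step behind photogrammetry: once the position and orientation of two cameras are known, a point seen in both photos can be triangulated into X, Y and Z coordinates. Real tools repeat this for millions of matched pixels to build a point cloud. All camera values below are made up for illustration.

```python
import numpy as np

def projection_matrix(K, R, t):
    """Build a 3x4 camera projection matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Project a 3D point X into pixel coordinates with camera P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two images.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Returns the estimated 3D point (X, Y, Z).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the null-space vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # convert from homogeneous coordinates

# Hypothetical setup: two identical cameras one metre apart, both facing
# down the Z axis, observing a point about five metres away.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics
P1 = projection_matrix(K, np.eye(3), np.array([0.0, 0.0, 0.0]))
P2 = projection_matrix(K, np.eye(3), np.array([-1.0, 0.0, 0.0]))

point = np.array([0.5, 0.2, 5.0])          # the "true" point
x1, x2 = project(P1, point), project(P2, point)
print(triangulate(P1, P2, x1, x2))         # recovers roughly [0.5, 0.2, 5.0]
```

In practice, the hard part is the step this sketch assumes away: automatically matching the same feature across photographs and solving for each camera's pose, which is what products such as Memento do behind the scenes.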

So, what does this have to do with the Internet of Things? Quite a bit, when you consider the ways in which imaging places and things in three dimensions can quantify and qualify them—especially when using specialized cameras. A thermal camera can be used for planning the placement of solar panels, for example. Energy companies employ thermal or multispectral cameras to detect machinery that is running hot. Magnetic resonance imaging (MRI) and ground-penetrating radar are two other examples of powerful imaging technology. All of these tools become extensions of the IoT when they create images that are conveyed as part of a wider context, which includes where the thing or place being imaged is located, when it was photographed, and other factors that can simultaneously be sensed, such as temperature or vibrations.
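As a rough illustration of that "wider context," the snippet below sketches how a captured image might be packaged with the metadata that turns it into IoT data: location, time of capture and co-sensed readings such as temperature or vibration. The field names and values are invented for this example and are not drawn from any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CaptureRecord:
    image_uri: str            # where the scan or photo is stored
    latitude: float           # where the subject was imaged
    longitude: float
    captured_at: datetime     # when it was photographed
    sensor_readings: dict = field(default_factory=dict)  # co-sensed values

record = CaptureRecord(
    image_uri="s3://surveys/turbine-7/thermal-0012.tiff",
    latitude=47.6205,
    longitude=-122.3493,
    captured_at=datetime.now(timezone.utc),
    sensor_readings={"temperature_c": 68.4, "vibration_rms_g": 0.12},
)
print(record)
```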

When combined with unmanned aerial vehicles (UAVs, or drones), reality-capture technologies can be especially powerful, since UAVs can take the imaging devices to places where humans could not quickly, easily or cheaply go themselves.

BNSF Railway has hired 3DR, a Berkeley, Calif.-based UAV manufacturer, to help it search for safety hazards by flying UAVs with cameras aimed at sections of the 32,000 miles of track on which BNSF moves freight within the United States. BNSF hopes that images from those UAVs, combined with sonar images captured by train-mounted sensors, can help it pinpoint weaknesses or obstructions in the railway, and fix those problems before they cause accidents. Given the increasing amount of Bakken and other crude oil moving by rail today—as well as the recent derailment and spill of crude into West Virginia's Kanawha River—rail safety is an increasingly important issue for railway operators.

We've reported on how Skycatch, another Bay Area drone company, uses Autodesk reality-capture software to help construction firms and mining companies sense anything from how closely a construction project is sticking to its architectural plans to how much ore a mining firm extracts during a shift.

AECOM, a construction management firm that serves a wide range of industries, also uses UAVs and imaging software. Jon Amdur, the company's VP of unmanned aerial systems, told REAL 2015 attendees that AECOM sometimes uses UAVs to fly over fixed sensors, such as motion sensors embedded in bridges or other infrastructure, to collect data from those sensors.
