Saturday, April 20, 2024

Think your house is smart now? Here’s a peek at what it’ll be like with AR


In Plato’s Allegory of the Cave, the influential Greek philosopher asks us to imagine a group of prisoners who live their entire lives inside a cave. All they can see of the real world are the shadows that appear on the cave walls. Eventually, a prisoner escapes and realizes that his or her previous view of existence was based on a flat, low-resolution understanding of how the world actually operates.

A slightly pretentious way of starting an article on augmented reality? Perhaps. But the broad idea is the same: Right now, in the pre-AR world, our visual perspective shows us only the surface details of the things around us. AR, a technology that has drawn increasing attention in recent years, promises to let us go deeper.

Imagine walking down the street and having landmarks, store opening hours, Uber rider credentials and other (useful) contextual information overlaid on top of our everyday perspective. Or walking around your home and being able to determine, for instance, the live power draw of a power strip simply by looking at it. Or how much battery life is remaining on your smoke detector. Or the WiFi details of your router. Or any number of other useful “at a glance” details you might want to know.

Like the shift in perception described in Plato’s Cave, this won’t be an occasional “nice to have” supplement to the way we view the world. Augmented reality will, its biggest boosters claim, fundamentally alter our perception of real, physical places, permanently changing how we experience reality and the possibilities the real world offers.

The future of AR interfaces?

Right now, it’s not yet at that point. AR is still all about games and, if we’re lucky, the opportunity to pick and place virtual Ikea furniture in our apartments to show us how much better our lives might be if we owned a minimalist Scandinavian bookshelf or a handwoven rug. There’s still much progress to be made, and lots of infrastructure to be laid down, before the world around us can be rewritten in AR’s image.

One group working hard to achieve this vision is the Future Interfaces Group at Carnegie Mellon University. The group has previously created futuristic technology ranging from conductive paint that turns walls into giant touchpads to a software update for smartwatches that lets them know exactly what your hands are doing and respond accordingly. In other words, FIG anticipates the way we’ll be interfacing with technology and the world around us tomorrow (or, well, maybe the day after that).

[youtube https://www.youtube.com/watch?v=435OSV8Hudw]

In its latest work, the group has developed something called LightAnchors: a technique for spatially anchoring data in augmented reality. In essence, it’s a prototype tagging system that precisely places labels on top of everyday scenes, marking up the real world like a neat, user-friendly schematic. That’s important. After all, to “augment” means to make something better by adding to it, not to crowd it with unclear, messy popups and banner ads like a 1998 website. Augmented reality needs something like this if it’s ever going to live up to its promise.

“LightAnchors is sort of the AR equivalent of barcodes or QR Codes, which are everywhere,” Chris Harrison, head of Carnegie Mellon’s Future Interfaces Group, told Digital Trends. “Of course, barcodes don’t do a whole lot other than providing a unique ID for looking up price [and things like that.] LightAnchors can be so much more, allowing devices to not only say who and what they are, but also share live information and even interfaces. Being able to embed information right into the world is very powerful.”

How LightAnchors work

LightAnchors work by looking for point light sources, such as status LEDs, that are blinked in a pattern by a microprocessor. Many devices already contain microprocessors that control status lights, and according to the Carnegie Mellon researchers, these could be LightAnchor-enabled simply via firmware update. For objects that don’t already have such a light, an inexpensive microcontroller could be linked up to a simple LED for just a couple of bucks.
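
To get a feel for what that firmware update might involve, here’s a minimal sketch of the transmitter side, written in MicroPython-style Python. To be clear, this is our own illustration rather than the team’s code: the pin number, bit rate, and framing (a fixed preamble followed by a one-byte payload) are assumptions made for the sake of the example.

```python
# Illustrative sketch, MicroPython-style: blinking a device's status
# LED to broadcast a binary frame, in the spirit of LightAnchors.
# The pin number, bit rate, preamble, and one-byte payload framing
# are all assumptions, not the researchers' published protocol.
from machine import Pin
import time

LED = Pin(2, Pin.OUT)           # hypothetical status-LED pin
BIT_PERIOD_MS = 10              # assumed duration of each bit

PREAMBLE = [1, 0, 1, 0, 1, 1]   # fixed pattern receivers watch for

def byte_to_bits(value):
    # Most-significant bit first
    return [(value >> i) & 1 for i in range(7, -1, -1)]

def broadcast(payload_byte):
    # Blink the preamble followed by the payload, over and over
    frame = PREAMBLE + byte_to_bits(payload_byte)
    while True:
        for bit in frame:
            LED.value(bit)
            time.sleep_ms(BIT_PERIOD_MS)

# e.g. a glue gun might broadcast its live temperature reading here
broadcast(187)
```

On a real device, the payload would presumably be refreshed between repetitions with a live sensor reading rather than a fixed value.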

As part of their proof of concept, the researchers showed how a glue gun could be made to transmit its live temperature, or a ride-share vehicle’s headlights to emit a unique ID that helps passengers find the right car.

To locate these lights, and the right spot to position a label, LightAnchors scours the pixels of incoming video frames, searching for bright pixels surrounded by darker ones.
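
Here is roughly what that search might look like in code. It’s a simplified sketch built on OpenCV and NumPy; the brightness threshold, margin, and window size are illustrative guesses rather than values from the published system.

```python
# Illustrative sketch, not the published LightAnchors code: find
# candidate anchor points in a grayscale video frame by looking for
# bright pixels that sit well above the brightness of their
# surroundings. Thresholds and window size are assumed values.
import cv2
import numpy as np

def find_candidate_anchors(gray, min_brightness=220, margin=60, win=7):
    """Return (x, y) pixels much brighter than their neighborhood."""
    # Mean brightness of the win x win neighborhood around each pixel
    neighborhood = cv2.blur(gray.astype(np.float32), (win, win))
    # Candidates must be bright in absolute terms AND stand out
    # from the local average by a comfortable margin
    mask = (gray >= min_brightness) & (gray - neighborhood >= margin)
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))

# "frame.png" is a stand-in for a frame grabbed from the camera
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
if frame is not None:
    print(find_candidate_anchors(frame))
```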

“These candidate anchors are then tracked across time, looking for a blinked binary pattern,” Karan Ahuja, one of the researchers on the project, told Digital Trends. “Only candidates with the correct preamble are accepted, after which their data payloads can be decoded. LightAnchors allow ‘dumb’ devices to become smarter through AR with minimal extra cost. [For example,] a security camera can broadcast its privacy policy using the in-built LED.”
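
Ahuja’s description suggests a decoding loop along the following lines. Again, this is a hedged sketch rather than the actual LightAnchors implementation; the preamble pattern, payload length, and threshold simply mirror the assumptions in the transmitter sketch above.

```python
# Illustrative sketch: decode the blinked pattern from one tracked
# candidate point, given its brightness in each successive frame.
# Preamble, payload length, and threshold mirror the assumptions in
# the transmitter sketch above; this is not the published protocol.
PREAMBLE = [1, 0, 1, 0, 1, 1]
PAYLOAD_BITS = 8

def decode_candidate(brightness_trace, threshold=128):
    """Return the payload byte, or None if no valid preamble appears."""
    # Threshold per-frame brightness into bits (assumes one sample
    # per transmitted bit, i.e. camera and LED are in sync)
    bits = [1 if b >= threshold else 0 for b in brightness_trace]
    for start in range(len(bits) - len(PREAMBLE) - PAYLOAD_BITS + 1):
        if bits[start:start + len(PREAMBLE)] == PREAMBLE:
            payload = bits[start + len(PREAMBLE):
                           start + len(PREAMBLE) + PAYLOAD_BITS]
            return sum(bit << (7 - i) for i, bit in enumerate(payload))
    return None  # candidate rejected: no correct preamble found

# A made-up trace: six preamble samples, then payload 0b10101011
trace = [200, 40, 210, 35, 215, 220,
         220, 30, 225, 30, 215, 30, 218, 220]
print(decode_candidate(trace))  # -> 171
```

The preamble check is what weeds out false positives such as reflections or ordinary lamps: only a light deliberately blinking the expected pattern will decode successfully.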

Right now, LightAnchors remains a concept that has yet to be commercialized. Implemented right, however, it could be one way to let users navigate and access the dense ecosystems of smart devices popping up with increasing regularity in the real world. “At present, there are no low cost and aesthetically pleasing methods to give appliances an outlet in the AR world,” Ahuja said. “AprilTags or QR codes are inexpensive, but visually obtrusive.”

Could LightAnchors be the answer? It’s certainly an exciting concept to explore. Suddenly we’re feeling more than ready for AR glasses to take off in a big way!
