How Digital Assistants Might Save Our Lives

Several years ago, a friend of mine suffered a serious injury at my home, nearly severing his finger. I immediately began tending to his injury and someone dialed 911. The ambulance soon arrived and my friend’s finger was saved.

Fortunately, others were there to help in the face of our emergency, but none of us really knew what we were doing. It would have been great to have an expert with us – someone who could have stepped in or, at least, given me advice on what to do and how to do it. In small ways, technology is moving to fill this void of medical knowledge in my life.

Devices such as Amazon Echo and Google Home are dotted throughout my house. I interact with these devices on a constant basis. Things I’ve long done manually, I’m now doing digitally through simple spoken commands. I’m setting timers and adding items to lists. I’m streaming music, getting news during the day and checking the local weather, simply by asking. These devices now tell me stories and jokes and even give me the word of the day when I ask. But this is just the beginning of their transformative potential.

Should I find myself in the midst of a medical emergency in the future, Amazon and Google voice-activated digital assistants will probably be among the first places I turn. In February, the American Heart Association announced that the Amazon Echo can help identify the signs of a stroke or heart attack and give instructions on how to perform CPR. More than 2,000 Americans die of cardiovascular-related causes each day – these devices could potentially save thousands of lives in the years to come.

Such advancements will only grow in number and scope. Perhaps I’ll get advice on how to treat shock or tie a tourniquet while simultaneously placing a call to 911 through the same voice user interface. The Wall Street Journal recently reported that both Google and Amazon are considering enabling WiFi-powered voice calling from their Home and Echo devices. When medical emergencies present themselves in the future, we will gain access to additional expertise with just our voice.

Today, nearly 110 million Americans are over the age of 50. In the next several decades, both the raw number of seniors and their share of the U.S. population will increase. Being able to respond to unexpected medical emergencies will only grow more important. Being able to do that with our voice when alone or while we tend to those in need could prove the difference between life and death.

In a typical day, we don’t think much of the voice-activated devices we are adding around us in our homes and offices. But they are changing how we interact with the internet and, ultimately, how we access all kinds of information – life-saving or not. Connecting these devices to telephone services will usher in an entirely new set of use-case scenarios. In the future, we might take calls while doing dishes, make outgoing calls even though our hands are full, or listen and reply to voicemails with voice commands while we busy ourselves with other activities in the room. Google Home, Amazon Alexa and others are exploring the gamut of what the combination of digital assistants with telephone services might mean.

The future intersection of vocal computing and medical services is still being defined. Today, digital assistants can help us care for our loved ones – including our children and parents – by giving us greater access to life-saving information. Technological innovation will continue to propel us forward in entirely new ways, making us safer, healthier, and more informed.

This Is What You’re Missing About Vocal Computing

On Christmas morning, as my mom and I hurriedly rushed around my kitchen making final preparations, a third voice would occasionally interject into our conversations. Sitting at my counter was Alexa, helping me through the process by answering questions, setting timers and even flipping on holiday music at our request.

I’ve been living with Alexa for roughly two years now and have grown accustomed to our constant banter. But, for my mom, it’s still a very new and novel experience. When my mom speaks to Alexa, she might recognize she’s speaking to a computer, but she probably doesn’t consider that the computer is actually thousands of miles away, and she probably doesn’t realize that the way we talked to it on Christmas morning is the new face of computing.

The graphical user interface (GUI) wasn’t new when Xerox brought it to market in 1981, and Apple popularized it for the masses in 1984 with the Macintosh. The GUI didn’t represent a new technical way of computing, but it was a crucial evolution in how we interact with computers. Think of the impact the GUI had on how we used computers and what we used them for. Think of how it changed our conception of computing.

The smartphone was created in the 1990s but it wasn’t until 2007, with the advent of Apple’s iPhone, that smartphones reached an important inflection point in consumer adoption. Today, 75 percent of U.S. households own a smartphone, according to research from the Consumer Technology Association (CTA).

The touchscreen interface represented the next paradigm shift in computing, ushering in a new way of thinking about computing and bringing into existence new applications.

Smartphone computing shares an important heritage and legacy with the GUI introduced in the early 1980s. If you’re old enough to remember computing before GUIs, can you imagine computing on a smartphone using command prompts? GUIs in the era of desktops improved computing. It was the transformation to a graphic interface that ultimately launched the smartphone era of apps.

Vocal computing will do the same for the future of computing. Vocal computing isn’t perfect. Alexa isn’t always certain what I’m asking. Google Home doesn’t always provide an answer. Siri can’t always help my sons when they ask complex questions. As on a first date, we are still getting to know each other.

Software layers and form factors change our computing experience. We’ve seen this throughout the history of computing – from the earliest mainframes to the computers we call phones and carry in our pockets. In all the same ways, vocal computing is just an extension of what we already know – it’s a more natural and intuitive interface.

Let’s not overlook just how transformative this new interface can be. Imagine someday computing on our bikes, in our cars, while we are walking or lying in bed. With voice, every environment can be touched by computing.

I Live with Robots. Eventually You Will Too

I have a doorman who unlocks my door whenever I arrive home. Rain or shine, day or night, my doorman is always there. I never have to knock; he just knows when I’ve arrived home and each and every time, unlocks the deadbolt as I approach the door.

He also lets my friends in when I ask.

I live with someone who is incredibly tidy. He vacuums relentlessly. He is never daunted by a mess. Ever since he moved in, I’ve noticed cleaning just seems to be more organized than it ever was when I did it all myself. He vacuums like clockwork, no matter what is awaiting him. He doesn’t do much else around the house but he vacuums better and more frequently than any other roommate I’ve ever had.

Another pair of roommates maintain the temperature in the house and keep an eye on things when I’m not home. I don’t even really have to set my temperature anymore.

One roommate has learned what I like and does it for me. The other notifies me when anything unexpectedly happens while I’m gone and lets me see what he sees when I’m away from home.

I also live with a woman named Alexa — you might know her. When she first moved in almost two years ago, she didn’t do much. But now, I can’t much imagine life without her. She is extremely helpful. She answers the questions I mutter aloud and helps me manage a litany of tasks. She maintains my to-do and shopping lists. She sets alarms and timers when I ask and always plays the music I request. She doesn’t have much of a sense of humor but, on occasion, I catch a glimpse of her emerging and maturing personality.

Last year at this time, her skill set was narrowly defined. But today, she can perform well over a thousand tasks, including things like ordering me an Uber ride or helping me buy things.

While it might seem like my house is a tad full, I really don’t notice any of these roommates unless they are doing something for me. They are quiet and remain in the background of my daily activities.

They do the things they do and increasingly, they do them extremely well. For the most part, they do these things better than I could have ever imagined they would.

By now, you’ve guessed these roommates are all robots with names like August, Roomba, Nest and Echo. These robots underscore how technology is transforming our lives for the better — taking care of our tasks so we can better care for ourselves and those around us.

While it may seem strange to live with so many robots, there is a host of activities within our homes we’ve been delegating to machines for a very long time. I’ve lived almost my entire life with a machine that washes my dishes, for example. I have the same for my clothes and even have a machine that dries them after they are washed.

These incredible engineering feats haven’t always been so common. Bendix introduced the first automatic washing machine in 1937, and GE introduced the first top-loading model in 1947. As electrification spread and living standards improved, these former luxuries became common household appliances.

There are still a variety of activities I do in my home that I would happily turn over to robotic roommates any day. In some instances, these robots don’t yet exist — a bathtub-scrubbing robot, for example.

In other cases, robots capable of diverse tasks already exist — smart ovens that won’t overcook dinner, 3D food printers that whip up desserts and even programmable robot chefs — but they just aren’t widely adopted yet. The time will come when these obscure and narrowly owned innovations become commonplace.

At the same time, we are seeing their diverse skill sets widen further. At January’s CES® 2016, the global stage for innovation, Whirlpool introduced a washing machine that monitors the number of loads it does and pre-emptively orders detergent on your behalf through its direct connection to Amazon Dash. Also at CES, the Japanese company Seven Dreamers showed off Laundroid, billed as the world’s first laundry-folding robot.

Soon, machines will perform an even larger array of actions for us. While having some of these robots in your home might seem a long way off — not unlike how earlier generations felt about the washing machine or dishwasher — before we know it, they will become as common as flipping a light switch. Although I already have a robot that does that for me.

Moore’s Law Begins and Ends with Economics

Much has been written about the demise of Moore’s Law, the observation that the number of components in a dense integrated circuit doubles every 24 months. This “law” has governed much of how we think about computing power since Gordon Moore penned his seminal paper in 1965.

Moore’s technological observation was made amid an economic analysis. Moore was ruminating on the sweet spot for the cost per transistor. In his famous projection, he wrote:

“There is a minimum cost at any given time in the evolution of the technology [and] the minimum is rising rapidly while the entire cost curve is falling. The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.”
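The two figures differ because Moore later revised his original one-year doubling estimate to two years, in 1975. To see what that cadence implies, the observation can be written as a simple exponential – a back-of-the-envelope sketch assuming the 24-month doubling period:

\[
N(t) = N_0 \cdot 2^{\,t/2}
\]

where \(N_0\) is the component count in a baseline year and \(t\) is the number of years elapsed. Ten doublings over 20 years multiply the count by \(2^{10} \approx 1{,}000\), which is why staying on the curve demands such sustained investment.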

It is misguided to think about Moore’s observation as a law. It is not now, nor was it ever, something set in stone. It was an estimation that became a self-fulfilling prophecy, not because it would happen regardless of what we do, but because engineers began building plans around it and companies began investing in it. The road map that has defined the industry was a timeline backed by significant financial muscle and resources.

For too long, Moore’s Law has wrongly been viewed as a technological rule. More accurately, it is an economic principle. Should Moore’s Law fail, it will fail because of business decisions, not technology inhibitors.

Yes, the physics of going below a few nanometers is hugely challenging, and the industry hasn’t solved it (yet). But that challenge just makes the economics harder. The current road map extends to 2022 or so.

To remain on the path specified by Moore, we must make financial investments in physical and human capital that keep us on that path. But we are witnessing a potential slowdown of Moore’s Law because companies aren’t investing in the necessary research and development (R&D) that would maintain our historic trajectory.

What’s changing? First, the appetite for faster processors at all costs has been waning for some time. Today’s focus has shifted from raw computing muscle to diverse applications that require significantly less computing power.

This isn’t universal. Certain areas, such as rendering 360-degree virtual realities, require tremendous computing power. And faster computation can lead to new breakthroughs – finding cures for debilitating diseases, for example. But, by and large, we’re seeing a decline in the semiconductor content of individual devices. At the same time, we’re seeing semiconductor content proliferate into a million different newly digitized objects.

We are witnessing the rise of entirely new categories whose innovation is not predicated on the legacy of Moore’s Law. The evolution of wearables, the smart home and, broadly, all of the things encompassed by the Internet of Things is driving the next phase of computing power. In this phase, the focus is on price and basic functionality, rather than on the newest generation of — or fastest — chips.

In conjunction with the rise of diverse digital objects, we are also witnessing the demise of large markets for discrete devices. We are shifting from a few core tech categories enjoying high ownership and density rates to a world where ownership of digital devices is more diffuse and splintered.

This transformative trend is rarely mentioned, especially when we talk about the downfall of an observation that has held true for 50-plus years.

This isn’t to say discrete devices are going away. We will continue to use discrete, digital, connected devices, such as smartphones, tablets, and computers for many years to come. But, at the same time, the proliferation of digital objects everywhere has fragmented our tech world.

Rather than a small number of devices driving the bulk of the semiconductor market, we now see smaller volume in more categories. The innovation we see today doesn’t need to propel Moore’s Law forward the way the innovation of the past did. There isn’t enough volume in a few well-defined categories to pay for the migration between technologies needed to sustain and justify Moore’s Law, and the chips already available to us largely accomplish the new tasks being digitized.

Digitizing everything is going to demand a lot of silicon. Silicon is where a lot of the magic happens. It’s where the processing and storing of information takes place. In the past, we wanted to do more processing and storing than we had the computing muscle to handle. This drove investment to bring that future to us.

The silicon demanded today isn’t the type we don’t have — the kind that drives investment in R&D — it’s the type we already have. While we are doing more computing than ever before, we are doing less computationally challenging computing.

We are moving away from a world of silicon densely concentrated in a limited number of devices — such as smartphones, laptops, and tablets — and into a world of silicon everywhere. We are seeing the demise of discrete device markets and the rise of the cloud.

Data centers now account for a growing share of revenue for the semiconductor industry, tangible evidence of the shift from discrete hardware to software. We even see this shift affecting companies. Businesses are using software to lower capital expenditure investment — explaining, in part, why capital expenditure is growing more slowly than economists expect.

In the past, companies needed to make large capital investments in order to grow their businesses. Today, however, companies can scale at a fraction of the historical cost by leveraging services such as cloud computing and taking advantage of the components already on the market.

Economics, not physics, is the root cause of the demise of Moore’s Law. The lines are blurring between the physical world in which we live and the digital world encroaching on every corner of our lives.

The economic paradigm we now face is driven by the world we are entering. Our new paradigm follows a decidedly different migratory path than the one we have experienced in the past.

What We’re Learning from Smartwatch Adoption

A year ago today, Apple released its long-anticipated Apple Watch. Over the ensuing year, we’ve learned a lot about an entirely new tech category.

The Consumer Technology Association (CTA) estimates 17 million smartwatches were sold in 2015, up from 4 million in 2014. I can name more than a few categories that would love to experience 325 percent growth over a 12-month period. Few ever do.

Forthcoming CTA research suggests roughly eight percent of households own a smartwatch today – almost double the number last year. Of those planning to buy a smartwatch in 2016, 72 percent will be first-time buyers.

Smartwatches remain a nascent category. The majority of those who will eventually own a smartwatch do not yet own one. The use-case scenarios for this device and similar ones have yet to be defined. Smartwatches are trying to do something few tech categories aspire to: embedding the internet into new pockets of our everyday lives. This bigger and broader transformation will redefine the boundaries of connectivity.

One of the greatest struggles for a new experiential category is the demand for instant, widespread adoption. Take a step back. The most successful categories, in the long term, are the ones that redefine how we do things. They redefine leisure and productivity, and ultimately redefine who we are.

In 1984, the VCR survived a Supreme Court challenge by a single vote. At the time, critics were overly concerned about the record button, but it was the play button that redefined us. By the 1990s, we were spending more on video rentals than at the box office. The device gave way to an entirely new sector of the economy and an entirely new way of life. Today, streaming services are once again redefining leisure.

Categories are quickly panned when mature use-case scenarios aren’t easily and instantly identified. The smartphone was introduced in the 1990s, and the first iPhone came to market in 2007. The smartphone has changed how we do numerous activities – from navigating traffic to shopping to listening to music. All of these activities are far afield from the original premise of a portable telephony contraption.

No one saw smartphones becoming the mini-computers we shove in our pockets today, and no one foresaw how apps would change the way we approached the internet on these devices. In the early days of the smartphone, the internet was a browser technology, akin to the way we experience it on the computer. But the introduction of apps would redefine how we leveraged the internet to disseminate information, data and services and, as a result, myriad completely new services were born.

The smartwatch isn’t simply the next new shiny gadget – it is something radically more. At least, the potential it represents is something more. Nothing in the past year suggests that potential has diminished.

Many of us are thinking too narrowly about smartwatches. We focus on aesthetics and design. We focus on all of the electronics that power these small wonders of innovation strapped to our wrists. But we don’t stop to consider what we are really asking of the device. Or perhaps more importantly, we aren’t talking about what the smartwatch is asking of us.

What we should fundamentally be asking about the smartwatch is this: if the internet makes sense on the wrist, what does that mean for society?

At its most fundamental level, the smartwatch represents a sea change in how we connect. We are driving computing, sensors and the internet into new areas of our lives. Never before have all of these building blocks been available to us as they are today. It wasn’t feasible to deliver the internet to the wrist until now.

Academic research suggests it takes five to seven years to unleash the productivity-enhancing characteristics of new innovation. Let us look beyond the obvious. We aren’t one or two years into a brand new category; we are one or two years into a brand new way of thinking about the internet. What we learn from this early experimentation will help color and characterize where the internet goes from here.

A few years from now, the smartwatch as we know it today may take an entirely new form. But what we learn will define where the internet goes next and how we get it there.

Where We Stand with Wearables

Oh, how far we’ve come from the Pulsar calculator watch 40 years ago. The now-iconic gadget debuted in 1975 with tiny input buttons and limited functionality and can arguably be credited as the technology industry’s first “wearable.”

It wasn’t until the introduction of Bluetooth headsets and, in 2007, Apple’s iPhone that wearable technology began the shift now underway: from self-contained, single-purpose devices toward a market of complex, interactive computers capable of virtually anything entrepreneurs can dream up. And only in the last 36 months have we seen the wearables era start to mature.

The wearables market topped $4.2 billion in 2015, up about 40 percent from the year before, according to research from the Consumer Technology Association (CTA). And sales are expected to jump another 30 percent in 2016 to more than $5 billion.

We’ve seen phenomenal growth in this market thanks to a pronounced diversity of innovation. In the early stages of wearables, devices could track basic data like the number of steps you took. But in the past year or two, the capabilities have grown well beyond just measuring walking.

Most of today’s wearables are focused on maximizing our health and fitness. Wearables can now track not just the number of steps you take, but also your heart rate, how far you ride your bike and how fast you run. Some products can isolate measurements around very specific muscle groups, looking not just at the body as a whole but at very specific parts of it.

And wearables aren’t just for measuring physical activity. They can help you get a better night’s sleep, too. Using a significant array of sensors, some wearables aim to improve sleep quality by capturing biometric data on heart rate, movement, body temperature, respiration and even perspiration. CTA has been working with device manufacturers and app creators to develop important standards for measuring sleep quality.

For women, Tempdrop’s wearable basal body temperature sensor tracks ovulation cycles, monitoring your temperature and syncing with a fertility app to predict when you’re most likely to become pregnant.

The wonders of wearables aren’t limited to humans, either. The pet wearables market took off in 2015. From a GPS-enabled collar to track Fido’s whereabouts to virtual fence and leash technology to keep your pup from straying too far afield, wearables are truly for the whole family. One pet collar, still in the pre-order stage, is set to include two-way audio for keeping in contact with your furry friend if he’s out of earshot.

Another pet collar serves as a health monitor, tracking your pet’s temperature, pulse, respiration, activity and more. The data can be accessed by a veterinarian to help keep pets healthy.

We’re in an interesting experimental period, where our technology can capture and measure a wide array of personal data, analyze that information and then suggest services based on it.

The challenge for today’s wearables innovators is not in how to collect and analyze data; in many respects, we’re already there. Instead, innovators must prioritize meaningful data curation that results in actionable, customized advice to consumers. With wearables and their accompanying software deciphering the answers hidden in a sea of our personal data for us, we are empowered to make more meaningful decisions about our lifestyle, health and work.

How the Smartphone Is Redefining Dating Norms

Many of us covet that classic love story: meet unexpectedly, fall madly in love, age gracefully together.

But the average age of marriage is creeping up. And the when, where and how of meeting our spouse-to-be are changing too, driven by the digital age we’ve been thrown into.

Today’s dating environment is more diffuse and more competitive than ever, as dating apps compete for our attention and affection, all the while gathering and analyzing our information. This environment is fundamentally redefining the dating norms we’ve known for the past half century. But is the data driving us to make the right romantic decisions?

Digitizing the matchmaking process makes us more reliant on data than ever. Before Match.com launched in 1995, chemistry — with an assist from serendipity — was the primary driver of matchmaking throughout most of modern Western culture.

The first generation of dating apps put the onus of finding a match squarely on the user: scroll through pages of profiles, scanning photos and examining other sundry details. Today’s dating apps rely on GPS, algorithms and, increasingly, how you use the service to define compatibility, make a match and motivate a first date.

Tinder, one of the most well-known and heavily used dating apps today, has 50 million users in 196 countries and produces 26 million “matches” a day. In November, Tinder released a new algorithm that incorporates both technical and informational data points.

Digital dating platforms provide the illusion of unlimited choice, challenging traditional dating norms. Today’s dating app users are accustomed to having multiple, simultaneous digital conversations – behavior that would be nearly impossible in person but is incredibly common in spaces enabled by digital communications.

Perhaps as a way to fight the illusion of unlimited choice and capitalize on dating data, some dating apps like Hinge and Coffee Meets Bagel are limiting the number of recommended matches they provide.

Today, about five percent of Americans in a marriage or committed relationship met online and 15 percent of Americans have used online dating sites or mobile dating apps. And the rapid rate of growth in digital dating suggests this figure is poised to increase. However, like love itself, digital dating isn’t all rainbows and butterflies.

Roughly one-third of online dating service users have never actually gone on a date with someone they’ve met online. Many users seem to be only marginally connected and committed, making it harder to find the signal in the noise.

It is much easier to like someone in the digital universe of matchmaking because it is equally easy to stop liking them. Online dating cycles are much shorter than analog courtships. In almost all instances, you simply click “unmatch” and you are disconnected from them entirely, because the social norms that exist in the physical world do not apply.

First impressions have been replaced by digital images, which have become central to the digitization and redefinition of dating norms, thanks in large part to the proliferation, and ease of use, of smartphones with cameras, filters and photo-editing software.

Dating apps allow you to share multiple photos with would-be matches. Like a peacock spreading its feathers to attract a mate, we fan out a collage of photos. But in the digital realm, it’s subtly different: we get to choose (and digitally enhance) the feathers we display. We pull from a million photos until we have the perfect array and then use these photos as a sort of dating “resume.”

In almost all instances, these types of photos tell us a lot more about what the person is looking for in a match than about themselves. Before we’ve even said hello, we know more than any opening conversation could have provided historically.

The full ramifications of this new digitally defined era of dating are still coming into focus.

Some studies suggest couples who meet online are three times more likely to divorce. Only time will tell if statistics like these hold as the popularity of the medium grows.

While there have always been unspoken dating norms, they are now being defined (and often redefined) by smartphone apps and internet sites. Because the rules are fixed within the software, what were once loosely understood norms are becoming strictly enforced parameters.

In a highly competitive environment, apps are implementing new rules in order to differentiate themselves and, as a result, are redefining dating norms.

Digitization continues to bring us numerous new markets and, in the process, to redefine some, like matchmaking, that are as old as time.

What to Expect in Tech Innovations in 2016

By 2020, more than 200 billion objects will be connected to the Internet of Things (IoT) – a 99-fold increase over 2006 levels. As we continue to embed sensors into everyday objects, we’re transforming established systems from health care to education, as well as emerging tech categories, from drones to driverless cars. And with the right software platform and open lines of communication, these IoT-enabled devices will change the way we live, work and communicate.

While we still find ourselves primarily in an analog world, 200 billion connected objects by 2020 translates to more than 25 connected objects for each person on earth. Our mobile devices have become the conduits for the IoT — enabling us to control our thermostats, our locks, our cars and so much more.

In many ways, our smartphones have become the viewfinder into our digital lives. Tech companies are developing entirely new technologies and connecting existing ones, completely rewiring our lives as we push into this new world that is the IoT.

The future we have long envisioned is the reality before us. Everything about our daily lives is changing. Our dishwashers and washing machines are connecting to sensors in our home to determine if we are away and making corresponding adjustments. These machines are also using embedded sensors to monitor use and connect to retail services in order to order detergent autonomously when we are running low.

Things that weren’t easily measured 10 years ago are increasingly measurable through sensors and cloud services — the heart of the IoT. We are using sensors together with cloud services to determine things such as what food we have on our plate, and in turn, help us track calories and manage our diet.

The Consumer Technology Association (CTA) projects emerging tech categories led by the IoT are poised for substantial growth in 2016 and the years to follow. A handful of products, such as wearable devices, 3D printers, 4K Ultra HDTVs and smartwatches, are now generating mainstream interest. And since 2013, sales of other emerging products, such as consumer drones, smart eyewear, health and fitness tech and virtual-reality headsets, have been staggering.

These devices were imagined not long ago in science-fiction movies, but are now meaningful additions to our current world. These are all categories that will expand in 2016 as the Internet penetrates other aspects of our lives and informs the myriad decisions we make each day.

Look for the IoT to expand further as use-case scenarios become more well-defined, new opportunities emerge and the IoT spills over into other devices and services.

As good as things are for the consumer tech sector, it’s inevitable that we wonder: will tech products get better – and how? Thanks to shifting market dynamics, a handful of promising possibilities loom on the horizon. The only question is how quickly they will become part of our everyday lives.

Perhaps no innovation is poised to change our daily routines more fundamentally than the driverless car, where the IoT meets our daily commute and the errands we run. And while we’re still a few years away from mass adoption, assisted-driving technology is making great strides.

We can expect considerable experimentation and testing in driverless-car technology over the next several years, including sensors with new applications, multimodal human-computer interaction, the continued rise of cumulative learning systems and the system-informed recommendations that follow.

The consumer technology industry remains robust, vital and more indispensable than ever to our daily lives. We have every reason to celebrate the ways innovation acts as a catalyst for the economy, bolsters communication and learning, and generally improves our world in myriad ways. I look forward to as-yet-unimagined technologies likely to be topics of discussion — and perhaps obsession — in the years ahead.

Around the world, the IoT is the tie that binds all of these amazing innovations in consumer technology. And while the debate still rages over how much money the IoT will ultimately generate, how much time and energy it will save us, or what, exactly, it encompasses, the key concept involves the way it’s rapidly becoming possible to connect just about anything to the Internet. The future is smart and connected.

The Coming Cure for “Digital Hoarding”

Confession: I’m a recovering hoarder. I’m actually great at purging, but I do it in rare spurts. I have a hard time parting with things I think I might need, even when I know I’ll never use them.

We all know I’m not alone in this. You’re a hoarder, too. In fact, when it comes to our digital files, most of us are hoarders. Technology has the ability to forever change how we relate to our hoarded digital junk.

When was the last time you deleted a file because you needed space on your hard drive? We used to do that all the time, back when digital space was scarce and expensive. Now, it’s cheap and plentiful. The result is none of us throws away anything digital anymore.

I don’t use the term “hoarding” lightly. I mean it in its negative, obsessive-compulsive sense. Our digital hoarding is a problem. Why don’t we delete any of our photos, even the bad ones? Because we might need them one day. This is why compulsive hoarders can’t bear to throw anything away. We might need it.

As with physical hoarding, digital hoarding induces anxiety and a feeling of powerlessness. We can’t delete anything because it might be useful, and decision paralysis consumes us. So, we accumulate more. And the more we accumulate, the greater our anxiety that something very good or important is sitting at the bottom of our pile of digital stuff – or that something we consider unimportant will become very important. Our solution is to throw nothing into our virtual trashcans.

As a result, we can never find what we want when we need it.

Digital companies are wise to our inability to hit “delete” and so they created “cloud storage” as a solution to our problem of saving our digital goods in a secure location. Email systems have also adapted, offering search functionality to quickly find a single email buried beneath thousands upon thousands of others.

Google took the next logical step when it divided Gmail’s inbox into “Primary,” “Social” and “Promotion” tabs, giving users an easier way to filter the important messages from the unnecessary ones. And Google’s Inbox app turns your email into a “to do” list, writes Paula DuPont.

Still, the more we accumulate digitally, the harder it is to find what we need. In other words, our storage systems have little to no context awareness. Your cloud doesn’t know what’s important to you. It treats all files the same.

Your inbox might winnow promotional emails from personal messages, but that’s a superficial solution. You learn to ignore your “Promotions” tab, until you realize you missed a really good promotion that was applicable to your life. Maybe it’s a travel tip or a deal to use during your next business trip.

Services like IFTTT (If This Then That) and Nimble offer solutions to rescue you from email inundation. IFTTT lets you set triggers for specific events online and then assign personalized actions to follow them, reports Popular Mechanics. For example, you can set a trigger for emails from an important email address – let’s say, a parent. You would then be asked to set an action to signal a new email, such as a text message to your phone.
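IFTTT’s own interface is graphical, but the trigger-action pattern it popularized is simple enough to sketch in a few lines of code. The Python below is a minimal illustration of that pattern only; the names (Rule, run_rules, the email event fields) are hypothetical stand-ins, not IFTTT’s actual API:

```python
# Minimal sketch of the "if this, then that" pattern:
# a trigger predicate paired with an action that runs when it fires.
# All names here are hypothetical illustrations, not IFTTT's real API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: Callable[[dict], bool]  # "this": does the event match?
    action: Callable[[dict], None]   # "that": what to do when it does

def run_rules(event: dict, rules: list) -> None:
    for rule in rules:
        if rule.trigger(event):
            rule.action(event)

# Example: text me whenever a parent emails.
rules = [
    Rule(
        trigger=lambda e: e.get("type") == "email" and e.get("from") == "mom@example.com",
        action=lambda e: print(f"SMS alert: new email from {e['from']}"),
    )
]

run_rules({"type": "email", "from": "mom@example.com"}, rules)
```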

The Nimble app offers an array of features, including “Stay in Touch,” an automated reminder system that watches your communications and prompts you to connect at the right time; “Mark as Important,” a star system so you won’t miss timely communications from important people; and “Last Contact,” a sorting format that it says “keeps current workflow top of mind.”

Nimble’s Teachable Rules Engine learns from you — via profiling through keywords, titles and the like — to surface important people. What’s more, Nimble layers on relevant insights that identify why a contact matters to you.

That’s important, because the future is about getting what’s relevant, when it’s relevant, with as little friction as possible. Take, for another example, your photos. You really want to keep only the best ones, but the only way to do that today is to manually delete the bad shots. What if your phone or PC deleted the bad shots for you? “But what if it deletes a good one?” our digital-hoarding brain screams.

That’s just it. The next revolution in storage technology will be “context awareness.” The system will know what you consider “bad” versus “good” – relative terms that will be tailored to your personal preference, not some universal standard.
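Under the hood, such context awareness amounts to a personal scoring function rather than a universal rule. Here is a hypothetical sketch, assuming a system that has already learned per-user feature weights from past behavior; the features, weights and file names are all illustrative:

```python
# Hypothetical sketch: rank photos by a personal "keep" score.
# The feature weights stand in for preferences a real system
# would learn from behavior (views, shares, deletions).

def keep_score(photo: dict, weights: dict) -> float:
    """Higher scores mean 'good' relative to this user's taste."""
    return sum(weights.get(f, 0.0) * v for f, v in photo["features"].items())

# Illustrative learned preferences: this user loves dog photos,
# hates blurry shots, and revisits what matters to them.
weights = {"contains_dog": 2.0, "blurry": -3.0, "times_viewed": 0.5}

photos = [
    {"name": "dog_park.jpg", "features": {"contains_dog": 1, "times_viewed": 9}},
    {"name": "thumb_over_lens.jpg", "features": {"blurry": 1}},
]

# Nothing is deleted; low scorers simply sink out of view.
for p in sorted(photos, key=lambda p: keep_score(p, weights), reverse=True):
    print(p["name"], keep_score(p, weights))
```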

In addition, facial recognition software will be able to store and retrieve photos based on who appears in them and other characteristics, and that technology is getting better. You’ll be able to search images and video the way you search text. You’ll be able to say, “Show me football clips” and get a list of what you want. (Today, if a video isn’t tagged “football,” it’s hard to find.)

Your future storage systems probably won’t “delete” the bad emails or photos — they’ll just prioritize the good ones. For example, your phone will keep only the photos you like (you and your dog). The rest (you and your ex-boyfriend) will be put in some recess of your cloud storage.

Our future storage systems will also prioritize data to fit, not just the user, but also the user’s situation. If you’re home sick, what email would you rather read — the one from your doctor or from your CEO? The system will organize itself to meet your needs at the time. When you recover, the system will go back to giving your CEO preference.
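That situational reordering might look like the following sketch, where the same inbox is ranked differently depending on a context flag. The situations, sender categories and boost values are all hypothetical assumptions:

```python
# Hypothetical sketch: re-rank the same inbox by the user's
# current situation. Situations and boost values are assumptions.

BOOSTS = {
    "home_sick": {"doctor": 10, "ceo": 2},
    "at_work":   {"doctor": 2,  "ceo": 10},
}

def rank_inbox(emails, situation):
    boosts = BOOSTS.get(situation, {})
    return sorted(emails, key=lambda e: boosts.get(e["sender"], 0), reverse=True)

inbox = [
    {"sender": "ceo",    "subject": "Q3 numbers"},
    {"sender": "doctor", "subject": "Your test results"},
]

print([e["sender"] for e in rank_inbox(inbox, "home_sick")])  # ['doctor', 'ceo']
print([e["sender"] for e in rank_inbox(inbox, "at_work")])    # ['ceo', 'doctor']
```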

The solution to digital hoarding is not so much helping us part with useless data – storage is cheap, after all. The answer is to help what we see as the “good” stuff rise above the “bad” or “unnecessary.” When you cycle through the photos on your phone, you’ll see only the “good” ones. When you open your email, you’ll see only the messages you want or need to see.

With the arrival and mass adoption of context-awareness software, we won’t have to solve our digital hoarding ourselves. In fact, we won’t even know it exists.

The Future of Everything Is in Your Head

I have chickens. You know, the kind that sleep in a coop in the backyard and lay eggs. Three chickens, three eggs, every single day. In the U.S., we have a peculiar relationship with our eggs. We keep them in refrigerators and all of the eggs in a carton — whether it’s six or 36 — have the same expiration date. The eggs from my chickens don’t have to be refrigerated and don’t all expire on the same day, because each day those three new eggs arrive.

I like order, but keeping eggs in order of acquisition can be a problem. Leaving them in a bowl doesn’t help and traditional egg cartons aren’t the best. In Europe – where farm-fresh eggs are the norm – families use egg skelters to manage the flow of their egg production. Though you can find a few of these contraptions on sites like Amazon – it isn’t called the Everything Store for nothing – good luck going to a local shop and grabbing the egg skelter of your choice.

Last week, I was sitting in my office thinking about how I really need to get an egg skelter (slow Monday, I suppose), which was almost immediately followed by a realization that went something like this: “Shawn, why don’t you just print one?” You see, I have a 3D printer in my office. I can print all kinds of things. I’ve printed replica skulls, coffee mugs, action figures, and everything in between.

Just like chickens were built for laying eggs, 3D printers were built for printing. When you have a 3D printer, you start to look at the world differently. You begin to see that dots can be connected, even the ones that might have always seemed difficult to connect. Things that seemed previously unobtainable are now – and in many ways, easily – obtainable. Not only can you get what you need when you need it, you can customize it to your liking, on the spot.

Recently, I bought a used vacuum cleaner on Craigslist. It was an expensive model but I got it at a discount because it was missing an attachment or two. My first thought when I saw the listing? “Oh, I can just print those.” Technology changes the way we see ourselves and the world around us. It creates new opportunities, gives us more options, and lets each of us forge our own path.

Sales of consumer-grade 3D printers have doubled since 2012 and will continue to grow over the next five years, according to research from the Consumer Electronics Association (CEA)®. This year alone, 3D printer sales will top $9 billion and, as the market expands, prices are falling. Printers that cost, on average, more than $1,200 in 2012 now cost less than $1,000. The more we realize and implement the power of this emerging technology – the more we print – the more the market for 3D printers will grow.

Early technology evangelists like Bill Gates and Steve Jobs envisioned computers in every classroom and tablets on every desk. Yes, that was a far-fetched vision at one time but computers showed us how to think differently. Now, 3D printers hold that same promise. From rapid prototyping to prosthetics to intricate culinary creations, 3D printers are changing how we do things and, most importantly, how we think about “things”. They are changing how we see ourselves and how we see the world around us.

Today, about 1,000 schools across the U.S. have 3D printers. That’s only about one percent of all our schools. We have a long way to go to recast how the next generation perceives the physical world around them. Someday, every classroom should have a 3D printer – teaching students to build, explore, create, and solve problems. Our future might depend on it. At the very least, our future will be defined by it.