Podcast: Microsoft, Amazon, Google, eBay, Dell World, China OEMs

This week Ben Bajarin, Jan Dawson and Bob O’Donnell discuss earnings results from Microsoft, Amazon, Google and eBay, analyze some of the news and trends coming from Dell World, and chat about the growing impact of Chinese hardware companies across many different device categories.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Arrogance of Tech

As someone who makes his living from the coffers of tech companies, it may not be the wisest move to criticize the hands that feed you, but I feel something needs to be said.

Many technology companies, and the tech industry as a whole, have gotten incredibly arrogant. In some cases, obnoxiously so.

Everywhere you turn, there are people in tech describing how they are completely reinventing businesses or business models or ways of doing business. We have VCs and other tech investors who have convinced a good portion of the world that it’s not only okay to lose a lot of money, it’s almost a badge of honor to do so. It’s all about growth; except now, for many, it’s not…

Similarly, only when tech folks have brought their particular form of magic to other industries, such as transportation and logistics, are those industries deemed worthy of being thought, talked, or written about. (Uber, anyone?)

The common assumption behind these, and many other, examples seems to be that only people in tech can really figure these things out.

To be fair, technology really can be magical (I wouldn’t be part of this industry if I didn’t think so), and there are some other businesses that, on virtually any objective scale, are not really that interesting. But when the thinking extends to the point where people start to believe that the smartest people are only in tech, and only the things that they touch can actually turn to gold, well, you get my point…

The latest example of this hubris comes in the world of automobiles. Cars have been around for over 100 years, but it’s only been in the last few years, it seems, that they’ve taken on a new aura of importance.

Why? Well, it’s all about tech-enabled smart cars, connected cars, and eventually autonomous cars. Now that the tech industry has shown a fascination with cars, our four-wheeled friends have become cool all over again.

As a tech guy and a car guy, that’s actually an exciting development. But what I find rather disconcerting is that the assumption, once again, seems to be that only the tech industry can “fix” what’s “wrong” with the auto business.[pullquote]What I find disconcerting is the assumption that only the tech industry can ‘fix’ what’s ‘wrong’ with the auto business.[/pullquote]

So, for example, it seems to be increasingly common thinking that big tech companies like Apple or Google would be able to get into the auto business with little difficulty, as long as they throw the right number of people and resources at the issue. The fact that Apple has supposedly hired 600 people to work on a smart car or other automotive project is given as an example of how these developments might occur.

No one, however, seems to consider the possibility that a GM or other large car manufacturer, with decades of automaking history, could hire 600 people to make electronics and software products that are better for smart cars and autonomous driving than what tech companies could build. (Oh, and in the process, ensure that they don’t give away to tech companies the value and critical importance that electronics now play in today’s cars.)

Admittedly, car makers don’t have much of a track record when it comes to software user interfaces on their cars, but how much of a track record do tech companies have at building enormously complicated pieces of machinery that comfortably move us down the highway at 65 miles an hour?

To put it another way, mechanical engineering has a different set of challenges than software engineering, but that does not make it any less difficult a task, nor require any less intelligent people.

Obviously, there are critical roles to play for technology components and software companies in today’s (and tomorrow’s) cars, as well as many other different devices. In fact, as we start to see the Internet of Things and other related trends start to really develop, we’ll finally start to see the entire tech industry focus less on being a stand-alone entity and more on being a true enabler for virtually every other industry. (Yes, even the less glamorous ones.) Who knows, we’ll probably even see new business models start to develop outside of tech, hard though that may be for some to believe.

In order for this industry maturation process to move along, however, the tech business may want to take a step back and start thinking more collaboratively instead of combatively. After all, the best rewards often come from helping others.

Podcast: PC Campaign, Intel, Netflix, Square

This week Tim Bajarin, Jan Dawson and Bob O’Donnell discuss the recent PC industry marketing campaign and earnings from Intel and Netflix, and analyze Square’s IPO plans.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Tech World Moves to AND, Away from OR

In the end, all the developments in tech can be reduced to mathematical logic.

At a concrete level, the principles of Boolean algebra lie at the very heart of digital electronics, software programming, and most of our modern conveniences. Put very simply, the math involves manipulating two key variables, TRUE or FALSE—or, more commonly denoted, a 1 or a 0—via simple logical operations: AND, OR, and NOT.
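For those who want to see that math concretely, the three operations are trivial to express in code. The sketch below is purely illustrative, using Python’s bitwise operators on 1s and 0s:

```python
# The three basic Boolean operations, using 1 for TRUE and 0 for FALSE.
def AND(a, b):
    return a & b  # TRUE only if both inputs are TRUE

def OR(a, b):
    return a | b  # TRUE if either input is TRUE

def NOT(a):
    return 1 - a  # flips TRUE to FALSE and vice versa

# Print the full truth table for the two-input operations:
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}")
```

Every digital circuit and every program, at bottom, is built out of combinations of exactly these three operations.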

At a conceptual level, these Boolean logic operators have arguably been used—albeit more subconsciously—to drive many of the assumptions about where technology products stand and how they might fare when pitted against one another.

It’s either got to be iOS OR Android; a tablet, NOT a PC; and so on.

For many people, it boils down to what is perceived to be an inherent superiority of one solution or one option over another. Of course, in many cases, that has indeed been true. There have been, and continue to be, many examples of products or technologies far better than their competitors or predecessors because of price, cost, ease-of-use, or several other factors.

As technology products have matured, however, the gaps between competitive solutions have become smaller and less obvious. Sure, you can make reasonable arguments about, say, one technology standard for smart home connectivity versus another one, but you’re also likely to run into some serious and legitimate rebuttals from the other side.

Ultimately, this is a good thing, because it means as we make technology decisions for ourselves, our families, or our companies, we’re selecting from a range of good choices. To put it another way, there are a lot of positives one can glean about the state of the tech industry when the best answers to the question about choice are, “You can’t really go wrong with any of them” or “Why not both?”

Beyond the simplistic notion of choice, however, this development has profound implications about where the tech industry is and needs to be going. Increasingly, we’re living in a world of Boolean AND, and less in a world of Boolean OR. There are many technology options—whether it be individual products, technologies, platforms, standards, apps, services, etc.—that are being used together.

The problem is they were designed with the mindset of a Boolean OR and not a Boolean AND. In other words, the product or technology creators made decisions about what to do or how to do something based on the assumption their option was the only option (or, at least, the only one that mattered).

The end result is a whole variety of tech products, technologies, platforms, standards, apps, and services that really don’t play well with each other—at all. In fact, the problem seems to just be getting worse. In addition, there are still many efforts to paint one device, technology, or platform as the only one to do a task when, actually, it’s much more realistic to think about how tasks are shared across multiple devices, technologies, and platforms.[pullquote]Smart devices, consumer services, operating systems, you name it, could all benefit from a much stronger focus on connectivity, or at least acknowledged co-existence, with other options.[/pullquote]

The whole world of IoT—from smart homes, smart buildings, smart cars, smart cities, and beyond—for example, seems to be suffering from a deplorably exact interpretation of Boolean logic: “Our way of doing/thinking about things is TRUE, AND the other way is FALSE.” A lot more progress could be made if instead there was a recognition that “Our way of doing/thinking about things is TRUE, AND the other way is also TRUE.” Even enabling an option for greater interoperability would be a step in the right direction.

These challenges aren’t just limited to the IoT world, however. Smart devices, consumer services, operating systems, you name it, could all benefit from a much stronger focus on connectivity, or at least acknowledged co-existence, with other options.

As our world gets infused with more tech products and services, it will inevitably get more complex. In order for that complexity not to completely overwhelm us, key changes in outlook and approach need to be made. All it takes to get started is fewer ORs and more ANDs.

Podcast: Microsoft Event, Dell and HP PCs

This week, Tim Bajarin, Jan Dawson, and Bob O’Donnell analyze the recent Microsoft hardware launch event, including their new Surface 2-in-1s and Lumia phones, and discuss new PC announcements from both Dell and HP.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Reimagining Personal Computing

As a person who tracks the ebbs and flows of the computing market—in all its various forms—the last few weeks have been interesting, to say the least. First, we saw Apple extend the iPad into its most compute-friendly (or computer competitive?) form, with the release of the iPad Pro and its accompanying Smart Keyboard and Apple Pencil. Then, Google unveiled the Pixel C, an Android-based 2-in-1 device with a detachable keyboard and a high-resolution (308 ppi) 10.2” screen. Finally, today saw the release of the much-anticipated Surface Pro 4 from Microsoft, as well as the unexpected Surface Book.

The clear takeaway from all of this is that, despite early criticisms, Microsoft clearly struck a chord with the Surface devices—particularly the Surface Pro 3—and the future of computing is looking increasingly like a combination notebook/tablet. This is ironic in several ways because many people wrote off these 2-in-1 devices as a fad, and arguably, the 2-in-1 category didn’t really exist until Microsoft brought out the Surface.

But now, several years, several iterations and several similar competitors later, it seems Microsoft may have been onto something after all. In fact, the Surface Pro 3 has done surprisingly well, and nearly single-handedly rescued the clamshell form factor from tablet-dominated oblivion.[pullquote]Several years, several iterations and several similar competitors later, it seems Microsoft may have been onto something with Surface after all.[/pullquote]

Of course, I say this despite the fact that Microsoft insists on calling Surface a tablet and refuses to bundle the keyboard that nearly every single Surface purchaser ends up buying and using anyway. In practical, real-world use, however, essentially every single Surface Pro 3 I’ve ever seen is used like a clamshell notebook with a detachable keyboard.

Microsoft gave people interested in this unique design even more compelling reasons to consider one at their launch event today. The new Surface Pro 4 builds on the heritage, design, and even peripherals of the Surface Pro 3, but adds important extensions of its own. First, the company reduced the bezel size of the display and increased the screen size from 12” to 12.3”, all while maintaining its 3:2 aspect ratio. As expected, the company also updated the Windows 10-only device to Intel’s 6th-generation Core (codenamed “Skylake”) CPUs, offering variations with a Core M, Core i5, and Core i7. In addition, the company added a redesigned, magnetic Surface Pen, and a Microsoft-designed IR camera that can do facial recognition for Windows Hello. There’s also a new set of improved keyboard options, including one with a fingerprint scanner, and all of them are backwards compatible with any previous Surface.

The real surprise of the day, however, comes from the company’s new Surface Book—what they call the first Surface notebook. Housed in a sleek, 3.5-pound aluminum design, the device offers a 13.5” display with 3,000 x 2,000 resolution (six million pixels), the infrared facial recognition camera, the redesigned Surface Pen, and Intel’s latest CPUs. In addition, there’s a detachable metal keyboard that houses an additional battery and an optional nVidia GPU. The “tablet” portion of the device—which the company claims is the thinnest Core i7 computing device in the world—holds enough battery for 3 hours of usage, but connected to the keyboard, you can get 12 hours, as well as access to the optional GPU (connected via PCIe over Microsoft’s proprietary Surface dock connector).

Pricing starts at $1,499 for the sleek new device, and ranges up over $2,000 with GPU and high-capacity (up to 2 TB) solid-state storage. Microsoft claims they’re going directly after the MacBook Pro’s bread and butter audience—creative types, graphics professionals, and other highly-demanding users. While it remains to be seen how well the new Surface Book does, my brief time with the device suggests that PC vendors and Apple have some serious new competition in the more “traditional” notebook space.

Given that Microsoft also used this event to unveil more details about its HoloLens head-mounted computer, as well as to showcase how its new high-end Windows 10 Lumia 950 smartphones can function like a PC by connecting directly to an HD monitor (or TV) and leveraging Bluetooth or USB keyboards, the day truly showed the range to which Microsoft is extending the concept of personal computing.

All told, it was an impressive display, and one that will likely be looked back on as having started some important reimagining of what personal computers can and should be.

Podcast: Google Nexus, Amazon, AT&T

This week Tim Bajarin, Ben Bajarin, Jan Dawson and Bob O’Donnell discuss the recent Google Nexus product introduction event, debate Amazon’s decision to stop selling Google Chromecast and Apple TV hardware, and analyze AT&T’s recent industry analyst event.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Rebirth of Virtual Clients

As with many industries, the history of the computing business often follows a circuitous path. In the early days, centralized mainframes provided the computing horsepower to dumb terminals. Then came independent PCs, followed by the transition back to server-based computing with thin clients—storage-free devices that essentially served as the graphics front end for the computing workloads being performed on the server.

While thin clients made a reasonable impact for many businesses—particularly those in regulated industries—they didn’t ever provide the kind of transformation to the computing environment that many had predicted. Full-powered PCs, it turns out, were still essential for many tasks, particularly those that were dependent on graphics, so we saw a renewed emphasis back on PCs.

Now, we’ve moved into the era of mobility, where smartphones have become increasingly powerful compute devices. Arguably, however, smartphones (and tablets) have actually become next generation thin client/virtual client devices, with much of the computing experience they deliver coming from cloud-based services.

As great as these mobile devices may be, however, there’s still a very strong need for full desktop computing experiences in business environments. And in that great circle of ongoing compute evolution, we’re now seeing a new generation of server-based, graphics virtualization-powered, cloud-based computing solutions enabling a whole new round of virtual clients, thin clients, and other remote computing models.

Some of the early work in this area was enabled with AMD’s virtualization-enabled server-focused GPUs. Early versions of virtual desktops could only virtualize the CPU, and not the GPU, so the first efforts to virtualize GPUs in servers several years back were a critical step forward.

More recently, nVidia’s Grid 2.0 efforts are bringing a second generation of workstation-level graphics to servers and virtual desktops. Today’s announcement by Microsoft and nVidia of support for Grid in Microsoft’s Azure cloud computing platform extends the range of options that companies now have to use new computing models to deliver desktop experiences.

Businesses can now choose to deliver desktop capabilities from their own internal servers, from shared external servers, from 3rd parties that offer “desktops as a service,” and several other variations on these basic themes. More importantly, IT departments are able to deliver an experience to their end users—both inside and outside of the company’s physical walls—that can truly match what only standalone PCs and even workstations were once able to do.[pullquote]IT departments are now able to deliver an experience to their end users—both inside and outside of the company’s physical walls—that can truly match what only standalone PCs and even workstations were once able to do.[/pullquote]

This is key, because for all the benefits of the virtual client/thin client computing model—particularly on security, management of the devices, and management of the applications and operating systems running on those devices—the actual end user performance often suffered.

Agonizingly slow screen redrawing, tepid browsing experiences, and other productivity-killing hassles turned some early thin client installations into painful experiences for end users, as well as the IT departments who chose to deploy them. Thankfully, numerous improvements along every stage of the virtual client computing chain have made those types of experiences a distant memory.

Improvements in the performance of the thin client devices themselves, from vendors such as Dell/Wyse and HP, to enhancements in VDI architectures and protocols from Citrix, VMware, and Microsoft, to software and hardware refinements on the compute, storage, and network stacks within data centers, have all come together to deliver a significantly more usable virtual client/thin client experience. Plus, thin clients are no longer limited to desktop devices—there’s more and more experimentation with clamshell styles and other mobile form factors.

As a result, there’s an expanding range of companies from industries well beyond the thin client stalwarts, such as health care, financial services, and government, who have started to embrace the virtual client/thin client computing experience. Mainstream companies in industries of all types and sizes continue to explore and invest in these new evolutions of thin clients and virtual clients.

Admittedly, the virtual client/thin client story is not a new one, and for some IT decision makers, the devices may bring back painful memories. Nevertheless, as critical refinements are made to these virtual client computing models, it’s worthwhile for companies to reassess their stand on virtual clients and thin clients and give them a fresh new look. They might be surprised at what they find.

Podcast: China, Apple Car, Office 2016

This week Tim Bajarin, Ben Bajarin, Jan Dawson and Bob O’Donnell discuss the recent meetings between Chinese President Xi Jinping and major tech leaders, debate the possibilities of an Apple Car, and analyze Microsoft’s announcement of Office 2016.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

What’s Next for Consumer Tech?

The last few decades have seen an explosion in technology-driven products for consumers. From PCs, tablets, and smartphones to large flat-screen TVs, high-resolution recorded media, and numerous delivery methods for consuming that media, technology has become completely intertwined with most people’s lives.

Along the way, a fundamental belief that technology could continue to drive hot new product categories has become so widespread, expectations of such developments have started to define our views of the future. “What’s the next new hot category for tech?” has become not just a common buzz phrase, but the filter through which we view and try to interpret the latest product news and technology advances.

The view through this particular lens has begun to get very murky though. It’s not that we’re not continuing to see important new products and technology innovations—of course we are. After a long stream of critical, widely adopted devices, however, there is no tech heir apparent.

Sure, there are lots of interesting new categories—wearables, smart home components, virtual and/or augmented reality, connected cars, to name a few, but none of them look to have the kind of widespread acceptance and influence on our day-to-day lives that things like PCs, smartphones, flat-panel TVs, and even tablets have had.

To put them into a more historical perspective, several of these new categories feel more like MP3 players or Blu-ray—certainly important technologies in their day, but not categories that have withstood the test of time when it comes to widespread ongoing usage.[pullquote]Several of these new consumer categories feel more like MP3 players or Blu-ray—certainly important technologies in their day, but not ones that have withstood the test of time.[/pullquote]

In some instances, such as wearables and virtual/augmented reality headsets, the challenge is the products really only appeal to a small portion of the overall consumer audience. To be fair, these products are also only in their earliest stages and will undoubtedly improve to the point where they do appeal to a wider audience. But, even still, they just don’t seem like categories we’ll be talking much about in 5-7 years.

In the case of both smart homes and connected cars, we’re also very early in the development process. In fact, I expect the arc of development to be significantly longer for both of these categories; comparatively speaking, we could arguably be at even earlier phases of that arc. But the biggest challenge to acceptance and widespread adoption of these categories is not the technology. The real obstacle to moving forward is competing standards.

Unfortunately, different standards are being promulgated by big name players, including Intel, Qualcomm, Google and Apple—none of whom are likely to abandon their positions anytime soon. This is going to keep things extremely complicated in the smart home market for many years to come.

For connected cars, each auto company will make important safety and connectivity improvements to their own vehicles. Both the technical and legal standards/requirements necessary to enable vehicle-to-vehicle intelligence, however, are still just a glimmer in the eyes of some forward-looking auto and tech industry executives, as well as insurers and legislators.

The net result is the next few years for consumer tech are likely to be ones of refinement and organization, with more efforts being made to get various individual elements talking to one another—essentially, getting your tech under control. Some of that may happen through mobile apps, but I also believe we’ll see a decreasing influence of individual apps and an increasing impact from consumer-focused services that extend beyond individual devices and specific operating systems.

At the same time, we will continue to see the evolution of the aforementioned categories, such as wearables and smart homes, along with a few yet to be invented. But we’re likely to see much more specialization, with a wide range of new tech products that appeal to increasingly targeted (and therefore, smaller) markets.

I don’t view this as a bad development, but it’s certainly a different one than we’ve experienced to date. It’s also one that’s likely to create a different lens through which we’ll start viewing ongoing product and technical advances.

The Key to IoT Security

The potential opportunities within the Internet of Things (IoT) continue to be at the forefront of many people’s minds. But lurking in the back corners of those same minds are concerns about the potential security nightmares of a fully connected world.

Even barring the crazy Skynet scenarios from The Terminator, there are plenty of good reasons to be concerned about the hyper-connectedness of IoT, as I’ve written about in the past. In fact, the possibility of security breaches is one of the key reasons I believe it will be a very, very long time before we see widespread use of fully autonomous automobiles on our roads.

We’ll certainly see lots of great developments in smarter cars that have collision avoidance features and other automated safety improvements, but that’s still a big difference from being fully autonomous. In other areas, we’ll likely see similar types of adjustments that reflect concerns around the potential for insecure connections.

To be sure, the move toward greater connectivity across multiple devices continues to gain momentum, and it’s arguably an unstoppable force at this point. Nevertheless, conscientious efforts to modestly slow, or perhaps refocus or reshape, some of these developments around a security-based paradigm are going to be critically important for the long-term success of IoT.

One way of doing that is by looking at some of the essential ways to drive a more secure IoT environment. I believe one of the key solutions is going to be leveraging hardware-based security models—think embedded tokens, device IDs, or secure elements that can uniquely identify a given device on a network.

By establishing a root of trust on a device, a secure embedded element can help the device and any embedded operating system on it assure that they “are” who they think they are, and also ensure that no changes have been made to any firmware or boot code on the device. Though admittedly very technical, this is a key element in maintaining the security of a single device.[pullquote]By establishing a root of trust on a device, a secure embedded element can help the device and any embedded operating system on it assure that they “are” who they think they are.[/pullquote]
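To make the firmware check slightly more concrete: at its core, it boils down to comparing a freshly measured hash of the boot code against a trusted reference value. The Python sketch below is a loose illustration only—real roots of trust do this in dedicated hardware, typically with cryptographically signed reference values, and all the names and values here are hypothetical:

```python
import hashlib

# Hypothetical trusted reference value, provisioned at manufacture.
# A real secure element stores this (or a signature over it) in
# tamper-resistant hardware, not in ordinary software.
TRUSTED_FIRMWARE_HASH = hashlib.sha256(b"firmware-image-v1").hexdigest()

def verify_firmware(firmware_image: bytes) -> bool:
    # Measure the firmware and compare it to the trusted reference;
    # a real root of trust would refuse to boot on a mismatch.
    measured = hashlib.sha256(firmware_image).hexdigest()
    return measured == TRUSTED_FIRMWARE_HASH

print(verify_firmware(b"firmware-image-v1"))  # unmodified image passes
print(verify_firmware(b"firmware-image-v2"))  # altered image is rejected
```

The important property is that even a one-byte change to the firmware produces a completely different hash, so tampering is detected before the code ever runs.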

Even more importantly, however, a hardware-based security element can also be used to identify and authenticate a device on a network. At a simplistic level, this is actually how SIM cards work with carrier networks—they identify your phone to the network, assuring that the phone can function and that your phone’s number/identity is what it claims to be.
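At that same simplistic level, the network-side authentication can be pictured as a challenge-response exchange keyed off a secret held in the hardware element. Here’s a rough sketch in Python using an HMAC; everything in it is illustrative (a real SIM or secure element never exposes its key, and the actual carrier protocols are considerably more involved):

```python
import hashlib
import hmac
import os

# Illustrative shared secret: provisioned into the device's secure
# element at manufacture and also known to the network operator.
DEVICE_KEY = os.urandom(32)

def device_respond(challenge: bytes) -> bytes:
    # In a real design this computation happens inside the secure
    # element, so the key itself never leaves the hardware.
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def network_verify(challenge: bytes, response: bytes) -> bool:
    # The network recomputes the expected response and compares in
    # constant time to avoid leaking timing information.
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The network issues a fresh random challenge; only a device holding
# the right key can produce the matching response.
challenge = os.urandom(16)
assert network_verify(challenge, device_respond(challenge))
assert not network_verify(challenge, b"\x00" * 32)  # forgery fails
```

Because the challenge is random and fresh each time, an eavesdropper who records one exchange can’t replay it later—which is exactly the property that makes a hardware-held identity more robust than a simple stored password.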

Of course, the concept of an embedded hardware element and the reality of its implementation can be two different things. Long-time industry observers may recall the brouhaha that Intel created many years back when it tried to put CPU IDs into its processors.

Times have changed, however, and the security breaches that bombard us in the news every day have likely changed the minds of individuals who may have had concerns about these technologies in the past. Plus, the highly networked nature of all our devices makes the issue more pressing now than it ever has been.

There are now a significantly larger number of companies (and devices) involved in trying to solve these issues. Everyone from SIM card makers like Gemalto to CPU vendors like Intel to IP licensing companies like ARM, Imagination Technologies, Synopsys and others are working to create different types of device ID “card” equivalents that can be used to piece together a more secure environment for IoT.

Just as one type of key won’t work on all types of locks, there’s still a lot of hard work ahead to ensure that the different types of secure IDs and different security protocols and authentication methods can talk to one another. But software alone can’t solve the challenges of IoT security—it’s still going to take some hardware to make digital security keys really work.

Podcast: Apple Event Analysis

This week Tim Bajarin, Ben Bajarin and Bob O’Donnell dive into an in-depth analysis of Apple’s recent product event, including discussions around the iPhone 6S and 6S Plus, the iPad Pro, and the new Apple TV.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Home Gateways: Extinction or Evolution?

In the early days of “smart homes” or “connected homes” or the “smart living room” or whatever phrase you choose to describe the idea of a living space with more than just a simple internet connection for your PC, there was a lot of talk about home gateways.

The theory was this magical box—the home gateway—was going to be not just the central point of connection for lots of different devices, but also the key to accessing services and many other revenue-generating products.

Instead, we ended up with a bunch of standalone WiFi routers. Now, there’s nothing wrong with WiFi routers and—in reality—you can actually do a lot of the things that were promised for home gateways with a simple router. Connect all your devices? Check. Access TV services, telephony services, etc? Check. It’s just that, well, it really didn’t live up to the hype of the home gateway concept.

Since then, we’ve seen WiFi routers integrated into cable modems and other boxes that service providers such as telcos, cable companies, and satellite TV providers include as part of their packages. The basic idea behind these combo boxes was/is to reduce the number of nondescript blinking boxes our homes and apartments were/are becoming overrun with. Of course if, like me, you find you want to piece together better internet service from one provider with better TV service from another, you end up with multiple boxes anyway (and the need to actually turn the WiFi off on one of them to avoid network congestion and other hassles—but I digress).

To be fair, some of these combo boxes actually come close to the original promise of home gateways—including access to premium TV content and other services—but the world of over-the-top (OTT) delivery of video services from companies like Hulu, Netflix, etc., has changed the landscape and called into question the original need for a home gateway.

At the same time, we’ve seen some big improvements in the quality of WiFi routers, adding support for new technologies like 802.11ac and 802.11ad that offer higher sustained throughput, multiple antennas with MIMO (multiple-input, multiple-output), the ability to handle more devices, and so on.

So, the question now becomes, are we seeing the end of home gateways?

While it might be easy to answer yes, I actually think the answer is no. What I think we are seeing is the evolution of home gateways into smarter, more broadly connected devices.

Traditional gateways and routers have frankly been nothing more than connection points. That’s why they never engendered a great deal of enthusiasm among consumers. You find out the name of the network and the password, you enter it into all your devices, and you’re good to go. Whether it was PCs, tablets, smartphones or even some connected appliances, all you seemed to need was a WiFi network for simple, utilitarian connection tasks.

But as great as WiFi is, it’s not the best connection technology for everything, particularly low-power applications. If I want to have access to a smart light bulb from Cree, for example, a technology like Zigbee is a better (and cheaper) fit. Other smart home devices use Z-Wave and/or some variation of Bluetooth.[pullquote]While the original vision of what home gateways were supposed to be never really came to pass, we are starting to see an interesting evolution of the concept that could start to serve as the true central connecting point for all the connected devices in a home.[/pullquote]

As a result, traditional home routers/gateways are starting to incorporate a wider range of radios and, along the way, more intelligence. From home automation-inspired boxes like the Wink hub, Insteon hub or SmartThings hub, to super-charged versions of home routers, like the Qualcomm-powered Google OnHub, we’re seeing a real evolution (and widening) of the types of equipment that can now connect to these devices (which everyone seems to be calling hubs). Plus, with support for technologies like multi-user MIMO (MU-MIMO), which Qualcomm’s MU EFX chip enables and which allows more efficient (i.e., faster) speeds with multiple WiFi devices, these new gateways are getting smarter about working with our existing devices.
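The shift from a single-radio router to a multi-protocol hub can be pictured as a simple dispatcher that routes each command over whichever radio a device uses. This is purely an illustrative sketch; the class, handlers, and addresses below are invented for the example, not any vendor's actual API:

```python
# Illustrative model of a multi-protocol smart-home hub.
# Zigbee and Z-Wave are real standards; everything else here is hypothetical.

class Hub:
    """Routes commands to devices over whichever radio each device uses."""

    def __init__(self):
        self.radios = {}    # protocol name -> send function
        self.devices = {}   # device id -> (protocol, address)

    def add_radio(self, protocol, send_fn):
        self.radios[protocol] = send_fn

    def register(self, device_id, protocol, address):
        if protocol not in self.radios:
            raise ValueError(f"hub has no {protocol} radio")
        self.devices[device_id] = (protocol, address)

    def send(self, device_id, command):
        protocol, address = self.devices[device_id]
        return self.radios[protocol](address, command)

# A Zigbee bulb and a Z-Wave lock, each reached over a different radio:
log = []
hub = Hub()
hub.add_radio("zigbee", lambda addr, cmd: log.append(("zigbee", addr, cmd)) or "ok")
hub.add_radio("zwave", lambda addr, cmd: log.append(("zwave", addr, cmd)) or "ok")
hub.register("bulb-1", "zigbee", "0x42")
hub.register("lock-1", "zwave", "node-7")
hub.send("bulb-1", "on")
hub.send("lock-1", "lock")
```

The design point is that each device registers once with a protocol and address, and everything above that layer stays protocol-agnostic, which is exactly what lets these hubs keep adding radios.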

In addition to offering more connectivity options, these gateways are becoming sophisticated enough that they need dedicated apps for proper configuration. But unlike the often-confusing IP address setup of traditional routers, these new home gateways offer simple smartphone-based apps that make the process of using them much easier.

We’re also just beginning to see the rise of new types of devices that could soon incorporate more of these gateway-type functions. For example, Apple’s widely anticipated new Apple TV box is expected to incorporate a range of connectivity options and serve as the company’s hub for its HomeKit home automation developments. In addition, future iterations of living room challengers, such as nVidia’s Shield Android TV, are also likely to incorporate more home gateway style capabilities.

The bottom line is that, while the original vision of what home gateways were supposed to be never really came to pass, we are starting to see an interesting evolution of the concept that could start to serve as the true central connecting point for all the connected devices in a home. Ultimately, that’s what a gateway should be all about.

Podcast: Apple Event Preview, Intel Skylake, Android Wear

This week Tim Bajarin, Jan Dawson and Bob O’Donnell preview the upcoming Apple event, discuss the launch of Intel’s Skylake platform and debate the opportunities and challenges for Android Wear in conjunction with iOS.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Real Software Revolution? It’s in the Data Center

Sometimes, behind-the-scenes work is more important than the up-front stars. Just ask anyone who’s worked on a special-effects-laden movie or other video project.

In the case of today’s tech business, the “stars” are mobile devices like smartphones and all the capabilities available with them. The real work, however, is happening behind the scenes in massive data centers powering all the services and applications that bring our mobile and other computing devices to life.

Much of the tech world and press focus almost solely on the “stars.” Of course, there are some good reasons for this bias. It’s hard not to notice how quickly many people’s eyes glaze over as soon as a phrase like “data center” is uttered in polite company. For many, it’s just plain boring and, even for interested parties, it can be an extraordinarily complex topic.

Nevertheless, there are some key topics and advancements that are not only worthy of but, frankly, in need of some discussion. One of the biggest data center topics is virtualization. Essentially, virtualization means the ability to run software at a layer that sits above direct contact with the hardware—a process sometimes called hardware abstraction.

Practically speaking, virtualization allows computing devices to do multiple independent things—not just multitasking, but simultaneously running multiple operating systems or functioning as the equivalent of several independent devices. On PCs, not many people have the need to do this, so it’s not a huge market. In servers, however, it’s absolutely essential and has completely revolutionized the architecture of today’s data centers.
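To make the hardware abstraction idea concrete, here is a toy model, with invented names, of one physical host carving out isolated guests that each run their own operating system. Real hypervisors (ESXi, Hyper-V, KVM) are vastly richer; this only illustrates the concept:

```python
# Toy model of server virtualization: one physical host runs several
# isolated guests, each believing it owns its slice of the hardware.

class Host:
    def __init__(self, cores, ram_gb):
        self.cores, self.ram_gb = cores, ram_gb
        self.guests = []

    def free(self):
        """Capacity not yet allocated to any guest."""
        used_cores = sum(g["cores"] for g in self.guests)
        used_ram = sum(g["ram_gb"] for g in self.guests)
        return self.cores - used_cores, self.ram_gb - used_ram

    def launch(self, name, cores, ram_gb, os):
        free_cores, free_ram = self.free()
        if cores > free_cores or ram_gb > free_ram:
            raise RuntimeError("insufficient capacity on this host")
        self.guests.append({"name": name, "cores": cores,
                            "ram_gb": ram_gb, "os": os})

host = Host(cores=16, ram_gb=128)
host.launch("web-1", cores=4, ram_gb=16, os="Linux")
host.launch("db-1", cores=8, ram_gb=64, os="Windows Server")
# Two different operating systems now share one physical box.
```

The revolutionary part for data centers is the last two lines: workloads that once each demanded a dedicated server can be packed onto shared hardware, with the abstraction layer keeping them independent.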

VMware, whose big VMworld trade show is happening this week, popularized this development about 14 years ago with virtualization on the server itself. Since then, there have been dramatic improvements to the technology from VMware, Citrix, Microsoft, and others. Even more importantly, virtualization has expanded to other devices as well.

Desktop virtualization—sometimes called VDI (Virtual Desktop Infrastructure)—enables a single server to actually provide multiple independent desktops (complete with operating systems and applications) to hundreds of connected devices. In a classic case of what’s old is new again, this mainframe-like computing model has now become a mainstream part of how computing gets done in business.

Unlike the mainframe model, however, today’s iterations can offer workstation-quality graphics, as a result of technologies like nVidia’s new Grid 2.0 architecture. Not only does Grid 2.0 enable the use and virtualization of GPUs in today’s servers, it allows multiple GPUs to work simultaneously on a single task, delivering performance that can exceed that of a dedicated workstation.

In addition, today’s version of virtualized desktops works with much more than desktop PCs or traditional thin clients. In fact, many of today’s biggest mobile applications—such as mapping, personal assistants, and more—are leveraging the same virtualization-driven, cloud-based computing models and turning all of our smartphones into thin clients. That’s why these data center developments are so critical for today’s mobile devices.[pullquote]Today’s biggest mobile applications—such as mapping, personal assistants, and more—are leveraging virtualization-driven, cloud-based computing models and turning all of our smartphones into thin clients.[/pullquote]

Another key data center technology development is called hyperconvergence. In a sense, you can think of hyperconvergence as taking virtualization to the next extreme, because it involves organizing all of the separate components found in a data center—servers, large Storage Area Networks (SANs), networking routers, etc.—and turning them into a single logical unit that comes under software control.
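One way to picture hyperconvergence is as a function that folds the resources of many independent nodes into a single logical pool that the management software schedules against. The sketch below is illustrative only, with invented field names; real products expose far richer abstractions:

```python
# Sketch of the hyperconvergence idea: separate nodes (each with its own
# compute, storage, and networking) presented as one logical unit under
# software control.

def pool(nodes):
    """Aggregate per-node resources into a single logical unit."""
    return {
        "cores": sum(n["cores"] for n in nodes),
        "storage_tb": sum(n["storage_tb"] for n in nodes),
        "nics": sum(n["nics"] for n in nodes),
    }

cluster = [
    {"cores": 32, "storage_tb": 20, "nics": 4},
    {"cores": 32, "storage_tb": 20, "nics": 4},
    {"cores": 64, "storage_tb": 40, "nics": 8},
]
logical_unit = pool(cluster)
# The management layer now schedules against the pooled totals,
# not against any individual box.
```

The useful mental model: an administrator (or an automated scheduler) asks the pool for capacity, and the software decides which physical node actually serves the request.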

Like virtualization, hyperconvergence isn’t a brand new technology—although it’s only been around for a few years—but there are some key new developments that are becoming critical for today’s consumers and business users to understand. And, once again, there’s a cloud computing-based connection. In fact, the whole concept of hyperconvergence was largely popularized by megasites like Google, Amazon and Facebook, who quickly realized traditional data center architectures didn’t meet their rapidly expanding needs.

As a result, these companies started to create their own commoditized hardware components and build powerful software to control all of it in a unified way. Now, companies of all types and sizes are looking to create these kinds of powerful yet flexible data center architectures, which will help them power the next generation of services and applications to inform, entertain, educate, and transact with us.

To enable those capabilities, traditional server vendors like Dell, HP, and Lenovo are partnering with smaller software-focused companies like Pivot3 and others to deliver hyperconverged data center appliances, which integrate all elements of a data center into a single box. The idea is to offer much simpler solutions that are significantly easier (and less expensive) to manage.

Data center technologies may have bewildering names, but they are playing increasingly essential roles for all the devices and services—both consumer and commercial—that keep us engaged every day. They certainly aren’t as sexy as mobile apps, but they’re at the heart of a revolution that’s bound to be much longer lasting.

Podcast: Apple-China, Facebook M, Obi Smartphone

This week Tim Bajarin, Jan Dawson, Ben Bajarin and Bob O’Donnell debate the impact of the recent China stock market moves on Apple, discuss Facebook’s new M personal assistant, and talk about the new Obi smartphone for developing countries.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Is The Tech Market Hitting Middle Age?

First it was PCs. Now it’s tablets. And very soon, it will be smartphones.

Each of these markets has hit, or will soon hit, its peak in both revenues and unit shipments. Each has moved (or will soon move) from the soaring grandeur of youth and young adulthood to the dowdiness of mature middle age.

As these inevitable market developments occur, important shifts are starting to happen. Not only will device manufacturers, and their key component suppliers, have to evolve their businesses—as many have started to do—but very soon, so will companies offering software and services used by those devices.

While some argue that these software and services companies are taking over the world, it’s naïve to think that their growth can be maintained completely independent of the devices. At a fundamental level, the two are linked, and when the device numbers peak, so too, do the potential users of any software or service. Admittedly, there’s more of a growth opportunity over the short term for these software and service companies, but that won’t last forever either.

As a result, I believe it’s time to look at where the tech market is headed from a different perspective, and to realize that these markets do not have evergreen growth opportunities. Instead, it’s clear that the main tech device markets are quickly settling into more of an automobile-like industry model. Some years you go up, and some years you go down—not every forecast chart is going to offer a hockey stick headed in a north-easterly direction. Additionally, swings in these markets end up being more tied to the economy and other external factors than any technological breakthrough—a classic sign of industry maturation.

One added challenge for the tech markets moving forward: pricing. While automobile average selling prices (ASPs) generally increase—and those increases are widely accepted and expected by consumers—the tech market has created the exact opposite set of expectations. Tech products are always supposed to get cheaper every year. As a result, as the tech market continues to mature, the rapid decline in average selling prices—particularly for smartphones—is going to make it a real challenge to maintain any kind of revenue growth in tech hardware, even for some of the biggest players.

Software and services companies aren’t immune to revenue challenges either—they just come in a different form. We’ve already seen the almost complete disappearance of the packaged software market, and we continue to see experiments and evolutions in software business models with questionable degrees of success.

For a while, it seemed the app store model was going to be the saving grace for software. Instead of paying hundreds of dollars for general-purpose apps from a tiny number of major providers, you could spend a dollar or two and choose from literally millions of options. But it turns out too much choice can actually be a bad thing, and the app store model is showing signs of imploding. A tiny, tiny percentage of companies actually make money in app stores, and based on very low usage and high abandonment rates, it seems many users aren’t really satisfied either.

Ad-supported models for software aren’t proving to be a cure-all either. Interactions with ads, particularly on mobile devices, are extremely modest. This, in turn, has led to low ad prices and serious challenges to any kind of revenue growth, especially on a global basis, for both application developers and content publishers. Sure, there are exceptions, but they are just that—exceptions.[pullquote]I believe we are starting to enter a very new phase in the tech business—a phase that’s going to be driven by very different developments than the ones that have led us to where we are today.[/pullquote]

The red-hot—for now, at least—services market seems to offer the brightest opportunity for future growth, because many of its offerings simply leverage the widespread usage of tech devices. As long as people keep using their devices—regardless of the form those devices take—the opportunity should remain. The problem here is that many of these services aren’t really doing anything new—they’re just offering different ways of doing things we already do—find a ride, a place to stay, etc. As a result, they appear to be much more susceptible to business model disruption, whether it comes from local market adaptation trials, an abundance of competitors, or even legislative constraints. We’re still in the early days of tech services, but I can’t help thinking that the scenarios (and players) are going to be very different even just a few years from now.

Given all these challenges, one could easily presume that I’m seriously concerned about the future of the tech business. In fact, nothing could be further from the truth. I’m convinced that the tech industry will continue to serve as a critical growth engine for the entire world economy.

Having said that, I do believe we are starting to enter a very new phase in the tech business—a phase that’s going to be driven by very different developments than the ones that have led us to where we are today. As with any major industry transformation, this means that some of the biggest industry players may not survive in their current form (or at all), while others are likely to go through some dramatic transformations. It also means there will be tremendous opportunities for today’s smaller or even yet-to-be-started companies.

The tech industry’s transition to a more mature market does bring with it some potentially boring baggage, such as stagnant unit growth rates. However, instead of viewing this as a mid-life crisis, smart, innovative companies will figure out ways to treat these developments as a mid-life celebration that can open up new opportunities.

Podcast: Intel IDF, Amazon Workplace, Nomophobia

This week Tim Bajarin, Jan Dawson and Bob O’Donnell discuss Intel’s IDF Conference, the controversial NY Times story on Amazon, and nomophobia, the fear of being disconnected from our devices.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Building Vertical Platforms for IoT

To achieve the full IoT vision many companies like to espouse, you would seemingly need to connect everything to everything. Realistically, of course, that’s not only impossible, it’s of questionable practical value. While we may occasionally hear about surprising new combinations of devices and connections, most of the odd combinations won’t do anything useful.

Instead, the focus in the near term is more likely to be about connecting related devices to each other: lights to lights, heating and air conditioning systems to thermostats, wearables to smartphones, and so on. Applying this concept practically to the business world—where many of the most useful and most profitable applications are likely to be found—we’re starting to see a lot of interesting vertical applications for IoT.

Smart buildings, for example, are starting to combine all the core “infrastructure” elements from lighting, HVAC (heating, ventilation, air-conditioning), security, plumbing, electrical, and so on into integrated systems that can be connected, monitored, and controlled from a single source. It’s analogous to the way independent elements of a modern data center—servers, storage, and networking—are all starting to be combined into hyper-converged infrastructure appliances which are under the control of a single management interface.

However, even making these simple types of connections isn’t always as easy as it sounds. In the case of smart buildings, many of these “infrastructure systems” have always been independent of each other and don’t really have a common point of connection. That’s part of the reason we’re starting to see companies like Dell and others start to offer smart building “gateway” devices that potentially connect to some (though not all) of these systems.

But having a gateway isn’t enough. There are also challenges in determining how the systems might connect from a software perspective. While there are some old serial port physical connectors that can be leveraged, these are truly “legacy” systems that don’t always adhere to modern standards. Some buildings leverage BACnet, which is a specific smart building industry protocol used in building automation systems (BAS), but there can be many variations.
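The adapter problem this creates can be illustrated with a small sketch: each legacy subsystem reports data in its own shape, and the gateway's job is to normalize everything into one common schema. The message formats and function names below are entirely invented for the example; real BACnet or serial integrations look quite different:

```python
# Illustration of why a smart-building gateway needs per-system adapters.
# Every raw message format here is hypothetical.

def from_hvac(raw):
    # e.g. raw = "ZONE3;TEMP=71.5F"  (invented serial-style format)
    zone, temp = raw.split(";")
    return {"system": "hvac",
            "point": zone.lower(),
            "value": float(temp.split("=")[1].rstrip("F"))}

def from_lighting(raw):
    # e.g. raw = {"fixture": 12, "lumens": 800}  (invented JSON-style format)
    return {"system": "lighting",
            "point": f"fixture-{raw['fixture']}",
            "value": raw["lumens"]}

# The gateway maps each subsystem to its adapter...
ADAPTERS = {"hvac": from_hvac, "lighting": from_lighting}

def normalize(source, raw):
    """...so that everything downstream sees one uniform schema."""
    return ADAPTERS[source](raw)

readings = [
    normalize("hvac", "ZONE3;TEMP=71.5F"),
    normalize("lighting", {"fixture": 12, "lumens": 800}),
]
```

Multiply this by electrical, security, and plumbing systems, each with its own vintage and its own protocol variations, and the integration burden the article describes becomes clear.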

As a result, it can be challenging to piece together comprehensive solutions—and that’s just intra-building. If we want to move on to smart cities, we would need buildings to talk to each other and, potentially, to city-wide grids of intelligence and capability. While efforts are certainly being made there, they’re likely to be much further off from implementation than some of the IoT hype suggests.[pullquote]There’s a real lack of platforms available to drive the kinds of connections and applications that are possible in most vertical industries.[/pullquote]

Once again, there’s a real lack of platforms available to drive the kinds of connections and applications that are possible in most of these vertical industries. Throw in smart cars, vehicle-to-vehicle (V2V), and vehicle-to-infrastructure (V2I) connections, and you have an absurdly complex array of communications that needs to be sorted out before even the simplest of IoT visions comes to pass.

This is why discussions around IoT standards are very important on one level but, on another level, are not. The issue is there’s a great deal of work that needs to be done on solving the vertical industry challenges before we can start to worry about making vertical industry-to-vertical industry connections. Yet, that’s where a lot of the discussions around protocol standards battles, such as AllJoyn vs. OIC, are being focused.

In truth, some of the inter-industry work can be done simultaneously with the vertical intra-industry efforts, but how are we going to get, for example, smart shipping containers to co-exist with smart buildings, when each of these solutions has a great deal of work to do within its own industry?

Additionally, these aren’t exactly the sexiest, coolest concepts for IoT. As a result, they may not get as much focused attention as some of the snazzier IoT concepts, such as consumer-focused smart homes.

Now, it could be that some of the current standards end up becoming the driving factor in vertical industries. The problem is, for example, we could see AllJoyn take hold in smart cars, while OIC wins out in smart buildings. Then we’ll end up facing the need for protocols to talk to each other as these industries start to try and work together.

Eventually, we will probably see the kind of platform-to-platform connections between vertical industries that the grand IoT visions promise. In the meantime, however, we’d be much better off getting the closely related vertical industry solutions working well before worrying much about where else they can connect.

The IoT Monetization Problem

While everyone’s talking about the exciting potential of the IoT market, one fundamental question seems to be getting overlooked. Who’s actually going to make money with IoT?

I’d argue the question is actually very difficult to answer on several levels. First, at a core philosophical level, the Internet of Things is supposed to be about making connections between all kinds of different devices. Inherent in that viewpoint is the assumption someone is going to be willing to actually pay for the connectivity between devices—because it isn’t all free. Yes, there will be plenty of essentially “free” WiFi and Bluetooth connections between nearby devices but there are also going to be a lot of cellular and other wide-area connections that are not free.

Taking the argument a step further, lots of different business models are being developed to justify why one side or the other in a physical connection should be burdened with the costs. Essentially, the thought is one side will provide “value” to the other and that will validate the charges one side places on the other.

In certain instances, this will definitely be true but, in many cases, it will not. There will be many situations in which value is a very grey area and either party could reasonably argue the value of the service or data they are providing. For example, when it comes to the tracking of personal information via wearables, the companies collecting the data could (and have) argued the collection and analysis of the data is worth something to the consumer. At the same time, there’s a growing movement which argues if personal data is being collected, the person providing it should be compensated.

The monetization challenges in IoT extend well beyond this simple example, however. Another key application that’s been discussed is the widespread use of very low-cost, sensor-equipped endpoint devices that will generate enormous amounts of data which, in turn, will be analyzed to generate meaningful insights. Creating an enormous number of low-cost endpoints doesn’t sound like a very attractive business for either device or component makers, however. As a result, it’s going to be very difficult for those companies to make long-term, ongoing investments in this area when there’s little chance of meaningful profitability. On the analytics side, there are also a number of challenges. As I’ve discussed in the past, analytics in IoT isn’t always very easy, nor can it necessarily generate an ongoing revenue stream. In many cases, it’s one and done.

Plus, there’s the question of which companies really have all the capabilities to put together a complete solution in-house and then turn it into a profit-generating business. Yes, there are many companies that can offer a piece of the IoT puzzle, but only those that can offer a complete end-to-end IoT solution will be able to profitably benefit from it. The truth is, at best, there are only a handful of those companies.[pullquote]Yes, there are many companies that can offer a piece of the IoT puzzle, but only those companies that can offer a complete end-to-end IoT solution will be able to profitably benefit from it.[/pullquote]

And if all that wasn’t tough enough, it’s still not clear that a real, demonstrated benefit from a self-created, platform-driven solution will guarantee success. This became clear to me recently when I met with a commercial smart lighting company called Enlighted Inc. The company has a smart lighting solution for commercial buildings that, it can fairly easily prove, generates impressive energy savings (often measured in hundreds of thousands of dollars per year) when installed. In addition, the system can even be used for things like determining traffic flow inside buildings and generating more efficient paths for workers moving around large warehouses all day. On top of that, the company boasts an impressive array of clients, including Amazon, HP, Google, and more.

Yet, in spite of these benefits, the company discovered they couldn’t just sell the solution on its own merits because of the upfront costs and challenges of integrating it into existing environments. Instead, they had to develop a clever, though very complicated, business model called Global Energy Optimization (GEO) that involves purchasing/financing energy credits from utilities via financial institutions and then essentially offering their products to their customers at no cost, with the promise of receiving a portion of the energy savings their customers generate. Needless to say, this doesn’t lead to short sales cycles or simple purchase orders (POs). To their credit, Enlighted appears to be making a decent go of it, but it’s not clear how many other companies can or would be willing to follow in their footsteps.

Don’t get me wrong. IoT has the potential for enabling some amazing new products and services. But before we all get caught up in the hype about its potential market size, it’s important to take a realistic look at the financial potential.

Podcast: Windows 10, Samsung Earnings, Apple Watch and Android

Welcome to this week’s Tech.pinions Podcast.

This week Tim Bajarin, Jan Dawson and Bob O’Donnell discuss the release of Microsoft’s new Windows 10 operating system, analyze the recent earnings from Samsung, and debate the potential of making Apple Watch work with Android devices.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Windows 10 Hardware Argument

The release of Windows 10 is bringing with it a range of perspectives on the eagerly awaited operating system and what it means for the future of computing. One of the biggest questions has been around its impact—or lack thereof—on PC sales. As anyone who’s watched tech industry news for the last year or so knows, the PC market has hit tough times, with last quarter’s shipments falling around 10% year-over-year according to market research houses like IDC and Gartner.

As a result, the PC industry is clamoring for something that will help reinvigorate it and drive new sales. In the past, a new Windows OS release was generally cause for celebration in the PC hardware and component business because it typically drove solid boosts in shipments—not always right away, but definitely within a year or so of its release.

This time around, however, things could be different. Microsoft has made it clear Windows 10 will be completely free for one year after its release to anyone owning a PC running a legitimate copy of Windows 7 or Windows 8. Because of this, some industry watchers are presuming that, instead of buying new PCs as they’ve typically done with major OS transitions in the past, many people will simply upgrade their existing PCs.

Microsoft has actually made this pretty simple to do. The hardware requirements for Windows 10 are extremely low by today’s standards. If you’ve purchased a PC over the last 6-7 years, it’s probably capable of running Windows 10. Plus, based on my own experience on several different machines as well as reading the accounts of many others doing upgrades, the company has done a good job of making the upgrade process smooth and relatively carefree. Of course, we won’t really know until the final bits have propagated out to the hundreds of millions who are expected to make the upgrade—a process likely to take several weeks—but early indications seem pretty solid.

Despite this, I’m still hopeful the PC industry will see some decent upside from Windows 10, particularly in the fourth quarter of this year and into 2016. The primary reason for my optimism is Microsoft has actually integrated quite a few new capabilities into Windows 10 that will benefit from new hardware. Some are more well-known and more obvious than others, but here are some of the key new functions I think can (and should) drive new Windows 10 PC hardware purchases:[pullquote]Microsoft has actually integrated quite a few new capabilities into Windows 10 that will benefit from new hardware.[/pullquote]

  • Windows Hello—The new biometric login feature for Windows 10 points the way to a password-less future, at long last. To take advantage of it, you need to have either a new fingerprint reader or an integrated 3D camera, like Intel’s RealSense, built into your PC. Down the road, Microsoft is expected to support other types of biometric authentication methods, such as iris scan. In addition, the company is also expected to leverage standards efforts with the FIDO Alliance to extend biometric authentication onto other devices and services. Hopefully, it won’t be long before you can digitally authenticate to your Windows 10 PC from a wearable and then use that authentication to transparently log you into your online banking site, e-commerce site, and more.
  • Windows Continuum—The Continuum features will make 2-in-1 devices like Microsoft’s Surface, Dell’s Inspiron 7000 Series, HP’s x360, and Lenovo’s Yoga even more compelling. The OS can automatically adjust the user interface and details like icon sizes, allowing you to easily switch from PC mode to tablet mode. Eventually, Microsoft will also release Continuum-enabled Windows smartphones that will allow you to directly connect your phone to a monitor and keyboard.
  • Array Microphones for Cortana—With Windows 10’s new personal assistant feature, you will likely talk to your computer a lot more than you ever have and a high-quality array microphone—which essentially integrates multiple mics working in tandem across the front of your PC—can make a big difference in the accuracy of speech recognition.
  • DirectX12—The latest iteration of Microsoft’s key gaming API comes bundled with Windows 10 and enables an impressive range of new capabilities for PCs with improved graphics—whether it be dedicated GPUs from nVidia or AMD, or even the graphics-enhanced, sixth generation APUs (code-named Carrizo) that AMD just released. Games that support DirectX12 can now fully leverage multi-core CPUs, as well as better support multiple GPUs, make better use of GPU memory, and much more.
  • GPU Acceleration—The new GPUs and APUs aren’t just for gaming either. Many different elements of the Windows 10 UI, as well as video playback, web page rendering, JavaScript performance, and much more, now benefit from GPU hardware acceleration. By themselves, none of these elements are game-changing but, taken together, they should provide a much smoother visual experience on new Windows 10 hardware.
  • Display Scaling—Speaking of displays, Microsoft has also made working with multiple displays and/or higher resolution displays much easier. Gone are the days of unreadable icons and text on high-resolution screens.
  • New CPUs—Both Intel and AMD are making important new introductions to their line of CPUs—the upcoming Skylake from Intel and the previously mentioned Carrizo from AMD. As with any new CPU release, the performance will improve but, more importantly, each is expected to offer important improvements in battery life and in the quality of its integrated graphics. Given the growing role of graphics acceleration across Windows 10, these developments are important even for non-gamers.
  • Wireless Charging—An additional benefit that Intel is expected to bring to the table in the early fall is a new chipset platform for its Skylake CPUs that will offer wireless charging using the new Rezence standard on certain higher-end notebook PCs.

Of course, another key benefit of getting a new PC along with a new PC OS is the “clean slate, fresh start”. Most people tend to accumulate lots of “stuff” on their PCs over time—extra applications, files, desktop icons, etc.—and the ability to start over is often one of the nicest benefits of getting a new PC.

Not everyone who upgrades to Windows 10 will need a new PC, obviously, but for those who may be interested and choose to do the research, there are some pretty compelling reasons for buying new hardware. The percentage of those who choose to do so will be a critical metric to watch closely.

Podcast: Tech Earnings from Apple, Qualcomm, Microsoft and Amazon

Welcome to this week’s Tech.pinions Podcast.

This week Jan Dawson, Ben Bajarin, and Bob O’Donnell discuss the quarterly earnings from tech giants Apple, Qualcomm, Microsoft and Amazon and what they mean both for the companies themselves, as well as the industries in which they participate.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Complexity Challenge Drives Shadow IT

We’re on the edge of a precipice in the enterprise tech industry, and most people don’t even realize it.

The problem? The technology developments that have driven some of the greatest advancements in business—things like software as a service (SaaS), virtualization, and analytics—have come with costs. Not just monetary costs, but also what I call complexity costs.

Many of the tools used to create these capabilities in both local and offsite data centers (or private and public clouds, to use the popular vernacular) are now so specialized and so complex, that it’s getting harder and harder to find people with the skill sets necessary to run and/or manage them.

It’s not just the individual tools. It’s the fact that most IT operations now consist of numerous complex tools that are tied together in even more complex webs of connection.

Examples abound. Want to create an app for employees’ smartphones so that they can check the status of a client’s order while visiting that client? Well, it’s likely that the initial order is kept in a sales management tool based in the cloud, and that needs to be linked to an inventory tool managed internally, which, in turn, has multiple connections both to a supplier’s database at an external location, as well as an internal shipping tool. Plus, once the results have been found, they have to be translated and delivered in a mobile-friendly format. If you don’t want the performance of that app to suffer, you’ll need to deliver the results from a site outside the corporate firewall, like a co-located data center or cloud exchange with speedy connections to a service provider. Oh, and if you’re delivering it via a virtualized app to maintain security, you’ve got to deal with desktop and/or app virtualization and connection broker software as well. Finally, if you also want to provide insight into how the customer’s orders have arrived over a period of time versus an agreed upon standard, you’ll need to pull in data from a separate analytics engine so that it can fill out the chart in the mobile app’s dashboard UI.
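To make the chain just described concrete, here is a minimal Python sketch of the aggregation layer such an order-status app would need. Every function, system name, and data value below is hypothetical and stands in for a network call to one of the real systems mentioned above (cloud sales tool, internal inventory, supplier database, shipping tool, analytics engine); the point is simply how many independent systems one "simple" mobile screen has to stitch together.

```python
# Illustrative sketch only: each stub stands in for a call to a separate
# real system. All names, fields, and values are hypothetical.

def fetch_order(order_id):
    # Stand-in for the cloud-based sales management tool
    return {"order_id": order_id, "sku": "WIDGET-42", "qty": 10}

def check_inventory(sku):
    # Stand-in for the internally managed inventory tool, which itself
    # has connections to an external supplier database
    return {"sku": sku, "in_stock": 4, "supplier_eta_days": 5}

def shipping_status(order_id):
    # Stand-in for the internal shipping tool
    return {"order_id": order_id, "status": "partially shipped"}

def delivery_history(client_id):
    # Stand-in for the separate analytics engine that tracks delivery
    # performance against the agreed-upon standard
    return {"on_time_rate": 0.92, "agreed_standard": 0.95}

def order_status_view(order_id, client_id):
    """Aggregate four back-end systems into one mobile-friendly payload."""
    order = fetch_order(order_id)
    stock = check_inventory(order["sku"])
    shipping = shipping_status(order_id)
    history = delivery_history(client_id)
    return {
        "order": order["order_id"],
        "shipped": shipping["status"],
        "backordered": max(order["qty"] - stock["in_stock"], 0),
        "on_time_vs_standard": round(
            history["on_time_rate"] - history["agreed_standard"], 2
        ),
    }

view = order_status_view("ORD-1001", "CLIENT-7")
print(view)
```

Even this toy version touches four systems for a single screen; the production version adds authentication, virtualization layers, and delivery from outside the firewall, which is exactly where the complexity costs pile up.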

Throw in the very real possibility of a merger or acquisition (or two), and the need to tie one company’s systems into another company’s often completely different set of systems, and you have all the ingredients of an IT disaster.

In theory, there are tools that are supposed to help solve these problems. However, unless you’re willing to throw out every relevant system that you already own and start from scratch, you will likely have to deal with a complex web of connections. As a result, many companies end up outsourcing these kinds of projects, or at least a portion of them, to dedicated consulting firms or the services arms of tech hardware and software vendors.

Not surprisingly, many new IT projects in this kind of environment move at a very slow pace because of the enormous range of potential issues that have to be accounted for and tested. Because of this slow movement, many line-of-business managers in organizations of all sizes have started taking matters into their own hands, both funding and bootstrapping solutions of their own. This creates the dreaded shadow IT.

Shadow IT is essentially defined as skunkworks projects that provide some of the services or capabilities that IT traditionally offers, but that are done without the permission, or even knowledge in some cases, of the IT department. For example, a shadow IT project may leverage a cloud-based service to put together a simplified version of a mobile application like the one described earlier that delivers only, say, 80% of the functionality, but in significantly less time.[pullquote]Many business leaders are eager to exploit simplified data appliances, particularly in light of the almost ridiculous levels of complexity that now surround them within their own IT organizations.[/pullquote]

How is this happening? Well, ironically, in a world where the previously described complexity has become the norm, a number of large established players as well as nimble startups have created intriguing solutions dedicated to solving some real-world problems. Companies like HP, Dell, Lenovo, VMware and Citrix, as well as Pivot3, Nutanix, NetApp, and more are creating data center appliances and cloud-based services of various types that can be set up by relatively sophisticated end users, without the help of IT. The end result is that non-IT portions of the business are starting to enable their own IT solutions.

Not surprisingly, many in the IT world are horrified at the mere thought of this. Think of the potential security, privacy, regulatory and other issues that could conceivably get created in these kinds of scenarios. Yet, at the same time, as non-IT business leaders have grown more comfortable with some of the basic cloud computing principles that are behind many of these new products, and as vendors have worked hard to make their new tools accessible, there’s an obvious crossing point between these two trends. Many business leaders are eager to exploit this convergence, particularly in light of the almost ridiculous levels of complexity that now surround them within their own IT organizations.

Many established IT vendors, as well as IT departments themselves, painted their way into this complexity corner over the last 10-15 years, and the big question now is: how do they get out of it? The truth is, there is no easy answer, and as long as companies continue to depend on older, legacy systems, these kinds of complexity challenges will continue to exist.

But forward-looking CIOs who are willing to take some risks and re-architect some of their systems can potentially benefit from these new simplified data appliances in a number of ways. Only then can they step back from the edge that’s looming before them.

Podcast: Apple Earnings Preview, Xiaomi, Intel Earnings

Welcome to this week’s Tech.pinions Podcast.

This week Tim Bajarin, Ben Bajarin, and Bob O’Donnell preview Apple’s earnings, discuss Xiaomi’s recent comments on copying and their potential entry to the US market, and analyze Intel’s earnings and what it means for the future of the semiconductor industry.

Click here to subscribe in iTunes.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast