The Next Performance Challenge: The Battle for the Burst

December 2, 2014

In day-to-day use, few people regularly complain about the performance of their tech devices. For the most part, people are content with the experience of using them. Sure, there are some curmudgeonly (or cheap) folks hanging onto older devices that still offer sub-par performance, but they’re becoming the exception instead of the norm.

That doesn’t mean all the performance challenges for tech devices have been solved, however—far from it. In fact, now that the overall bar for performance has been raised to a “good enough” level, component and system designers can finally start to tackle some of the thornier challenges that have been with us for some time.

One of the biggest challenges involves what I’ll call the “bursting” issue. It seems no matter how content we are with a given device’s performance, there almost always comes a moment (or two, or three…) where the performance doesn’t live up to our expectations. Streaming a video, taking multiple photos, playing an online game, and other types of activities can cause a hiccup in an otherwise decent performance experience. These moments may not last long, but they absolutely impact our overall opinion of the device, application or service we’re using.

In virtually all cases, these brief slowdowns involve a burst of activity, or a series of bursts, that strains an otherwise solid-performing system. Interestingly, these bursts can cause challenges in several different device subsystems—CPU, graphics, storage, modem and other connectivity—sometimes individually and sometimes simultaneously.

Regardless, some of the more interesting efforts to increase performance in all of these areas are now directed at battling these burst issues. In the case of CPUs, that often involves more sophisticated chip architectures, with more simultaneous compute threads and deeper pipelines, better branch prediction, larger caches and other enhancements that keep the chip working as effectively as possible. For graphics, some of these same principles apply, along with improvements in geometry engines, programmable shaders and more.
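To see why something like branch prediction matters during a burst, consider this deliberately simplified toy model in Python. It is not a real CPU: the flush penalty and the predictor accuracy figures are illustrative assumptions, chosen only to show the shape of the effect when a burst of erratic, data-dependent branches defeats the predictor.

```python
import random

# Toy pipeline model (not a real CPU): each instruction retires in one
# cycle, but every mispredicted branch flushes the pipeline at a fixed
# cost. The penalty and accuracy numbers below are assumptions.
FLUSH_PENALTY = 15  # cycles lost per misprediction (assumed)

def cycles_for_branches(n_branches, predictor_accuracy):
    """Estimate cycles to execute n_branches branch instructions."""
    cycles = 0
    for _ in range(n_branches):
        cycles += 1  # the branch instruction itself
        if random.random() > predictor_accuracy:
            cycles += FLUSH_PENALTY  # mispredict: flush and refill
    return cycles

random.seed(42)
n = 100_000
steady = cycles_for_branches(n, predictor_accuracy=0.99)  # predictable code
bursty = cycles_for_branches(n, predictor_accuracy=0.60)  # erratic burst

print(f"predictable: {steady:,} cycles")
print(f"bursty:      {bursty:,} cycles ({bursty / steady:.1f}x slower)")
```

Even with these made-up numbers, the same instruction count takes several times as many cycles once prediction accuracy collapses, which is exactly the kind of momentary slowdown users notice.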

For storage, meeting these challenges requires faster types of flash memory, more sophisticated controller chips and better error-correction algorithms. In the case of modems and other radios, new technology standards like LTE Advanced, 802.11ac and 802.11ad make a difference, but implementing specific technologies within those standards, such as carrier aggregation and multi-user MIMO, also has a big influence on driving higher levels of throughput.
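On the storage side, the burst problem is easy to model. Many drives absorb short write bursts into a small pool of fast flash and fall back to a slower sustained rate once that pool fills. The sketch below uses that general idea with entirely hypothetical capacity and speed numbers, not the specs of any real drive.

```python
# Toy model of a write burst hitting a drive with a fast write cache.
# All numbers (cache size, speeds) are illustrative assumptions.
FAST_CACHE_GB  = 4      # fast cache capacity (assumed)
FAST_SPEED_MBS = 500.0  # write speed while the cache has room (assumed)
SLOW_SPEED_MBS = 150.0  # sustained speed once the cache is full (assumed)

def burst_write_time(burst_gb):
    """Seconds to absorb a write burst of `burst_gb` gigabytes."""
    fast_part = min(burst_gb, FAST_CACHE_GB)
    slow_part = burst_gb - fast_part
    return (fast_part * 1024 / FAST_SPEED_MBS
            + slow_part * 1024 / SLOW_SPEED_MBS)

for burst in (1, 4, 16):  # e.g., a run of photos vs. a long video clip
    t = burst_write_time(burst)
    print(f"{burst:>2} GB burst: {t:6.1f} s "
          f"({burst * 1024 / t:.0f} MB/s effective)")
```

Small bursts ride entirely in the fast cache, while a burst that outruns it sees effective throughput drop sharply, which is why faster flash and smarter controllers matter most at exactly these moments.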

Raw performance improvement is also a factor in all cases, because sometimes it takes raising the overall performance bar to be prepared for the sudden spikes that inevitably occur. While there are different paths to higher performance in each of these component areas, general improvements in silicon manufacturing, die shrinks to smaller process nodes and Moore’s Law overall combine to make performance gains possible across all of them.

In the world of audio equipment, the ability to handle extremes in signal strength is called headroom. Well-designed audio equipment, whether it’s used for listening, creating or recording, has plenty of headroom to handle the sudden bursts in volume that often occur in music. Not surprisingly, designing and building in that extra headroom adds cost. There are always debates about how much headroom is actually necessary and how much it’s worth paying for. While there aren’t necessarily any right or wrong answers, it’s generally understood that a decent amount of headroom improves the overall performance of an audio component (or system) and is worth paying extra for.
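Headroom in audio even has a simple formula: it is the ratio, expressed in decibels, between the level where the system clips and the nominal operating level. The quick sketch below works through that standard calculation with hypothetical voltage levels chosen purely for illustration.

```python
import math

def headroom_db(clip_level, nominal_level):
    """Headroom in decibels between nominal level and clipping."""
    return 20 * math.log10(clip_level / nominal_level)

# Hypothetical system: nominal signal at 1.0 V, clipping at 4.0 V.
print(f"headroom:    {headroom_db(4.0, 1.0):.1f} dB")  # ~12.0 dB

# A musical burst at 3x the nominal level needs ~9.5 dB of headroom,
# so this hypothetical system rides it out cleanly.
print(f"burst needs: {headroom_db(3.0, 1.0):.1f} dB")
```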

For the device and component industry, where “good enough” performance is becoming an increasing threat to upgrade purchases for existing devices, the trick will be to explain how performance headroom can be a valuable, worthwhile investment. Part of the problem is that many existing benchmark tests are designed to show off typical tasks, not the bursts in activity that are increasingly the real bottleneck for system performance. As mentioned previously, day-to-day performance on most devices is fine for most users, so showing gains in that area can seem like overkill. If new benchmarks were built around a device’s ability to handle (or fail to handle) these bursts, however, that might provide an entirely new way of looking at today’s performance challenges.
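What would a burst-oriented benchmark look like? One plausible shape, sketched below with a stand-in workload and made-up schedules, is to run the same operation under a steady cadence and under a bursty one, then report tail latency (say, the 99th percentile) rather than just an average. The work_unit function and the two schedules here are hypothetical placeholders, not any shipping benchmark.

```python
import statistics
import time

def work_unit():
    """Stand-in for one user-visible operation (decode a frame, etc.)."""
    sum(i * i for i in range(20_000))

def run(schedule):
    """Time each operation; `schedule` lists the pause (s) before each."""
    latencies = []
    for pause in schedule:
        time.sleep(pause)
        start = time.perf_counter()
        work_unit()
        latencies.append(time.perf_counter() - start)
    return latencies

# Steady: one op every 10 ms. Bursty: 20 back-to-back ops, then idle.
steady = run([0.01] * 200)
bursty = run(([0.0] * 20 + [0.2]) * 10)

for name, lat in (("steady", steady), ("bursty", bursty)):
    p99 = statistics.quantiles(lat, n=100)[98]
    print(f"{name}: median {statistics.median(lat) * 1e3:.2f} ms, "
          f"p99 {p99 * 1e3:.2f} ms")
```

On a desktop this toy may show little gap between the two runs; on a thermally constrained phone, bursts are where throttling and queue buildup show up. The point is the measurement shape: scoring devices on how their worst moments look under bursty load, not just their typical ones.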

Explaining some of these kinds of concepts in a meaningful way to typical consumers may not be an easy task, but it’s a critical one for future growth.

On a separate and unrelated note, this column marks the one year anniversary of the launch of my company, TECHnalysis Research, as well as the appearance of my weekly column on Tech.pinions.com. I’d just like to give a quick note of thanks for all the support, interest and feedback I’ve received over this past year. It’s been great. Thank you!