How Significant is the Synaptics ThinTouch, ForcePad and ClearPad Technology?

As I have cited previously, human computer interface (HCI) changes have defined the winners of the last decade in the phone, tablet, premium PC and game console markets. I believe this will continue into the future. As important as the individual technologies is the way that different kinds of controls come together as a system to deliver multi-modal input methods. This is why I am so excited about Synaptics' three new technologies: ThinTouch, ForcePad and ClearPad. They have the chance to revolutionize the way notebooks, convertibles and tablets will be used in the future. Over the last few months, I got an insider's view of the technologies, talked with the designers and HCI experts, and of course, got my hands on the technologies. I'd like to share some insights I've gained on them.

Multi-Modal is the Future of HCI

The future of all device interaction will not be governed by a single way of interacting, but by multi-modal interactions. Essentially, devices will take inputs in a myriad of ways, whether via keyboard, direct touch, voice, or even machine vision. To avoid confusing the user, these will all need to work as a cohesive "system" and therefore need extensive systems integration and software work. In the next few years, this will be especially true on notebooks, convertibles and tablets. Synaptics has some incredible smartphone technologies with its InCell and TDDI (touch and display driver integration), but I want to focus on HCI for notebooks and tablets. Let's dive into the technologies.

ForcePad Technology

Over the last ten years, Synaptics and Apple have driven the biggest advancements in touchpads. Just look at how the touchpad has morphed from a small, three-button touchpad to the large, button-less touchpad seen in today's premium Ultrabooks and Apple MacBooks.

ForcePad technology removes all moving parts, is pressure sensitive, and at less than 2.8mm, is thinner than a slice of cheese. ForcePad can perform all the functions users perform with a ClickPad, even without the user knowing there is pressure sensitivity, thus reducing any adjustment period or learning curve. Synaptics usability research scientists (with whom I met) have tested this and observed that, on average, a user easily adapts to the hingeless ForcePad and quickly prefers the experience over the majority of hinged PC designs on the market today. The "gesture continuation" capability that ForcePad's pressure sensitivity offers provides a smoother and easier way to perform core functions, from pointing to scrolling and zooming.

Adding pressure sensing enables consistent interactions on any part of the touchpad, and with no moving parts it never wears out, making it much more reliable. There's even an auto-calibration feature with several benefits. For the OEM, it enables a consistent, OEM-branded feel and behavior across models, something PC notebooks lack today: the hinge mechanism in today's ClickPads is designed by the OEM's choice of ODM, so an OEM using several ODMs may get different ClickPad feels even with the same touchpad supplier. Next, auto-calibration can compensate for chassis flex. Some thin and light notebooks flex enough to register a click just from being picked up with one hand; ForcePad can detect this and suppress the accidental clicks. Lastly, while the OEM can pre-select the desired feel, auto-calibration also lets users personalize the response, adjusting it for however firm or light a touch they prefer.
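To make the idea concrete, here is a minimal sketch of how pressure-threshold clicking with auto-calibration might work. The class name, threshold values and averaging scheme are my own illustrative assumptions, not Synaptics' actual firmware logic, which is proprietary.

```python
class ForcePadSketch:
    """Toy model of pressure-based click detection with auto-calibration.

    All numbers and heuristics here are hypothetical, for illustration only.
    """

    def __init__(self, click_threshold=80.0):
        self.click_threshold = click_threshold  # OEM- or user-tunable "firmness"
        self.baseline = 0.0                     # resting pressure (drift, chassis flex)

    def calibrate(self, idle_samples):
        # Auto-calibration: the average pressure while no finger is present
        # becomes the new zero point, compensating for chassis flex, e.g. when
        # a thin notebook is picked up with one hand and the chassis bends.
        self.baseline = sum(idle_samples) / len(idle_samples)

    def is_click(self, raw_pressure):
        # A click fires only when pressure above the calibrated baseline
        # exceeds the configured threshold, so flex alone cannot trigger it.
        return (raw_pressure - self.baseline) > self.click_threshold
```

Raising or lowering `click_threshold` is how, in this sketch, an OEM could ship a consistent branded feel and a user could personalize how firm a press counts as a click.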

Its ability to "feel the force" also opens up new usage models. By adding a third, relative dimension, a user could conceivably replace a joystick to play a game, eliminate the need to pick new brush sizes in a paint program, or do away with annoyingly slow scrolling on long web pages or documents.
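The pressure-as-a-third-dimension idea can be sketched as simple mappings. These functions, their names and their ranges are my own illustrative assumptions about how an application might consume a pressure value, not any actual Synaptics API.

```python
def scroll_speed(pressure, max_pressure=100.0, max_lines_per_sec=60.0):
    """Map touchpad pressure to scroll speed: press harder, scroll faster."""
    p = max(0.0, min(pressure, max_pressure))  # clamp to the sensor's range
    return (p / max_pressure) * max_lines_per_sec

def brush_size(pressure, min_px=1.0, max_px=40.0, max_pressure=100.0):
    """Map touchpad pressure to brush diameter, removing the size picker."""
    p = max(0.0, min(pressure, max_pressure))
    return min_px + (p / max_pressure) * (max_px - min_px)
```

A light touch creeps through a document while a firm press races through it, and in a paint program the brush widens continuously with pressure instead of through a menu.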

The ForcePad, codenamed "Jedeye," was selected for the student software development competition at the User Interface Software and Technology (UIST) conference, where students are expected to explore innovative approaches to HCI through the addition of pressure sensitivity.

The benefits of making the touchpad 40% thinner by eliminating the hinge are straightforward. With a thinner touchpad, the PC maker can use the reclaimed volume to make the notebook chassis thinner or even fit in extra battery.

Users will need to get used to the lack of a clicking sound and feel, but as the entire industry learned, clicking is optional. As we learned from Apple and BlackBerry phones, what many thought would be an issue wasn't one at all, and those who clung to the past came to regret it. Now on to the keyboard.

ThinTouch Technology

There hasn't been a whole lot of innovation in keyboards over the last 20 years. Even while some of the best OEMs like Lenovo and their customers pride themselves on ThinkPad keyboard performance, those keyboards are open to the same issues as all scissor-based mechanisms. That is to say, they are prone to breaking and collecting gunk. Have you ever had a key pop off and had to replace the entire keyboard because of a scissor mechanism?

ThinTouch removes the entire scissor mechanism and replaces it with a capacitive-touch-enabled mechanism that is 40% thinner, which brings several benefits. First, by removing the scissor mechanism, the keyboard should be lighter and thinner, and backlit keyboards should be more effective, brighter, and draw less battery; Synaptics also says the design is more reliable and easier to manufacture. Secondly, because the keys are capacitive, there is the potential to enable personalization, pressure-based controls, and even near-field "air gestures." Imagine a gesture to "fling" an image from your laptop to the TV without even touching the keys, or an air swipe to turn the page while reading a book. Once you make the keyboard capacitive, the sky is the limit. Now on to the touchscreen.
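As a thought experiment, an air swipe over capacitive keys could be detected by tracking which key column reads the strongest proximity signal frame by frame. This toy heuristic, including the function name and frame format, is purely my own illustration; real near-field tracking would be far richer.

```python
def detect_air_swipe(frames):
    """Classify an air swipe from per-frame capacitive proximity readings.

    Each frame is a list of proximity values, one per key column; the hand's
    position is taken as the column with the strongest reading. A strictly
    increasing position across frames counts as a rightward swipe, a strictly
    decreasing one as leftward. (An illustrative heuristic, not a real driver.)
    """
    if len(frames) < 2:
        return None
    positions = [max(range(len(f)), key=f.__getitem__) for f in frames]
    if all(b > a for a, b in zip(positions, positions[1:])):
        return "swipe_right"
    if all(b < a for a, b in zip(positions, positions[1:])):
        return "swipe_left"
    return None
```

A "swipe_right" event could then be bound to turning an e-book page or flinging an image to the TV, all without a key ever being pressed.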

ClearPad Technology

Synaptics got a bit of a late start in the touch display controller market, but they are making up for it by expanding their controller line that supports the Windows 8 touch specification from 12" displays up to 13.3", 14.1", 15" and 17" displays. And in the future, as evidenced today in smartphones up to 5", tablets and convertibles could move to InCell, which removes an entire discrete sensor, thinning the product, improving the optical qualities, and increasing performance. The ClearPad Series 4 TDDI smartphone technology shrinks the number of controllers from two to one by integrating touch into the actual display driver IC, which will significantly improve responsiveness while lowering cost.

For now, Synaptics' biggest advantage in tablet and convertible touch display controllers is that ClearPad marries up perfectly with ForcePad and ThinTouch. And that points to one of Synaptics' biggest advantages overall: they don't stop at the hardware. Their offering is the system and total HCI experience, the blend of hardware, firmware, software, and test tools needed to deliver this multi-modal interface solution.

Will PC Manufacturers Spend an Extra Dollar?

Unfortunately for PC makers over the last few years, Apple has run away with the premium-priced notebook and premium-priced all-in-one markets. Ultrabooks could change this, but we won't know until after the holiday selling season. Ironically, some Ultrabooks still come with cheap touchpads and uninteresting keyboards.

I believe that many PC makers will quickly adopt these Synaptics technologies to differentiate themselves from Apple and from each other, and some will even drive them across mid-range product lines, too. Unfortunately, some OEMs will continue to count pennies as they lose dollars, as the price difference between a high-quality touchpad and a mediocre one is around one dollar. When Apple comes out with its next generation of HCI, they will wish they had invested that dollar.

Human-Computer Interface Transitions will Continue to Drive Market Changes

As a former executive and product manager for consumer products and technologies, I have planned and conducted extensive primary research on Human-Computer Interfaces (HCI), also called Human-Machine Interfaces (HMI). A lot of this research was for the industrial design of consumer products and their mice, keyboards, and even buttons. I conducted other research for software and web properties, too. Research was one input, mixed with gut instinct and experience, that led to final decisions, and in those times the companies I worked for were #1 in their markets. Only recently have innovations in HCI come to the forefront of the discussion, as the iPhone, iPad and XBOX Kinect have led in both HCI and market leadership. I believe there is a connection between HCI and market leadership which needs more exploration.

For years, the keyboard and mouse dominated HCI. In the deskbound compute paradigm, the keyboard was the best way to input text and perform certain shortcuts, and the mouse was the best way to open programs and files and move objects around the desktop. This metaphor even impacted phones. Early texting was done on 12 keys, where users pressed a key one, two, or three times to produce a given letter, a scheme later improved with T9 text prediction. Thankfully, BlackBerry popularized the QWERTY phone keyboard for much improved texting and, of course, mobile email. Nokia smartphones then popularized the "joystick," which served as a mini omni-directional pointer once the industry shifted to an icon-based smartphone metaphor.

Then Apple changed everything with the iPhone. They scrapped both the physical keyboard and the physical pointer and replaced them with the finger. We can debate all day long whether it was the capacitive touch screen, the app ecosystem, or something else that drove Apple to its successful heights, but we can all agree that Apple needed both to make a winner. Just use one of those $99 tablets with a resistive touch screen and you will know what I'm talking about.

The touchpad has gone through many noticeable changes as well. Remember when every notebook had a touchpad and two, sometimes three, buttons? Now look at Dell's XPS 13, the MacBook Air and the MacBook Pro. We are now looking at a world with giant trackpads that can recognize a multitude of gestures with minimal effort.

Then Microsoft changed the game with the XBOX Kinect. Interestingly enough, like Apple, Microsoft eliminated physical peripherals and replaced them with a body part, or multiple body parts. Nintendo reinvented gaming with the Wii and its bevy of physical controllers, and then Microsoft removed them and replaced them with the major limbs of the body. In the future, Microsoft could remove the gaming headset microphone, too, once Kinect can differentiate between and separate different players' voices.

Voice is, of course, one of the most recent battlegrounds. Microsoft has shipped voice command, control, and dictation standard with Windows PCs for nearly a decade, but it has never become mainstream. They do provide a very good "small dictionary" experience on the XBOX Kinect, though. Apple has Siri, of course, and Google has Voice. Microsoft may look like the laggard here based on what they've produced on PCs and phones, but I am not counting them out. They have mountains of IP on voice, and I wouldn't be surprised if the industry ends up paying them a toll for many voice-controlled systems, in a similar way that OEMs pay Microsoft every time they ship Android. This is just one reason Apple licenses Nuance for the front end of Siri.

We are far from done with physical touch innovations. Windows 8 notebooks are experiencing a dramatic shift, too, with their multiple gestures using multiple fingers. Just look at all the innovations Synaptics is driving for Windows 8. Their Gesture Suite for Windows 8 "modern touchpads" adds support for the eight core gesture interactions introduced with Windows 8 touch, specifically supporting the new edge swipes that navigate the fundamentals of the Windows 8 Metro experience. Interestingly, with the addition of all of the Windows 8 gestures on the trackpad, for certain usage models the external mouse actually starts to get in the way of the experience. I can see that as touch displays and advanced touchpads become commonplace, this could eliminate the need for a mouse. That would fit the pattern of previous HCI shifts, which eliminated a physical device to improve the experience.

The long-term future holds many, many innovations, too. I attended this year's annual SIGCHI Conference in Austin, TX, which I like to describe as "the SIGGRAPH of HCI," and it is truly amazing what our future holds. Multiple companies and universities are working on virtual keyboards, near-field air touch using stereopsis (two or more cameras), improved audio beamforming for better far-field interaction, and a bevy of other HCI techniques that you have to see to believe.

What can we take away from all of this? One very important takeaway is that the companies I cited who led with major HCI changes ended up leading in their associated market spaces. This was true for BlackBerry and Nokia during their heydays, and now it is Apple, Microsoft and maybe Google's turn. It doesn't always hold true in commercial markets, but in many cases it holds for consumer companies. Just look at SAP. In the future, keep your eyes on companies investing heavily in HCI technologies, companies like Microsoft, Google, Apple, and innovators and enablers like Synaptics, who I believe will continue to surprise us with advanced HCI techniques that will lead to market shifts.

How Sony can beat Samsung and LG on Smart TV Interfaces

As I wrote last week, Samsung and LG are following Microsoft’s lead in future interfaces for the living room. Both Samsung and LG showed off future voice control and in Samsung’s case, far-field air gestures. Given what Samsung and LG showed at CES, I believe that Sony could actually beat both of them for ease of interaction and satisfaction.

HCI Matters
I have been researching HCI in one way or another for over 20 years as an OEM, technologist, and now analyst. I've conducted in-context, in-home testing and have sat behind the glass watching consumers struggle with, and in many cases breeze through, supposedly intuitive tasks. Human Computer Interface (HCI) is just the fancy trade name for how humans interact with electronic devices. Don't be confused by the word "computer," as it also applies to TVs, set top boxes and even remote controls.

Microsoft recently started using the term "natural user interface" (NUI), and many in the industry have been using it a lot lately. Whether it's called HCI or NUI doesn't matter. What does matter is its fundamental game-changing impact on markets, brands and products. Look no further than the iPhone with its direct touch model and Microsoft Kinect with far-field air gestures and voice control. I have been very critical of Siri's quality but am confident Apple will wring out those issues over time.

At CES 2012 last week, Samsung, Sony, and LG showed three different approaches to advanced TV user interfaces, or HCI.

Samsung took the riskiest approach, integrating a camera and microphone array into each Smart TV. Samsung Smart Interaction can do far-field air gestures and voice control. The CES demo I saw did not go well at all; speech had to be repeated multiple times, and the TV performed incorrect functions. The air gestures performed even more poorly: they were slow and misfired often. The demoer kept repeating that this feature was optional and consumers could fall back to a standard remote. While I expect Smart Interaction to improve before shipment, there's only so much that can be done.

LG used their Magic Motion Remote for voice command and search and as a virtual mouse pointer. The mouse pointer worked well for icons, but using the mouse for keyboard functions didn't go well at all. Imagine clicking, button by button, "r-e-v-e-n-g-e". Yes, that hard. Voice command search worked better than Samsung's, but not as well as Siri, which has its own issues. It was smart to place the mic on the remote for now, as it is closer to the user and the system knows whom to listen to.

Sony, ironically, took the safe route, pairing smart TVs with a remote that reminded me of the Boxee Box remote, which has a full keypad on one side. Sony implemented a QWERTY keyboard on one side and a trackpad on the other, which could be used with a thumb, similar to a smartphone. This approach was reliable in the demo, and consumers will be using it well after they stop using the Samsung and LG approaches. The Sony remote has a microphone, too, which I believe will be enabled for smart TV once the technology improves in reliability. Today the microphone works with a Blu-ray player using a limited command dictionary, a positive for speech control. This is similar to Microsoft Kinect, where you "say what you see."


I believe that Sony will win the 2012 smart TV interface battle due to simplicity. Consumers will be much happier with this more straightforward and reliable approach. I expect Sony to add voice control and far-field gestures once the technology works the way it should. Sony hopes that consumers will thank them, too, as they have thanked Apple for shipping fully completed products. Samsung and LG's latest interaction models as demonstrated at CES are not ready to be unleashed on consumers, as they are clearly at the alpha or beta stage. I want to stress that winning the interface battle doesn't mean winning the war. Apple, your move.

Hello, Tech.pinions

I am pleased to announce that, as one element of building a new high tech industry analyst firm, I will be joining Tech.pinions as part of the columnist team and as a partner. It is an honor to work with Tim Bajarin, Ben Bajarin, Stephen Wildstrom, Peter Lewis and many of the awesome contributors.

What I Believe
I believe that we, the tech industry, have the largest opportunity in front of us that has ever existed. The intersection of the biowatch, phone, tablet, computer, home appliances, living room electronics, automobiles and the multiple apps, networks and data centers that connect them, will just be the start of a revolution. We will move from where we are now into a “complete” cloud computing model which, after many years, will then morph its way into an ambient computing model. By ambient, I mean all around us, in the background, and automatic.


To interact in this environment, end users will utilize advanced and natural HCI techniques like speech and air gestures in addition to traditional touch, mouse and keyboard. Sure, there are interim steps in the future models, but one must envision where the puck will be before skating there. There are many uncertainties, but what is certain is the amount of power and control end users have.

What to Expect From Me at Tech.pinions
As an industry analyst, I will be analyzing interconnected ecosystems and for some, advising them on how to best address those future market shifts. One of the vehicles I will be using to publicly communicate my views on the emerging “complete” cloud and ambient computing spaces will be through Tech.pinions. While not exclusive to Tech.pinions, it will be a key communication vehicle. Analysis and opinions will span end usage models and behavior, technologies, and business models that shape the technology future.

It’s About the Conversation
Our industry's learning and insight model has fundamentally changed now that there is a structured way to garner the insights of millions of individuals via social media. I would like there to be a conversation around these opinions at Tech.pinions, too. You can either comment here or reach me on Twitter, Google Plus, or LinkedIn.

Stay Tuned
This is just one of many announcements I will be making between now and CES as I build up a technology analyst practice. I look forward to getting your direct, honest, insightful, and non-politically correct input as we move forward.

You can find Patrick’s full bio here.