After I saw the Oculus Rift at CES, I jokingly told a friend the company would probably be bought by Google or Facebook for billions. This was not a prediction but a genuine joke in my mind, since I thought of Oculus Rift’s VR goggles as being good only for gaming and not much more. Sure, virtual reality goggles were cool, but software had to be written specifically for them. As far as I was concerned, it would be a niche product. The suggestion of Google or Facebook buying them was driven by the fact that both have huge checkbooks and money to burn. Oculus would be as good a purchase as any.
Recently, Facebook bought Oculus for $2 billion, and it became clear Facebook saw this as more than a gaming product. Indeed, for them to pay $2 billion suggests they see its role expanding into social networks and bought it for some strategic reason most of us can’t comprehend at this time. The logical explanation is that they have an idea about how to make VR a key part of Facebook. Perhaps they see it as a way to deliver personalized video communications, where Facebook friends put on Oculus Rift goggles in their own locations and, through Facebook, see and talk to each other as if they were in the same place. Imagine being with a friend on a VR beach in Hawaii and having the experience of the two of you lounging on beach chairs sipping Mai Tais.
Or you could be with friends walking the grounds of the Eiffel Tower or strolling through the Louvre as if you were there. If that is what Facebook has in mind, it could dramatically change how social media is used and could be worth more than $2 billion to Facebook in the long run.
However, I think the better strategic acquirer would have been Google. A few months back I was at the TED conference in Vancouver, BC, and heard former NFL kicker Chris Kluwe give a talk about how Google Glass could be used to bring sports fans into the action on the field, as if they were seeing it from the player’s viewpoint. He showed a video in which he put on Glass and recorded himself on the field being tackled by a defensive lineman. You saw what he saw and heard at the point of impact. Kluwe pointed out that fans wish they were the quarterback on the field and imagine themselves in that role. Now, put Google Glass on the quarterback and inject that feed into an Oculus VR world containing images of the entire stadium, broad views of the field from multiple angles and, more importantly, a wide-angle view of the defensive positioning as both teams line up for the next play. Intermixing gaming and a real-world Glass viewpoint could change how people view sports forever. This concept could be applied to just about any professional sport played today.
You could apply the marriage of Google Glass and Oculus Rift’s VR to all types of life experiences. Imagine seeing and experiencing life on the Space Station through the eyes of the astronauts working in space today. Even if you are not a certified diver, you could explore the Great Barrier Reef or a shipwreck like the Titanic as if you were there. How about the world of entertainment? What would it be like to have the option to view a movie through the eyes of Leonardo DiCaprio or Cameron Diaz? Or what about using this in the medical arena? Perhaps a surgeon could use it to do even more precise robotic surgery over the Internet.
I recently wrote in a column for PCMag about the impact POV cameras have had on things like sports, first responders and business situations where first-person recording is important. But if you bring Google Glass and Oculus Rift-like VR together, this POV concept gets kicked up hundreds of notches over current POV cameras.
My big concern about Facebook buying Oculus Rift is that their focus would clearly be on the social aspect. I fear Oculus could not reach its real world-changing potential under Facebook. Sure, Facebook could do their own Glass-like product and try to marry it with Oculus Rift. But Facebook’s approach would serve their own interests and probably be proprietary to boot. On the other hand, Google’s approach to something like this would be open source in nature and, if done properly, could revolutionize the sports industry and change not only the gaming market but bring new dimensions to all types of apps and real-world circumstances. The recent introduction of their Cardboard 3D goggles suggests they at least have this on their radar to some degree.
Google, not Facebook, should have bought Oculus Rift. Let’s hope Google is either searching for a similar startup or will do its own version of this product. This idea is powerful and, in the right hands, could have quite an impact on a lot of people and industries.
5 thoughts on “Why Google should have bought Oculus Rift”
In light of Google’s Art Project, https://www.google.com/culturalinstitute/project/art-project and the cataloging of many museums, you are right. Oculus would have been a natural fit.
Maybe that’s why FacePlant bought it – to keep it out of google’s mitts. As you say, it would likely have been very powerful and fb saw it before the don’t be evils.
Imagine being with a friend on a VR beach! Imagine walking around the Eiffel Tower! Well, I don’t have to imagine. That sort of thing has been possible for the last eleven years in Second Life. (SL has more than its share of tropical beaches, at any rate.) It hasn’t exactly set the world on fire, although SL is still doing okay. I suppose VR goggles might make the environment feel more immersive, but I have some doubts on whether it will change the kinds of things we can do (or want to do) in a virtual world.
Given how M&A works I’m sure Oculus Rift’s investment bankers contacted Google and others. Either Google didn’t bid high enough to win or it had no interest at all.
I think that Apple will surprise you and do as follows — maybe not first, but they will nail it.
While Oculus Rift is completely immersive, it shuts out the ambient world completely. Google Glass errs in the opposite direction, deprecating the image to monocular 640×360.
Apple will use twin 4K screens, mounted in “Ben Franklin” narrow dark glasses, worn such that you look over the top of the frame with the twin screens occupying the lower third of the field of view.
The screens will not be translucent as are Google Glass but opaque with virtual reality implemented by superimposing digital overlay on real world images from twin cameras. This approach avoids washed out color and lack of true black inherent to translucency.
The cameras will be optionally attached to the frames with magnets but will more often be used detached to avoid Google Glass paranoia. Likewise, detached cameras can be mounted on a pole or drone for high angles, with any type of remote viewing enabled. Film cameras dictated that lens, sensor and viewfinder be contained in one box. This is obsolete now, and the advantage goes to remoting the screen and placing it on the photographer, as this finally handles the chronic washout inherent to a screen exposed to direct sunlight. 4K screens will make manual white balance, focus and exposure feasible, as the viewfinder will be very close to final presentation quality.
They’ll call it “iDash”, because functionally it’s similar to the dashboard of your car.
To maintain that this is unworkable is to maintain that you can’t drive with the lower third of your field of view obscured by the car dashboard. This is not to say they are recommended for use while driving, although they could be if software disabled all functionality other than maps, rear view and full car computer instrumentation display when Bluetooth LE detects that you are the driver. Differentiating the driver from passengers would have to be solved for this to work, however.
The glasses will be connected to the phone via a MagSafe-style cable that doubles as a security strap, running around the back of your head and down the inside of your shirt to the phone. Thus the glasses need only contain displays plus a gyro, accelerometer and compass, to minimize bulk, weight and geekiness.
The phone becomes a track pad which bridges the gap between cursor and touch as follows: up to 5 touch points on the phone screen are rendered in the twin displays and are shown as translucent white circles. This identifies multiple cursor locations. When a gesture is to be executed, the home button is pressed simultaneously with the gesture. Very little modification of existing software is needed to implement this.
Furthermore, there is no reason not to make a tiny box with iPhone guts minus screen and battery, adding an x86 chip and running OS X, with iOS functionality occupying the current position of “Widgets” in OS X. Similar to a MacBook Pro, which shuts down or brings up a dedicated GPU as needed, one could engage the ARM CPU in iOS mode or the x86 chip when you want to run OS X.
With this setup, all day battery power would be provided by a battery belt with the CPU box clipped to the belt. You could even implement a trackpad as the belt buckle.
The Apple store would be expanded with optical and audio MDs providing prescriptions for custom lenses and individually tailored frequency correction. Such a highly personal platform demands custom fitting for quality assurance and only Apple has the stores to deliver such a service.
Audio would be disrupted through the simple means of noise-canceling earbuds whose controls use twin volume sliders. One slider controls device volume. The second slider gradually disables phase-reversal noise cancelation, using the ambient microphones to pick up room sound. This is a two-channel personal mix board, eliminating the need to remove the earbuds to hear the environment.
This platform has the potential to obsolete phones, tablets, laptops, desktops and home theaters. Only Apple has demonstrated the courage to disrupt the industry and its own product lines to such an extent.