IBM’s Watson. Amazon’s Echo+Alexa. Facebook’s and Mark Z’s J.A.R.V.I.S. Google’s Home+Assistant and DeepMind.
AI maturity is accelerating. Here’s a fun fact: these companies run the most advanced AIs behind the services we all use. Think Google Search. Think Facebook. Think Amazon.com. This article, however, focuses on how these companies are making that AI know-how available to us mere mortals.
To me, what’s important are the approaches that show early potential to drive broader, deeper human impact and progress. So what should we look out for? Here are 4 categories that I figure give us a glimpse of how things are shaping up, and will shape up in the years to come:
Cat 1: Impact on people today. The present state of AI breakthroughs and efforts is extremely important. I wouldn’t go as far as to say that this is an AI bellwether, but it gives us a glimpse of where the most important touchpoints and markets are. Essentially, I ask questions like: Who’s using it today? Does it have mass impact? Is the impact immediate, direct and significant?
Cat 2: Span of frontier. Next, I look at the breadth, span and complexity of use, and where the AI is being experimented with. This tells us how easily the AI can be adopted across different domains. Is it taking an exclusive form? Or is it easily adapted for various uses?
Cat 3: Ecosystem &amp; API. Third, I look at the ecosystem that the technology is surrounding itself with. The ecosystem tells us the variety of applications of the AI, and the second-degree innovation produced by ecosystem partners. It’s a fairly good indicator of how the growth trajectory will be shaped in the years to come.
Cat 4: Cost of adoption today. The cost of adoption today does not equal the cost tomorrow, because AI capabilities are bound to improve drastically over time. But what history tells us is this: if it’s too expensive to adopt now, it’s usually too complex to operate, and it won’t build up sufficient capital to stand on its own as a product / service eventually. That’s tech.
So, here we go…
I decided to show you the details first – scroll down for some of my own conclusions.
Mike’s Take #1: While simple, a consumer-first approach is strategic for AI.
Amazon and Google have both taken their AI to the consumer through their digital assistants. While the use case is really simple (some may contend that home convenience is trivial), targeting the consumer has strategic benefits. First, they will be the first to ramp up AI-related revenue; interfacing with us through sight and sound is a big plus for adoption. Second, consumers will provide the most data points needed for AI learning. Third, second-degree innovation will grow at a faster rate, since a consumer platform is a simpler system to develop on. While the impact on human progress is less significant now, the consumer platform is extremely promising.
Mike’s Take #2: Taking on the business domain is always daunting but profitable.
For emerging digital technology, focusing solely on enterprises and businesses is always risky. Businesses demand a deeper intimacy in applying the tech, which always results in customization and deeper integration – and that, in turn, means costlier and lengthier time to value. This is IBM’s approach with Watson: IBM is banking solely on Watson to drive its cognitive / AI / existing analytics business. It remains to be seen whether IBM’s Watson approach will be successful, but if it is, it will be IBM’s primary profit engine for the next decade.
Amazon is also targeting businesses through AWS. The advantage of this approach is leveraging the uber-cloud and the traction that AWS has right now. From my reading, the AI offerings (Rekognition, Polly and Lex) have strong integration with existing AWS services – which means it may, just may, make it easier for businesses to leverage AI. The trade-off with AWS is that its AI services are relatively basic compared to IBM’s Watson.
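To make that "strong integration" concrete, here is a rough sketch of calling Rekognition from Python with boto3, pointing it at an image that already lives in S3 – no separate upload step for the AI call. The helper function, bucket and key names are illustrative assumptions of mine, not an official pattern.

```python
# Hedged sketch: image label detection with Amazon Rekognition via boto3.
# Bucket/key names below are hypothetical placeholders.

def build_detect_labels_request(bucket, key, max_labels=10, min_confidence=75.0):
    """Build the parameter dict for Rekognition's DetectLabels API."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MaxLabels": max_labels,
        "MinConfidence": min_confidence,
    }

def detect_labels(bucket, key):
    """Call Rekognition (requires AWS credentials and network access)."""
    import boto3  # imported lazily so the pure helper above has no dependency
    client = boto3.client("rekognition")
    response = client.detect_labels(**build_detect_labels_request(bucket, key))
    # Response carries a "Labels" list of {"Name": ..., "Confidence": ...} dicts
    return [(label["Name"], label["Confidence"]) for label in response["Labels"]]

# Usage (needs AWS credentials; returns a list of (label, confidence) pairs):
#   detect_labels("my-photos-bucket", "storefront.jpg")
```

The point of the sketch is the shape of the integration: the AI service consumes data where it already sits in AWS, rather than requiring a bespoke pipeline.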
Mike’s Take #3: Research is key for big breakthroughs.
Both Facebook’s AI Research group and Google’s DeepMind perform mind-bending research on all things AI. For example, after its victory against Lee Sedol in the game of Go, DeepMind applied AlphaGo-style techniques to cut Google’s data center cooling bills by 40% – impressive, considering how green and sophisticated those data centers already are. I’m a big believer in research efforts – I believe big breakthroughs come from a combination of deep research and rich on-the-ground data. Here, Google does seem to have a lethal combination in Google Home + DeepMind + Google Search (read: 3.5b searches a day).
Mike’s Take #4: Being open might just change the game.
Mark Zuckerberg is planning to give away the code for the J.A.R.V.I.S home assistant he built. My initial thought about JARVIS was this: it will not catch up with the likes of Echo or Home if the intent is just another home assistant. But with Mark Zuckerberg voicing his intent to make it another one of his “open” projects, it may just change the game. Lots of third-party developers and companies will jump on it, as they did with Android. Three of the key capabilities in JARVIS are language processing, speech recognition and face recognition. The combination of the source code + Facebook’s thriving developer communities + the need for intelligence may just spark a following.
So, what should companies in Asia do?
- As with all emerging tech, expedite exploring the use of AI in the most common places. For AI, that means starting with speech, image and text recognition – the most common uses today.
- History in tech adoption has taught us that ease of use, ease of customization and breadth of application often win out. So, start small and grow incrementally. The options from cloud providers like AWS can be an appealing starting point.
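As a minimal sketch of what "start small" can look like with a cloud provider, here is one way to try speech synthesis with Amazon Polly via boto3. The helper function, voice choice and file name are illustrative assumptions, not a prescribed setup.

```python
# Hedged sketch: a minimal first AI experiment -- text-to-speech
# with Amazon Polly via boto3. Voice and file names are assumptions.

def build_speech_request(text, voice_id="Joanna", output_format="mp3"):
    """Build the parameter dict for Polly's SynthesizeSpeech API."""
    return {"Text": text, "VoiceId": voice_id, "OutputFormat": output_format}

def synthesize_to_file(text, path):
    """Call Polly and write the audio stream to disk (needs AWS credentials)."""
    import boto3  # imported lazily so the pure helper above has no dependency
    polly = boto3.client("polly")
    response = polly.synthesize_speech(**build_speech_request(text))
    # The response's "AudioStream" is a streaming body; read and persist it
    with open(path, "wb") as f:
        f.write(response["AudioStream"].read())

# Usage (needs AWS credentials):
#   synthesize_to_file("Welcome to our store.", "welcome.mp3")
```

A few lines like these are enough to prototype a speech-enabled touchpoint, which is exactly the kind of incremental first step the advice above argues for.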
Anyway, I’m all ears. Drop a comment.