Narrative: Apple is Behind in AI
Each narrative page (like this) has a page describing and evaluating the narrative, followed by all the posts on the site tagged with that narrative. Scroll down beyond the introduction to see the posts.
Narrative: Apple is Behind in AI (Dec 26, 2016)
One of the more recent Apple-related narratives to have emerged is that the company is behind in developing artificial intelligence relative to peers and competitors like Google and Microsoft, and that this will make it less competitive in future (fatally so, in the extreme version of the narrative).
The basis for the claim has multiple elements. Firstly, Apple has been far less vocal (at least until recently) about its AI and machine learning chops – the terms were barely mentioned by Apple before 2016. Secondly, Apple doesn’t appear to be investing in AI in the same way – there has been relatively little evidence of AI in Apple’s products and services. Thirdly, Apple’s penchant for secrecy about what it’s working on has led it to bar employees from writing publicly about their AI work, something most AI researchers are accustomed to doing before they join Apple, which hampers hiring and retention of top talent. Lastly, there’s a sense that Apple’s AI efforts may be hamstrung by its insistence on not sending personally identifiable data to the cloud, which prevents it from applying some AI techniques to individual users’ data off the device, where powerful computing infrastructure is available.
All these claims have some basis in fact, but the reality isn’t quite what the narrative suggests, especially in recent months. It’s true that Apple didn’t talk much about AI or ML for a very long time, but this has begun to change. Its 2016 events, and especially WWDC, featured numerous mentions of both AI and ML. That’s a big shift, because Apple has rightly chosen in the past to show rather than tell when it comes to AI – in other words, to sell the features that make use of AI, rather than the AI itself, since the former matters to users and the latter doesn’t. But the discussion about AI in the tech press recently hasn’t been so much about the user experience as about perceived expertise, something Apple doesn’t usually talk about as readily. However, Apple also pulled back the curtain a little on its AI work with reporters this past year, further evidence that it recognizes a need to get out of its comfort zone here.
To address the second point, which is really related to the first, Apple has been investing in AI all along but hasn’t used the label. Siri, typing suggestions, face recognition and plenty more have made use of AI and ML, and Apple continued to advance its use of AI in 2016 with its WWDC announcements.
On the third point, Apple has now begun to allow its researchers to publish, with both the announcement of that policy change and the first paper coming at the end of 2016. 2016 really was the year in which Apple began to take the public face of its AI research seriously. It also addressed the fourth and final point in 2016, with a discussion of differential privacy at WWDC in the summer, suggesting that it was possible to leverage user-level data without compromising user identity.
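Apple hasn’t published the exact mechanism behind its differential privacy implementation, but the classic randomized-response technique illustrates the core idea: each device deliberately adds noise to what it reports, so no individual answer can be trusted, yet the aggregate statistic can still be recovered. The sketch below is purely illustrative; the parameter names and noise rate are my own assumptions, not Apple’s.

```python
import random

def randomized_response(truth: bool, p: float = 0.25) -> bool:
    """Report the true bit with probability 1 - p; otherwise report a coin flip.

    Any single report is deniable, which protects the individual user."""
    if random.random() < p:
        return random.random() < 0.5
    return truth

def estimate_true_rate(reports, p: float = 0.25) -> float:
    """Invert the known noise to recover the population-level rate.

    observed = true * (1 - p) + 0.5 * p  =>  true = (observed - 0.5 * p) / (1 - p)
    """
    observed = sum(reports) / len(reports)
    return (observed - 0.5 * p) / (1 - p)

# Simulate 100,000 devices where 30% of users truly have some property
random.seed(42)
true_rate = 0.3
reports = [randomized_response(random.random() < true_rate) for _ in range(100_000)]
print(round(estimate_true_rate(reports), 2))  # close to 0.3
```

The point of the trade-off is visible in the code: the server never learns any one user’s true bit, but with enough devices the aggregate estimate converges on the real rate.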
Apple still needs to demonstrate that it can compete on AI in future – Amazon’s Echo and Alexa technologies have been somewhat misleadingly described as AI in the media, and this hasn’t helped perceptions of Apple’s AI chops, with Siri considered by many to be inferior. But if Apple can keep adding value to its products and ecosystem with useful applications of AI, that will go a lot further where it really matters than inside baseball commentary about the AI wars.
Apple Has Acquired a Small French Photo Analysis Company (Sep 29, 2017)
Apple has made another one of its characteristic quiet, small acquisitions of a technology company, this one a French business specializing in computer vision for photo analysis. Unlike some other photo analysis tools, however, this one isn’t so much about recognizing the content of photos as determining which photo in a group might be technically best, or which photos are duplicates. It’s easy to see those technologies being used in future versions of Apple’s Photos app on the iPhone to select the best picture from a burst of photos, or to manage a photo library on a Mac. Apple has put enormous attention into its cameras almost from day one of the iPhone, but its photo management software hasn’t kept pace for much of that time, though recently it has begun to invest more seriously in it both on the iPhone itself and on the Mac. This small acquisition is a sign that it plans to continue to make incremental improvements, if nothing else.
via TechCrunch
★ Apple Switches Search Back-End for Safari and Siri to Google from Bing (Sep 25, 2017)
Apple has quietly switched the search back end for its Siri voice assistant and what used to be called Spotlight search to Google, after relying on Bing for several years. Bing will continue to provide the image search results in Siri, but is otherwise being replaced by Google. That’s a fascinating turn of events after several years of Apple removing Google from various elements of its built-in systems, from switching to its own maps, to eliminating the YouTube app, to offering a variety of alternative default search providers in Safari, to this use of Bing behind the scenes. Although there’s obviously been some speculation that money was a factor here, and it may well have been, I suspect this ultimately comes down to wanting to provide the best possible experience in these various settings, and that means using Google. That’s ultimately the same reason that Apple hasn’t switched away from Google as the default search engine within Safari in Western markets – Google is the gold standard, and everything else still comes up short. I do wonder if this is part of a quiet renewal of the longstanding relationship between the two companies, which always prompts speculation about Apple replacing Google as the default. That certainly seems less likely now, as Apple in its brief public statement on this news has emphasized the need for consistency across experiences within iOS and macOS, suggesting that Google is here to stay as the default search option in Safari. That’s a big win for Google and a big loss for Microsoft, for which Apple’s partnership was a rare bright spot on mobile, even as Bing continues to take decent share on the desktop by virtue of Windows’ dominance there.
via TechCrunch
Apple Machine Learning Researchers Publish Three Papers in In-House Journal (Aug 23, 2017)
This content requires a subscription to Tech Narratives. Subscribe now by clicking on this link, or read more about subscriptions here.
Apple Launches Machine Learning Journal (Jul 19, 2017)
★ Apple is Developing a Dedicated AI Chip (May 26, 2017)
★ Apple Acquires Dark Data Analysis Company Lattice Data, Reportedly for $200m (May 15, 2017)
It emerged over the weekend that Apple has acquired Lattice Data, a company which specializes in analyzing unstructured data like text and images to create structured data (i.e. SQL database tables) which can then be analyzed by other computer programs or human beings. TechCrunch has a single source which puts the price paid at $200 million, and Apple has issued its usual generic statement confirming the acquisition but offering no further details. It’s worth briefly comparing this to Google’s acquisition of DeepMind in 2014: that buy was said to cost $500 million and was for 75 employees including several high-profile AI experts, though it was unclear to outside observers exactly what it was working on, while this one reportedly brought 20 engineers to Apple and has several existing public applications and projects to point to. Lattice is the commercialized version of Stanford’s DeepDive project, which has already been used for a number of applications involving large existing but unstructured data sets. Lattice has a technique called Distant Supervision which it claims obviates the need for human training and instead relies on existing databases to establish links between items that can be used as a model for determining additional links in new data sets. It’s not clear to me whether the leader of the DeepDive team at Stanford, Christopher Ré, is joining Apple, but he was a MacArthur Genius Grant winner in 2015 and this video from MacArthur is a great summary of the work DeepDive does (there’s also a 30-minute talk by Ré on the DeepDive tech). Seeing Apple make an acquisition of this scale in AI is an indication that, despite not making lots of noise about its AI ambitions publicly, it really is serious about the field and wants to do better at parsing the data at its disposal to create new features and capabilities in its products.
It’s entirely possible that we’ll never know exactly how this technology gets used at Apple, but it’s also possible that a year from now at WWDC we hear about some of the techniques Lattice has brought to Apple and applied to some of its products. Interestingly, the code for DeepDive and related projects is open source and available on GitHub, so I’m guessing Apple is acquiring the ability to make further advances in this area as much as the technology in its current form.
via TechCrunch
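Lattice hasn’t published implementation details of its Distant Supervision technique, but the general idea, as developed in the academic literature, is simple enough to sketch: instead of having humans label training examples, you use an existing database of known facts to automatically label any text that mentions a matching entity pair. The entities, relation, and sentences below are invented for illustration only.

```python
# Hypothetical knowledge base of known (subject, relation, object) facts;
# in distant supervision, these replace hand-labeled training data.
kb = {
    ("Apple", "acquired", "Lattice Data"),
    ("Google", "acquired", "DeepMind"),
}
known_pairs = {(s, o): r for s, r, o in kb}

# Raw sentences, each tagged with the entity pair it mentions
sentences = [
    ("Apple", "Lattice Data", "Apple has acquired Lattice Data for a reported $200m."),
    ("Google", "DeepMind", "Google bought London-based DeepMind in 2014."),
    ("Apple", "DeepMind", "Apple and DeepMind both publish AI research."),
]

# Label a sentence only when its entity pair matches a known fact;
# unmatched pairs (like Apple/DeepMind) produce no training example.
training = [
    (text, known_pairs[(subj, obj)])
    for subj, obj, text in sentences
    if (subj, obj) in known_pairs
]
print(len(training))  # 2 labeled examples, generated with no human annotation
```

The noisy assumption baked into this approach is that any sentence mentioning a known pair expresses the known relation, which is why real systems layer statistical models on top to filter out false matches.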
Google Develops Federated Machine Learning Method Which Keeps Personal Data on Devices (Apr 6, 2017)
This is an interesting new development from Google, which says it has created a new method for machine learning which combines cloud and local elements in a way which keeps personal data on devices but feeds back the things it learns from training to the cloud, such that many devices operating independently can collectively improve the techniques they’re all working on. This would be better for user privacy as well as efficiency and speed, which would be great for users, and importantly Google is already testing this approach on a commercial product, its Gboard Android keyboard. It’s unusual to see Google focusing on a device-level approach to machine learning, as it has typically focused on cloud-based approaches, whereas it’s been Apple which has been more focused on device-based techniques. Interestingly, some have suggested that Apple’s approach limits its effectiveness in AI and machine learning, whereas this new technique from Google suggests a sort of best of both worlds is possible. That’s not to say Apple will adopt the same approach, and indeed it has favored differential privacy as a solution to using data from individual devices without attributing it to specific users. But this is a counterpoint both to the usual narrative about Google sacrificing privacy to data gathering and AI capabilities and to the narrative about device-based AI approaches being inherently inferior.
via Google
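Google’s paper describes the technique as federated averaging, and a toy version conveys the mechanics: each device improves the model on its own private data, and only the resulting model weights (never the data itself) are sent back and averaged into a new global model. The linear model, learning rate, and round count below are my own simplifying assumptions, not Google’s implementation.

```python
import random

def local_update(weights: float, data, lr: float = 0.1) -> float:
    """One gradient step on a device's private data (toy linear model y = w * x).

    The raw (x, y) pairs never leave the device; only the new weight does."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def federated_average(global_w: float, device_datasets) -> float:
    """Each device trains locally from the same global model, then the
    server averages the returned weights into the next global model."""
    local_weights = [local_update(global_w, d) for d in device_datasets]
    return sum(local_weights) / len(local_weights)

# Simulate 10 devices, each privately holding 20 samples of y = 3x
random.seed(0)
devices = [[(x, 3 * x) for x in (random.random() for _ in range(20))]
           for _ in range(10)]

w = 0.0
for _ in range(200):
    w = federated_average(w, devices)
print(round(w, 2))  # converges toward the true slope, 3.0
```

What makes this privacy-friendly is visible in the data flow: the server only ever sees weight updates, which is the property Google pairs with further protections like secure aggregation in the real system.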
Apple GPU Supplier Imagination Tech Says Apple Plans to Build its Own GPU in 1-2 Years (Apr 3, 2017)
This already feels likely to be one of the biggest news items of the week (incidentally, you can now use the Like button below to vote for this post if you agree – the posts that get the most votes are more likely to be included in my News Roundup Podcast at the end of the week). There have been ongoing reports that Apple would like to build more of its own in-house technology, and GPUs have seemed a plausible candidate given that Apple was said for a while to be mulling an acquisition of Imagination Tech, and has been bringing its employees on board since the deal didn’t go ahead. The GPU obviously has a number of existing applications, but GPU technology has increasingly been used for AI and machine learning, so that’s an obvious future direction, along with Apple’s reported investment in AR. Apple’s ownership of its A-series chips (and increasingly other chips like its M and W series) is a key source of competitive advantage, and the deeper it gets into other chip categories, the more it’s likely to extend that advantage in these areas. This is, of course, also a unique example of Apple making a direct statement about a future strategy (albeit via a third party): as Apple is IMG’s largest customer, IMG had to disclose the guidance from Apple because it’s so material to its future prospects – the company’s share price has dropped 62% as of when I’m writing this.
Apple’s Siri learns Shanghainese as voice assistants race to cover languages – Reuters (Mar 9, 2017)
One of the things that’s often missed by US writers covering Amazon’s Alexa and its competitors is how limited it still is in language and geographic terms. It only speaks English and German, and the Echo range is only available in a handful of countries. Siri, meanwhile, just got its 21st country and 36th language, which reflects a long-time strength of Apple’s: broad global support. Apple News is a notable exception, available in only a few countries and one language, but almost all of Apple’s other products are available in a very long list of countries and territories, often longer than for competing services. The article here is also interesting for the insights it provides into how each company goes about the process of localization, which is quite a bit more involved than you might surmise.
via Reuters