Xapix @ SXSW 2018: How to make cars talk, faster

Every year, thousands of technologists, innovators, developers, and thinkers flock to Austin, Texas for South by Southwest (SXSW), one of the world’s biggest technology conferences. Attendees often look way into the future to discuss upcoming applications of innovations in technology and society. Amongst the biggest topics this year were artificial intelligence (AI) and autonomous mobility.

As part of a panel at German Haus, Xapix’er Marcel shared our vision around the future of mobility and urban living—and explained how Xapix can drive mobility applications in the area of (conversational) AI. Find out more about the panel and the panelists here.


Xapix'er Marcel (2nd from left) joining a panel on conversational AI at SXSW 2018

Why data, data quality, and data standards are crucial

Data is essential for any AI application. Think of it this way: data feeds the brain of any AI, and without specific data inputs, AI assistants have nothing to talk about. This also implies the following: the better and broader the data sources, the better the AI that makes use of them.

There are two main technical challenges for conversational AI applications. On the one hand, the algorithms in use have to identify the context of the given information. This can relate to the specific content of what the user is talking about, but also to the situational context. For example, the most relevant topics of a conversation with your car differ greatly between a family trip and the usual commute to the office. The best algorithms will adapt to the specific situation, but only if they can access the data inputs needed to come up with relevant answers or reactions.

On the other hand, and this point is partly implied by the example above, the different data inputs have to be made available to the conversational AI engine that is built into the car's entertainment system. The challenge here is that we are not talking about just a handful of different inputs. We are talking about hundreds, or even thousands, of different internal and external data sources. Such sources can be information gathered by smart components ("What's my tire pressure?") or provided by external services ("Lead me to the closest Italian restaurant with a 4-star Yelp review"). The more data sources the conversational AI can draw on, the more helpful the car will be as a digital assistant, and the AI as a mobility service.
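To make the idea concrete, here is a minimal sketch of how an assistant might route a query either to an in-vehicle data source or to an external service. All names and data providers below are invented for illustration; this is not Xapix's or any OEM's actual architecture.

```python
# Hypothetical sketch: routing assistant queries to internal (vehicle)
# vs. external (web service) data sources. Names are illustrative only.

class Assistant:
    def __init__(self, internal_sources, external_sources):
        # Each source is a callable keyed by the topic it can answer.
        self.internal = internal_sources
        self.external = external_sources

    def answer(self, topic, **params):
        # Prefer on-board data, fall back to external services.
        if topic in self.internal:
            return self.internal[topic](**params)
        if topic in self.external:
            return self.external[topic](**params)
        # Without a matching data source, the AI has nothing to say.
        return "I don't understand"

# Toy providers standing in for real vehicle sensors and web APIs.
assistant = Assistant(
    internal_sources={"tire_pressure": lambda: "Front left: 2.3 bar"},
    external_sources={"restaurants": lambda cuisine: f"Closest {cuisine} restaurant: ..."},
)
print(assistant.answer("tire_pressure"))
```

The point of the sketch is the fallback branch: every topic the assistant cannot map to a data source ends in "I don't understand", which is exactly why breadth of data access matters.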


Digital voice assistants can transform the way drivers interact with their cars

Conversational AI in action: an everyday example—and two major difficulties

Let’s dive into an even more specific example. If you’ve ever searched for a parking spot in a city (and you most probably have), you know it can be a tedious process. A great application for conversational AI would be to simply tell your car “guide me to the closest available parking spot” and have it do so. What sounds super convenient for the consumer actually involves a couple of technological hurdles. Imagine you’re driving a BMW and looking for a parking spot in Austin, Texas. Your BMW would have to access data from the city of Austin, and it would have to be capable of understanding the data on offer. If your car couldn’t understand the data provided by the city of Austin, it would simply reply “I don’t understand” instead of guiding you to the next available spot.

This is an actual challenge, not only for OEMs (i.e. car and component manufacturers), but also for cities. In the example, the BMW not only has to understand the data input of the city of Austin, but also of any other city the driver might take their car to. BMW would be confronted with thousands, if not tens of thousands, of different data standards and layouts. At the same time, the city of Austin wants to make its data available to more car manufacturers than just BMW—cities face the challenge of tailoring their data offering to a multitude of partners. Keep in mind: we’re already talking about thousands of different connections, and only for the very specific use case of someone telling their car to look for an available parking spot.
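One common way out of this many-to-many problem is to write one adapter per source that maps each city's feed into a shared schema, so every car only needs to understand one format. The sketch below illustrates the idea; the feed layouts and field names are entirely made up and do not reflect any real city's data portal.

```python
# Illustrative sketch of taming the N-to-M integration problem:
# one adapter per city maps its feed into a single common schema.
# All feed layouts and field names here are hypothetical.

def adapt_austin(record):
    # Hypothetical Austin feed: {"id": ..., "location": [lat, lon], "free": bool}
    return {
        "spot_id": record["id"],
        "lat": record["location"][0],
        "lon": record["location"][1],
        "available": record["free"],
    }

def adapt_berlin(record):
    # Hypothetical Berlin feed: {"spotId": ..., "lat": ..., "lng": ..., "status": "FREI"/"BELEGT"}
    return {
        "spot_id": record["spotId"],
        "lat": record["lat"],
        "lon": record["lng"],
        "available": record["status"] == "FREI",
    }

ADAPTERS = {"austin": adapt_austin, "berlin": adapt_berlin}

def normalize(city, records):
    """Map a city-specific feed into the shared parking-spot schema."""
    return [ADAPTERS[city](r) for r in records]

austin_feed = [{"id": "A1", "location": [30.27, -97.74], "free": True}]
print(normalize("austin", austin_feed))
```

With adapters, adding a new city means writing one new mapping rather than teaching every car manufacturer a new format, which is the crux of the scaling argument above.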


Automotive OEMs and cities face many challenges in making conversational AI a reality.

How Xapix helps OEMs scale up their digital business

You see, making conversational AI a reality in the automotive sector involves some big challenges—and a lot of them are related to data. At the same time, the relevance of conversational AI in automotive is extremely high. Not only can it improve passenger safety and comfort, it also bears the potential to create additional revenue opportunities in the billions. Think of having the best conversational AI in automotive as a competitive advantage, and as a deciding factor when consumers choose which car to buy.

Xapix offers a technology that helps automotive OEMs connect their service offerings to a variety of different applications. In addition, Xapix helps cities provide standardised data outputs that their partners can actually use. In simpler terms: Xapix enables connected cars to talk to any city in the world, providing great and innovative services for drivers and riders. Our technology helps make cars talk—faster.