Uptake Technologies Inc. has a clever tag line on its website: “We know what failure looks like.”
And they do. Uptake uses artificial intelligence and machine learning to prevent failures in trucks, locomotives and wind turbines, taking data from connected sensors — the Internet of Things — to detect unusual vibrations or temperature changes that often precede failures and then alerting humans who can arrange maintenance.
Ganesh Bell has been president of Uptake since early 2018. Before joining Uptake, he was General Electric Co.’s first chief digital officer.
He talked with BNEF about Uptake and the rise of data science in an interview in March.
Q: Uptake is a relatively new company. What’s the history, and what’s the plan?
A: Uptake is a little more than 4 1/2 years old. We are trying to apply data science to industries and see if we can predict when machines will break. So if we can stop them from breaking, we can build a world that always works. That was the simplest idea that we had.
We started with a very large customer who was an investor and a partner. We grew really quickly to a team of about 150 people right away. We hit all the problems that most people are going to hit in the next two years.
People are trying to figure out how to make sense of all this sensor data. How do you make sense of what is an asset, and what can you learn from it? It’s easy to get started in data science and do easy problems. We went and tackled the hardest problems because the value was very, very high.
Q: How big is the space for what your company does?
A: The World Economic Forum says there’s $18 trillion in value in the industrial world by applying industrial technologies. It’s huge. For us as a company, it’s about focus, and we’re focusing on industries and financial outcomes. Utilities in particular are doing new things. To do that, they need innovation dollars that they don’t have. We can help them unlock innovation dollars from existing operations.
Q: What kinds of problems are you handling?
A: Pretty much all the problems we tackle are complex and heavy. We started with construction and mining equipment, and locomotives. Immediately complex. All these industries are data-rich and insight-poor. Less than 1% of data is used in most of these industries. We started with use cases in which the value of prediction was incredibly high and the precision also needed to be incredibly high, like predicting the failure of a locomotive.
Progress Rail was one of our early customers. They have thousands of locomotives connected to us, and we now predict 90% of all the failures through our machine learning. Berkshire Hathaway connects all their wind turbines to us.
Q: What does it mean to say you predict 90% of the failures?
A: Their uptime is better. They’re also operating on predictive maintenance schedules versus scheduled maintenance — like changing your oil every 3,000 miles. We also learn from the failures. Our machine learning creates a new baseline that informs the operator when similar things are happening in other locomotives.
Q: What are the warning signs?
A: There are vibration sensors and thermal sensors. We might hear a wheel slip. Sometimes that correlates to a weather pattern, because of cold weather on a track for instance, and we track the weather along the tracks. So we can detect false positives. We actually filter out a lot of those and surface the things that matter, and then we’re able to give a window into the prediction to the operator, so they can build a confidence factor over time.
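The weather-correlation idea described above can be sketched in a few lines. This is purely illustrative, not Uptake's actual pipeline; the field names, event types and the freezing-temperature rule are all hypothetical assumptions.

```python
# Illustrative sketch (not Uptake's system): suppressing weather-correlated
# false positives before surfacing an alert to the operator.
# Field names and the cold-weather rule are hypothetical.

def surface_alerts(events, weather_by_location):
    """Keep only events not explained by a known benign weather pattern."""
    surfaced = []
    for event in events:
        temp_c = weather_by_location[event["location"]]
        # Wheel slip on a freezing track is a known benign pattern,
        # so treat it as a likely false positive and filter it out.
        if event["type"] == "wheel_slip" and temp_c < 0:
            continue
        surfaced.append(event)
    return surfaced

events = [
    {"type": "wheel_slip", "location": "mile_12"},
    {"type": "vibration_spike", "location": "mile_12"},
]
weather = {"mile_12": -5.0}  # track temperature in Celsius
print(surface_alerts(events, weather))
# only the vibration spike is surfaced
```

A real system would build this confidence factor from historical outcomes rather than a hand-written rule, but the filtering structure is the same.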
Q: They use data from previous operations?
A: Or current operations. In the past, in real time you’d be getting sensor data and you’d set rules to cause alarms to sound. But there’s no correlation of all those rules. Even in the control center of a power plant, you’ll see hundreds of alarms a day that humans ignore because they know from experience that the alarms mean nothing.
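The contrast between static alarm rules and a learned baseline can be shown with a toy example. This is a minimal sketch under assumed numbers, not a description of any real product: a fixed threshold misses a gradual drift, while a baseline built from the asset's own recent history flags it.

```python
# Sketch: fixed-threshold alarm vs. a baseline learned from the asset's
# own history. All readings and thresholds are illustrative.
import statistics

readings = [70.1, 70.4, 69.8, 70.2, 72.5]  # e.g. bearing temperature, deg C

# Old approach: a static rule fires only when a reading crosses a fixed limit.
static_alarms = [r for r in readings if r > 85.0]  # nothing fires

# Baseline approach: flag readings far from this asset's recent history
# (here, more than 3 standard deviations from the historical mean).
history = readings[:-1]
mean = statistics.mean(history)
stdev = statistics.stdev(history)
anomalies = [r for r in readings if abs(r - mean) > 3 * stdev]

print(static_alarms)  # []
print(anomalies)      # [72.5] -- the drift the static rule missed
```

The point is the one made above: uncorrelated static rules either fire constantly or miss slow changes, while a model of normal behavior surfaces the deviations that matter.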
Q: Like a car alarm in a parking lot.
A: Exactly. Now we have 1.2 billion hours of operating data that our engines have learned from over the past 4 1/2 years. It gets better with every hour of operating data you feed it. We’ve used it across construction, mining, locomotives, energy. We’re starting to do trucks, and we’re helping the U.S. Army with maintenance on fighting vehicles.
Q: What’s the business case?
A: In most cases, customers start with a simple financial outcome, which is, “Help me improve my uptime, help me reduce my maintenance costs.”
Then it starts to extend from that use case to “Can you help me optimize my maintenance interval windows? If you can do that, can you optimize my parts and inventory? What other use cases can you help me optimize?”
Q: You mentioned a number of industries that are asset-intensive. Is that the low-hanging fruit for machine learning and analytics?
A: It’s the most relevant problem and the highest-value problem. It’s a tough problem that we’re good at. Where we see the value over time is that if you do predictions in the place where the value is high and the precision needs to be incredibly high, then you can do more volume of predictions in other areas.
In trucking, we started with predictive maintenance on a truck, which is not a very expensive asset. From there, you can do other use cases like fuel savings or driver safety violations because you have telematics data. You can start chasing smaller, higher-volume cases that still deliver business value. Over time we’ll get to a business decision automation system.
Q: What’s the use case for a truck?
A: Predictive maintenance on the truck itself, fuel consumption, driver safety. Over time it will get to other systems as people start adding cameras. The use cases will start adding up. We’re helping customers tackle those use cases, but industries like energy are big-equipment industries where it’s really about coverage models for all the predictive maintenance.
The sensorization of these networks is happening even faster. There are utilities that are still deploying new kinds of sensors. We were talking to a prospect that has 30,000 sensors, and they’re going to go to 120,000 next year. They’ll tell you they don’t use the data from any of the sensors today.
Q: Why are they putting in sensors that collect data that they’re not using?
A: Because they don’t have the infrastructure to do that. They believe they need to optimize the energy mix, so they need to understand how gas performs versus renewables. Many states have a renewable-energy requirement, and therefore they need to understand their energy mix. Take grids, for example: people are already starting to fly drones to collect point-cloud data, to understand vegetation and growth and where to take action, instead of having humans climb poles and do checks. But somebody has to process all that data.
Q: It sounds like we are entering a new age here.
A: For the past three or four decades, software has been about humans entering data. It’s no longer about humans entering data. Robots and industrial machines have been generating lots of data. That data is valuable. Executives love to say data is the new oil, and we’ll ask them what data they mean, and they have no idea. We’ll say, “Wouldn’t you like to have an idea what that data means?”
Q: Why do they say data is the new oil? That doesn’t make any sense. Data is valuable, oil is valuable. But the way that data moves is not the way that oil moves.
A: I don’t disagree, but it’s an analogy for an industry that understands that it’s as precious as the commodity itself. Now we’re entering a phase where people actually understand how you can unlock value from that data. We’re educating people that the most valuable data they have is the operational data they’ve been ignoring for the past decades.
Q: Have they been ignoring it or unable to harness it?
A: Both. The data gets beyond human comprehension very quickly. Only in the past few years have we had technologies like machine learning that can look at the entire volume of data and construct asset models. A modern power plant might have 3,000 sensors. How do you take in every bit of sensor data? In the wind industry, the standard used to be 10 minutes of data to get an alarm. Now we have subsecond data, which is incredibly valuable in machine learning.
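Why subsecond data matters can be shown with a toy calculation, under assumed numbers: a brief transient that disappears entirely when you only keep a 10-minute average is still visible in the raw samples.

```python
# Sketch: a short-lived spike that 10-minute averaging hides but
# sub-second data preserves. Values are illustrative.

# 600 one-second samples of, say, a vibration level; two samples spike to 9.0.
subsecond = [1.0] * 598 + [9.0, 9.0]

avg = sum(subsecond) / len(subsecond)  # (598 + 18) / 600, about 1.03
print(avg)             # the 10-minute average looks completely normal
print(max(subsecond))  # the raw data still shows the 9.0 transient
```

An alarm keyed to the averaged value would never fire here; a model fed the raw stream sees the transient, which is the kind of precursor signal the answer above is describing.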
Q: How much of the issue is taking all this data and then translating it into terms humans can understand?
A: It is a very hard problem to actually get insights from that data. It’s not as simple as “put it in the cloud and you’ll get the insight.” This is where building complex data-science engines comes in, which is what we’ve done. The reason human understanding is important is that at the end of the day, a human is in the loop to decide whether to act on an insight. You need to build human trust in the insights.