Description
Real-time data collection, processing, and visualization in industries as diverse as retail, energy, and product design are pushing companies to rethink their compute strategy. Join this session to learn how compute at the edge is a natural response, providing the flexibility of the cloud with the performance of local compute. Discover how the laws of physics affecting bandwidth, power, and storage make this more than a theory, and hear how HP is working with customers at the edge.
Key Learnings
- Understand key trends driving the future of computing.
- Explain what is meant by hybrid edge/cloud computing.
- Understand, recognize and predict new solutions that are being enabled by new compute models.
- Understand how to start applying hybrid edge/cloud compute to your own work.
Speaker
BRUCE BLAHO: Hi, everybody. This is Bruce Blaho, and welcome to this breakout session on the future of compute, where cloud meets edge. Today we're going to spend some time looking into the future. There have been a lot of really interesting trends in the industry and the world the last couple of years that are going to have a dramatic impact on the way we do computing in the future. So today we're going to spend a few minutes exploring some of those implications, polishing up the crystal ball, and taking a look at where we think the world of compute is going to head.
So a little bit about myself. My name is Bruce Blaho. I'm with HP Inc, so not Hewlett Packard Enterprise. In the split, HP Inc is the company that does the endpoint-side computing, or edge compute, and print, so we're inherently an endpoint computing IT company. So it makes a lot of sense that we'd be really interested in where personal computing will be going, and we'll also talk a little bit about computing for embedded solutions. Me, I'm the chief technologist for the Z by HP part of HP Inc. That's the advanced compute and solutions part of HP, and it includes the legacy workstation business, which is compute for professionals: movie makers, engineers, scientists. And my role as chief technologist is to sort of be the fire starter. My job is to create new initiatives, understand new emerging trends, both in technology and with customers, and help people get their work done.
A few things that I've worked on, a few examples of fires that I started: HP Remote Boost, formerly known as HP RGS, is a remote desktop technology which won a technical Emmy for HP last year, so that was exciting. In the past, when I first came to HP years ago, I worked on 3D graphics, GPUs, and drivers for GPUs. We did the first 3D graphics system to run on a Windows system, so that was exciting. I also worked on displays, and the DreamColor display was one of the projects I got started, which won an engineering Oscar for HP, so those are a couple of highlights in my career. I love Autodesk University, it's my favorite conference. I can't wait till we can get back to doing them live again. I've been a presenter at AU for, I can't remember how many years, at least 10 years. I've done most of the tech trends on the technical mainstage, as they used to call it, for HP. So I appreciate you checking out this breakout session, and let's dive in.
So the future of compute, what do I mean by that? There are two flavors of computing that I'm going to talk about today. On the left side of this slide is what we call compute for humans. This is probably the more familiar personal computing. This is a knowledge worker, say for instance an architect or a movie maker, somebody that sits down in front of, say, their personal workstation today to create things, to get their job done, to make buildings and movies. Traditionally, that's been done with a PC, maybe a powerful PC like a workstation, and the applications, historically, were tightly bound to the operating system and the device they were running on, and that's why it's a personal computer, it's yours. I'm going to talk today a little bit about the future trend we see toward more distributed compute. And when I say distributed, I don't just mean let's run everything in the cloud. I think we've experimented with that a bit, and there are some issues with running everything in the cloud.
So we'll talk about that, a little bit of myth busting, a little bit of heresy on my part. We'll talk about why I don't think the answer is everything stays on your PC. I also don't think the answer is everything moves to the cloud, and I'll give you some data to back up those assertions. The second part of the session today is going to be about compute for machines. If you look at, say for instance, our business at HP, historically the lion's share of it was compute for humans, but there's also a significant amount of work done in embedded solutions. So today, that might be taking a workstation and embedding it in an MRI or other medical imaging machine to create a solution. There's powerful compute going on, but there's not a person sitting down in front of the computer using it; it's embedded, or in the future we may call that ambient.
You know, there's compute that's powering the experiences or the services that you're using, and there's quite a revolution going on in that space as well. So we're going to talk about that. The example that I've shown here, for instance, is sort of an Industry 4.0 example. Look at how manufacturing is done today. It's static and preprogrammed. We use computers, of course, and we have for a long time, but they're static, they're pre-programmed, and it's human-driven insight that tells us how a machine is performing and when it needs maintenance and things like that. Well, the way the world is moving is to more intelligent and autonomous machines, with machine-driven and AI-driven insights to tell us things like: this machine is performing well; this machine doesn't feel well, it's not performing well today and probably needs to be scheduled for maintenance; or this machine is about to fail, perhaps catastrophically, and you need to shut it down now.
So it really is a complete transformation of all of our workflow processes, both for people and for automated processes. That's the gist of what we're going to cover today. So let me start with some of the trends that are driving this. If you've seen me present before, you may have seen this slide before, or one like it. My apologies, but it's so fundamental that I always feel like I need to start here. And this is a little bit of the myth busting that I promised I would get to. In the last several years, and actually decades really, the production of data in the world has been growing at a very rapid pace. In fact, I mentioned working on the RGS remote technology. One of the things that drove that technology was the fact that, at the time, most remoting systems moved data, but when we looked at the explosive growth of data creation and the size of models, we realized it was going to make more sense to move pixels going forward.
At the time that was rather radical; now it seems obvious. And that was a long time ago, decades ago. And that trend has continued. We see exponential growth of data, and most critically, the growth of that data far outstrips our ability to move it. So in this chart, the tall bars, and this is from IDC's Data Age 2025 study, show the amount of data being created each year from all sources. That's everything from video cameras, security cameras, factory machines, and IoT data to the PowerPoint slides that I created; everything in the world that's being created goes into that. And there are a couple of things to take away from it. One is that it's exponential growth, and two, the absolute value is enormous. The y-axis in this chart is measured in zettabytes, so for 2025 it says that we anticipate creating 163 zettabytes of data.
How much is that? Well, that's over 160 million petabytes. A zettabyte is 1,000 exabytes, an exabyte is 1,000 petabytes, and a petabyte is 1,000 terabytes, so this is a huge amount of data. The other thing to take away, if you look at the little blue bars down at the base of each of those, is the available IP bandwidth. This was from a report by Cisco: their estimate of the total bandwidth that would be available globally. So yes, this includes 5G, and it includes what we think comes after that. I think there's been a perception, at least that I've heard for the last several years, that with the advent of 5G you're going to have so much bandwidth that we can just move all the data to the cloud; I'll get out my phone and I can just do everything that I need to.
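For a sense of scale, those unit conversions are easy to check. A quick sketch, using the 163 ZB figure from the IDC projection quoted above:

```python
# Unit conversion for the IDC Data Age 2025 projection (163 ZB).
# 1 ZB = 1,000 EB, 1 EB = 1,000 PB, 1 PB = 1,000 TB.
PB_PER_ZB = 1_000 * 1_000          # petabytes per zettabyte
TB_PER_PB = 1_000                  # terabytes per petabyte

data_zb = 163                      # projected data created in 2025
data_pb = data_zb * PB_PER_ZB
data_tb = data_pb * TB_PER_PB

print(f"{data_zb} ZB = {data_pb:,} PB = {data_tb:,} TB")
# 163 ZB = 163,000,000 PB = 163,000,000,000 TB
```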
And that's one of the big myths I want to blow up. The fact of the matter is, most of the data in the world is created at the edge; over 80% of it is created at the edge, meaning outside of a central data center, outside of the cloud. Yes, there's lots of data that's created in the cloud, but most of the data is created far, far from cloud data centers, and there's just not enough bandwidth to move it. And so that means most of that data will never make it to the cloud; it can't, in terms of bandwidth. On the right side of the slide, it's fun to take a look at this through a couple of different lenses. If you say, I don't believe you, we'll do 6G and 7G and maybe we'll get faster, then the other way to look at this is in terms of cost, dollars, and power.
If you look at how much energy it takes to move, say, a gigabyte of data, it's actually pretty significant, and so is the cost of that energy. Looking out to 2025, if we wanted to move all that data to the cloud, it would take $92 billion, which is comparable to the gross domestic product of a mid-sized country. If you look at the amount of energy needed, that's 835 terawatt-hours, which would be a meaningful share of the world's entire generating capacity, not to mention one tremendous carbon footprint as well. So from an economics standpoint, from a power-feasibility standpoint, and not least from a sustainability and climate standpoint, it takes a lot of power and money to move all this data around. I think going forward the inescapable conclusion is that we're going to have to move the compute to the data. We can't afford to keep moving all this data around; it's not sustainable.
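Those totals can be sanity-checked with a back-of-envelope calculation. The per-gigabyte energy and per-kWh cost below are illustrative assumptions chosen to be roughly consistent with the figures in the talk, not measured values:

```python
# Back-of-envelope check of the 2025 "move it all to the cloud" figures.
# Assumed (illustrative) constants:
WH_PER_GB = 5.1        # watt-hours to move one gigabyte end to end
USD_PER_KWH = 0.11     # cost of that energy

data_gb = 163 * 1_000**4            # 163 ZB expressed in gigabytes

energy_wh = data_gb * WH_PER_GB
energy_twh = energy_wh / 1e12       # watt-hours -> terawatt-hours
cost_usd = (energy_wh / 1_000) * USD_PER_KWH

print(f"~{energy_twh:,.0f} TWh, ~${cost_usd / 1e9:,.0f} billion")
# lands in the same ballpark as the 835 TWh / $92B figures above
```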
And so then the question is, what are you saying, Bruce? That we're going to do everything at the edge? Well, no, of course not. Cloud is doing quite well, is growing rapidly, and will continue to. Cloud is wonderful. We'll do lots and lots of things in the cloud, just not everything. So then the question is, OK, how do I know what should run at the edge and what should run in the cloud? I think the cloud will continue to be the hub, the center, the source of truth for your data. But for where you actually do the computing, especially on data that is born at the edge, say from video cameras, I think we'll apply AI, data science, analytics, and other processing at the edge to get meaning from that data, and then store the results in the cloud.
And so if you look at the kind of compute jobs that need to run at the edge, here's one way to look at it, what we call the laws of the edge. There are laws of physics, which I was just talking about. Do I have the bandwidth? Is the latency acceptable? Can I afford to wait to move that data to the cloud? There are laws of economics that say it's expensive. I was just talking about how it costs a lot of money to move petabytes of data. In addition to taking a long time, there's the cost of moving it, the cost of storing it, and the cost of egress out of the cloud. So there's a lot of cost and economics driving this, and in some instances it may be dramatically less expensive to do your computing at the edge.
And then finally, there are the laws of the land. There are privacy laws, as well as corporate rules, that mean some data can't move outside of, say, a firewall. Private patient data, for example: you may not be allowed to move it out of the medical facility. You may not be allowed to move certain data across country borders. And there may be some data, let's say next year's great hit movie, that you don't want, or don't allow from a corporate point of view, to get outside of your firewalls. These are all factors that would drive you to say, hey, I need to do some of this computing at the edge.
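One way to read these three laws is as a simple placement test: if any of them binds, the workload is a candidate for the edge. A minimal sketch of that idea; the field names and example numbers are illustrative assumptions, not HP product logic:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """Illustrative attributes of a job operating on edge-born data."""
    data_gb_per_hour: float       # data the job ingests
    uplink_gb_per_hour: float     # bandwidth available to the cloud
    max_latency_ms: float         # latency the use case can tolerate
    cloud_round_trip_ms: float    # measured round trip to the cloud
    cloud_cost_per_gb: float      # transfer + storage + egress
    edge_cost_per_gb: float       # local processing cost
    data_restricted: bool         # privacy / sovereignty / IP rules

def should_run_at_edge(w: Workload) -> bool:
    # Laws of physics: not enough bandwidth, or too much latency.
    physics = (w.data_gb_per_hour > w.uplink_gb_per_hour
               or w.cloud_round_trip_ms > w.max_latency_ms)
    # Laws of economics: cheaper to process locally than to ship it.
    economics = w.cloud_cost_per_gb > w.edge_cost_per_gb
    # Laws of the land: the data isn't allowed to leave.
    land = w.data_restricted
    return physics or economics or land

# A video-analytics camera feed: far more data than uplink.
camera = Workload(400, 50, 100, 80, 0.09, 0.02, False)
print(should_run_at_edge(camera))   # True
```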
So one of the exciting things about this new, what I'll call hybrid edge/cloud computing -- and I have a tendency to just say edge, forgive me; every time I say that, know that what I really mean is hybrid edge/cloud, because things will run in the cloud, things will run at the edge, and they need to talk to each other, it's just a bit of a mouthful so I tend to shorten it to edge -- what's really exciting to me is that this enables some real breakthroughs in computing. I'll talk a little bit about breakthroughs from three markets. These are three that are adopting the fastest; there are many more out there. Creative, retail, and manufacturing; yes, there are many others, like health care and smart cities, but these are the three I'll focus on today. For creative workflows, it's all about flexibility.
I'll talk in a minute in more detail about the ability to transition from having my own finite set of compute resources, say my PC, to being able to access distributed compute. Compute services are happening all over the world, and I can tap into those, so it allows me to scale the compute that's available to me personally, in an on-demand way, and tap into many computing resources I can't get to today. It also allows us to elevate the abstraction layer. When I can access remote compute seamlessly, with a great experience, and don't really know or care where that computing is happening, why would I limit myself to saying, oh, I'll take a PC with these specs? Why not raise the level of abstraction and say, hey, here's the job I'm trying to get done, is there something curated for me that will let me execute this workflow more efficiently? We'll talk about that on the next slide.
Retail is very exciting. We've already seen some examples of new solutions like frictionless checkout systems. Amazon made these famous with their Go stores. We're seeing a number of startups rush in now for the rest of us, for the rest of the world, to be able to automate and simplify the checkout process. Certainly the events of the pandemic have accelerated this. We're all even more excited than we used to be about being able to walk into a store, get what we want, and walk out again. I never enjoyed waiting in line, and now it's an opportunity to have less interaction during this time of COVID. Inventory and loss prevention is another great area. Billions of dollars are lost every year to theft and shrinkage, and management of inventory has never been more challenging.
And so there are ways to automate this, for instance using cameras pointed at store shelves to raise an alert: I'm almost out of Diet Coke, and it's time to go refill those shelves. And then the third area that I'll talk about is manufacturing. I mentioned this a little bit earlier.
So Industry 4.0 covers a whole host of intelligence and automation, bringing AI to bear. A couple of examples here. Visual inspection is a good one, for when I'm producing goods on a production line. Computer vision has been around for a while, but with the advent of deep learning in recent years it has become significantly more accurate, with fewer false positives, and much more robust. So that's really going through a revolution.
Predictive analytics is another one that I mentioned earlier. We can now use machine learning to monitor the health of machines and their capabilities. When you put these together, what I really get excited about is the overall business outcome. For instance, look at the overall effectiveness of factories today through one of the standard measures, OEE, overall equipment effectiveness. That tends to be in the 45% to 50% range, meaning that out of a 24-hour day, only about half of those hours are spent actually producing usable goods. It's a combination of uptime and the quality of what's being produced. With the combination of things like visual inspection, predictive analytics, and Industry 4.0, we think we can get that up to over 80%, which would be world class. Imagine going from 50% output to 80%. This is revolutionary, so it's pretty exciting.
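OEE is conventionally computed as the product of three factors, availability, performance, and quality, which is why the overall number sits near 50% even when each factor looks respectable on its own. A quick illustration with assumed factor values:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall equipment effectiveness: the product of the three factors."""
    return availability * performance * quality

# Assumed, plausible factors for a typical line today: each looks
# decent alone, but the product lands in the ~45-50% band.
today = oee(availability=0.75, performance=0.70, quality=0.95)

# World-class targets push every factor up, and the product follows.
world_class = oee(availability=0.90, performance=0.95, quality=0.99)

print(f"today: {today:.0%}, world class: {world_class:.0%}")
# today: 50%, world class: 85%
```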
So, having introduced these, let me now go into a little more depth on both compute for humans and compute for machines. If we look ahead for personal computing, or compute for humans, I'll bring out a couple of trends that we think are going to be most significant. The first is the idea I talked about, that we're moving from my PC to my distributed compute resources. And it's kind of how things work on your smartphone today, right? Your phone is really a portal to the world. You're opening apps and websites, and a lot of the apps are basically a fancy front end to a website behind them.
You don't know where those are and you don't care. You're tapping into servers all over the world that are working on your behalf, and you don't really know or care where that's happening. And yet today, for a lot of personal computing, I'm either running everything on my endpoint device or I'm running it in the cloud. We think there are many better options. So in the future, your endpoint device becomes more of a portal to compute services. You'll have software that can help you, things like brokers and remote connectors, that can, in a seamless and invisible way, get you connected to the right resource for the job that you're doing right now and for your context. For instance, hybrid work has really accelerated this.
So now the answer, in terms of what compute I should use, may be different. If I'm sitting at my desk at work like I used to do a lot, and maybe I have a very powerful desktop computer right there, well, I'd be crazy not to use that. But now if I'm working at home for the next few days and I have either a very low-powered laptop or maybe a Chromebook or something like that, well, now when I want to, say, run Revit or some other large application, working on a really large data set, maybe the machine I'm sitting in front of at home isn't powerful enough. So I need to connect to something. And where would I connect? Well, maybe I'd connect back to that powerful computer that's at work. We think over time there will be more and more choices, to where maybe I have edge-based compute resources. They could be part of my employer's IT solution. They could be in a corporate data center, or in a hosted facility. As we get away from computers and toward computing, we think we'll see more and more compute facilities available that can provide, perhaps, specialized compute. You're doing video editing today? OK, here's the right resource to use for that. You're doing your email? Fine.
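The broker idea can be sketched as a simple matching of the job at hand to the best reachable resource. Everything here, the resource list, the job names, the selection rule, is a hypothetical illustration, not an actual HP broker API:

```python
# Toy compute broker: route light jobs to the endpoint, heavy jobs to
# the most capable remote resource currently reachable.
RESOURCES = [
    {"name": "home laptop",        "gpu": False, "cores": 8},
    {"name": "office workstation", "gpu": True,  "cores": 32},
    {"name": "edge cluster",       "gpu": True,  "cores": 256},
]

HEAVY_JOBS = {"revit", "render", "train-model"}   # assumed job labels

def pick_resource(job: str) -> str:
    if job not in HEAVY_JOBS:
        return "home laptop"   # email etc. run fine on the endpoint
    # Heavy jobs: pick the biggest GPU-equipped machine we can reach.
    capable = [r for r in RESOURCES if r["gpu"]]
    return max(capable, key=lambda r: r["cores"])["name"]

print(pick_resource("email"))   # home laptop
print(pick_resource("revit"))   # edge cluster
```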
And by the way, even though I talked about the endpoint as a portal, that doesn't mean it has to be a Chromebook or a thin client. That's not what I'm saying. Endpoint devices are very capable, and they're getting more capable all the time. Performance continues to go up, and performance per watt goes up. So there are very powerful endpoint devices available, and we'd be crazy not to take advantage of that. The endpoint device is still a very viable compute source in this view of the future. And of course the public cloud is there as well. It provides, I think, the ultimate backstop in terms of scaling, and for data that was created in the cloud, it's the perfect place to work. So we think we'll go from the one or two resources I use today to many that I tap into in a very seamless, invisible way.
The other concept I want to talk about is this idea of the abstraction layer. Today, when I want to shop for a PC, it's like, well, I'll take an Intel Core i7 processor or an AMD Ryzen, and I want 32 gig of RAM, and I want this GPU, and I want a 1 terabyte solid state drive. Well, when we get to a world where I'm accessing all these compute services remotely, why would I need to continue to do that? And that's typical, from my observations, of any process or service that digitizes over time: in version one, we reproduce everything that we did in the last generation.
So right now I go get an Amazon instance and it's like, oh, what's the CPU, how much RAM do I have, how much storage do I have? And it doesn't seem to me like that makes a lot of sense going forward. If I'm an artist or an architect or an engineer, I really don't care about those things, unless I like to geek out on technology. For the most part I've got a job to do, or I'm a creative person; I don't really know or care about all this IT stuff. So why not move to selling compute services that are closer to what customers want and what people use in their workflow? Curated bundles, for instance, where the correctly configured hardware is coupled with the software that I'm going to use: I've preloaded the right Autodesk applications that I'm going to need to get my job done.
But we don't have to stop there either. We can raise the level again, up to more managed solutions, by which I mean maybe there are multiple machines available to me, or multiple different pieces of hardware. For instance, if I'm a data scientist, I can edit my Jupyter notebook just fine on my endpoint device. But now I want to train a model. Why limit myself to what's on my endpoint device when perhaps there's a whole departmental cluster of machines available to go work on that training? Or if I need to do a major render, why not go take advantage of that? Maybe I go all the way to the cloud for it, but maybe I don't.
And then ultimately we can raise it up even further. We foresee selling whole workflows as a service. I think it makes a lot more sense to say, hey, I'm a video editor, and here's the rough sizing of my job. I'm a hobbyist; I'm going to edit 10-minute home movies. Or I'm a professional; I'm going to edit 8K feature-length films. And basically size the solution from that: provide the multiple pieces of hardware and software you'll need, and the services that go with them. If you're, say, a movie maker, you're probably going to want to share your work with somebody, so perhaps I include a file-sharing service: solutions for collaboration, solutions for sharing. We think that makes a lot more sense than just saying, I'll take one of those instances.
And then finally, let me wrap up with the future of compute for machines. I talked before about this notion of embedded solutions, say taking a workstation and embedding it in a medical imaging device. Well, in our vision of the future, we see multiple machines being used to run solutions for whole facilities. A cluster of machines, for instance, or multiple clusters, could run an entire smart factory. Or perhaps a single cluster of machines could run an entire retail store, or a hospital. Today, when you look at a lot of these solutions, like the ones I mentioned earlier, they all tend to come with their own compute. We think that'll change over time, right? Today it's like, here's my frictionless checkout system, and here are the computers to run it.
Here's my inventory management system, and here are the computers to run it. We think we'll get to the point where it will become very common for these kinds of facilities to have a little mini data center, sort of edge compute resources available to them, which will run a whole set of services on top. It seems that this is going to follow the cloud model. So this is really about creating the equivalent of a local cloud that's running on prem. It could be near prem, but in most cases so far, for latency reasons, a lot of these need to be on prem. So for instance, you own a retail store with frictionless checkout: I'll have cameras that are tracking purchases.
There are multiple customers whose purchases need to be tracked, and the store may be doing other things, like I mentioned, such as inventory management. And so we think it makes sense to run these on -- well, today, the state of the art for that would be a Kubernetes cluster. We'll see if that evolves over time. But creating that edge stack, if you will, based on a cloud-native way of doing things makes the most sense to us, and then stacking up multiple applications and services on it.
So on the left hand of the slide, we think this is what the stack looks like on each of those machines in the clusters. And then, most importantly, I think what's really going to be interesting is how this is managed, because it really is a new form of compute. If you look at how personal computers are managed today, or at how servers in data centers are managed, this edge compute beast really is something quite different, right? In a data center I have full-time IT staff, and a relatively small number of data centers with a very large number of machines. In this edge world, this future of compute for machines, we see many facilities with relatively small clusters.
For instance, for grocery stores, I may have hundreds, thousands, even tens of thousands of stores in my fleet that I need to manage, and maybe each of them has half a dozen machines. So it really isn't like a data center anymore, and I can't use the same management tools. I think there will be a premium on understanding how to automate the management of these fleets of many clusters, both the clusters themselves as well as the applications that run on them. And we think when that can happen, it will really be a new age. Imagine if we'd had, say, this facility before the pandemic hit. Say you had these edge clusters running in your grocery stores, and you were using them for inventory management and frictionless checkout.
Now the pandemic comes along. It would have been very straightforward to say, hey, let me add some new services that can run on this, that will do things like count the number of people in the store, measure the distance between them. Are they wearing masks or not? Do they have a fever? You could imagine having that kind of flexible facility, the ability to quickly generate that kind of software and push it out to all of your stores through, say, a cloud-based portal. That really would have been quite fantastic as we went into the pandemic. So that's the future that we see.
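That "push it out to all of your stores" step is where the cloud model shows most clearly: a rollout loop over a fleet of small clusters rather than a hands-on install at each site. A minimal sketch of a wave-based rollout; the store names, service name, and the deploy call are all hypothetical stand-ins for real Kubernetes-based tooling:

```python
# Toy fleet rollout: deploy a new service to thousands of store
# clusters in waves, halting if any wave reports a failure.
def deploy(cluster: str, service: str, version: str) -> bool:
    """Stand-in for a real deployment call; always succeeds here."""
    return True

def rollout(clusters: list[str], service: str, version: str,
            wave_size: int = 100) -> int:
    deployed = 0
    for i in range(0, len(clusters), wave_size):
        wave = clusters[i:i + wave_size]
        results = [deploy(c, service, version) for c in wave]
        deployed += sum(results)
        if not all(results):   # stop early rather than break the fleet
            break
    return deployed

fleet = [f"store-{n:05d}" for n in range(10_000)]
count = rollout(fleet, "occupancy-counter", "1.0.0")
print(f"deployed to {count:,} of {len(fleet):,} clusters")
# deployed to 10,000 of 10,000 clusters
```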
So with that, I'm going to wrap it up. Thank you very much for taking a few minutes to listen to this. Again, it's Bruce Blaho at hp.com. Feel free to drop me a line if you have questions or interest, and feel free to come back and take a look at this whenever you like. And, like I said, questions, comments, just drop me a line. Thanks again.