Description
Key Learnings
- Learn about advanced digitally driven workflows using genetic algorithms for rapid design iteration, optimization, and decarbonization.
- Learn about the industrialization of data-center design to maximize efficiency and achieve mass customization with economies of scale.
- Learn about future trends in data center design, including intensification and adoption of new technologies to reduce water and energy consumption.
Speaker
- Jaimie Johnston MBE: Jaimie joined Bryden Wood – an integrated practice of architects, analysts, engineers, creative technologists and industrial designers – shortly after its formation in 1995. Jaimie leads the application of systems to the delivery and operation of high performing assets. This includes design for manufacture and assembly (DfMA) solutions and new data-led, digital workflows for government and private sector clients in the UK, US, Europe and Asia. Jaimie was the co-author of the benchmark strategy documents, ‘Delivery Platforms for Government Assets’, and ‘Platforms: Bridging the gap between construction + manufacturing’. These have been adopted as a foundation for the UK Government’s initiative to create a more productive, value-driven construction sector. Jaimie is the Design Lead for the Construction Innovation Hub, which was established to drive innovation and technological advances in the UK construction and infrastructure sectors. In June 2021 Jaimie was awarded an MBE for Services to Construction.
JAIMIE JOHNSTON: Hello, everyone, and thanks for joining this session. My name is Jaimie Johnston. I'm a Board Director at Bryden Wood. We're an integrated design practice headquartered in London, but working around the world. I'm going to be talking about advances in data center design and delivery, a topic that is driving quite a few initiatives touching the entire industry.
So before I get into the depth of data, I'm going to start quite broadly. This issue has been talked about quite a lot at AU over the years. It's been mentioned by Andrew Anagnost in some of the opening sessions for a number of years now, this concept that the way we traditionally design and deliver buildings is no longer keeping up with the needs of the people that use them and live in them and work in them and heal in them.
So over the next few years, given population growth and people moving and well-documented things around the climate emergency, we need to fundamentally change the way we design and deliver our assets. We need to industrialize construction and data centers I think is a good example of where this is really starting to happen. And I think there are lessons happening in the data center market that we're already seeing starting to move out into some of the other sectors.
So one of the things we're seeing-- we work a lot in data centers, but we do work in all sorts of other sectors. There's something really interesting, I think, happening at the moment where what used to be traditional construction sectors are now starting to become much more interdependent, and they're sort of bleeding into each other much more. So I'll give you some examples. So data centers is the topic in the bottom right-hand corner.
Over the last few years, we've seen life sciences and pharmaceuticals becoming increasingly linked to how they use data. So some of the new drugs that are coming out are being discovered through artificial intelligence. Some of the methodologies, the modeling that's happening, can only happen because of this reliance on data. Data centers themselves are becoming increasingly energy intensive and are at the point where they can't be built at the pace we need to build them because the energy infrastructure simply can't keep up. There isn't enough power in the existing grid to power these data centers.
And increasingly, there's obviously a massive drive now to decarbonize not just the grid but the data centers, the buildings, so not just the embodied carbon, but the operational system. So we're now starting to see this network of different sectors that are very, very interlinked. So as I talk about data centers, you'll see how some of the things that are affecting them are bleeding into some of the parallel industries.
And I'll give you a quick example of the scale at which this is happening. There's a very important thing in pharmaceuticals around how proteins fold. So these are the things that make up the parts of your body. Targeting drugs towards these things is a very, very important part of pharmaceuticals. Understanding how proteins fold has taken a huge amount of effort over the last 60, 70 years.
Over 60 years, humans did this manually and determined the structures of about 170,000 of these things. Google DeepMind launched a piece of AI that was specifically geared at unraveling proteins. And in a period of about six years, we went from 170,000 being solved to 200 million of these things being solved. So the scale at which this has happened has been astronomical. And it's unleashed a whole world of new types of medicines, more targeted medicines.
And so I think there's a really interesting thing where a piece of AI in a data center is fundamentally changing the way that we deliver health care. So the link between the kind of digital and physical is very, very real, and it's happening at pace now.
And one of the things that, as I mentioned, is troubling all of the data center providers is these sorts of numbers. Back in the day, we used to measure data centers in megawatts. Currently, we still measure data centers in the tens of megawatts, so 80, maybe 120 megawatts for a data center. All of the big players are now talking about gigawatt, 2-gigawatt data centers for AI. The intensity of the power these things need, the amount of capacity, is an order of magnitude bigger than anything we've seen before.
And you can see here, this is analysis from the International Energy Agency. They're not making long-term predictions, but they're saying that over just the next two years, the amount of electricity that we need for data centers will double. If you look at this curve, it's been on a fairly steep increase for decades. Over the next two years, that amount of electricity is going to double, at a time when we can't physically provide enough electricity on the grids.
The UK has come to a similar conclusion. Our National Grid has done some studies on this. They reckon that within the next decade, the amount of energy we need for data centers will go up by six times. So we're looking at a point where these things are escalating so quickly that this will become a blocker on any further development in data centers, and therefore AI, and therefore all the things that are attendant to it around pharmaceuticals, around all the different use cases we're seeing.
And one of the things that's interesting that's come off the back of this is these companies recently put out an open RFI. So Google, Microsoft, and Nucor-- so if you don't know Nucor, they are the biggest steel producer in the US-- are joining forces to start to propagate new technologies, so new advanced nuclear power stations, geothermal, clean hydrogen. So this would have been unthinkable even a few years ago that, firstly, two big competitors in this space, Google and Microsoft, are teaming up, but also joining a steel manufacturer.
And the reason for this is that steel is, obviously, a massive source of embodied carbon. The only way we can decarbonize steel is to use electric arc furnaces. The only way we can do that is by having readily available clean power. So again, when I talk about the fact that all of these different sectors are bleeding into each other, the idea that two big tech companies and a steel manufacturer have teamed up to propagate a market for small modular reactors, so that they can actually generate their own power, is something that wouldn't have been thinkable 6, 7, 8, 10 years ago.
And suddenly, we're at a point where these big tech companies are having to step into the utility space. They're having to step into the big infrastructure space simply to keep up with demand. So again, all of these things are starting to play into each other now. And I think we're going to see some very, very interesting changes in the way we think about technology in these companies over the next 5, 10 years.
So this is the point that's been made a number of times at AU. The only chance we've got to keep up with not just data centers but lots of these other sectors is finding better, faster ways to design, to deliver, to industrialize, to do things at scale. The old idea of developing a one-off project, creating a new team, developing a single project, building it bespokely, using the old manual processes and moving on to the next one can't possibly keep up now.
And so we're seeing a massive shift that data center clients seem to be driving around standardized, repeatable designs, using kit of parts, using that repeatability to automate some of the processes around design, delivery, manufacture. And we're starting to see some dramatic increases in the speed and the way in which these things are put together that I think are going to start to ripple very quickly because of that sort of bleeding of sectors together into all sorts of other walks of life.
So the fundamental driving point is that if we don't do this, then we can't keep up with the needs of humanity. It sounds quite dramatic to say it, but these sorts of initiatives are the way we will save the planet, and they're the way we'll save the people on it. Or to put it another way, if we don't adopt these technologies, then we won't possibly be able to build enough data centers, health care, roads, education, and living accommodation for the 4 billion people that are on the way.
The only chance we've got to address those big, big global challenges is through these sorts of approaches. So hopefully some of the things you'll see from this talk will give you an insight into things that you might start to deploy in all sorts of other sectors.
So we developed a technique, initially in pharmaceuticals, but we now do it in health care, we do it in data centers, we call it chip thinking. It's a way of breaking down projects, breaking down designs into manageable elements, repeatable elements that we can start to configure and start to speed up this process.
So the analogy was chips on a motherboard, that if you had a collection of these pieces, you could place them together. You could very quickly develop a solution, see the implications, and then start to drive some of these processes much, much, much quicker.
And so one thing we found a long time ago is that whenever you deal with these complex building types or complex client organizations, there's lots and lots of different interested parties. They all have a slightly different area of expertise. They very often don't speak the same language. They very often have competing interests. And trying to develop designs which optimize for multiple stakeholders simultaneously is very, very hard.
So what we started to do was to break down a building or a process into groups of spaces, or groups of steps of a process, that always happen together, and pull everyone's knowledge about that particular topic or those particular spaces into the chip, so that each chip became a repository for every piece of business intelligence that anyone had around a particular topic. And then as you start to plug these chips together, you start to get a much more holistic view of how these things work.
So the example I use in a health care context is you never get an operating theater that exists on its own. It always has an anesthetic room and a scrub room and some mechanical, electrical equipment, and nursing staff, clinical staff, consumables, clean and dirty circulation. So a theater is never a one-off space. It has all of these things that are linked to it.
Lots of different experts have lots of different inputs into that. So a surgeon has very, very specific understanding of a theater. The cleaning staff have a completely different view. The facilities maintenance staff have a different view. So by aggregating all of their knowledge into these repeatable chips, we can then start to get much, much more broad engagement. Everyone knows where to place their knowledge, and we start to consider things much, much more holistically.
So typically, the process looks like this. We'll start to develop the building blocks, the chips that capture particular spaces. Each of these on the top right-hand side-- that's a pharmaceutical example-- each one of those little blobs represents a chip, behind which sits an awful lot of data. But you can imagine at that level of modeling, it's very, very quick to add chips, to remove chips, to try different layouts, to try different flows.
So we use these geometrically very simple building blocks with a vast amount of data that sits behind them. We can very, very quickly develop and test lots of different scenarios, and they spit out all of the data. In the top right, we would do 40, 50, 100, 200 different iterations of using this to plan a building, to plan a facility. Every time we generate one, it automatically tells us the costs, the dollars per megawatt, the OPEX cost, the number of staff, the schedule, all of these sorts of things.
So we use the agility of these chips to test lots and lots of different things. And we use the data that sits behind them to get very highly optimized business decisions that balance the needs of lots of different people. So one user group will want the biggest facility possible because it gives them lots of future flexibility. Another user group will want the smallest possible facility because it's cheaper and quicker to build, and easier to manage. So by balancing all of these different competing needs, we can start to converge on very highly optimized designs that do everything that everyone in the business needs.
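The roll-up described above can be sketched in a few lines. This is a minimal illustration only: the chip types, and the cost, staff, and schedule figures, are invented for the example, and real chips carry far richer data behind them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chip:
    """A repeatable group of spaces plus the business data behind it.
    All figures here are illustrative, not real client data."""
    name: str
    capex_usd: float      # capital cost of the chip
    opex_usd_yr: float    # annual operating cost
    staff: int            # headcount needed to run it
    build_weeks: int      # schedule contribution

def summarize(layout):
    """Roll a configuration of chips up into top-level metrics."""
    return {
        "capex_usd": sum(c.capex_usd for c in layout),
        "opex_usd_yr": sum(c.opex_usd_yr for c in layout),
        "staff": sum(c.staff for c in layout),
        # assume chips are built in parallel, so schedule is the longest chip
        "build_weeks": max(c.build_weeks for c in layout),
    }

# Hypothetical chips for a small facility (health care flavored, per the talk)
theatre = Chip("operating theatre", 2.5e6, 4.0e5, 12, 30)
scrub   = Chip("scrub + anaesthetic rooms", 0.6e6, 0.8e5, 2, 18)
plant   = Chip("mechanical/electrical plant", 1.2e6, 1.5e5, 1, 24)

option_a = [theatre, scrub, plant]
print(summarize(option_a))
```

Because each option is just a list of chips, adding, removing, or swapping chips and instantly re-reading the top-level metrics is what makes testing 40, 50, or 200 layouts cheap.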
So this is a data center example. We will typically have the big block, which is a typical configuration for a data center that will be made up of smaller building blocks. So in this case, the data hall, the gantry, the mechanical, electrical services. Those, in turn, will be made up of smaller things. So the generators, the air handling units. And eventually, you get down to the individual kits of parts.
So we'll have developed solutions at each one of these scales. The fact that they're nested means that you can make choices or make selections at the different scales and understand the implications right up at the top. So as you're choosing something in terms of the individual electrical equipment, it's automatically telling you something at the top level about what that means for the overall data center, the running, the cost, the dollars per megawatt, those sorts of things.
So rather than just design the building as a blank sheet of paper, we design it as a series of configurable components that are made up of configurable components. So it's like designing LEGO building blocks. You don't just start with a blank sheet of paper and a sketch. We start with the intent of being able to turn these buildings into repeatable components that we can configure in lots of different ways.
So once we have those building blocks, we can then test different configurations. So very often, clients will have their idealized sort of perfect data center that if you had no site constraints, you had no local context, this is what you would build. Almost certainly that's never true. And so you always have existing buildings. There are utilities underground. There are foundations in the way. There's the shape of the site itself.
So very often, you have to be able to have not just the kind of one-off idealized data center but a way of configuring these spaces to suit different locations, different jurisdictions, different building codes and things. So having developed our building blocks, we'll then start to try lots of different configurations to make sure that we have the right level of flexibility to suit the client's needs while still maintaining the repeatability of the components and of the chips.
Also, like an automotive manufacturer, we tend to offer mass customizations or changes to specification. On this sort of diagram, you can see that for one client, we'll typically have a range of different configurations of data centers, depending on whether it's single or multi-story, whether the data halls are back to back, whether it's a single-sided, single-aspect unit, these sorts of things.
So we'll have different configurations of the data center itself. And then we'll have bolt-ons as part of the solution. So do you want to have a green roof to increase the sustainability on the biodiversity of the site? Do you want to have PV on the top to do some power generation? What are the kind of customizations or increase in specification things that you might need on different sites?
And again, they come as part of the preconfigured kit of parts, so it doesn't become a special, it doesn't become a bespoke item. It becomes a configuration again of preunderstood, preconsidered elements that a different client might need depending on the different geographies they're building in, the different environmental conditions, the different jurisdictions, these sorts of things.
And then we document all of them as a series of very well-defined links or choice points. So at the very left-hand side of this diagram is a data center that will be made up of these key components of the data hall, the gantries, these sorts of things, which, in turn, are made up of the racks and the distribution made up of these, the individual component parts.
So we'll have mapped all of the different parts of the design and the way in which they are interlinked. So if you pick a particular type of cooling system, it will automatically pick the right sorts of components. If you have air cooled, it will automatically prepopulate the downstream tree elements or switch off the ones that you don't have. So this becomes a very clear, concise way to understand the logic by which you put a data center together.
So in a normal building, this doesn't exist through a traditional design process. There is no logic that underpins how all these different choices are made. We're able to literally link every single component from the data center right down to the valves that we've selected and understand the implications through this tree. So this logical structure then underpins a whole load of other things that we do around design automation, around the kit of parts, around procurement supply chain.
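That choice-point tree can be sketched as a nested lookup. The assembly names, the branches, and the `expand` helper here are all hypothetical, but they show the idea: picking air cooling automatically populates one downstream branch of components and switches off the other.

```python
# A simplified component tree: each assembly lists its sub-assemblies,
# and choice points (here, the cooling type) switch whole branches on or off.
# Names and structure are illustrative, not a real data center breakdown.
TREE = {
    "data centre": ["data hall", "gantry", "electrical services"],
    "data hall": ["rack row", "cooling"],
    "cooling/air": ["air handling unit", "supply damper"],
    "cooling/liquid": ["cooling distribution unit", "isolation valve"],
}

def expand(node, choices):
    """Walk the tree, resolving choice points, and return the leaf parts."""
    if node in choices:                     # a choice point: follow the branch
        node = f"{node}/{choices[node]}"
    children = TREE.get(node, [])
    if not children:
        return [node]                       # a leaf: an orderable component
    parts = []
    for child in children:
        parts.extend(expand(child, choices))
    return parts

print(expand("data centre", {"cooling": "air"}))
```

The same walk, run top-down from the data center node, is what lets a low-level selection report its implications right up at the top level.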
So we're, again, moving from a bespoke design to a particular configuration of components that we can start to get into the manufacturing space with.
So the next big topic that I want to talk about is using that kind of structure. Once you have that structure and you understand the relationships between the different components, you can then start to automate the rules. So you know that if you choose one of these, you also need these things. For each one of these, you'll need these components that are attached to it. So having that structure means that we can start to then turn what's currently a manual process into a series of basic choices.
And as you make the choice, an algorithm behind the scenes is selecting all the parts that you need for your particular data center. So one thing we start to look at is those individual chips and start to say, well, how much freedom do you have to configure them? So for the data hall, do you get to stretch it? Do you get to elongate it? Make it wider? Do you get to change the floor-to-floor height? What are the different types of choices you want to do? Where do you want choice to allow you to meet particular site constraints? Where don't you want choice because you want something to be very, very locked down and repeatable and manufacturable?
So we'll have these sorts of conversations around, within each of the components, how much flexibility do you want? What sort of flexibility do you want? We also look at the relationship between chips. So as you pick-- if you make your data center bigger, does your air handling unit get bigger? Or do you buy more air handling units? These sorts of choices.
So we start to look at the way that the choice of one chip influences the other ones. Do you buy more of them? Do you buy bigger ones? Which ones are affected by a choice you make in your data center hall? So again, we document all of these things so that we can then turn this into the rule set that sits behind some of these configurators.
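A rule like "do you buy more air handling units, or bigger ones?" can be captured as a tiny function once documented. The unit capacity and the N+1 redundancy rule below are illustrative assumptions, not any client's actual sizing.

```python
import math

# Rule (assumed for illustration): air handling units come in one fixed
# product size, so as the data hall load grows we buy more units rather
# than bigger ones, and we always carry one standby unit (N+1).
AHU_CAPACITY_KW = 500   # cooling capacity per unit, invented figure
REDUNDANCY = 1          # N+1: one spare unit

def ahus_required(hall_load_kw: float) -> int:
    """How many AHUs a hall of the given cooling load needs."""
    return math.ceil(hall_load_kw / AHU_CAPACITY_KW) + REDUNDANCY

print(ahus_required(1800))  # 4 duty units + 1 standby
```

Encoding each inter-chip relationship this explicitly is what lets the configurator resize or re-count dependent chips automatically when the data hall chip changes.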
So typically, we will have a workflow like this. We will have a piece of analysis that looks at the site, understands the site and understands the site constraints. So where can you build? Where can't you build? Where are there existing wayleaves, utilities, cut and fill considerations?
So typically, one workflow is analyzing the site and understanding the relative ease of building on any particular part of it. We then typically configure using the chips that I've shown you, these basic building blocks, from rooms to departments to buildings. They literally look like this. As I said, they're geometrically very simple building blocks.
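A sketch of that per-cell site scoring, with invented thresholds: hard constraints (an existing utility wayleave, say, or a flood zone) zero a cell out entirely, while slope degrades the score because steeper ground means more cut and fill.

```python
# Score each grid cell of a site for relative ease of building.
# The 15-degree cutoff and the constraint set are illustrative assumptions.
def buildability(slope_deg: float, has_utility: bool, in_flood_zone: bool) -> float:
    if has_utility or in_flood_zone:
        return 0.0                          # hard constraint: cannot build here
    # gentler slopes mean less cut and fill, so they score higher
    return max(0.0, 1.0 - slope_deg / 15.0)

site = [
    # (slope in degrees, existing utility?, flood zone?)
    (2.0, False, False),
    (9.0, False, False),
    (1.0, True,  False),    # a wayleave runs through this cell
    (20.0, False, False),   # too steep to be worth terracing
]

print([round(buildability(*cell), 2) for cell in site])
```

The resulting score grid is what the downstream configurator consumes when deciding where on the site a given chip layout can land.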
So we'll use that simplicity, as I said, to generate lots of different potential options. When we have an option that we're pleased with that we think meets all the criteria, we then literally tell the algorithm to go and fetch all the LOD 400, the detailed model pieces that it needs to actually put the model together. So one thing that I think is interesting about our workflow, we try not to create the very heavy federated model until as late as possible.
So the typical output is a Revit model in the way that most people in the industry would recognize. We try not to generate the model until the last possible minute, because as soon as you have one, it becomes very heavy and very cumbersome. It's then subject to people making manual tweaks, and you start to lose some of the fidelity of the design automation. So we typically stay in the chip space, configuring, for as long as possible.
Only once we've got a configuration we're happy with do we then go and get the detailed model components and put them all together. The output is then your federated model, from which you generate your sheets, your bills of materials, your quantities, and all the normal downstream deliverables.
So we have a team now within the office that does a lot of this. In the middle is the configuration engine, and we typically have a generalized approach to rules capture. For each individual client, the rules and the content will be specific to them. So we'll capture their particular rules, their particular requirements, and develop their particular chips. They're all slightly different.
The configurator, the engine that drives it is fairly generalized. And we tend to get project input. So we'll take the site model, any geospatial data we've got, site constraints, run the algorithm, test some of these configurations. And what comes out is then the project data around the individual drawings, the sheets, the calculations, these sorts of things.
So we tend to have a general function in the middle. The data and the rules coming in are client-specific, and the inputs and outputs are project-specific. But it changes the team structure. Rather than a traditional design team as such, we'll have a design team capturing the client requirements, but typically across a range of projects.
So they'll be looking at the client business level rules at the bottom. We'll have the design automation team running that algorithmic workflow. And then the project team is simply plugging things into the configurator, taking the outputs out. So rather than a traditional design team, we have a slightly different group of specialisms and group of teams.
The next slide I'll show is a video of one of these configurators working. For obvious reasons, we're not allowed to show client-specific configurators. We've developed a number of these for different clients, and obviously their content and what they do are proprietary. So we've taken a generalized context, which, in this case, is the moon. Everything I'll show you is actual functionality we've developed, but applied to a false context. So it's a moon base being configured on the moon, but you'll understand the different steps that we go through.
So this plays. In the first part of the algorithm, you choose your site context. Typically, we've got LiDAR data, point cloud survey data. The algorithm is looking at every pixel, a 10-foot pixel, and it analyzes that particular pixel for topography and site constraints. You then pick your very high-level brief. So for a data center client, this is saying, I want x number of megawatts. It needs to be this type of cooling. I want it to be this type of data center, whether it's colo, cloud, or AI.
A genetic algorithm will then, behind the scenes, generate tens, potentially hundreds of thousands of completely viable solutions. It can only generate a solution that complies with all the rules. But then it gives each one of them a score. So along the top of this graph are the different constraints or considerations, so cost, schedule, timing, accountability, these sorts of things.
You can compare each of your schemes. Once you have one, you can then start to investigate it in more detail. So you can select a scheme that you like. It will generate the key metrics for you. You can delve into it. You can investigate the local conditions, the specific design aspects. So all of this is happening at the chip level. So you're able to do lots of investigation, lots of pretesting, lots of consideration of different potential options.
Once you have one that you like, then you tell it to go and generate the BIM model. So it will go into the library of all the Revit components, assemble those, and generate the whole LOD 400 BIM model. So we have clients now where we can take this workflow from blank site to this kind of quality of model in a couple of days. So a couple of hours to assess the site. You might spend a day doing the assessment. It might take another few hours to generate the model.
But this is not a slightly incrementally better version of a normal design process. This is a very different design process where it's generating hundreds of thousands of solutions. So rather than develop one or two concepts, you develop hundreds of thousands, and you down select by the ones that make most sense. And then you start to generate the information.
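In spirit, the workflow above is a genetic algorithm that only ever breeds rule-compliant candidates and scores them against weighted criteria before down-selecting. The candidate encoding, rules, cost figures, and single weighted objective below are invented for illustration; a real configurator scores many criteria over far richer candidates.

```python
import random

random.seed(7)

# A candidate = (number of data halls, number of floors). All rules and
# weights here are invented for the sketch.
def valid(halls, floors):
    return 1 <= halls <= 12 and 1 <= floors <= 4 and halls * floors <= 24

def score(halls, floors):
    capacity_mw = halls * floors * 10            # illustrative capacity
    cost_m = 40 * halls + 25 * halls * floors    # illustrative cost
    return capacity_mw - 0.1 * cost_m            # one weighted objective

def mutate(c):
    """Nudge a candidate; reject the child if it breaks a rule."""
    halls, floors = c
    child = (halls + random.choice([-1, 0, 1]),
             floors + random.choice([-1, 0, 1]))
    return child if valid(*child) else c

# Seed a population with rule-compliant candidates only
population = [(random.randint(1, 12), random.randint(1, 4)) for _ in range(30)]
population = [c for c in population if valid(*c)]

for _ in range(50):                # evolve: keep the fittest, mutate them
    population.sort(key=lambda c: score(*c), reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=lambda c: score(*c))
print(best, round(score(*best), 1))
```

The down-select in the talk is the same move as the final `max` here, except a human compares the top scorers against criteria the objective can't capture.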
The next part of this-- so if the design process has now been largely automated for the right sorts of clients, the next part is, how do we deliver these things, leveraging all the benefits of industrialized construction? So one of the ways we're looking at this group of chips is as manufacturable components. So we're typically thinking of them as kits of parts that can be manufactured off site, commissioned off site, transported to site, and typically assembled with a lot less labor hours.
So some of the big things the industry is facing at the moment, skills gap, aging demographic, lack of skilled trades, particularly electricians around the data center market. Cost of labor is increasing all the time. So this idea that you can not just displace it and do construction in a shed but actually manufacture some of these components and then assemble them on site is having a transformational impact on the speed and the cost with which we can deliver these things.
So we talk a lot to our clients about how they choose to procure them. I'm conscious that there's a lot of text on these slides; you can download a version of the slides from the AU website. But broadly, we talk about things that are off the shelf. These will be air handling units, generators, and things. A lot of our clients have found that if you just bought completely standard air handling units, you might be able to get the lead time down by quite a lot. But in the past, clients have had their own requirements, their own minor tweaks, and those have been increasing the lead time.
So increasingly, clients are designing towards standard products now. They're designing around completely standard transformers, inverters, air handling units, generators, these sorts of things, so off the shelf. Any innovation in that space kind of stays with the supply chain, but it's very low risk for the client. They're literally buying a standardized, commoditized product.
There are parts in the middle we would call developed by the market. We will develop a particular solution up to quite a high level of detail, and then we hand that to the supply chain and say, finish this off with your actual components, your actual valves and things. It's where we're designing particular solutions that would slot nicely into an existing or emerging design for data center clients, but they don't want to go the full way of actually developing their own system. We have had clients who have literally gone as far as saying, we will develop this as a product and deliver it ourselves.
So for certain clients, where an idea is so valuable, it becomes so part of their embedded IP in their data centers, they will actually develop something as a complete product, own it themselves. And that, as you can see from the little arrow, it's a high-risk thing to do, but all the benefit stays with the client. They get complete transparency. They own all the benefit of developing a piece of IP, potentially. So not all clients need to do the developing products. But thinking in these terms has been quite helpful for a number of our clients.
And this is something-- I can't show you a data center one. This is from a health care client. Again, conscious you probably can't read that, but you can download the slides. But this was something we did for a company called Circle Health a number of years ago. So in the yellow at the top, we developed a list of all of the components they would ever need to build all of their potential hospitals. So we said everything you would ever need to build, the complete LEGO kit, IKEA kit of parts is listed out.
In the red ones, we then said, which are ones where you should go to the market? So air handling units, lifts, and things. There were certain things that were so potentially valuable to them as a client, we said you should develop this as a product because not only could you use it on your own health care facilities, you might be able to sell it to other people. There might be a global market for some of these ideas. So you should actually turn some of these ideas into a productized, protected piece of IP. Other things, you go to the market.
Then the blue, we said, at what pace do you start to deliver them? You don't do everything on project one. But over a rollout, you might start to implement these things. So some things, you'll do on the first hospital. Some things you'll do on hospitals two to five. Some things you'll do hospitals 5 to 10. And so we went through this process with them, and we're doing it with a number of clients now, where we list out all of the things they might need in their kit of parts, how they might start to deploy them, how quickly they might start to adopt them.
And they're on this sort of roadmap now: moving from being a serial client doing things in a manual, bespoke way, to doing things with much more repeatable, standardized designs, to having things manufactured and delivered in an offsite fashion. People are on this journey now, and they're getting quicker and quicker at doing these things. And the benefits they're seeing are significant. As you can see from some of the earlier slides, in some instances it's not possible to build these things traditionally quickly enough. This industrialization path is the only viable means of getting things done.
And these are some of the examples of things we've done for people. These were cases where we developed the product up to a certain level and handed it out to the market. On the top left is a particular cooling approach that we developed for one of our clients. And you can see the numbers are quite interesting: a 1.5-meter reduction in floor height, because there was less distribution and ducting.
That's not a big concern in the US, where you tend to do single-story data centers. That's a big deal in Europe, where most data centers now are multistory because our sites are smaller, certainly city sites are very expensive. And so being able to get more floors of a data center within a planning volume, that's a massively powerful statistic.
And you can see some of the other figures here, 90% quicker installation, 38% cost reduction for an airport client. So the numbers are not 5%, 10% increases. These are sort of 30%, 40% increases in yield, cost reduction, reduced embodied energy, these sorts of things. So these are making a big, big difference. Certainly if you're a repeat serial client and you're doing lots of data centers that are hundreds of millions of dollars, these are quite big numbers.
This was that example. You can see on the left-hand side is the prototype that we developed for this particular client. On the right-hand side is the in-operation version of this. So it moved from being a thing that we developed and prototyped with a supplier to a thing that's now manufactured and installed fairly standardly across their program.
These are other examples where people have gone even further and actually developed their own product and delivered these things as their own products. And again, some of the numbers are very, very dramatic. We're not suggesting every client needs to go there, but some clients are starting to understand the benefit of being able to control their supply chain and own their design in this way.
And this is where I'll finish. This is the kind of accumulation. Again, I'm conscious that we're not allowed to show a huge amount of the actual insides of the data centers or real client examples, but this is one particular client. This has been the impact of all these things coming together. So they're seeing a 30% to 40% cost reduction in dollars per megawatt, which, again, if you're a serial client, is a massive number.
A 40% increase in IT yield-- that's only going to get more important. As we move to liquid cooling, and the intensity of data centers, particularly AI data centers, gets higher and higher, that kind of increase in IT yield is going to matter much, much more. And so these sorts of considerations are going to be really important.
Much, much smaller footprint. Again, at a time when things are getting more intense and data center numbers are getting higher, being able to build them on smaller, more viable sites makes a dramatic impact. Placing a data center in the right place, because you could fit it on a small available site, potentially makes it a fundamentally more connected piece of infrastructure, with much less embodied carbon through design optimization and through controlling the systems.
And the last thing I'll touch on is, obviously, sustainability is a big, big driver now. It has been in Europe for a decade, and I think it's increasingly important to US customers now, particularly as everyone starts to set corporate social goals and everyone's on this path to net zero, at a time when power needs are going up.
So we take a very holistic view. This is a snapshot of all the sorts of things you can look at. It's not just material selection. We look at driving waste off sites through prefabrication. We look at transportation and supply chain. We look at all of the energy systems, all of the whole-life systems. So I think sustainability has moved from being purely about carbon, whether embodied or operational. There's now a much, much broader remit. And I think people are embracing many more things and starting to understand there's a lot more they can do in the design of these things to achieve much, much more than simply getting a certain carbon emissions number down.
Most of our clients are now on this journey, where they're saying, let's not just retrospectively make some material choices. They're trying to do everything they can. So they're starting by asking, what are the passive things I can do right at the front end? How can I design the thing to use less material in the first place? Then they start to make those choices, then start to apply technology, and only at the back end do they start to offset.
So again, the idea is that sustainability isn't a retrofit; it's something that you embed right at the front end of the project, asking of every aspect of the design, at every stage, how can we make these things leaner, more efficient, more effective, more materially sensitive? And then start to apply these technologies.
So we've been doing this for a number of years now. I think our first data centers were back before the dotcom bubble, in 1999, 2000. We've continued to go around this figure-of-eight diagram. As the technology has changed, as the types of data center have changed, as the materials have changed, as the context around them has changed, we've never stopped.
So one thing we've said to our clients is, you never get to the end of this process. You never get to the complete, perfect data center and say, we're done. The explosion in AI recently has triggered a whole load of new requirements around liquid cooling that will change the design of the chips and everything around them. So we can never be complacent and sit and think our job is done.
What we do have is a way of continually evolving these things. So like automotive, we're now in a continual improvement cycle rather than a constant reinvention cycle. But yeah, hopefully you can see from this that there are some very, very interesting things happening in this space. I think we have some of the tools that we will need, if we can start to leverage these across the industry, if we can start to scale some of these approaches.
This, I think, is how we're going to tackle some of the things that, as I mentioned at the front end, are definitely coming in this sector and will have implications across a much, much broader range of sectors and building types.
So I'll pause there. Hopefully that was interesting. There's more material on the website. There will be the handout and things. And yeah, hopefully that was of interest. And thank you for your attention.