Video: AI for Grid Operations: Smarter Planning and Modeling with Microsoft & ThinkLabs | Duration: 2768s | Summary: AI for Grid Operations: Smarter Planning and Modeling with Microsoft & ThinkLabs | Chapters: AI for Utilities (3.84s), Unlocking Grid Capacity (126.795s), AI-Driven Infrastructure Investments (204.995s), AI in Energy Utilities (327.955s), AI in Utilities (477.2s), Grid Simulation Challenges (587.23s), AI-Driven Grid Planning (764.5s), AI in Energy Systems (1194.815s), Microsoft's AI Role (1450.765s), Enterprise-Grade Innovation Tools (1541s), AI Agents in Utilities (1663.16s), AI in Utilities (1891.47s), AI Adoption Challenges (2137.89s), Validating AI Simulations (2262.285s), Synthetic Inertia Research (2625.9749s), Closing Remarks (2689.815s)
Transcript for "AI for Grid Operations: Smarter Planning and Modeling with Microsoft & ThinkLabs": Thank you, everyone, for joining us for this really exciting webinar from Microsoft. Today, we're gonna talk a little bit about AI, but more specifically, how we're using AI to solve some of the power utility challenges today. To really have this conversation, we wanted to bring the best of the sector itself. And so here we have Georgia Power. We have Joshua with ThinkLabs. We also have EPRI, and I'm really excited to get this conversation going. Before diving into it, I just wanna set a little bit of context on what the power utility sector really means to Microsoft. And really, the thing I can say is that we are so interdependent on each other. The success of the power utility industry will indicate the success of Microsoft overall. And this really comes from two perspectives. The first perspective is electricity for AI. We require clean, available electricity for our existing infrastructure, as well as unlocking the capacity that we need for new data centers to really scale out our AI solutions today, which in turn will unlock that emerging digital and AI economy. Now the second perspective, and the one that we're gonna focus on in this conversation, is AI for electricity. Because of that importance, we are investing in purposefully focused AI solutions to build the grid faster and to respond to extreme weather conditions. And this includes our Gen AI for permitting application, showcased in the Lowell Truth journal, and our Aurora weather forecasting model, as spotlighted in the New York Times. These are AI solutions that will have transformative impacts on the grid, which is why we are partnering and innovating alongside all the brilliant utilities out there and on this call as well. And we're gonna scale these solutions at no licensing cost because of the importance of getting to a clean and reliable grid.
And so we're here to meaningfully partner with you to solve today's grid challenges, whichever of those you're willing to share with us. So in today's webinar, we're gonna touch upon some of the challenges we see today, which is really unlocking new capacity. This is the speed and the scale at which we can model and simulate conditions of the grid, including 8760 power flow. We're also gonna share some big ideas and visions around how AI can unlock the workforce of the future, because, really, to unlock this new capacity, we need, at scale, the best and the brightest and the most productive workers out there. So, with that said, I'm gonna start introducing our esteemed panelists. First up, we have Robin Lanier, the director of grid strategy and solutions at Georgia Power Company. Next up, we have Joshua Wong, CEO and founder of ThinkLabs AI. We have Cameron Riley, the senior team lead for applications at EPRI. And finally, we have Harry Lawton, director of Azure Cloud. And I am Pat Lowe, Americas power utilities leader here at Microsoft. And so we are excited to get this one going. Robin, the first question is gonna come to you. We had a chance to chat about the future of AI's potential and transformational impact. What's your vision of how Southern can leverage AI to support workforce development? Yeah. Thanks, Pat. So I'm gonna work under the assumption that most everybody here is familiar with Southern Company. So, jumping in: ultimately, we have a commitment to making investments to ensure our customers are provided with clean, safe, reliable, and affordable energy. And as you mentioned earlier, to keep up with the growing demand for energy, we're always working to deploy the best technologies and then ultimately strategically investing in our infrastructure to make it as reliable and dependable as possible. And a lot of that is leveraging AI for workforce development.
And I wanted to share just a couple of examples of ways that we're already advancing meaningful solutions, and then go into maybe some thought provocation that helps further the rest of this discussion. One of the categories is around data-driven decision making, ultimately to enhance analysis and improve workflows. So we use tools and resources like HData to help us with FERC data searches and insights. We actually stood up our own application called RAMP (Reliability and Analytics Metrics and Performance), a cloud-based technology really focused on finding insights into the root causes of reliability issues and helping us solve those in a more preventative way. We also stood up SPEAR (Storm Planning, ETR, and Reporting), which is really helping us predict potential impacts of severe weather events on the power grid. We also think about things like operational readiness, using tools and resources to help us identify and enhance worker safety opportunities, as well as training and operations using digital twins to give us insights on our infrastructure and really allow our workers to conduct virtual walk-throughs and simulations. But, you know, for us, energy utilities, like many other businesses that have been around for decades, have a huge and continuously growing opportunity in the AI space as the technology continues to develop and we start seeing additional and new use cases. I would be remiss if I didn't say that, especially because we're in the energy industry, we also have to make sure that we understand the risk associated with it and advance these things in a pragmatic way. But I see opportunities in workforce development such as personalized upskilling.
So AI can certainly tailor training to individual needs and help our employees quickly gain new and different skills that are most relevant to their roles and even career goals. It could be things like enhanced efficiency and productivity, helping to automate routine tasks or processes, as well as knowledge transfer and preservation. And this piece actually is what spawned some discussions between Joshua and me, because when I first started at Georgia Power, I started in distribution engineering. And I remember sitting with our distribution control center operators who had been working in their roles for decades. I remember one operator, Glenn, who at the time talked about the grid and could visualize it, and this was all, like I said, on the distribution side, really like the back of his hand; he knew the intricacies. And as he ultimately went into his next chapter of life and retirement, we wound up getting new people backfilling those roles who had a different level of understanding. And so we thought about ways we could potentially capture some of that knowledge and help people that might not have been operating that same grid for thirty-plus years to ultimately be able to operate it like they have done it for thirty years. And so this vision is around having a copilot for the grid to sit with you, not to ultimately make the decisions, because we train and expect our employees to make sure that it's operating in a safe and reliable way, but to help drive optimal outcomes, especially as the grid becomes more and more complex. And that to me is one key takeaway and one key opportunity that we are exploring with companies like ThinkLabs and working with Joshua on. And that's brilliant, Robin.
And I have to say, Southern Company, Georgia Power, your entire organization is doing such transformational, impressive work. I think just this morning I saw that RAMP and SPEAR, those technologies, also won an award as well. And so it's just brilliant to see what you folks are doing in that space, and kudos to everybody who worked on those solutions. The next question, I'm gonna tee up EPRI and Cameron. You know, on the Microsoft side, we're really excited about the partnership we now have with EPRI. And we're gonna be doing a lot more together. We're gonna be focusing on AI collaborations, but more specifically, we're gonna look at how we can get the best of both, leveraging the technologies and insights that EPRI has as well as Microsoft tech stacks, and how we can really collaborate together. So, Cameron, what are you seeing as key areas of focus, and what are some examples of how you've seen AI being leveraged in utilities today? Yeah. I would say that probably the number one area of focus right now is the agentic use of AI. So allowing AI to sort of help someone do something, in programming or running software, that they may not have known how to do, but they could speak to an AI in natural language and have the AI assist with that. I just see a lot of opportunities in the power sector in general, and I think we're really just scratching the surface as far as what it can do. And one of the good things about EPRI is we don't have any skin in the game, so to say, so we're really just interested in improving the reliability and resiliency of the grid in any way necessary. And I think that AI shows a lot of promise at that, and we're very excited to see where it can go. Thanks, Cameron. So right now, I do wanna shift focus to a high-value use case, one that, when I collaborate and work with utilities, I hear over and over again. And this is around grid simulation and planning.
More specifically, 8760 power flow studies. To unlock capacity and connect new load and resources, we just simply need to get faster at this. And this is a use case that we're proud to be working with ThinkLabs on. So, Joshua, maybe over to you. Tell us a little bit about ThinkLabs and what problems you are solving right now in the grid simulation space. Yeah. Thank you, Pat. Thank you, Microsoft, and great to be partnering with the fellow esteemed panelists. So, ThinkLabs really came about as Robin and I were brainstorming a while ago: how do we drive that grid of the future? And, yes, we can talk about renewables, DERs, data centers, sensors, etcetera. But really, at the core is how we can drive greater autonomy. The grid is getting so complex that traditional processes cannot keep up, but it's also so unpredictable, because it seems like a massive, nth-dimension Rubik's cube that's changing continuously. So we can no longer rely on fixed worst-case-scenario studies. And a lot of those studies today are actually not power flow driven, because power flow is really challenging to use, especially for distribution. Convergence, data readiness, and also configuring and running simulations typically take a very long time. We don't have the ADMS for planning yet, where it's continuously analyzing, assessing, and making suggestions about the grid. And as we look at probabilistic scenarios moving forward, with load growth, DER penetration, switching, planned work, adverse weather, etcetera, we do need to study and have this predictive, proactive view on how we prepare for anything and everything that could happen on the grid. So if we look at planning, I think this whole power-flow-driven simulation becomes center stage. We see planning started with a humble power flow, but it's also layering on many higher-order functions.
So we ask ourselves: to drive autonomy, one, we need AI, because if we look at any autonomous system for complex systems, it's not rules based; we need AI. But two, AI itself, the so-called off-the-shelf LLMs, cannot understand deeply enough such a complex system as the grid. We do call it the most complex machine for a reason. And so we set off to say, can we teach AI how to do power flow? Now, because of all the complex physics and formulas, we can't just rely on SCADA or smart meter information. We needed to teach AI from engineering. So our engineering degrees and education are not, sort of, replaced by AI, but used to teach AI how the world works, how physics works. So we set off on a course to do physics-informed AI: again, using engineering to teach AI how to do power flow. And we train really high-performance, what we call surrogate models of transmission and distribution power flow. So we see this in three levels. The first is really an AI digital twin: a series of deep machine learning models that is very highly fine-tuned to do power flow analysis per circuit, transmission and distribution. The second is AI-driven tasks. These tasks, as you mentioned, start with the 8760 power flow. Why 8760? I think that's becoming the norm expectation now, because everything's changing. It's a very dynamic grid. We can't set and forget once for the next twenty or forty years. Everything is time-series based, but that takes compute power, and hence we need AI. So: 8760 power flow, T&D co-simulation, identifying all the thermal and voltage constraints, dynamic ratings, model validation. Solution generation is a big one, especially if we look at knowledge retention and upskilling of the workforce, because currently, how we fix the grid is very much policy, rules, or experience driven. So these are the various AI tasks that can be trained.
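The "physics-informed AI" idea Joshua describes, teaching a model the power flow equations rather than relying on measurement data alone, can be sketched in miniature. The toy below is an illustrative sketch only, not ThinkLabs' method: every number is invented, the grid is a 3-bus DC power flow (far simpler than a real distribution feeder), and the "surrogate" is just a linear map trained with a loss that adds a power-balance residual to the usual supervised term.

```python
import numpy as np

# Toy 3-bus DC power flow: injections P relate to bus angles theta via
# P = B @ theta (slack bus fixed, so B is the 2x2 reduced susceptance matrix).
# The surrogate theta_hat = W @ P is trained with a physics-informed loss:
# a supervised term plus the power-balance residual ||B @ theta_hat - P||^2.
rng = np.random.default_rng(0)
B = np.array([[12.0, -5.0],
              [-5.0,  9.0]])                 # invented susceptances (p.u.)
B_inv = np.linalg.inv(B)                     # exact solution operator

P_train = rng.uniform(-1.0, 1.0, size=(200, 2))   # sampled injections
theta_train = P_train @ B_inv.T                    # exact DC angles

W = rng.normal(scale=0.1, size=(2, 2))       # linear surrogate weights
lr, lam = 0.05, 0.1                          # step size, physics weight

for _ in range(500):
    theta_hat = P_train @ W.T
    data_err = theta_hat - theta_train       # supervised mismatch
    phys_err = theta_hat @ B.T - P_train     # physics (power-balance) residual
    grad = (data_err + lam * (phys_err @ B)).T @ P_train / len(P_train)
    W -= lr * grad

# A well-trained surrogate recovers the solution operator B^{-1}.
print("max surrogate error:", np.abs(W - B_inv).max())
```

The physics term penalizes predictions that violate power balance even where no labeled data exists, which is the core of the physics-informed approach; real surrogates replace the linear map with deep networks and the DC approximation with full AC equations.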
And then we actually orchestrate, not just across the grid, but across these tasks to form AI agents. So Cameron mentioned agents. I think chatting is a great start, especially as a retrieval and reporting interface, because we have so much data. But the end-to-end agentic processes: can I tell AI to run any connection study within a minute? Can I do an ISP, integrated system plans, which is a very rapidly emerging field, or any integrated resource plan, with T&D co-simulation, in a couple of minutes? Can I refresh that on a daily or even hourly basis? Can I do flex planning? So these end-to-end workflows are some of the agents that we're looking at for planning. But because of the speed and the adaptability, we also see an opportunity to orchestrate across the traditional silos of the utility and the grid. So across planning and operations, because now the same power flow models can serve up and be fast enough to do both long-term and real-time work; it's dynamic 8760 planning anyway. The other is T&D co-simulation as well. So we see this becoming more of an enterprise service, an AI-driven analytics service that can serve a multitude of use cases. Yeah. And what's really fascinating too that you brought up is the speed. There are just a lot of interconnections happening these days, and it always comes back to this: we don't have the tools in place right now to do those rapid simulation studies. What I'm really interested in is, how has the development of the early proofs of concept gone, Joshua? Yeah. So the speed in particular, or just call it overall performance. We value performance on a number of variables, or vectors. One is accuracy. Is AI trustworthy? If AI hallucinates about power flow, then that will have huge implications around reliability, safety, even the ability to interconnect, and then potentially millions and millions in capital.
So AI must be able to be trusted to do engineering well. And on that front, we actually train it so well that AI is, worst case scenario, 99.7% accurate on any power flow state parameter across any node, any phase, all 8760 hours. So we have really, call it, mirrored engineering with AI very, very well. It's very highly fine-tuned. Then speed-wise, I think it shocks us all the time. In the early discussions, some of you may have seen it from DistribuTECH, etcetera, and our project with SCE as well, and we're doing similar work with Southern Company and EPRI, around 8760 power flow speeds. Initially we thought, hey, an 8760 for distribution with existing tools may take many hours; if we really optimize it on the cloud, maybe we can do one 8760 in two to four hours. But if we look at thousands of feeders, that's a lot of compute. So, fast forward to now: we have just run the 8760 power flow for three years across over 100 circuits within five minutes. That's every single phase, every single node, every single time step. So we're talking about sub-millisecond snapshot power flows, and it's also completely ready to do T&D co-simulation. Now we get asked, do we need this Tesla ludicrous mode, do we need that much speed? But what we found out, and this is the oxymoron with AI that we never really got to grasp, is that speed actually pays. We think, hey, we want speed, we want lots of GPUs, and we've gotta pay a lot to Microsoft. But we actually found that the faster we go, the cheaper it gets, because the GPU hours are now seconds. It's really paying off. So we're actually getting more and more efficient, both cost-wise and performance-wise. So that's why I think AI really disrupts the game.
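Joshua's "speed actually pays" point is, at bottom, billing arithmetic: cloud GPU time is billed by the hour, so cutting runtime cuts cost proportionally. A rough sketch with entirely hypothetical numbers (the rate and the baseline runtime are invented; only the "100 circuits in five minutes" figure comes from the talk):

```python
# Hypothetical cloud billing comparison: GPU time is billed per hour, so
# cutting runtime cuts cost proportionally. The $/hour rate and the
# baseline solver runtime below are invented for illustration only.
gpu_rate_per_hour = 3.0            # assumed $/GPU-hour
circuits = 100

baseline_hours_per_circuit = 2.0   # assumed classical 8760 solver runtime
surrogate_seconds_total = 5 * 60   # "100 circuits in five minutes"

baseline_cost = circuits * baseline_hours_per_circuit * gpu_rate_per_hour
surrogate_cost = (surrogate_seconds_total / 3600) * gpu_rate_per_hour

print(f"baseline: ${baseline_cost:,.2f}  surrogate: ${surrogate_cost:.2f}")
```

Under these assumptions the surrogate run costs a fraction of a dollar against hundreds for the classical sweep, which is why faster models end up cheaper even on premium GPUs.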
So not only are we doing, again, an 8760 across three years in less than five minutes for over 100 feeders, but we're also able to calculate millions of scenarios within ten minutes. In the past, and I started as a distribution engineer as well, similar to Robin, I only had time to do one scenario. Maybe if the boss pressured me, I could do three scenarios and not sleep overnight. But now we are running large-scale, even probabilistic, scenarios fairly autonomously. And so we are able to pick up blind spots that engineers never had time to see. For example, coincidence between various time-series profiles of loads and generation, or different switching combinations, because it's a very high-dimensional problem once you look at switching. So I think this scale, these higher-order functions on top of the fast power flow underneath, is really paying off. So, Robin and Cameron, you are also collaborating with ThinkLabs. What have you discovered so far in this collaboration in terms of AI and its role in grid planning? Yeah. So maybe I'll get started and let Cameron jump in. So I always love hearing Joshua talk about the potential for the future and the things that he's learning, and the things that he and his team are doing. At the same time, because we are dealing with the energy system, again, safety and reliability are absolutely forefront. I would describe our strategy, particularly around using this type of AI, and some of the more generative and copilot stuff in nature, this way: you've probably all heard the phrasing crawl, walk, run. Well, I would add one additional layer to that: it's look, then crawl, walk, run, and sometimes you have to look before you get to each step. And so we really wanted to understand this technology.
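The "blind spots" Joshua mentions a little earlier, coincidences between load and generation time series that a single worst-case study never examines, are exactly what a brute-force probabilistic sweep surfaces. A minimal sketch of that idea, with all profiles, limits, and capacities made up for illustration (no real feeder data):

```python
import numpy as np

# Monte Carlo scenario sweep over a year of hourly net load: sample many
# load/PV realizations and count scenarios with at least one overload hour.
# Every shape, limit, and capacity here is invented for illustration.
rng = np.random.default_rng(42)
hours, scenarios = 8760, 1000
limit = 5.0                                   # assumed thermal limit (MW)

t = np.arange(hours)
daily = np.sin(2 * np.pi * (t % 24) / 24 - np.pi / 2)   # peaks at noon
load_shape = 3.0 + 1.5 * daily                # base load profile (MW)
pv_shape = np.clip(daily, 0.0, None)          # PV output, daytime only

load = load_shape + rng.normal(0.0, 0.3, size=(scenarios, hours))
weather = rng.uniform(0.2, 1.0, size=(scenarios, 1))    # per-scenario PV yield
net = load - 2.0 * pv_shape * weather         # 2 MW assumed PV capacity

violation_rate = (net > limit).any(axis=1).mean()
print(f"scenarios with at least one overload hour: {violation_rate:.1%}")
```

A worst-case study would check one hand-picked hour; the sweep checks every hour of every sampled scenario, which is where the coincidence effects (low PV yield landing on a high-load day) show up.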
We really wanted to understand it in a place that was safe, so that we could really pressure test it and get comfortable with it. And so that's why we approached EPRI. EPRI, with their grid modeling and kind of virtual grid, enabled us to create really a sandbox environment, so that we could start learning and really answer the question: can we trust AI to run power flow models? That's the basis of the question. And so we're still in progress on all of that, but early results are good, and we're still reviewing them with our technical experts, and of course EPRI is still reviewing them as we go through this process. But I continue to be excited, I continue to be passionate about it, and like I said, things are going very well. And as AI models continue to learn, their ability to ultimately improve and then capture that potential, I think, is real. So, again, it is an iterative process. It is a crawl, walk, run, and in some cases it's look first. And in most cases it needs to be look first in the energy system. And so that's where we are today. And like I said, I think things are looking very promising. And I, for one, am certainly excited to take on some of these, in some cases, manual processes. We've done so much work across Southern Company to help prepare our region, not only to continue to invest in serving our existing customer base in better ways, but also for this rare opportunity around load growth and serving those new customers. But we are having to rethink the way that we're doing things. We're changing processes in a very pragmatic way, and we're doing things to gain more efficiencies.
And now we've gotten to the point where we can really start plugging in some additional technologies to help us grow in those efficiencies and ultimately better serve our existing customers, as well as this moment with new customers and new load growth. So I continue to be impressed and excited and passionate about this, and I want to continue our journey in that crawl, walk, run status. Yeah. And I'll just say, we've loved working with Southern Company for a long time, and we've been very excited to work with ThinkLabs as well. The entire landscape of AI is just really exciting, and there's so much potential. Like Robin said, what we really need to do is develop a sense of trust and understand: does it work? And "does it work" means something different on the grid than it might mean in other cases, because the stakes are extremely high. And so at EPRI, what we would do is really put some of these models through their paces. We would develop a wide variety of scenarios. We would look at how the integration of DER impacts things, how topology changes in the distribution system impact things, looking at very wide time horizons. Just really trying to dig into not just a typical power flow, but all the edge cases that utility engineers deal with every day, and developing trust around that, because it's gonna be critical. And that's brilliant. And I think you're absolutely right. AI is such a new technology, but in the power utility system, it is really about security, understanding the governance of data, and how you truly unlock the value while protecting the customers that you have today and serve. And so this kinda brings me over to the Microsoft side. So, Harry, I know you've been working with some of our most innovative and transformational customers. What role does Microsoft play in enabling scalable AI for grid planning, such as with ThinkLabs? Yeah.
It's a great question, Pat. I'm excited to be here with this team; lots of fantastic innovation happening. From the Microsoft side, fundamentally, outside of really making sure we're there to assist from a customer side and from a partner side, you know, Joshua and his team from ThinkLabs, I think about this in a few main areas. The first is scale. How do we give Joshua's team the infrastructure to be able to run these advanced planning tools and simulations? Working with Joshua and the team, how do we make sure we're using the right GPUs? How do we use them cost effectively? How do we make sure we're giving that compute, which, for our partners and for the utilities out there, is generally unattainable to own as assets, to run the level of compute that's needed not only to create the models but to continue to iterate as we move forward? So from a scale perspective, that's where I'm excited that we can really bring value. Secondly, and I think we've touched on it, Cameron mentioned it, Pat, you were just mentioning it, there is security and governance: making sure that when we're building these tools and assets and innovating with new technologies across our grid, we do it with enterprise-grade solutions. That we do think through, you know, how is this done responsibly? How do we start injecting security across every layer, whether it's our AI applications or our agentic flows, making sure that we've thought through that and giving that data back to the different teams that need it, to show that we truly are building, and looking to build, responsible solutions out in the market. And then lastly for me, it's really what tools are we offering to actually build this.
So whether it's our partners or our utility customers developing these solutions, how do we give tools that can actually build these model innovations? Whether it's building agents, as Joshua was talking about, how do we make sure that we can start orchestrating different agents, whether it's for interconnection studies or whatever it might be? How do you get those tools to take not only some of our more simple workflows, where we've been focusing on a lot of great use cases, whether that was originally some of the general rate case use cases, through to these more complex, multi-agent, fully orchestrated environments where we need well-thought-out, solid, robust tools? Obviously, this environment is continuing to grow. We're rapidly advancing what tools we can offer, and the industry is rapidly advancing the ideas and areas that we need to innovate on. And for me, for Microsoft, the excitement is how we continue to ensure that we keep ahead of that and build those tools for our developers, our partners, and our customers to actually deliver value across the grid. So these are the areas that I'm really excited about, Pat, and where we're helping. And I think that's really the opportunity for us to continue to drive. The other part, from a Microsoft standpoint, and it's not always thought about, is that when we can be in the room with our utilities, with the partners, and Microsoft, we can help drive value through that kind of technical advisory. But also, a lot of our employees, like Pat, for example, have been in the industry. So how do we integrate ourselves into the innovation to actually share feedback on what's working across the industry and what's not, so that we can speed up this innovation across our utility sector. I'm gonna take one, and, Harry, I loved your response there too.
Agents have been brought up a couple of times here as well. And so for those that are listening and want a little bit better of an overview of what an agent is, Harry, do you wanna do a deep dive? Because this is now the evolution of AI into agents and agentic workflows. Yeah. I think I'll start with the value of what we've built before. If we were building tools and technologies before agentic AI came into the limelight, that value doesn't go away. What we've already built, if it's delivering value today, it's still delivering value tomorrow. But where can we look at agents to potentially continue to build on what we've already started over the last year or two? So for me, when I think about agents, it's really just a way that we can give our agent a domain-specific task to automate and execute business tasks and processes. Fundamentally, how do we give an agent some form of input? Whether that is a system event, whether that is from a user, whether that's an agent talking to an agent. Then, fundamentally, it has a description of what task it needs to execute, and it's given the ability to go out and utilize the tools and the systems it needs to reason over that, to get us a response, so we can fundamentally get an output. And as we look through this, this will be seen in different ways. In some of those, especially in utilities, we're gonna be asking our agents and these flows to reason over things and give us the outputs, and then we will look to actually action that once we've validated it, until we can get to fully autonomous. But for me, it's really how we start thinking about those domain-specific agents. That could be a single agent for just one thing: this is our smart meter agent.
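Harry's description, an input, a task description, a set of tools, and an output, can be sketched as a minimal agent loop. Everything below is hypothetical: the "reasoner" is a trivial keyword match standing in for an LLM, and the smart meter tool (his own example of a single-purpose agent) returns a canned answer rather than querying any real metering system.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Agent:
    """A domain-specific agent: a task description plus callable tools."""
    task: str
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def run(self, user_input: str) -> str:
        # Stub "reasoning": pick the first tool whose name appears in the
        # input. A real agent would have an LLM choose tools and arguments.
        for name, tool in self.tools.items():
            if name in user_input.lower():
                return tool(user_input)
        return f"No tool matched. Task context: {self.task}"

def meter_lookup(query: str) -> str:
    # Hypothetical tool: would query a metering system; canned reply here.
    return "meter 42: 3.2 kWh over the last hour"

smart_meter_agent = Agent(
    task="Answer questions about smart meter readings",
    tools={"meter": meter_lookup},
)

print(smart_meter_agent.run("What did meter 42 read this hour?"))
```

The multi-agent workflows Harry goes on to describe are, in this framing, an orchestrator routing inputs between several such agents (emergency management, grid planning) instead of one keyword match.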
And then, ultimately, when we think about multi-agents, we could bring in more complex workflows. How do we think about emergency management? How do we think about grid planning? Where do those different pieces of the puzzle come in? So for me, it's really just building on what we've done in the past, but giving us more precision as we drive forward. And that's what I'm excited for: as we improve the models, as we improve our agentic workflows, when things happen, we continue to drive that precision to give the right data back to the business. No, that's brilliant. I love the insights that you give, Harry, especially the fact that, you know, while I have worked in the industry, you've been working so closely with utility partners that you've got a pretty good handle on what's going on in the sector as well. But with that, I wanna pivot a little bit more to AI in the power utility sector overall. Microsoft, we work with a lot of different industries. But what I thought was really interesting and unique: a study by HG Insights showed that, comparing all the various industries, the electricity and utility sector ranks sixteenth in terms of AI readiness and adoption. So what do we think are the barriers today? What's really holding us back from broader AI adoption? Robin, I'm gonna ask you first, because you are in the utility sector, and having your perspective and lens is always great. Feels like that's a little bit of a pick on us, but that's okay. But no, look: I think we are seeing productivity gains, and we absolutely are, especially at the individual level. So, thinking about specific use cases that we can show, getting people comfortable, and ultimately advancing the technology: that's what we're doing today. I gave a whole litany of examples earlier about things that we are doing today. We also do have to balance that.
We're not like many other industries, especially in terms of having that safe and reliable component of something that everybody expects in their life; maybe nobody remembers how potentially harmful it could be when energy isn't dealt with in the right way. Right? And so making sure that energy is managed in a very safe way is incredibly important. So it doesn't scare me that we're not number one, but it also reinforces, because we have been looking at individual use cases, a need to really start thinking about things at an enterprise level. And to me, that means, like a lot of our systems, and I'm sure across many other industries, there are a lot of individualized systems. There are ways for them to talk to one another, but in a lot of cases in very logical and linear ways. And so, as we're thinking about different data feeds that we have coming in, different data feeds that exist out on our grid and our infrastructure, and different decisions that we have to make, having that common repository and centralized place, so that we can manage, centralize, and ultimately utilize that data, is gonna be paramount. You know, we are a century-old-plus organization, and we do have legacy infrastructure. Right? There are not a lot of businesses, not a lot of entities that exist today that can say, hey, they're over a century old. Now, our systems are not that old, because we have updated and we continue to invest, but we still do have some legacy infrastructure, much of it not necessarily built with AI in mind.
So we are still working on improving and investing in those systems so we can build out those capabilities while we think through trust and governance, and work through adoption and testing. I think the potential is real. I think you will continue to see an uptick in utilities' utilization, especially as we continue to do things like we're doing with EPRI: finding safe environments where we can test out this new technology. We also have to address skill gaps, because we are asking our employees, and in some cases even our customers, to interact with tools and technologies in very different ways than they have in the past. And paramount above all else, because this is critical infrastructure, we cannot forget the cybersecurity components. We have to take the time to look at impacts to our system, and to review where we're opening up systems in some cases, so that we keep those vulnerabilities and potential challenges to our critical infrastructure completely at bay and can continue to serve our customers and communities. So I think we're poised to be in a great place and to continue to grow our capabilities and knowledge, and ultimately to grow customer value, because that's what we're in it for at the end of the day: finding ways to increase our customer value. Robin, that's such a brilliant answer, and a very pragmatic one, so it's great to hear. Once again, I always say Southern Company is setting the gold standard for how to think about and approach AI. At the end of the day, safety, reliability, and the customers, the ratepayers, are the most key and important. So I really appreciate your thought leadership on this question.
Cameron, I'll quickly throw this over to you as well, and I see there are also some questions in chat that we'll tackle. Cameron, you work with a lot of utilities. What do you see as some of the challenges for adoption, or specifically the opportunities for how we can start thinking about AI and better adoption of it? Yeah. As far as barriers to adoption, utilities can be very slow-moving organizations. It takes a while to develop trust in things, and when they develop trust, they need to really trust it 100%, because, like I said earlier, the stakes are so high. So doing evaluations of these technologies and developing that trust is going to be critical. As far as opportunities, I think there are some things only AI can do. As Josh mentioned, the power grid is an extremely complex system, and machine learning is the only technology I know of that can really handle very complex systems and find the optimums. So that's where I see a lot of opportunity: finding optimums. Whereas before you were just trying to find a good answer, maybe AI can help you find the best answer. And then I think blurring some of the lines between the silos in the utility industry is another place where I see a lot of opportunity: between operations and planning, transmission and distribution. A lot of times these are very siloed organizations, and I do think there's opportunity to help them integrate a little more. Thanks, Cameron. This is good insight. I think the key really is the opportunity, right? We're still seeing a lot of value propositions and value statements from AI starting to emerge as well. Joshua actually shared a great one here too. So I think, with the trust, and with testing things in a safer environment, it's going to go a long way in really advancing the industry in terms of AI. We've got a couple of questions.
I want to make sure we have time to answer some of them. So, Joshua, this one is over to you: during the 8760 evaluations, what contingencies were being evaluated, i.e., N-1, N-1-G? Yeah, thank you, Tyler, for that question, really appreciate it. So, because of the speed, the scale, but also the ability for AI models to actually switch as well, we are actually pretraining our models on even the topologies and contingencies. So you can run N-1, you can run N-1-1, or N-1-G type combinations. That really doubles down into the large scenario simulations we talked about. I think that is one of the key constraints, especially on the transmission system, since contingency planning takes so long. But this also goes into risk analysis. If we are taking planned outages for capital work or maintenance work, and we don't do all this contingency analysis prescriptively enough across the 8760, then we just get more and more conservative, to the point where we become a barrier not only to connections of loads and generators, but also to planned work to maintain, upgrade, expand, and reinforce the system. So the ability to do probabilistic scenarios, even running a Monte Carlo on contingency-type analysis, I think that's really a key opportunity for AI. With that, Pat, if you don't mind, I'm just going to continue with Mitch's question as well. So, yes, contingency is especially important for transmission. Distribution contingency is a lot easier; you don't have as much of it, but there are a lot more nodes. We are running transmission, distribution, and then T&D co-simulation, to Cameron's point. If you look at traditional engineering software, it is very challenging, interoperability-wise, to co-simulate between transmission and distribution planning.
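To make the N-1 idea above concrete, here is a minimal sketch of a contingency screen: take each branch out one at a time and check whether the remaining network stays connected. The bus and branch names are hypothetical, and a real study would re-solve the full power flow for every contingency rather than just checking connectivity.

```python
from collections import defaultdict

def n_minus_1_islanding(buses, branches):
    """For each single-branch outage (N-1), check whether the network
    stays connected. Returns the branches whose loss islands part of
    the system. Toy connectivity screen only; a real contingency study
    would re-solve power flow for each outage case."""
    def connected(active):
        # Build an adjacency list from the surviving branches.
        adj = defaultdict(list)
        for a, b in active:
            adj[a].append(b)
            adj[b].append(a)
        # Depth-first search from the first bus.
        seen, stack = {buses[0]}, [buses[0]]
        while stack:
            for nxt in adj[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return len(seen) == len(buses)

    critical = []
    for out in branches:
        remaining = [br for br in branches if br != out]
        if not connected(remaining):
            critical.append(out)
    return critical

# Hypothetical 4-bus example: bus D hangs off a single radial tie C-D,
# so only the loss of C-D islands the system.
buses = ["A", "B", "C", "D"]
branches = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
print(n_minus_1_islanding(buses, branches))  # [('C', 'D')]
```

Enumerating every branch like this is exactly what becomes expensive at 8760 hourly snapshots times N-1-1 or N-1-G combinations, which is why the panel frames fast AI surrogates and Monte Carlo sampling of contingencies as the opportunity.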
But with transmission AI digital twins and distribution AI digital twins, we can co-simulate those, again within seconds. Awesome. And, Josh, why don't I let you take the next question here too: exciting changes in using AI to run grid studies. Given the complexity, how are these simulations validated? Yeah, hey, Eric, good to hear from you. So, validation, that's actually Cameron's specialty, so I'll pass it over to Cameron afterwards. But internally, for any datasets used to train the AI models, this is very typical. This is not a ThinkLabs special; it's pretty much industry practice. We break up the datasets into roughly 60/20/20, or 70/15/15. So 70% of the dataset is used for training or pretraining the AI models, 15% for testing, and 15% for validation, including unseen data. That's how we validate within error metrics. But, Cameron, anything you want to add? Yeah. So I'd say we run traditional physics-based models and compare against those, and we make sure we do it in a way that we are testing on data the AI model hasn't seen in training. In addition to that, we try to go through many different dimensions of the grid: different load flow cases, different amounts of DER, different topology changes, and really dive into all the possible edge cases and permutations of those edge cases to understand how well it works. So, to second that: any time there are doubts, we can spot-check as well, pull a traditional power flow simulation with engineering software and pull an AI simulation. They should be very much like for like. So I think that's a great audit or spot check we can run at any time. Two minutes left. I think we can try one more question before the wrap-up. This is from Lester Loud: what network sizes have been tested? Is this AC or DC power flow?
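The validation recipe described above, a 70/15/15 train/test/validation split plus error metrics against a physics-based reference, can be sketched in a few lines. This is a generic illustration, not ThinkLabs' actual pipeline; the function names and the toy numbers are assumptions.

```python
import random

def split_dataset(samples, fractions=(0.70, 0.15, 0.15), seed=42):
    """Shuffle and split simulation snapshots into train / test /
    validation sets, roughly the 70/15/15 ratio mentioned above.
    A fixed seed keeps the split reproducible across runs."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(fractions[0] * n)
    n_test = int(fractions[1] * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_test],
            shuffled[n_train + n_test:])

def error_metrics(ai_pred, physics_ref):
    """Compare AI surrogate output to the physics-based reference:
    mean absolute error and worst-case absolute error."""
    errs = [abs(a - p) for a, p in zip(ai_pred, physics_ref)]
    return sum(errs) / len(errs), max(errs)

# 100 hypothetical hourly snapshots split 70/15/15.
train, test_set, val = split_dataset(list(range(100)))
print(len(train), len(test_set), len(val))  # 70 15 15

# Spot-check: AI voltages (p.u.) vs. a physics-based power flow run.
mae, worst = error_metrics([1.01, 0.99, 1.02], [1.00, 1.00, 1.00])
```

The key discipline Cameron describes is that the error metrics are only ever computed on the held-out test and validation snapshots, never on data the model saw during training.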
Have major topology and loading changes been calculated, discrete changes, tap changes, switched shunts, etcetera? Oh, I love the technical details. Yes, this is full three-phase AC as well as balanced power flow. Because we're data-driven, we haven't done any DC approximation or even network reduction. We have topologies covered, because the AI can switch as long as we have trained it on the various permutations, and we have looked at and utilized things such as graph neural networks. Loading changes, we have done zero to 10, 20, 30% year-over-year load growth. Even DER to complete net zero, with reverse power flow, we have done as well. The tap changes and switched shunts, that's an interesting, very recent discussion between us and EPRI. There are various ways to model them. Time-series-wise, if we treat them as discrete steps, we can absolutely model them today. But the time-series dependencies, the hysteresis between time steps, we haven't done that. Those are things we have been working on with EPRI. Awesome. One minute. I'm debating: should we go one more? Okay, Joshua, I'll pass it to you; can you talk about this one? Maybe we'll have to keep it quick. FDA Dursey asks: synthetic inertia in high-intermittency grids, any ways you're seeing AI support it? Yeah, that's a very interesting one. So we have seen AI research go into synthetic inertia, around dynamic and transient stability. We have not done that yet because we are focused much more commercially. Everything we do is around TRL 6 to 8; of course, we sometimes reach down to TRL 3 or 4 and bring it up to commercial. But inertia or transient stability analysis, I would say that's probably TRL 1 to 2. So we have dabbled in it; we are aware of it.
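The scenario dimensions listed above, load growth levels, DER penetration up to net zero, and discrete topology changes, multiply quickly. A minimal sketch of that cross-product, with hypothetical values chosen only to mirror the ranges mentioned in the answer:

```python
from itertools import product

# Hypothetical study sweep: cross load growth, DER penetration, and
# topology variants to enumerate the kinds of cases described above.
load_growth = [0.0, 0.10, 0.20, 0.30]   # year-over-year load growth
der_penetration = [0.0, 0.5, 1.0]       # 1.0 ~ net zero / reverse flow
topologies = ["base", "switched_tie"]   # discrete topology changes

scenarios = [
    {"growth": g, "der": d, "topology": t}
    for g, d, t in product(load_growth, der_penetration, topologies)
]
print(len(scenarios))  # 24
```

Even this toy grid yields 24 cases per snapshot; crossed with 8760 hours and contingency lists, the combinatorics are what motivate fast AI surrogates over case-by-case traditional simulation.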
We have been doing some lightweight research on it, but we do think it's about two to three years before commercialization. So, great for research projects, and we're working on making sure there's a pathway toward product. Awesome. So this is actually a wrap. I do see more questions coming in; what I encourage is to feel free to reach out. I think we're going to drop an email, and an email will go out tomorrow as well. For any and all questions, feel free to reach out, and we will make sure we get to them. With that said, I want to thank every one of our panelists here today. You were all brilliant, and you all showcased really why you're thought leaders here. And more to come on unlocking the grid of the future, folks. Yeah, and hey, Pat, just really quick: I loved hearing the Q&A today, and I think this is a tremendous platform and example of how, when we bring problems and opportunities to the market, we can work in partnership to help solve these things. We know the grid and the whole energy ecosystem are becoming more complex, so let's all create these opportunities so we can collaborate more frequently. So thank you to Pat and the Microsoft team for this opportunity. Absolutely, Robin, I love how you ended on that note. With that said, let's keep innovating together. And, Robin, once again, I loved that insight there too. Have a great afternoon, everyone. Cheers. Thanks a lot. Thank you so much. Cheers. Bye.