Managing and predicting wildfires with machine learning

CTO Guy Bayes shares his experience developing AI software to predict and manage wildfires at startup Vibrant Planet and offers tips for getting started in climate tech.

Guy Bayes has had a successful career in tech, leading data engineering and analytics teams at organizations like Lyft and Facebook. But after a harrowing experience in Oregon's Cascade Mountains, where a wind-driven fire narrowly missed him but devastated neighboring towns, Bayes began seeking a way to apply his technical skills to climate problems.

The climate tech startup Vibrant Planet, where Bayes currently serves as CTO, emerged out of this experience and Bayes' subsequent collaborations with tech and forestry experts. Vibrant Planet's software models forests in western North America and aims to help users manage wildfires, suggesting treatment plans based on their needs and resources.

In this interview with TechTarget Editorial, Bayes discusses his team's work to reduce the impact of wildfires on communities and ecosystems alike, the importance of returning forests to their condition before settlement disrupted Indigenous controlled burning practices, and how tech professionals can contribute to climate initiatives.

Can you tell me about your background and how it led you to Vibrant Planet?

Guy Bayes: This story starts about three and a half years ago, when I semi-retired after the Lyft IPO. I was in a cabin up in the Cascades in southern Oregon trying to regenerate, because it'd been a long slog. A fire ripped through there -- a really fast, wind-driven fire, similar to the one that hit Lahaina [in Hawaii]. That missed me by a couple of miles, and it burned down a couple of towns next to me. It went from somebody's cigarette ash to towns on fire in about three hours.

It was scary. I'm up there watching all this happen, and I've got friends in these towns. We're all trying to figure out what's going on, and everything is collapsing. First responders are so busy getting in front of it and evacuating people that there's no communication. When it was all over, I got missed, and I had climate change refugees living in my house and on my property for a few days. Even once the fire is gone, even if your house doesn't get burned down, the electrical and water grids are usually trashed for quite a while, so the towns aren't livable.

When it was all done, I was like, 'I know what my next project's going to be.' It was obvious there was a lot of room for technology and data to help. I was thinking, maybe I'll do some kind of early warning thing. But I talked to my friend Maria Tran, a product manager, and she was like, 'No, you don't want to do any of that. The real game here is actually in prevention.' There's that old saying -- an ounce of prevention is worth a pound of cure. There's not very many people actually working on stopping the fires from happening; there's a lot of people working on trying to put them out. That's basically where Vibrant Planet came from.

What is the key problem you're trying to solve?

Bayes: The goal is to mitigate fires in the Western U.S. It's not to stop them. One reason we have so many [wildfires] is we've been stopping all the fires, and fire needs to occur naturally in the western United States to keep the forests healthy. To understand how we got into this mess, you have to take it back to the days when there were no Europeans and Indigenous peoples did a lot of controlled burning. Part of the cycle was to light the whole place up every autumn.

Those controlled burns were very different from the fires we experience now. They were very low intensity, about ankle high, and they would just burn along the ground. They would clean up underbrush and keep the trees from becoming too close together and too dense. They basically kept the forest healthy. Forests in those days looked a lot different than they do now, with many more big trees and a lot more space between them.

When Europeans rolled in, we started managing forests the way we were used to managing them in Europe: suppress all the fire and manage the forest like it's a plantation for lumber production. We introduced a bunch of new species, started going for younger, denser configurations of trees, and actively suppressed fire for 80 or 90 years.

As a result, we've kind of built up a tinderbox. You throw a spark into this thing now and it blows up. When you get these high-intensity fires, they aren't ankle [height] -- they're building height. They kill even the mature trees. You end up with a wasteland: All the trees dead, the soil scorched, sometimes burnt so bad you can't even grow stuff anymore. So, we have to get the forest back to a more natural state. We have a suite of things we do called forest treatments: controlled burns, establishing firebreaks, various kinds of selective thinning, pile burning. There's all sorts of stuff you can do to thin the forest out and get it back to a more natural place.

What are the obstacles to doing that?

Bayes: First, state, federal and local governments have been funding this; there's a fair amount of money flowing in from Biden's infrastructure bill. But there's still not enough money to do everything. The forests are pretty big. Maybe somebody gives you $50 million to go fix it, but it would cost you a billion dollars to treat the whole thing. You've got to pick your battles; you've got to optimize.

Second, it has to happen really fast. These projects have happened before, but in the past there was enough time that you could take 10 years on one. Now, we need to move quickly because everything's burning down, and if we wait 10 years, it's not going to be there anymore. The scale of it is also kind of new. There are a lot more treatments that need to happen, and it's straining that whole ecosystem.

How are you addressing those problems?

Bayes: We use machine learning and data and other technologies to plan the optimal treatment across an arbitrary area of forest, and to monitor and manage the success of that treatment. You can sit down in front of our software, put in your area and describe how much money you have and your priorities. And a pretty detailed plan pops out of what you need to do -- down to the individual grove of trees -- to maximize the impact of the money you spend. We also let the customer set different priorities, because what you want to do depends on what you want to accomplish, and people don't always agree on that. One of the biggest things that sinks these projects is disagreement on priorities.

Can you say more about those tradeoffs?

Bayes: If you were to do a project entirely around protecting towns, you're likely to step on, say, an endangered species' habitat. [Vibrant Planet's software] can do multiple scenarios, one for each stakeholder group: This is my scenario if I want to optimize for town protection [versus] endangered species and critical wildlife. We've found a lot of those scenarios have more in common than they have different.

You can look at all the different outputs, and you can find the commonalities. You can say, 'Okay, everybody, we all agree that we should go do projects A, B and C. Now, projects D and E are a little more contentious, because while they help some people, they don't help others. But we can put them off to the side and have a year's worth of conversations around that, because as long as we do A and B and C, we're making a lot of progress.' That aligns the community around common goals that they can all agree are useful, and it can avoid some of the minefields that keep things from getting done.
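To make that concrete, here is a toy illustration of the "find the commonalities" step, assuming each stakeholder scenario has already been boiled down to a set of recommended project IDs. The scenario names and project letters are invented for the example, not Vibrant Planet's actual data model.

```python
# Hypothetical per-stakeholder scenario outputs: sets of recommended project IDs.
scenarios = {
    "town_protection":    {"A", "B", "C", "D"},
    "endangered_species": {"A", "B", "C", "E"},
    "water_quality":      {"A", "B", "C"},
}

# Projects every stakeholder scenario agrees on -- the "do these now" list.
consensus = set.intersection(*scenarios.values())

# Projects recommended by at least one scenario but not all -- the ones
# that need a longer conversation.
contested = set.union(*scenarios.values()) - consensus

print(sorted(consensus))  # ['A', 'B', 'C']
print(sorted(contested))  # ['D', 'E']
```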

It's a complicated problem. It's a complicated optimization engine, but fortunately, Silicon Valley has gotten really good at doing complicated optimizations. A lot of the technology developed to serve ads and do things like ChatGPT is pretty directly applicable to these optimizations and tradeoff calculations.

Can you walk through the process of training models for this kind of problem?

Bayes: First, we needed a nice, map-driven UI we could show to a user, where they could be very flexible about defining their projects. We settled on Elixir, this up-and-coming [language and framework ecosystem] we really like. And then we had to basically nail four different kinds of data.

The first kind of data, which is the foundation for a lot of it, is forest structure. We wanted to build a three-dimensional model of every tree in the Western U.S. -- how tall it is, how much wood is in it, whether it's alive or dead, what kind of species it is. That's a hard problem. Nothing like that existed, I think, until we built it.

We used a lot of the transformers developed to do things like DALL-E and ChatGPT, which turned out to work pretty well for [our use case]. We started off with Split-U-Net, a U-Net-style neural net architecture, but then we quickly moved on to something called Monodepth. Then we ended up adding MiDaS [depth estimation models], Swin [computer vision transformers], BEiT [pretrained vision transformers], all this stuff.

We were able to get some three-dimensional imagery from commercial devices that was originally collected for commercial purposes. People trained depth models against it -- what they call monodepth [monocular depth] estimation, which means you're trying to estimate depth without [stereo imagery]. You've just got a single image, so to speak. And that stuff, while not directly applicable, can provide input into the tree estimation problem. At the end of the day, we're looking down at a tree and trying to figure out how tall it is.

Fixed-wing Lidar is a huge deal. That data set comes from Lidar similar to what you see on self-driving cars. They strap it to the bottom of planes and fly over swaths of forest, which gives us pretty high-fidelity, three-dimensional data about that area. If we had fixed-wing Lidar data everywhere, it'd be an easy problem -- we'd just be done. But the fixed-wing Lidar flights are expensive and patchy, so they don't exist everywhere and are usually not up to date. But they make really good training data.

We train against satellite imagery from multiple space agencies, like Sentinel-2 and some Landsat, and also photogrammetry from photo flights the Department of Agriculture flies every two years. We can use those sophisticated transformers to train a model on the Lidar's three-dimensional data and then infer three-dimensionality on top of these two-dimensional images. And it actually works pretty well.
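In broad strokes, that setup amounts to supervised regression: two-dimensional imagery in, Lidar-derived canopy height out as the training label. Below is a minimal PyTorch sketch of the idea; the tiny convolutional model, band count and random tensors are illustrative stand-ins, not the transformer architectures Bayes describes.

```python
import torch
import torch.nn as nn

class CanopyHeightNet(nn.Module):
    """Toy fully convolutional model: multispectral image in, per-pixel height out."""
    def __init__(self, in_bands: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # one channel: predicted canopy height (meters)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = CanopyHeightNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # robust to outliers in Lidar-derived labels

# Fake batch standing in for co-registered data: Sentinel-2-like tiles and
# a Lidar-derived canopy height model (CHM) over the same pixels.
imagery = torch.rand(4, 10, 64, 64)          # batch x bands x height x width
lidar_chm = torch.rand(4, 1, 64, 64) * 60.0  # canopy heights in meters

optimizer.zero_grad()
pred = model(imagery)
loss = loss_fn(pred, lidar_chm)
loss.backward()
optimizer.step()
```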

The last thing we throw into the mix is this project called GEDI [NASA's Global Ecosystem Dynamics Investigation], which hung a Lidar instrument off the bottom of the International Space Station. As the space station crosses over an area, you get a strip of Lidar, and then the next time it comes by, you get another one. We can take all those cross-hatching strips and use them as another calibration and training data set.

None of these things are good enough to rely on by themselves, but when you throw them all together into a huge model and press go, you end up with a pretty good three-dimensional image of a tree. We have billions and billions of trees [modeled] now that are all pretty accurate.

There are a lot of moving parts there, obviously, and this is a really high-stakes task. What are some of the technical challenges you encountered when putting this together?

Bayes: It's compute intensive, and one of the biggest things we struggle with is just availability of GPUs. We're in the same boat as everybody else -- you just can't provision them. Fortunately, we got into an accelerator program inside Amazon aimed at climate tech startups, and they managed to find us some. That helped a lot.

The second is [establishing] ground truth. Even though we're training against Lidar and we kind of treated it as ground truth, how do you know the Lidar's right? We ended up having to partner with a professor friend to fly a whole bunch of drone flights, and also do physical measurements of different plots of land across the western U.S.

Getting the models to work was hard. There was a lot of work just getting them to converge and figuring out which of these prepackaged transformers did a good job. And all of this is just to produce the first of four sets of data.

After modeling the trees, what are the next steps?

Bayes: The second piece is, if you want to understand risk, you want to understand what's at risk -- what's out there in the forest that we don't want to burn up. Some of that is human, like roads, towns or reservoirs. Some is ecological, like endangered species.

Then you have to assign some kind of common currency to those things so you can trade them off against each other. We have an econometric model that assigns some phony money to each of those things so we can make tradeoffs about, say, how much a swimming pool is worth as opposed to two miles of road.
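As a toy illustration of that common-currency idea, the snippet below assigns entirely made-up dollar values to dissimilar assets so they can be traded off against each other; the asset types and figures are invented for the example.

```python
# Invented valuations assigning a common currency to very different assets.
asset_values_usd = {
    "home": 400_000,
    "mile_of_road": 1_500_000,
    "mile_of_powerline": 800_000,
    "acre_of_critical_habitat": 50_000,
}

def exposure_value(assets_in_pixel: dict[str, float]) -> float:
    """Sum the at-risk value of the assets present in one landscape pixel."""
    return sum(asset_values_usd[name] * count for name, count in assets_in_pixel.items())

# e.g. a pixel containing two homes and a quarter mile of road
print(exposure_value({"home": 2, "mile_of_road": 0.25}))  # 1175000.0
```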

Third, you want to understand the risk profile -- mostly fire risk, but also other risks that happen with fires, like flooding and detritus in reservoirs. So, understanding the risks, measuring each pixel of the landscape and assigning a risk number. One way we do that is big Monte Carlo simulations, where we just drop a bunch of [simulated] fires and let them spread and see what gets burned up.

Once you have those three -- the risk, the trees and the assets -- you need to understand what effect your treatments will have, which we call response functions. Say I have this plot of land. I know it's highly at risk from fire. I can do treatment A against it. It'll cost me X dollars. How much will that reduce the risk? Oftentimes, it doesn't reduce it immediately but over a time horizon -- three, five, 10 years.
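A response function can be thought of as mapping a treatment to a risk trajectory over a time horizon. The sketch below uses a purely illustrative linear ramp-in, where the treatment's effect phases in over several years; the functional form and numbers are assumptions for the example, not published science.

```python
def treated_risk(base_risk: float, max_reduction: float,
                 ramp_years: int, horizon_years: int) -> list[float]:
    """Illustrative response function: the treatment's effect on fire risk
    phases in over a ramp period, then holds for the rest of the horizon."""
    risks = []
    for t in range(horizon_years + 1):
        effect = max_reduction * min(1.0, t / ramp_years)
        risks.append(base_risk * (1 - effect))
    return risks

# A treatment that eventually removes 60% of modeled risk, phasing in over
# 5 years, evaluated across a 10-year horizon.
print(treated_risk(base_risk=0.30, max_reduction=0.6, ramp_years=5, horizon_years=10))
```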

We have response functions that need to be associated with everything we care about and bumped up against the bad stuff that can happen to it. Those come out of academia. Scientists have done a lot of work on response functions; sometimes we can go get one, but sometimes we have to develop them ourselves.

Once we've got all four of those, then we can mash it all together. When the user's looking at the [software] interface, they can do an optimization, and they don't even realize all the stuff that's happening. All they know is they get back a plan. Even that optimization is pretty hard computationally because it has to happen in real time against all these different variables.
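One simple way to frame that final step is as a budget-constrained selection problem that ties the earlier pieces together: each candidate treatment has a cost, a modeled risk, an exposed asset value in the common currency and a response-function risk reduction, and the planner picks the set that buys the most expected protection per dollar. The greedy knapsack-style sketch below, with invented numbers, is a rough approximation of that framing, not Vibrant Planet's actual optimization engine.

```python
from dataclasses import dataclass

@dataclass
class CandidateTreatment:
    stand_id: str
    cost_usd: float
    risk: float               # burn probability for this stand (Monte Carlo step)
    exposed_value_usd: float  # assets at risk, in the common currency
    risk_reduction: float     # fraction of risk the treatment removes (response function)

    @property
    def expected_benefit_usd(self) -> float:
        return self.risk * self.risk_reduction * self.exposed_value_usd

def greedy_plan(candidates: list[CandidateTreatment], budget_usd: float) -> list[CandidateTreatment]:
    """Pick treatments by expected benefit per dollar until the budget runs out."""
    chosen, spent = [], 0.0
    ranked = sorted(candidates, key=lambda c: c.expected_benefit_usd / c.cost_usd, reverse=True)
    for c in ranked:
        if spent + c.cost_usd <= budget_usd:
            chosen.append(c)
            spent += c.cost_usd
    return chosen

candidates = [
    CandidateTreatment("stand-1", 200_000, 0.30, 5_000_000, 0.6),
    CandidateTreatment("stand-2", 150_000, 0.10, 8_000_000, 0.5),
    CandidateTreatment("stand-3", 400_000, 0.25, 2_000_000, 0.4),
]
plan = greedy_plan(candidates, budget_usd=500_000)
print([c.stand_id for c in plan])  # ['stand-1', 'stand-2']
```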

How do you collaborate with the people who are actually implementing these plans?

Bayes: It's super important to collaborate directly with the people on the ground who are going to be doing the work. We have a set of alpha and beta customers who have signed up to be our guinea pigs and help us figure out [whether] what we're proposing makes sense or not.

A lot of times, these [projects] go wrong when they don't pay attention to the people on the ground and instead build products to satisfy senior leadership. They end up with a bunch of recommendations that don't make any sense when you try to actually go implement them. We've taken this ground-up approach since the very beginning. Our users will tell us when we produce something that's garbage, that's for sure.

There's also a lot of tie-in with academia and various government research labs. A bunch of our algorithms, approaches and sometimes even software were originally developed in academia or a research lab. Some things we have to build ourselves, but we almost always look to make sure there's not something there waiting for us already. There's a bunch of really good science out there that just needs some engineering to scale it up and get it out to people.

You're currently focused on western North America. Is that something you'd like to expand in the future?

Bayes: Yeah, that's only phase one. Phase two would be taking this international, to other fire-prone forests. Our stuff works better with forests that need to experience natural fire. Places like Canada, Australia, southern Europe, Turkey are natural next steps for us. Obviously, there's some complexity, and every time you go and do a different bioregion, you've got to tweak it. There's work there, but we can do it, we're pretty sure.

Then, if you think about this more abstractly -- there's a bad thing happening to a large landscape and you have interventions, but you want to optimize those interventions -- we think our platform can [address] that kind of problem in general. It doesn't necessarily have to be fires, right? It might be agricultural degradation or flooding.

On more of a personal note, you have a technical background but haven't always focused on climate tech. What advice would you give to folks looking to pivot to climate tech?

Bayes: Firstly, I'm very supportive. It's a world where an individual engineer can have a big impact. There's so much good science and so much good research, but a lot of times, they're starved for engineering partners to help them implement or scale the things they're building. You can make a big difference, I think, so do it.

The other thing I tell them is to be pretty picky about what you work on. There is still a lot of FUD [fear, uncertainty and doubt] out there. There's a lot of not-real companies and people that don't really have a plan. Exercise good judgment on whether the thing is actually going to work and isn't just smoke and mirrors.

Also, try to find something that you think your engineering skills will be very important to implementing. Like, I really firmly believe in solar power. I think it's one of the most promising things happening in the world today as far as climate goes. But I just don't know how to apply my skills to that problem in a really meaningful way. Choose [problems to solve] based not just on how important the thing is, but on how much you can actually improve the probability of it happening.

Last, think hard about whether you want to do moonshots or near-term stuff. They're both valuable, but moonshots can be frustrating when you figure out that you may not live to see the thing that you're working on actually come about. A lot of them have high failure rates, and they know it. As a species, it makes sense to invest in a hundred things, any one of which might save your bacon. As long as there's a 1% probability, yeah, you should do that. But as an engineer, you're going into something with a 99% failure rate. Are you emotionally able to handle it when it fails and go on to the next one?

Lev Craig covers AI and machine learning as the site editor for TechTarget Enterprise AI. Craig graduated from Harvard University and has previously written about enterprise IT, software development and cybersecurity.
