# Evaluating rockfall frequency from natural slopes (multiple methods)

## Detailed Description

Understanding of rockfall frequency-magnitude relationships is important for managing rockfall hazards, but characterizing these relationships is a challenging problem due to limited data, limited access, and the difficulty of accurately dating historic rockfalls. Developing frequency-magnitude relationships can be particularly difficult for natural rock slopes, which can still present significant hazards, but where data is often sparser due to greater distance from roads and buildings. This talk will provide an overview of the topic of rockfall frequency measurement by briefly discussing the relevant literature and presenting examples from the application of two different methods to the same study sites. The literature review will present a summary of methods previously applied to measuring rockfall frequency, including advantages and disadvantages of various approaches. The two case studies will come from original research in Glenwood Canyon, CO, an area where natural slopes present significant rockfall hazard to Interstate 70, but where previous knowledge of rockfall behavior consists mainly of the anecdotal insights gained by highway management personnel, without the benefit of systematic study. The first case study was conducted using lichenometry, and the second is based on an ongoing drone-based monitoring campaign. Comparisons between the results for the two methods will be made, and implications of these results for rockfall behavior in Glenwood Canyon will be discussed.

Graber A (2021). Evaluating rockfall frequency from natural slopes at multiple timescales using multiple methods; Examples from Glenwood Canyon, CO. USGS Landslide Hazards Program Seminar Series, 20 October 2021.

## Details

Date Taken: October 20, 2021

Length: 00:52:39

Location Taken: Glenwood Canyon, CO, US

## Video Credits

Video thumbnail: Aerial view of Glenwood Canyon overlooking the Hanging Lake exit and the west portal of the Hanging Lake Tunnel. Photo taken July 14, 2021, Andrew Graber, Colorado School of Mines.

## Transcript

[silence]

My name is Stephen Slaughter,

and I’m today’s temporary host

for the USGS Landslide Hazards

Program seminar series.

Your usual host, Matt Thomas, is in

the field and will return next week.

During today’s presentation,

please remember to keep your

microphones muted and video off.

We ask the speakers to reserve

10 minutes at the top of

the hour for questions.

So, following the presentation,

you can submit questions

via the chat function or, preferably,

use the raise-the-hand feature

to ask questions using your

microphone and camera.

Today’s speaker is Andrew Graber.

Andrew completed his B.S. in geology

at Wheaton College in Illinois and is

currently a geological engineering

Ph.D. candidate at the Colorado School

of Mines working with Paul Santi.

His research thesis,

which he’ll be discussing today,

focuses on understanding rockfall

frequency and depositional behavior

from natural rock slopes.

In addition to his thesis work,

Andrew has published papers on

design methods for post-wildfire

erosion mitigation and on modeling-

based investigations on critical

groundwater conditions for

large landslides in southern Peru.

Andrew received the 2019 AEG

Marliave scholarship and has received

research and field work grants

from AEG, GSA, the Wyoming

Geological Association, and the

Colorado Scientific Society.

So welcome, Andrew,

and the virtual stage is yours.

And I turned on

the transcript.

Let me turn the transcript off here.

There we go.

Okay, Andrew.

Sorry, go ahead.

- Sounds good. No worries.

Thanks, Stephen.

Just to check as I get started, are you

seeing my mouse pointer on the slides?

- I am not seeing it.

- Laser pointer should work then.

- You showing it?

- Yes.

- I don’t see it, unfortunately.

- Cursor.

Well, we can probably get away

without that. Thanks.

- Okay.

- Well, thanks, Stephen.

And thanks to the USGS Landslides

group for inviting me to talk here today.

So what I’m hoping to accomplish

in this talk is to talk a little bit about

my thesis research. But I also want to

talk about this subject of rockfall

frequency because it’s kind of

a niche topic, but it’s an interesting one

from a geomorphic perspective,

and it’s an important one

from a hazards perspective.

And so I hope to be able to give

an overview of a little bit of the theory,

some applications that other people

have done to try to assess

rockfall frequency in various ways,

and then to talk about the

particular case studies

in Glenwood Canyon

that I’ve been a part of.

So, to start off with some

acknowledgements, I wanted to

thank my adviser, Paul Santi,

whom I’ve been working with

for several years now,

and his help has been invaluable.

And then some folks at CDOT –

Matt Tello, Ty Ortiz, and Bob Group –

who have been helpful organizing

funding for me, but also making sure

I have access to facilities in the canyon.

Folks who have helped me in the field –

Claire Graber, my wife; Julia Payne,

Josh Clyne, and Kyle Radach,

who are all School of Mines alums

who helped me with field work.

And then, for funding, School

of Mines, Colorado Scientific –

or, sorry – Colorado Department of

Transportation, Colorado Scientific

Society, AEG Foundation, Wyoming

Geological Association, and GSA.

So we’ll talk about

rockfall frequency as a topic.

I’m going to talk about some established

methods that people have already

published to evaluate rockfall

frequency – to quantify it.

And then I’ll give some background

on my study area, and I’ll talk about

these two case studies from my thesis

research, both in Glenwood Canyon,

looking at rockfall frequency

at the same sites but using

two different methods.

So rockfall is this erosional geomorphic

process, and it’s also a hazard.

And so, when we talk about rockfall,

there are a couple of ways

to think about characterizing it.

So one is this parameter of frequency.

And, as you’d expect, that refers to

the frequency of how many rockfalls

are happening per year.

But, like with other geologic events,

like earthquakes and floods,

rockfall – and landslides – rockfall

frequency is tied to rockfall volume.

So the way that we quantify it –

the best way to quantify it is to

quantify it in that context in a way

that includes volume.

So the way that people have usually –

or, have often done this –

not every study, but a common way

to do this is to use

a frequency-magnitude curve

like you might see for

floods or earthquakes.

So what that plots is magnitude –

in this case, volume – on the X axis

versus frequency on the Y axis.

And you can see a bunch of different

kinds of parameters on the Y axis

here – cumulative frequency,

frequency density – which is just

frequency normalized by the

bin width of your histogram,

probability of exceedance.

A number of different parameters,

but this relationship still holds up.

So what people have observed when

they plot up rockfall data this way is

that there – in log-log space,

there’s this linear relationship,

which is quantified by a power law.

It’s not a linear regression.

It’s a power law regression.

And what that does is, you can use the

coefficients of the power law fit to that

data to describe the trend of rockfall

between frequency and magnitude.
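The power-law fit described here is usually done as a linear regression in log-log space; a minimal sketch, using made-up volumes and frequencies rather than any of the data from the talk, might look like:

```python
import numpy as np

def fit_power_law(volume, frequency):
    """Fit frequency = a * volume**(-b) by linear regression in log-log space.

    Returns (a, b); b is the slope magnitude often reported as the
    "B parameter" of a rockfall frequency-magnitude curve.
    """
    slope, intercept = np.polyfit(np.log10(volume), np.log10(frequency), 1)
    return 10 ** intercept, -slope

# Hypothetical inventory: frequency (events/yr) vs. volume (m^3)
volume = np.array([0.1, 1.0, 10.0, 100.0])
frequency = np.array([5.0, 1.0, 0.2, 0.04])

a, b = fit_power_law(volume, frequency)
print(f"a = {a:.2f}, b = {b:.2f}")  # frequency ≈ a * V**(-b)
```

A steeper curve (larger b) means small events contribute relatively more to the total, which is how the B parameters of different databases are compared later in the talk.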

And this plot also shows something

that people have often observed,

which is that there’s frequently sort of

a drop-off of data towards the

lower end of the volume spectrum.

And that’s related to – what people

usually attribute that to is,

in their rockfall databases,

the small events get under-sampled.

And so that’s why they fall away

from this linear trend, where,

in theory, the linear trend

should continue all the

way down to zero.

So other options for quantifying

rockfall frequency – people would

use a recurrence interval,

like with floods or earthquakes,

but those also have to be tied with

magnitude. And, as you probably

all know, recurrence interval is

basically just the inverse of frequency.

And so then the other side of this is,

where is rockfall happening –

on the talus deposit? Is it running

out beyond the talus deposit?

And so that’s another piece of this topic

that’s important to consider and is

usually a part of what people are

looking at in these frequency studies.

So I’m going to talk about a little bit

of literature review of studies where

people have looked at rockfall

frequency and depositional behavior

in order to get timing information,

magnitude information, to better

understand these processes.

And note that a lot of studies use

a combination of methods,

but I’ve broken the big methods

into a couple of categories.

So one is to look back at some kind of

a historical record of rockfall

and to be able to make calculations

based on a database that way.

So using actual historical records

where people have written down

the times and locations and volumes

of rockfall, dendrochronology using

tree rings, cosmogenic isotopes,

lichenometry – all to date old events.

And your other main option is to

directly measure rockfall and

create an inventory that way.

So change detection methods are

becoming more and more popular.

Seismic monitoring is

another one that’s been used

in at least some fields,

and we’ll talk about that.

And then there are a few other options

that are out there that are published.

I’m not going to talk about them

in any detail in this presentation,

but I can send you some

references if you’re interested.

Feel free to contact me – terrestrial

SAR, rockfall collectors, using

radiocarbon under select conditions –

so those are some of the other options.

So historical records are great

for computing rockfall frequency,

but they’re hard to get. So they’re easier

to find along transportation corridors

or if there’s a specific site of interest.

Yosemite Valley is a good example

of a place where there’s a pretty

extensive rockfall record.

But the detail and availability of

these records can be highly variable,

and sometimes a record will have –

for some events, it’ll have all the

information you want, and it

won’t have that for other events.

And these are often biased towards

larger events simply because those

are the ones that are more likely to

make it all the way down the talus

and impact a road or a facility

that somebody cares about.

So this plot on the right is from

an example in British Columbia where

they had two transportation corridors,

and along each of them, there was –

there were rockfall records

from both a road and a railroad.

And so they could – they could plot up

frequency-magnitude data and be able

to compare them along two corridors

and have two data sets to look at

each corridor. And so they got the

sort of plot that we’re talking about –

this linear relationship in log-log space.

There’s a little bit of this tailing off

towards the smaller-magnitude events.

But they found some pretty nice

agreement between their first corridor

of the highway and railroad and

the second corridor of

their highway and railroad.

And so, if we look at these

B parameters, to put these in context,

if the – if the B parameter is greater,

that means your curve is steeper,

and that means the relative

contribution of small events is greater.

And so that’s how you can start

to compare these curves

between databases. And this is a case

where they had some independent dates

from some rock avalanches in the area,

and so they were actually able to

extend up to some very large

volumes in their plots that way.

So dendrochronology – well, I guess

I’ll start off by saying, what if you don’t

have historical records of rockfall?

How do you build a database using

some other kind of record?

So one example is using

dendrochronology. That’s basically

counting ages with tree rings.

And this is a pretty

well-established age relationship.

And, when you take cores of trees,

if your – if your core intersects any

sort of disturbance that might have

been caused by rockfall, like a scar

in the bark or a year where the growth

ring is thicker because the tree was

reacting to the trauma, you can use

that to build up a rockfall database.

So you need to have relatively

dense long-lived trees.

And it takes a lot of effort,

but it gives you this opportunity to

look at spatial distribution of rockfall.

So this example is from Switzerland.

And they had a database of trees –

those are these little dots in

the upper-left plot in

kind of the colored area.

There’s some black and gray dots.

So those are marking locations of trees.

And the – so the color ramp in

this central part of the map

is showing the recurrence interval

between growth disturbances.

And so that’s what they’re using

as the proxy for rockfall events.

And it means that they’re able to

develop recurrence intervals for growth

disturbances at each tree and then

contour those values so that they have

a plot of recurrence interval as it

varies across the talus deposit.

So what they showed – and if you

look down at the plot below that,

they were able to identify the same

damaging event in a whole bunch of

trees all the way down this

north end of the talus deposit.

So they said, while most of our events

are relatively small incremental

rockfalls, or fragmental rockfalls,

we have record of one large event

that made it all the way down

this talus deposit and damaged

a lot of trees

on the way down.

So this is a method that’s not

precise enough to capture

year-to-year variations in rockfall,

but these more decadal-type variations,

you can detect with

dendrochronology.

So lichenometry is another option for

trying to build a database of old events.

This is a Quaternary dating

method from glaciology.

And it uses calibrated lichen

growth rates to infer

minimum surface exposure ages.

So what that means is, you develop

a calibration curve that characterizes

the relationship between lichen size

and lichen age. And people do that

by dating surfaces where they

know how old the surface is,

and they can measure a lichen on it.

And then you go to places where you

don’t know the age of the surface,

and you measure the lichen size

and compare it to that curve

to get your age estimate.
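The two-step workflow described here (calibrate on known-age surfaces, then invert the curve on unknown surfaces) can be sketched with hypothetical calibration numbers; the specific ages and lichen sizes below are illustrative, not from the study:

```python
import numpy as np

# Hypothetical calibration data from known-age surfaces (e.g. dated
# gravestones): surface age in years vs. lichen long axis in mm.
cal_age = np.array([20.0, 50.0, 80.0, 110.0])
cal_size = np.array([12.0, 30.0, 48.0, 66.0])

# Linear growth curve: size = rate * age + intercept
rate, intercept = np.polyfit(cal_age, cal_size, 1)

def lichen_age(size_mm):
    """Invert the calibration curve: minimum exposure age from lichen size."""
    return (size_mm - intercept) / rate

print(f"{lichen_age(40.0):.1f} years")  # ≈ 66.7 with this calibration
```

Because the calibration yields a minimum exposure age, a measured lichen only bounds when the rock surface was first exposed by the rockfall.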

The accuracy can be a bit limited – and

the age ranges as well – to a little bit

longer than dendrochronology, but still

in the last 1,000- to 2,000-year range.

The upside is, this is a method that lets

you collect a lot of data very quickly

because you’re just collecting

lichen measurements.

It assumes that the rock surface

is clean prior to growth.

And there have been some

concerns raised by lichenometry.

You know, these are such long-lived

organisms that people are concerned,

how well do we actually

understand the lichen growth?

How linear is it –

or, how consistent is it, I guess.

And so that’s worth

acknowledging here.

But there is a fairly well-established

relationship between age and size.

And so you can still say some things

about, well, larger lichens are older.

So this is a case study from the

Austrian Alps where they had

a really large lichenometry data set

for a whole bunch of talus deposits.

And the map on the right is

showing those deposits there.

And I think someone is unmuted.

I’m going to mute – there we go.

We’ve still got someone unmuted here.

I’m getting some …

- Yeah. Could someone – everyone

please check and make sure

you’re muted?

- I’m just getting some feedback

on my speakers.

- Yeah.

[silence]

- Okay. That’s

better now. Thanks.

So, in this case study, they had

lichenometry measurements from

a whole bunch of talus deposits.

And what they’re able to show is

that the – they were looking at

lichen coverage to say, well,

at the tops of these talus deposits,

there’s a lot more rocks that are free,

or almost totally free, of lichen.

So we’re using that as a proxy

for greater activity

at the tops.

But they also found an association

between these more lichen-free fans

and permafrost. So they were

able to associate, well,

these permafrost-affected slopes

are causing more rockfall in certain

places around our study area.

And they also looked at

the calibrated lichen ages.

And, looking back, they were able to

find this peak – if you look at the

left-hand histogram down at the

bottom of the slide, so that peak

around 1890 or so, they associate that

with increased rockfall activity

at the end of the little ice age,

giving you more lichens that were

from that time period. And so they

were able to show a little bit of

temporal variation that way as well.

So another way you can get the ages

of previous rockfall is with

cosmogenic isotopes.

This gives you a good age range.

It also makes that assumption that

you have a boulder that was –

it was buried so deep, it was not having

any cosmic ray interactions.

And then the rockfall happened,

and it was exposed.

This is a method that’s more expensive,

and it takes a lot more effort to get

your samples and sample –

and do your sample processing.

And so that’s why it can be

difficult to get a big enough data set

to actually build a

frequency-magnitude curve.

But there’s still some

things you can learn.

So this is a case study from

Greg Stock’s work in Yosemite.

This is from a paper that’s

kind of a park-wide hazard

assessment for Yosemite.

And so they collected beryllium-10

cosmogenic ages for outlying boulders.

And so what they meant were boulders

that were beyond the edge of the

talus deposit, but were still obviously

rockfall-related and were also –

tended to be in and among

some of the campgrounds and,

like, facilities that already existed

in Yosemite Valley. And some of

these are really quite large.

There’s a picture of

one on the upper right.

So they collected these beryllium-10

ages and plotted them up.

And what they found –

they found a couple of things.

So one is that all of the ages fell below

their 15,000-year expected cutoff

because that’s the inferred age

for the deglaciation of the valley.

So all the rockfalls came out as

younger than that, which makes sense.

And that allowed them to use that

15,000-year upper age estimate to

get a recurrence interval for their

total data set of outlying boulders.

And so, if you take 15,000

and divide it by 258,

you get 50 to 60 years,

on average.
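That back-of-the-envelope recurrence estimate is just the exposure window divided by the event count, using the two numbers quoted from the Yosemite study:

```python
deglaciation_age_yr = 15_000  # inferred deglaciation age of Yosemite Valley
outlying_boulders = 258       # dated/mapped boulders beyond the talus edge

# Average recurrence interval for an outlying-boulder event
recurrence_yr = deglaciation_age_yr / outlying_boulders
print(f"~{recurrence_yr:.0f} years between events")  # ~58
```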

Okay.

So, if we switch over to the idea of

directly measuring rockfall with

change detection – or, rather,

change detection is a method

that’s becoming a lot more popular

these days as technology becomes

more available and more usable.

So what I’m referring to with change

detection is taking two point clouds –

if you look at the graphic on the lower

right, taking two point clouds where, in

the left panel, it’s showing computing

a normal to the first point cloud using

some-size neighborhood of points.

And then, in the right panel,

it’s showing how you then align

a cylinder along that normal

and average the points in the

two clouds so that you can

actually calculate the difference

between those two

clouds along that line.

So, when you repeat this kind of

calculation thousands or

tens of thousands or – of times for

an entire point cloud, it means you can

map change across a large surface.
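The normal-and-cylinder calculation described above (essentially the M3C2 approach) can be sketched for a single core point; this is a simplified illustration on synthetic flat "surfaces", not a production change-detection implementation:

```python
import numpy as np

def m3c2_distance(core_pt, normal, cloud1, cloud2, radius):
    """Simplified M3C2-style distance at one core point.

    Projects each cloud's points onto the normal, keeps those inside a
    cylinder of the given radius around the normal line, and differences
    the mean along-normal positions. (Sketch only; real implementations
    also handle normal estimation, cylinder depth, and uncertainty.)
    """
    n = normal / np.linalg.norm(normal)

    def mean_along_normal(cloud):
        rel = cloud - core_pt
        along = rel @ n                            # signed distance along normal
        radial = np.linalg.norm(rel - np.outer(along, n), axis=1)
        return along[radial <= radius].mean()      # points inside the cylinder

    return mean_along_normal(cloud2) - mean_along_normal(cloud1)

# Two flat synthetic surfaces 0.05 m apart along z
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
cloud1 = np.column_stack([xy, np.zeros(500)])
cloud2 = np.column_stack([xy, np.full(500, 0.05)])

d = m3c2_distance(np.zeros(3), np.array([0.0, 0.0, 1.0]), cloud1, cloud2, 0.3)
print(f"{d:.3f} m")  # ≈ 0.050
```

Repeating this at every core point yields the per-point change map that makes centimeter-scale rockfall detection possible.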

And so you – I’m sure many of you

are already aware of how

powerful of a technique this is.

So you can do change detection on

point clouds developed using Lidar or

structure-from-motion photogrammetry.

For those of you that don’t know,

photogrammetry is this technique

visualized in the upper right,

where you take repeated overlapping

photos of a subject with a moving

camera, and then a computer

algorithm reconstructs the shape

of your subject based on

the motion of those cameras

and identifying the features

in multiple frames.

So photogrammetry is a cheaper

version of preparing point clouds,

but you’ve got issues with moving

vegetation, and you’ve also got to

be very careful about how you

scale the point cloud – if you use

scale bars or GPS ground control –

to tell the model what the

dimensions should be

in real life.

Lidar gives you very

precise measurements,

but it’s more expensive, and so it

can be more difficult to apply.

But both of these give you point clouds

that let you detect changes down to the

centimeter scale, which can be really

powerful for detecting rockfall,

since all the methods we talked about

before pretty much leave out all

the small – the things at the small

end of the volume spectrum.

So this is a photogrammetry case study

from a pit mine in Australia where they

had several monitoring periods where –

so the top image is showing the

pit slope they

were monitoring.

And the colors are representing

different periods of rockfall monitoring.

So you can see all the rockfalls that

happened in periods A1, A2, etc.

So they detected a total of

645 events on the small end

of the volume spectrum from

February to March 2018.

And so then they were able to

prepare frequency-magnitude plots.

And this is a point where you can –

you can start to see a few

of the differences here.

So this red line is showing a period of

lower activity while the blue

and orange lines are showing periods

of higher activity. And they associated

the higher activity with wet periods and

then also with extremely dry periods

when they had these weak siltstone and

claystones drying out and contracting

and fracturing more because of that.

These figures are from a Lidar

change detection study

where they had several monitoring

periods, but I think about a year of it,

they took hourly Lidar scans of

this coastal bluff in England.

And what that allowed

them to do was compute

their frequency-magnitude plot –

that’s the left of the upper graphs –

at several different

time intervals.

So, and by the way, on the Y axis,

you’re seeing the probability of

exceedance instead of what I put on –

or, what was on the previous plots

with frequency, but the power law

relationship still holds.

So this is showing how, if you

monitor at a shorter time interval,

your frequency-magnitude curve

gets steeper because your relative

contribution of small events is

greater and the large events is less.

And that’s because, when you have

frequent rockfalls at a slope, often

rockfalls will happen from the same

location on the slope repeatedly.

And so, if you scan those at – a year

apart, it might look like just one event.

But if you were to scan it every month

of the year, you might see it was,

in fact, five different events that

all just occurred in the same place.

So you see – with more frequent

scanning, you see more smaller events.

And then the plot on the right is

showing the change in the B parameter

for the power fit with the

change in the monitoring interval.

So I wanted to say just a quick word on

seismic sensing for rockfall monitoring.

A lot of the case studies that

I’ve looked at for this apply

seismic monitoring of rockfall

in volcanic settings, where they

were looking at a lava dome and

monitoring rockfalls from the lava

dome to use as a measure of volcanic

activity, whether that’s the extrusion

rate of the magma or if it’s related to

seismicity in the magma chamber.

So this plot in the lower left is showing

how rockfall activity was increasing,

and they were able to correlate

that with some other signs of

volcanic activity. I’m not aware of any

examples that took a seismic plot –

or, a seismic record of rockfall

and created a frequency-magnitude

plot from it. And I think that’s because

of some concern over estimating the

volume from the seismic amplitude.

Because that’s – the amplitude is

dependent on how close you are to the

seismic event and some issues like that.

So now that we talked about the

literature review, that’s kind of

a quick view of a whole variety of

ways to look at rockfall frequency.

And so, with those things in mind,

I want to switch gears and talk about

these case studies that I worked on

in Glenwood Canyon.

So what I’ll start with is to talk about

background on the study area itself.

So any of you that are already

in Colorado or have explored

the west may have some familiarity

with Glenwood Canyon.

So this is a transportation corridor

for Interstate 70 as well as the

Denver & Rio Grande Railroad

in western Colorado.

It’s a 13-mile-long stretch

of the interstate.

And Glenwood Canyon suffers from

really frequent rockfall problems.

And I’m sure you all heard, as well,

about the debris flows that have

happened in Glenwood Canyon

this past summer.

And rockfall has been a problem

in Glenwood Canyon ever since people

started putting roads through there.

So the geology of this area – we’re just

going to talk about the geology of

the lower part of the canyon

because that’s what’s relevant to

the rockfall studies I’ve been doing.

So, at the – at the base, there’s

the Precambrian basement.

These are metagranitoids,

sometimes intruded with some

other younger granite dikes.

And then, above that,

there’s two Cambrian formations –

the Sawatch orthoquartzite

and the

Dotsero dolomites.

And these form the base of

the White River Uplift.

So that’s what the Colorado River cuts

through to form Glenwood Canyon.

So last year, the Grizzly Creek Fire

burned part of the canyon itself

as well as Grizzly Creek Canyon

and much of the White River Uplift

in the vicinity of Glenwood Canyon.

And we’ll talk a little bit more about

that later because it’s more relevant

to the drone scanning study.

But that’s another piece

of our context here.

So I like to include this picture

because it shows some of the

motivation for studying rockfall

in Glenwood Canyon at all.

So there are plenty of cut slopes

in Glenwood Canyon and,

as you all know, cut slopes are

a frequent source of rockfall.

But this is a place where

we actually have a lot of rockfall

from natural slopes. And you can

see the proximity of these –

so the big slope above the

tunnels to the right,

that’s all the Precambrian

granites and metagranites.

You can see the proximity

of these kinds of slopes

to our transportation facilities

here in the canyon.

So, for the first case study,

we studied six cliff and talus deposit

systems in the Glenwood Canyon area.

Those locations are marked by

the orange points in this map.

And then the background shading

is just a slope map, and it’s really

just to show the relief of the canyon.

So you can get an appreciation for it.

So these slopes – four of them

are granite rock masses.

There’s one orthoquartzite

rock mass and then another one

that has – another talus deposit

that has sources that are both

granitic and orthoquartzite.

And we chose the locations primarily

for access because they were

near to places where it was possible

to park and get out and walk to –

walk to these slopes

to collect data.

So we chose to use lichenometry

to quantify rockfall frequency

because we don’t have much

of a historical database.

Even with all the focus on Glenwood

Canyon to clean up rockfall hazards,

the database is really limited.

I have a copy of part of it,

and most of the events, there’s

no volume recorded for it.

So we have timing

information but no volume.

So that doesn’t help us

for frequency-magnitude.

We don’t have enough large long-lived

trees for dendrochronology.

We’re in a deep canyon, so cosmic

ray shielding is problematic

for cosmogenic dating.

And the other upside of lichenometry

is it’s just a lot cheaper than any

sort of direct methods that

require advanced equipment.

And, in addition, this was –

so I did the field work for this in 2018

and, even in the few years since then,

photogrammetry technology, especially,

has been getting a lot cheaper.

So, to collect the data for this,

I measured lichens on boulders

at each slope and sampling

along the contours.

And we’ll look a little bit

more at that in a moment.

So these images are showing

the study sites from this project.

The red dashed lines are

outlining the source areas.

And then the talus is

below in each photo.

And then the letters are

indicating lithology –

so G for granite,

Q for quartzite.

So, as I alluded to earlier,

with lichenometric studies,

to get your estimated ages,

you have to have some understanding

of what the local growth rate is

for your lichen species.

Because, since lichens are

living organisms, they’re affected

by climatic factors that can

change their growth rate.

So theoretically, you can –

you can get calibration data from

any surface that you know the age of,

which could be a foundation,

or a monument, or a quarry wall, or –

there are a whole bunch of possibilities.

But most frequently, the known

age surface information

comes from tombstones

in local cemeteries,

where you have a date right

there on the gravestone.

There’s some uncertainty with

gravestone ages just because they might

have been put up a few years later after

someone died, or a few years early.

But it gives an okay

approximation of age.

And so then you can start to develop

these curves, which I’ve put in the

lower right, that are plotting the

relationship between surface age

of these known-age surfaces

and then our lichen long axis.

So, to get our calibration curve,

we plot a linear fit to the

fastest-growing lichens. So those

are marked by the little crosses.

And you can – you can see that

this data is a little bit limited.

And so that’s worth bearing

in mind that our calibration curves

are not very well-constrained.

And so it’s just – so that’s

an uncertainty in

our calculated ages.

So, for the data set, there’s –

I have 1,200 lichen measurements

from these deposits.

And so these histograms on the

right are summarizing –

oh, and I forgot to mention,

the reason I have two columns

over here on this slide

is because I have two species.

So the idea behind choosing

two species was that we could collect

two semi-independent lichenometry

data sets and then be able to compare,

do they show the same picture

about rockfall or do they not.

And we’ll have sort of

a semi-independent

check on our results.

So that’s why these and some

other subsequent plots are

color-coded where the brown is

for one species, Lecidea atrobrunnea,

and the green ones are

for Lecanora novomexicana.

So the histograms are showing our

lichen data after we’ve converted

the size measurements to ages.

The data set represents

over 1,000 boulders with

a few that are – have volumes

up to 24 cubic meters.

That was the largest one.

And then a note about

an assumption in this method.

This method is assuming that

every block in the talus is

its own rockfall event.

And, as you can imagine,

that is an assumption that’s more likely

to be true for small blocks, which

theoretically are less likely to shatter,

and less true for large blocks.

So that’s another assumption

we have to make, given the

precision of lichenometric dating.

It’s not really possible to date

five adjacent boulders and tell

for sure whether they all came from

the same rockfall event or not.

So that’s an assumption to list here.

So this gives us frequency-magnitude

curves where we’ve plotted

our rockfall volumes. Those are

from the boulder size measurements.

And then, on the Y axis

is frequency density.

So that’s our frequency where

we’ve computed – we binned the

lichen sizes based on volume,

divided the oldest age in that bin

by the number of measurements

in the bin, and then further normalized

that by the width of the bin.

So that’s why it’s frequency

density instead of

just frequency.
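A minimal sketch of that frequency-density calculation, assuming frequency in each volume bin is the count of measurements divided by the oldest age in the bin (taken as the record length), then normalized by bin width. All volumes, ages, and bin edges here are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical boulder volumes (m^3) and lichen-derived ages (years);
# invented numbers for illustration only.
volumes = np.array([0.05, 0.2, 0.4, 1.5, 3.0, 8.0, 24.0, 0.8, 2.2, 0.1])
ages = np.array([40., 120., 300., 80., 200., 450., 500., 60., 150., 30.])

# Log-spaced volume bins, typical for frequency-magnitude analysis.
edges = np.logspace(-2, 2, 9)  # 0.01 m^3 up to 100 m^3

freq_density = []
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (volumes >= lo) & (volumes < hi)
    n = in_bin.sum()
    if n == 0:
        freq_density.append(0.0)
        continue
    record = ages[in_bin].max()            # oldest age in bin = record length
    freq = n / record                      # events per year in this bin
    freq_density.append(freq / (hi - lo))  # normalize by bin width (per m^3)

print(freq_density)
```

Plotting `freq_density` against the bin centers on log-log axes gives the kind of frequency-magnitude curve described here.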

So that gives us our

frequency-magnitude curves.

With some of these, there’s pretty good

agreement between the two species at

a given slope – Slope 1 and Slope 2.

With others, there’s a little bit

more divergence. And another thing

that you can observe about these plots

is that the data start to fall away from

the curve towards larger magnitudes.

And we attribute that to the

under-sampling of the large-volume rockfalls.

Because we’re assuming every boulder

is its own rockfall and larger volumes

are just going to be more likely

to disaggregate – you know,

a 24-cubic-meter boulder

is pretty large for this area,

given typical rock

mass characteristics.

So, from the frequency-magnitude

curve, we can calculate recurrence

interval, which is just

1 over our frequency.

And that lets us get recurrence intervals

for different volumes of rockfalls.
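Since recurrence interval is just the reciprocal of frequency, that step is a one-liner. A sketch assuming a fitted power-law frequency model; the parameters `a` and `b` are illustrative, not the values fitted in the study.

```python
# Hypothetical power-law frequency model f(V) = a * V**(-b), giving the
# annual frequency of rockfalls of at least volume V; a and b are
# illustrative placeholders, not the study's fitted parameters.
a, b = 0.5, 0.8

def recurrence_interval(volume_m3):
    """Recurrence interval in years: 1 over the modeled frequency."""
    return 1.0 / (a * volume_m3 ** (-b))

for v in (0.1, 1.0, 10.0, 100.0):
    print(f"{v:7.1f} m^3 -> {recurrence_interval(v):9.1f} yr")
```

As expected, larger volumes map to longer recurrence intervals, matching the color-coded groupings on the slide.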

So these are color-coded –

so the blue is for a 0.1-cubic-meter

rockfall – 1, 10, 100 cubic meters.

And then, each of these is grouped

because we’ve got our

two species’ estimates again.

So the picture is pretty consistent

at the smaller volumes.

Our recurrence intervals in years

are in the 1- to 10-year range.

But, once we get into larger volumes,

there starts to be a lot more scatter.

There starts to be less agreement

between the species.

And that goes back to the mismatch

in our frequency-magnitude curves.

One thing that’s curious about this

is that Slope 1 is one of

our quartzite slopes, and its result

looks very similar to Slope 2,

which is a granitic rock mass.

And just from looking at them,

you might – it would be reasonable

to conclude, well, the quartzite

one should have more frequent

rockfall, right, because it’s

much more frequently jointed.

And so that’s one of the things that

was a little bit curious and that

I’ll come back to with the drone study

because one of the things

we’re looking to check is,

does this result

actually make sense.

If we plot up the lichen ages spatially,

it gives an opportunity to try to look

for patterns in deposition to see,

are we seeing younger ages

at the top of the deposit

and older ages at the bottom?

And the patterns are not

very clear in this data.

So we take that to indicate something

about rockfall activity over the time

span that we can look at here;

in these plots, the oldest

age is about 500 years.

In that time span, rockfall at these

slopes is dominated by these smaller

incremental rockfalls instead of

a large event that might bury

a whole side of a talus deposit.

So then, to put these talus deposits

into a longer-term, like, geomorphic

context, we wanted to know, how does

the volume of the deposit relate

to the frequency-magnitude

information that we have?

Basically, to back out, if this is our

frequency-magnitude curve,

how long would it take us to get

all of the volume of talus that

we’re observing at these deposits?

So we were looking at

Glenwood Canyon here –

this is an oblique image –

to illustrate how talus tends to

fill these chutes and gullies

in these granitic bedrock slopes

rather than forming cones

or big aprons of material.

There are some of those kinds of cases,

but we didn’t really find

a volume estimation method

in the literature that

matched the slopes we had.

And so we came up with some

new ones to try to model,

if we have this gully, and we

fill it with talus, and it’s basically

bounded by bedrock, what kind of

volume does that give us?

And so, once we get the volume

estimate, then we can back it out

to find out what kind of time

span of deposition we’re looking at.

So, to illustrate how we get the volume,

this plot on the right is showing –

here’s the outline of our deposit.

Here’s the DEM.

In Part B, here’s the DEM

of the deposit surface.

And so we estimate the chute

steepness from the bedrock

slopes around the deposit,

using the interquartile range of these

bedrock slope measurements to get

a steeper chute and a shallower

chute, so that we have

an upper- and a

lower-bound volume.

And then, in the lower group, it’s just

showing the DEM put together with

the lower surface, and that’s what

we use as our deposit volume model.
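The volume model, deposit surface minus a modeled bedrock chute floor, can be sketched as a grid calculation. The DEM, cell size, and the two chute floors below are invented stand-ins; the real floors would be planar surfaces projected from the measured bedrock slope angles.

```python
import numpy as np

cell = 1.0  # DEM cell size (m), hypothetical

# Hypothetical 4x4 DEM of the talus deposit surface (elevations in m).
surface = np.array([
    [105., 106., 106., 105.],
    [103., 104., 104., 103.],
    [101., 102., 102., 101.],
    [100., 100., 100., 100.],
])

def talus_volume(surface, floor, cell):
    """Sum positive thickness (surface minus modeled chute floor) times cell area."""
    thickness = np.clip(surface - floor, 0.0, None)
    return thickness.sum() * cell * cell

# Two modeled bedrock floors stand in for the interquartile-range chutes:
# a steeper chute leaves a thinner fill (lower-bound volume), a shallower
# chute a thicker fill (upper-bound). Uniform thicknesses are illustrative.
floor_steep = surface - 1.0    # ~1 m of fill everywhere
floor_shallow = surface - 3.0  # ~3 m of fill everywhere

v_lo = talus_volume(surface, floor_steep, cell)
v_hi = talus_volume(surface, floor_shallow, cell)
print(v_lo, v_hi)  # lower- and upper-bound deposit volumes (m^3)
```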

So this upper-right plot is showing

the volume ranges that we estimated

for each deposit. So those are varying

between about 10,000 cubic meters

all the way up to between

400,000 and 700,000 cubic meters.

That’s a much larger, more

extensive deposit – Slope 6.

And so we found an equation that

can use the frequency-magnitude

parameters, A and B, and an estimate

of your largest and smallest possible

rockfall sizes, to calculate a volumetric

flux. We then divided our total

volume estimate by that flux

to get our accumulation time.

And we constrained this max volume

using some measured events.

So there’s a rockfall that hit

the highway in 2004 from the quartzite

that was about 1,100 cubic meters and

then a 2010 rockfall of about

450 cubic meters from the granitoids.
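One common form of that flux equation integrates volume against a power-law frequency density between the smallest and largest possible event sizes. The sketch below assumes that form, with illustrative A and B; only the v_max constraint (the ~1,100 and ~450 cubic meter events) and the deposit-volume scale come from the talk.

```python
def volumetric_flux(A, B, v_min, v_max):
    """Volumetric flux (m^3/yr) assuming a power-law frequency density
    f(V) = A * V**(-B); flux = integral of V * f(V) dV from v_min to v_max."""
    if abs(B - 2.0) < 1e-12:
        from math import log
        return A * log(v_max / v_min)  # special case: the integral is logarithmic
    return A / (2.0 - B) * (v_max ** (2.0 - B) - v_min ** (2.0 - B))

# A and B are illustrative placeholders; v_max is constrained by the measured
# 2004 (~1,100 m^3) rockfall, and v_min is an assumed smallest event size.
A, B = 0.2, 1.3
flux = volumetric_flux(A, B, v_min=0.01, v_max=1100.0)

deposit_volume = 4.0e5  # m^3, in the range estimated for the larger deposits
accumulation_time = deposit_volume / flux
print(f"flux ~ {flux:.2f} m^3/yr, accumulation ~ {accumulation_time:.0f} yr")
```

With these placeholder numbers the accumulation time lands in the thousands of years; the spread in the real estimates comes from propagating the two species' A and B values through this same calculation.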

And so these are our estimated

accumulation times from those.

And we have some good overlap

with some of these and some

not-so-great overlap [chuckles], or lack

thereof, for some of the other slopes.

And so that goes back to the shape

of the frequency-magnitude curve.

And where we have mismatch between

the two species in those curves,

we get a bigger possible range

for our accumulation time.

So we have these

recurrence interval estimates.

Some of them

seem plausible.

A 1-cubic-meter rockfall

every few years seems reasonable

given the amount of lichen coverage

on these deposits, the amount of,

like, scars on boulders from,

you know, recent rockfalls.

So we have those estimates.

We have this picture that

the smaller rockfalls

are dominating over

our evaluated time scale.

And that it’s a fairly random

depositional pattern at that,

you know,

five-century-type time scale.

And we have these times that

we estimated to accumulate

the talus volumes.

But these left us with some

more questions about, well,

how do we know for sure that

these results actually make sense?

How do we validate this with

an independent method that’s

not dependent on lichen growth?

And so that’s how we got to doing a

drone study in Glenwood Canyon.

And I’ll talk about that in a minute.

I just wanted to acknowledge

some of these uncertainties about,

when you measure a lichen date,

you’re assuming that it grew since

the surface was exposed, but maybe

there was a lag in that time.

And we also have – we know

there’s some uncertainty in

our calibrated growth rates.

And our larger events are

probably under-sampled.

And so we have

some issues there.

So, for the second case study,

the Grizzly Creek Fire happened

in 2020 before we started

a drone monitoring campaign.

So that’s – the colors on this map

are showing the burn intensity.

And so we ended up trying to design

this study to be a hybrid, where,

one, we look at, what does

drone scanning tell us

about the slopes that we have

lichenometric data for, but also, what

happened after the Grizzly Creek Fire?

Do we have more rockfall from

these granitic slopes because

of the wildfire damage?

So we reused three sites from the

lichenometry study, and then we have

one new one that’s our burn

slope, which I’ll show here.

So, on the left side,

this is our burn slope.

You can see the blackened trees.

And we chose it because it

was relatively close to a slope

where we had lichenometric data.

That’s the one

on the right.

And these two have similar aspects,

similar rock masses, and so we felt

they would be a reasonable

choice for comparing them to see,

do we see more rockfall

at one than the other.

Or do we have

a similar picture from those two?

So, for the second case study,

it’s in progress.

These are goals that we

haven’t completed yet.

But we’re wanting to

look at the comparison

of burned and

unburned slopes.

We’re going back to that quartzite

and the granite slope –

Slopes 1 and 2 along the highway

that gave such similar rockfall

recurrence estimates, to see,

does that picture actually make

sense based on drone scanning.

So that’ll help us with our soft

validation of the lichenometric study.

And then we’ll also be able to look at

change detection results from our

source zone versus the talus.

Do we get the same number of

rockfalls if we do change detection

on the talus as we do on the source?

Because, in the lichenometry study,

we’re using the talus as a proxy

for our rockfall activity.

So, like I said, this is a year-long

scanning campaign.

We’re aiming for monthly scans.

It’s been interrupted by the summer

debris flows that you’ve

probably all heard about.

But I’ll be continuing to collect

data at least through next spring.

I’m using a DJI Mavic 2 Pro.

And I had a ground control network

that I set up at each slope for the

first scan, and then I’ll be referencing

every subsequent scan

to that first point cloud.

And I put these figures in to illustrate

the flight automation that we’re using.

So, to try to make our data collection

as consistent as possible, we’re trying

to fly the same flight paths.

So automation helps with that.

These are just – I wanted to show

a couple photos of the debris flows

because they’re quite dramatic.

But we’ve been able to get back

in the canyon. And so,

once we process these scans,

we’ll be looking at, did that rainfall

have any effect on rockfall.

For the sake of time, I’m going to

skip beyond the workflow slide

and just talk a little bit more

about our preliminary results.

So far, I have

24 scans of our four study sites.

So, Slopes 1, 2, and 3, those are

reused from the lichenometry study,

and Slope 4 is our new one.

I’ve got a table to summarize

all the many photographs taken so far,

and there will be many more.

So this is showing a couple of the

change detection calculations.

So the output you get is

a point cloud where every point

has a distance value

associated with it.

It’s the distance between

the two clouds at that point.

So blue is into the page as we’re

looking at it, and red is out of the page.
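Production change detection typically uses a dedicated algorithm (M3C2 in CloudCompare is a common choice); the toy nearest-neighbor version below only illustrates the signed per-point distance idea. The clouds and the slope-normal direction are invented.

```python
import numpy as np

def signed_change(ref, comp, normal):
    """For each point in comp, distance to the nearest ref point, signed
    along an (assumed) slope-normal direction: negative = into the slope
    (material lost, 'blue'), positive = out of the slope ('red')."""
    normal = np.asarray(normal, float)
    normal = normal / np.linalg.norm(normal)
    out = np.empty(len(comp))
    for i, p in enumerate(comp):
        d = ref - p                          # vectors to all reference points
        j = np.argmin((d ** 2).sum(axis=1))  # index of nearest reference point
        out[i] = np.dot(comp[i] - ref[j], normal)
    return out

# Toy clouds: a flat 3x3 reference surface and a repeat scan where the
# centre point has receded 0.5 m into the slope (a detached block).
ref = np.array([[x, y, 0.0] for x in range(3) for y in range(3)], float)
comp = ref.copy()
comp[4, 2] = -0.5  # centre point moved into the slope
dist = signed_change(ref, comp, normal=[0, 0, 1])
print(dist)
```

The centre point comes back with a negative distance (the "blue, into the page" case) while unchanged points are near zero.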

So what this is illustrating is that,

when you get these results,

you’ve got to look through

and do a little interpretation

to see what these

results actually represent.

So, in this case, there’s some

vegetation that’s showing up

as a change out of the page.

So that’s not what we’re interested in.

But we do have a rockfall

over here that I’ll zoom in on.

So, if we zoom in, we’re seeing,

all right, here’s our blue marking

that there’s a change

into the slope here.

And so, if we look at the photographs,

what we see is this sort of

crescent-shaped block marked

by the pink arrow in the upper right

is gone in the second image in July.

And a quick volume estimate of that

is about half a cubic meter.

So that’s the one rockfall I’ve seen

so far in the data processing.

Like I said, there’s – I have a lot

of scans that I haven’t

even touched yet.

So there’s plenty more

to do to look at the data.

So, in addition to that rockfall

at Slope 2, when I was setting up

the ground control points,

I saw there was a rockfall that

happened while I was standing there

watching it, which is pretty interesting.

And it was also in that

half-cubic-meter,

maybe a little bit bigger,

size range.

We haven’t seen any rockfall

at the other slopes yet.

There’s a lot of data to process,

but so far, we’re not seeing a lot of

rockfall in the data that I’ve looked at.

And this is even though these change

detection intervals span at least

part of our thawing season.

So what that might imply is that,

at these slopes,

under their current geotechnical

conditions, thawing is not as

important for initiating rockfall.

Or it might just mean that we’re at

a low background rate, and we’re

not seeing a lot because it’s

just happening at a –

maybe a few-year recurrence interval

at these particular slopes.

And so we might see some more after

those September and October scans

related to rainfall rather than thawing.

So the rockfalls we’ve seen at this

Slope 2 so far are actually fairly

consistent with the results that

we got from the lichenometry.

So, if we go back to that recurrence

interval plot, the half-cubic-meter size

would put us at a recurrence interval

around three to four years.

And so, if we’re only seeing one or two

of those in that kind of size range

in a single spring season,

that fits in fairly well with the idea

that those would be on

a couple-year recurrence interval.

If we were seeing, for example,

10 half-cubic-meter rockfalls

in a single spring season,

then we might be more concerned

that these lichenometry results

are not so accurate.

But, so far, what we’re seeing

is somewhat consistent.
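One way to formalize that "somewhat consistent" judgment is a quick Poisson check: given a 3-to-4-year recurrence interval, how surprising is the observed count in one spring season? The rate and season length below just restate the numbers above; treating rockfalls as Poisson arrivals is an added modeling assumption.

```python
from math import exp, factorial

def poisson_pmf(k, rate_per_yr, years):
    """Probability of exactly k events in `years`, given a mean annual rate."""
    lam = rate_per_yr * years
    return lam ** k * exp(-lam) / factorial(k)

rate = 1.0 / 3.5   # one ~0.5 m^3 rockfall every 3-4 years (from the talk)
season = 0.25      # roughly one spring season, in years

p_at_least_1 = 1.0 - poisson_pmf(0, rate, season)
p_at_least_10 = 1.0 - sum(poisson_pmf(k, rate, season) for k in range(10))
print(p_at_least_1, p_at_least_10)
```

Seeing one or two events in a season is uncommon but entirely possible under this rate, whereas ten half-cubic-meter rockfalls in one spring would be essentially impossible, which is the contrast drawn above.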

But, like I said, there’s

a lot of data left to look at.

So just, on future work, there’s going

to be a lot more scanning to do,

at least through this next spring,

and a lot more data analysis.

And so what I’m hopeful we’ll

see is that we’ll be able to fill out

the picture of rockfall.

We’ll be able to have

a really good sense of,

what did the lichenometry tell us

that we believe and can

support with other independent

evidence, and where does it fall short?

And be able to sort of

put those thoughts together

as we keep moving forward.

So thank you all for listening.

That was a lot really fast –

a lot to go through,

but I appreciate your attention.

And I’d be happy to hear your

feedback and answer your questions.

Oh, and by the way, I have

a reference slide at the end here,

but any of those literature methods

that you’re interested in, feel free

to reach out to me, and I’d be happy

to provide more papers if you –

if you want to look at anything

like that. So thank you.
