A rumble in the …..

….. canyon.

Well, it seems we’ve been in blog silence for a while, but that doesn’t mean we haven’t been working away in the background.

Although it is true that COVID-19 has presented its challenges in a whole variety of ways, from having to work at home and limited occupancy in the labs, to difficulties buying the items we need to push on with our instrumentation developments and then getting them delivered and quarantined, we are still pressing on, and many things have happened in the almost six months since our last blog post.

We have a number of developments in progress. One is a new front end for our datalogger that handles a whole variety of different sensor types, because, it seems, every sensor we want to attach outputs its signals, or even its numbers, in a different way, or needs to be powered, or not powered, to work.

Diversity and flexibility of attachments is always a design and development nightmare: what works well for most things invariably doesn’t work at all for just one thing, and it’s dealing with the oddities that takes forever.

Our current challenge, surprisingly, is not imposed by the sensors we want to attach, but by the “advances and improvements” in the components that the printed circuit boards are made from – or in this case just one of them.

A key component has been “improved” with a new generation, which of course has had a very major impact on the design developed to that point.

What is proving most challenging is designing the flagging that handles what is known as a “data overrun”, which is when the buffer “bucket” of arriving data fills up to overflowing before the datalogger’s “brain” can process it and pass it off to storage.

Not a challenge we normally face, but as we move to faster and faster sampling rates and collect more and more data channels in parallel, it is a potential problem that any good design has to consider, plan and build for.

The previous generation of the component provided this flag; the new one doesn’t, so we now have to design in that capability ourselves.

Losing data chunks is simply not acceptable.
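For the curious, here is a rough sketch of the behaviour we mean. The real handling lives in the datalogger firmware, so this Python is purely illustrative – the names, buffer size and structure are ours for this post, not the actual design – but it shows the idea: a bounded buffer that raises a sticky overrun flag when samples arrive faster than they can be drained, so an incomplete block is flagged rather than silently lost.

```python
from collections import deque

class SampleBuffer:
    """Toy model of an acquisition buffer that flags data overruns.

    Illustrative only: the real implementation is firmware on the
    datalogger, not Python.
    """

    def __init__(self, capacity):
        self.capacity = capacity   # size of the "bucket"
        self.samples = deque()     # samples waiting to be processed
        self.overrun = False       # sticky overrun flag
        self.dropped = 0           # how many samples were lost

    def push(self, sample):
        """Called at the sampling rate, e.g. as each new sample arrives."""
        if len(self.samples) >= self.capacity:
            # Bucket is full before the "brain" has emptied it: raise the
            # flag so the stored record can be marked as incomplete.
            self.overrun = True
            self.dropped += 1
            return False
        self.samples.append(sample)
        return True

    def drain(self, n):
        """Called by the processing side: returns up to n samples plus the
        overrun state, then clears the flag for the next block."""
        count = min(n, len(self.samples))
        block = [self.samples.popleft() for _ in range(count)]
        flagged = self.overrun
        self.overrun = False
        return block, flagged
```

The important point is the flag travelling with the data block, so that downstream processing knows a gap exists rather than stitching the record together as if nothing happened.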

So the front end design has had to expand to include the communication channels between the various parts of the datalogger. It is a bit Pandora’s box-like: when you start a development you don’t really know where you’ll end up once you start lifting the lid of the problem you are trying to tackle. Add in the combination of home working, occasional single-occupancy lab working, Zoom-based problem-versus-solution idea bouncing, limited periods in which PCBs can be made and the challenges of testing them ……

…… it’s a good job we are up for a challenge.

In parallel, the Boss managed to win a significant amount of capital investment from the Natural Environment Research Council in the face of quite steep competition – almost £1M – to expand and enhance our platforms in a whole variety of ways.

Although good in some ways, it’s proven not very timely in others, as COVID-19 has made purchasing bespoke items of our own design very challenging indeed.

A large chunk of this capital will be spent on expanding our combo electromagnetic/seismic capability – which involves more than a hundred 3 m antenna arms made of plastic pipe – and the arrival of those didn’t make us popular with the deliveries quarantine!

This development has included the design and build of a handler for the electromagnetic sensors – called electrodes – for our new generation of datalogger. We first built this as a prototype in house and, having proven that design, have had a small number made by a PCB manufacturing facility so that we can run tests across several systems in parallel at the same time.

Every inch of space on a PCB is used – both front and back. Every component is tagged and numbered.

These boards were populated with components in-house and are being tested. When they pass the final stage, we will have another set made, with the components added by the manufacturing facility, test those, and then bulk-buy for the roll-out.

Another large chunk of the capital will be spent on enhancing the Facility’s capability to record earthquakes in the deep ocean for long periods of time.

The remainder will be used to upgrade the older platforms to the newer datalogging capability and replace many of the relocation beacons that have failed or become highly corroded during recent long deployments.

Everything that goes into the sea has to be considered a consumable – it’s just unfortunate that our “consumables” are not that cheap.

Elsewhere, the scientists using the data from the instruments that were stuck on the seabed during the first wave of COVID-19 are starting to get their results.

In the Azores, where six months of data were actually recorded instead of the originally planned two, it seems more than 3500 individual earthquakes have been found in a first pass of automatic picking of the data, with many more pickable by hand.
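For anyone wondering what “automatic picking” looks like in practice: we can’t speak for exactly which detector the Azores science team used, but a classic first-pass approach is an STA/LTA (short-term average over long-term average) trigger. Here is a rough sketch using ObsPy; the file name, filter band and thresholds are made-up placeholders, not their processing recipe.

```python
# Hedged sketch of first-pass automatic event detection with an STA/LTA
# trigger. File name and parameters are illustrative placeholders.
from obspy import read
from obspy.signal.trigger import classic_sta_lta, trigger_onset

st = read("azores_obs_station.mseed")              # hypothetical OBS record
tr = st.select(component="Z")[0]                   # vertical component
tr.filter("bandpass", freqmin=2.0, freqmax=20.0)   # typical local-event band

df = tr.stats.sampling_rate
# Short window ~0.5 s, long window ~10 s; ratios above ~3.5 flag a candidate.
cft = classic_sta_lta(tr.data, int(0.5 * df), int(10 * df))
triggers = trigger_onset(cft, 3.5, 1.5)

for on, off in triggers:
    print("candidate event at", tr.stats.starttime + on / df)
```

Candidates found this way across the whole array are then associated between stations and checked, which is why the hand-pickable total always ends up larger than the first automatic pass.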

In the event map below, the white circles show where our instruments were ideally located to record these events, with most of them occurring in the centre of the array footprint.

Courtesy of the Azores science team.

It also seems that our ability to get instruments onto the seabed quickly, within two weeks of the funding being approved, meant they were in place for the peak of the events associated with the magmatic eruption causing them.

Perhaps our biggest challenge over the COVID-19 year was the Congo recovery; only our instrument platforms remained on the seabed throughout, and they are therefore now providing the only data for the project to work with.

Our instruments were only there to act as a ground motion ground-truth.

However, they have proven to be very capable of recording the rumbling caused by sediment flows as they progressed down the canyon.

The image below shows – as dense yellow areas – the two large-scale slides, within 14 hours of each other, that broke the seabed telecoms cables and cut off West Africa, and behind those many more smaller events, one after the other after the other.

This image shows just five days of data; these instruments were recording for nine months or more.

Courtesy of the Congo science team.

This kind of plot – a spectrogram – plots signal frequency up the side and time along the bottom, colour-coded by amplitude, i.e. how much signal there is at that frequency.

The yellow peaks show that the flows are characterised by high-amplitude signal at low frequencies. The first flow was recorded for almost 15 hours, and the second, smaller one for 6 hours.
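For anyone wanting to make the same kind of figure from their own recordings, here is a minimal sketch using SciPy and Matplotlib. The sampling rate, window lengths and the trace itself are placeholders (synthetic noise), not the Congo data or the team’s exact processing.

```python
# Minimal spectrogram sketch: frequency up the side, time along the bottom,
# colour-coded by amplitude. The trace here is synthetic, for illustration.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

fs = 100.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 3600, 1 / fs)              # one hour of samples
x = np.random.randn(t.size)                 # placeholder for the seabed trace

f, times, Sxx = spectrogram(x, fs=fs, nperseg=4096, noverlap=2048)

plt.pcolormesh(times / 3600, f, 10 * np.log10(Sxx), shading="auto")
plt.ylabel("Frequency (Hz)")
plt.xlabel("Time (hours)")
plt.colorbar(label="Power (dB)")
plt.show()
```

In the real plot, the sediment flows stand out because their energy sits at low frequencies and stays there for hours at a time, which is exactly what the dense yellow bands show.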

The scientists estimate that they were travelling at 8 m/s, which means we have recorded them over several thousand kilometres of travel.

The number of smaller events shown on the right of this five day window shows just how dynamic these systems are.

A great success for our instruments and their diverse range of capabilities.
