
Seismic Facies Identification Challenge

Solution with seismiqb and custom postprocessing

A validation example of the custom postprocessing, with a short explanation of the solution for the fourth round

HSilksong

Hello everyone!

My name is Svetlana. I work on seismic interpretation tasks, so this challenge was an excellent opportunity to try my hand.

The challenge already has many good notebooks with EDA and solutions, so I will only briefly share my thoughts on the solution. I also want to propose a postprocessing trick that can improve your results.

For this task, I used the open-source libraries seismiqb and batchflow. You can find more about seismiqb's capabilities in the explainer from the first challenge round.

As the first step, I converted the data into the blosc format: it loads and processes faster than npz.
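For illustration, here is a minimal sketch of such a conversion with the python-blosc package; the file name and array key are assumptions of mine, and seismiqb has its own conversion utilities that the demo notebook relies on:

```python
# A minimal npz -> blosc conversion sketch (file name and array key are
# illustrative; seismiqb provides its own conversion utilities).
import blosc
import numpy as np

with np.load("data_train.npz") as npz:
    cube = npz["data"]                       # 3D array of seismic amplitudes

# Pack the array with a fast compressor and dump it to disk.
with open("data_train.blosc", "wb") as f:
    f.write(blosc.pack_array(cube, cname="lz4"))

# Loading back is a single call and is noticeably faster than np.load on npz.
with open("data_train.blosc", "rb") as f:
    cube = blosc.unpack_array(f.read())
```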

 

After this preparation, I followed the ordinary machine learning solution steps:

  • Splitting the dataset into train and validation parts
  • Defining and running the training procedure on crops from the training data
  • Running inference on the validation volume
  • Postprocessing
  • Quality measurement

 

But the devil is in the details.

  • For adequate quality measurement, the training dataset is split into training (70%) and validation (30%) subvolumes along the xline axis. The validation part neighbors the test set, because seismic data is strongly correlated across neighboring traces.
  • As the model, I used DeepLabV3 with an efficientnet-b1 encoder from the segmentation_models_pytorch framework (see the first sketch after this list).
  • The training sampler generates random data crops from the training set for better generalization (a toy version is sketched below).
  • Label encoding plays an important role: the data is imbalanced, and the second label dominates and hampers model training. To reduce its influence, I encode it with the highest label index. In controversial situations, preference goes to the class with the lower encoded index at the accumulator aggregation stage (due to the specifics of the implementation).
  • Training is implemented with a pipeline, which gives optimized streaming data processing.
  • The training pipeline declares augmentations (rotation, noise addition, flips, scaling, and elastic transform) for data variability.
  • At the inference stage, every data point receives several predictions from different data slices. So a special data container, the accumulator, saves every prediction as per-class label frequencies for each point.
  • Once predictions are made for the entire validation volume, the accumulator aggregates the saved data into the desired output format: for each data point, it takes the label with the highest frequency (see the accumulator sketch below). But that is not enough: the aggregated prediction is noisy.
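The model definition itself is a one-liner in segmentation_models_pytorch; a minimal sketch, assuming single-channel input and the six facies labels of this challenge:

```python
# Minimal model definition (in_channels/classes reflect my assumptions about
# the data layout: single-channel seismic crops, six facies labels).
import segmentation_models_pytorch as smp

model = smp.DeepLabV3(
    encoder_name="efficientnet-b1",
    encoder_weights="imagenet",    # ImageNet-pretrained encoder
    in_channels=1,                 # single-channel seismic amplitudes
    classes=6,                     # one output channel per facies label
)
```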

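As for the sampler, here is a toy numpy version of random crop generation (crop size and axis order are assumptions; seismiqb ships real samplers with far more functionality):

```python
# A toy random-crop sampler (crop size and axis order are assumptions).
import numpy as np

def sample_crop(cube, labels, crop_shape=(256, 256), rng=None):
    """Cut a random (iline, depth) crop from a random xline slice."""
    rng = rng or np.random.default_rng()
    n_ilines, n_xlines, depth = cube.shape
    x = rng.integers(n_xlines)                        # random xline slice
    i0 = rng.integers(n_ilines - crop_shape[0] + 1)   # random crop corner
    d0 = rng.integers(depth - crop_shape[1] + 1)
    loc = (slice(i0, i0 + crop_shape[0]), x, slice(d0, d0 + crop_shape[1]))
    return cube[loc], labels[loc]
```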
 
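And a minimal sketch of the frequency accumulator itself (shapes and names are illustrative, not seismiqb's actual API). Note that np.argmax returns the first maximum, which is exactly the "lower encoded index wins" tie-breaking mentioned above:

```python
# A minimal frequency-accumulator sketch (shapes and names are illustrative).
import numpy as np

def make_accumulator(volume_shape, n_classes=6):
    # One per-class counter for every point of the volume.
    return np.zeros((*volume_shape, n_classes), dtype=np.uint16)

def update(acc, pred_labels, location):
    # `pred_labels` is an integer label crop; `location` is a tuple of
    # slices giving the crop's position inside the full volume.
    one_hot = np.eye(acc.shape[-1], dtype=acc.dtype)[pred_labels]
    acc[location] += one_hot

def aggregate(acc):
    # argmax returns the FIRST maximum, so ties go to the class with the
    # lower encoded index -- the tie-breaking rule from the list above.
    return acc.argmax(axis=-1)
```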

To improve this situation, I propose weighted window aggregation with several weighting approaches, which helps to smooth the prediction:
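The exact weighting schemes are in the demo notebook; as one illustration of the idea, the sketch below blurs each class's frequency volume with a Gaussian window before the final argmax (the Gaussian choice is my assumption here, not the only option):

```python
# One possible weighted-window smoothing of the accumulated frequencies:
# blur each class's frequency volume with a Gaussian window, then argmax.
# The Gaussian weighting is an illustrative choice, not the only one.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_aggregate(acc, sigma=2.0):
    # acc has shape (*volume_shape, n_classes).
    smoothed = np.stack(
        [gaussian_filter(acc[..., c].astype(np.float32), sigma=sigma)
         for c in range(acc.shape[-1])],
        axis=-1,
    )
    # Isolated noisy labels get outvoted by their smoothed neighborhood.
    return smoothed.argmax(axis=-1)
```

Other windows (uniform, triangular) are drop-in replacements, and a larger sigma smooths more aggressively at the cost of blurring thin facies boundaries.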

 

As a bonus, this postprocessing also makes the weighted F1 score a little higher.

You can find more details about the training and validation processes in the demo notebook. The notebook also contains some proposals for improvements that you can try.

I hope this brief description and the notebook will help you in your future machine learning work. Good luck!

P.S.: I looked through other public notebooks and found various approaches to postprocessing, but none using this one. I hope it isn't a duplicate; apologies if it is.

 

 

