Six steps to automate your oil field: that is our discussion today. In yesterday’s blog, we discussed the tedious past and present methods for optimizing gas lift production. We found the process so laborious that operators could not possibly monitor every well in a field adequately, much less optimize all of them at the same time. Today, we walk you through six steps to automate your oil field, helping you optimize production and increase the ROI and value of your fields.
Step 1: Establish a Monitoring Solution on Wells in the Field.
The first step is to set the field up on an application, like OspreyData’s Unified Monitoring solution, that continuously ingests data from the physical sensors: casing pressure, tubing pressure, gas injection rate, tank levels (for measured oil and water production), and so on. The application should also process the well’s metadata, such as the deviation survey and gas lift valve (GLV) design, so that it can maintain a live estimate of the injection depth by mapping the physical sensor data against the design and completion data.
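As a sketch of how that mapping might work, the snippet below picks the deepest gas lift valve that the current casing pressure can open, as a simplified live estimate of injection depth. The function name, the valve depths, and the opening pressures are all illustrative assumptions, not OspreyData’s API:

```python
def estimate_injection_depth(casing_pressure_psi, glv_design):
    """Simplified live injection-depth estimate.

    glv_design: list of (depth_ft, surface_opening_pressure_psi) pairs,
    ordered shallow to deep, taken from the well's completion metadata.
    Assumes injection occurs at the deepest valve the current casing
    pressure can open.
    """
    open_valves = [depth for depth, p_open in glv_design
                   if casing_pressure_psi >= p_open]
    return max(open_valves) if open_valves else None


# Illustrative GLV spacing: deeper valves have lower opening pressures
design = [(2000, 950), (4500, 900), (7000, 860)]
print(estimate_injection_depth(910, design))  # valves at 4500 and 7000 ft open
```

In a real deployment this estimate would be recomputed on every sensor update, with temperature and pressure corrections applied to the valve opening pressures.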
Step 2: Train and Deploy Machine Learning Models.
The second step is to train and deploy machine learning models. Note that there can be separate models: one model can calculate normal statistics and isolate normal operating periods from abnormalities, while another detects interfering activities such as freezes, frac hits, and compressor instabilities. This matters because you do not want to judge the well’s performance while it is disturbed. We can also build a model that automatically detects the exact time of a set point change and calculates the production rates before and after it.
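The production models would be trained on field data, but the two ideas can be sketched with simple statistics. Below, a trailing-window z-score flags abnormal samples, and the largest step in the injection-rate series marks a set point change; the window sizes and thresholds are made-up illustrations:

```python
import statistics


def flag_abnormal(series, window=24, z_thresh=3.0):
    """Flag samples that deviate sharply from trailing-window statistics."""
    flags = []
    for i, x in enumerate(series):
        hist = series[max(0, i - window):i]
        if len(hist) < 2:
            flags.append(False)  # not enough history yet
            continue
        mu = statistics.mean(hist)
        sd = statistics.pstdev(hist) or 1e-9  # guard against a flat window
        flags.append(abs(x - mu) / sd > z_thresh)
    return flags


def detect_set_point_change(rates, min_step=50.0):
    """Return the index of the largest step in injection rate, if large enough."""
    steps = [abs(rates[i + 1] - rates[i]) for i in range(len(rates) - 1)]
    i = max(range(len(steps)), key=steps.__getitem__)
    return i + 1 if steps[i] >= min_step else None


rates = [500.0] * 10 + [650.0] * 10  # synthetic injection-rate series
print(detect_set_point_change(rates))  # prints 10
```

With the change point located, averaging the rates on either side of it gives the before/after production comparison described above.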
Step 3: Generate Live Simulations for All Wells.
The third step is to generate real-time simulations for all wells on a live basis. These simulations are generated from the data ingested and processed in Step 1, after isolating the abnormal states identified in Step 2.
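A live simulation per well would normally come from a full nodal-analysis engine; as a stand-in, the toy gas lift performance curve below captures the qualitative shape (production rises with injection up to an optimum, then declines). The constants are invented for illustration only:

```python
def simulate_lift_curve(inj_rates_mscfd, q_max=420.0, g_opt=900.0, k=2.5e-4):
    """Toy gas lift performance curve: oil rate peaks at g_opt, declines beyond.

    q_max: peak oil rate (STB/d), g_opt: optimal injection rate (Mscf/d),
    k: curvature constant -- all illustrative, not fitted to a real well.
    """
    return [q_max - k * (g - g_opt) ** 2 for g in inj_rates_mscfd]


grid = list(range(0, 1600, 100))
oil_rates = simulate_lift_curve(grid)
print(grid[oil_rates.index(max(oil_rates))])  # prints 900
```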
Step 4: Treat Parameters as Probabilities, Not Certainties.
The fourth step deals with some trickiness that can arise: simulation provides non-unique solutions when matching the actual well. If the underlying reservoir pressure (static bottom-hole pressure), productivity index (PI), and other down-hole states were perfectly known, a unique solution would be possible. But these parameters are highly uncertain, especially on unconventional wells, where the static bottom-hole pressure does not stabilize even after several days of shut-in. So these parameters need to be treated probabilistically rather than as certainties. This aspect is further addressed in Step 5.
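One simple way to treat these parameters probabilistically is to sample them rather than fix them. The Monte Carlo sketch below propagates uncertainty in static reservoir pressure and PI through an assumed linear inflow relationship, yielding a distribution of predicted rates instead of a single number; the distributions and the inflow model are illustrative assumptions:

```python
import random


def sample_inflow_rates(n=2000, pwf=1200.0, seed=7):
    """Propagate parameter uncertainty through a linear IPR: q = PI * (p_res - pwf)."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n):
        p_res = rng.gauss(2800.0, 200.0)  # uncertain static bottom-hole pressure, psi
        pi = rng.gauss(0.8, 0.15)         # uncertain productivity index, STB/d/psi
        rates.append(max(pi, 0.0) * max(p_res - pwf, 0.0))
    return rates


rates = sorted(sample_inflow_rates())
n = len(rates)
p10, p50, p90 = rates[n // 10], rates[n // 2], rates[9 * n // 10]
```

Reporting the P10/P50/P90 range, rather than one deterministic rate, is what makes the recommendations in Step 5 honest about what is actually known.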
Step 5: Make Recommendations Under Uncertainty.
In the fifth step, we remember that the purpose of all this model building is to make decisions, or provide recommendations, under uncertainty. We can leverage the well’s historic data and its response to past changes to inverse-model its operating conditions and learn what is going on down-hole. This is a great additional benefit of the approach: you may be getting a glimpse into where the well is in its life cycle. It also becomes possible to track how the well’s underlying parameters transition over time, which ensures physical consistency. Engineers and operators can use these underlying parameters when modeling re-stimulation or a change of lift type, and data scientists or reservoir engineers can use them to validate their sweet spots.
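A minimal sketch of that inverse step, under the same assumed linear inflow model, is a grid-based Bayesian update: each candidate reservoir pressure is re-weighted by how well it explains an observed production rate. All numbers here are illustrative:

```python
import math


def update_reservoir_pressure(prior, observed_rate, pi=0.8, pwf=1200.0, sigma=30.0):
    """Re-weight candidate reservoir pressures by likelihood of the observed rate.

    prior: dict mapping candidate p_res (psi) -> prior weight.
    Uses a Gaussian measurement model with std dev sigma (STB/d).
    """
    posterior = {}
    for p_res, weight in prior.items():
        predicted = pi * (p_res - pwf)  # linear IPR prediction
        likelihood = math.exp(-0.5 * ((observed_rate - predicted) / sigma) ** 2)
        posterior[p_res] = weight * likelihood
    total = sum(posterior.values())
    return {p: w / total for p, w in posterior.items()}


prior = {2600.0: 1 / 3, 2800.0: 1 / 3, 3000.0: 1 / 3}
post = update_reservoir_pressure(prior, observed_rate=1280.0)
print(max(post, key=post.get))  # prints 2800.0
```

Repeating this update after each measured response to a set point change is how the application gradually learns the well’s down-hole state from surface data alone.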
Step 6: Leverage Your Understanding of the Well.
The last step to automate your oil field is to leverage this deeper understanding of the well, now that we know more about the well’s normal operation and underlying conditions. We set up the application to continuously monitor the well and provide proactive gas injection recommendations, allowing operators to stay on top of the whole production process.
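Closing the loop, the recommendation logic can be as simple as ranking candidate injection set points by expected net value. The toy version below reuses a made-up lift curve and made-up economics, purely to show the shape of the decision:

```python
def recommend_injection(candidates, oil_price=70.0, gas_cost=0.05):
    """Pick the injection rate (Mscf/d) with the highest expected net value.

    oil_price ($/STB) and gas_cost ($/Mscf) are illustrative economics; the
    lift curve constants are made up, standing in for the per-well simulation.
    """
    def oil_rate(g):  # toy gas lift performance curve
        return 420.0 - 2.5e-4 * (g - 900.0) ** 2

    def net_value(g):
        return oil_price * oil_rate(g) - gas_cost * g

    return max(candidates, key=net_value)


print(recommend_injection(range(0, 1600, 50)))  # prints 900
```

In production, `oil_rate` would be replaced by the live simulation from Step 3, evaluated across the parameter distribution from Step 4, so the recommendation reflects expected value under uncertainty rather than a single deterministic curve.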