An Automatic Well-testing Interpretation Method Based on a Surrogate Model and Deep Reinforcement Learning

An Efficient Approach for Automatic Well-testing Interpretation Based on Surrogate Model and Deep Reinforcement Learning

  • Abstract: Based on the deep deterministic policy gradient (DDPG) algorithm, an automatic well-testing curve interpretation method using deep reinforcement learning is proposed and applied to automatic curve matching under four different well-testing models. To improve training efficiency, a surrogate well-testing model based on an LSTM neural network is established. By training the agent through interaction with the surrogate model, an optimal curve matching policy is finally obtained. The results show that the average relative error of curve parameter interpretation is 5.51% and the average computing time is 0.427 s, about two orders of magnitude faster than the physical model. In case applications, the method achieved an average relative error of 4.32% in parameter interpretation, demonstrating its reliability.


    Abstract: A deep reinforcement learning (DRL) based approach is proposed for automatic interpretation of well-testing curves. Built on the deep deterministic policy gradient (DDPG) algorithm, the approach is successfully applied to automatic matching of four different types of well-testing curves. To improve training efficiency, a surrogate well-testing model based on an LSTM neural network was established. Through episodic training and interaction with the surrogate model, the agent finally converged to an optimal curve matching policy for the different well-testing models. Results show that the average relative error of curve parameter interpretation is 5.51%. In addition, the approach computes quickly, with an average computing time of 0.427 s, about two orders of magnitude faster than the physical model. In case study applications, the method achieved an average relative error of 4.32% in parameter interpretation, demonstrating its reliability.
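The workflow summarized above, where an agent repeatedly adjusts model parameters through interaction with a fast surrogate until the simulated curve matches the observed one, can be sketched as follows. This is a toy illustration only: a simple analytic curve stands in for the LSTM surrogate, and a random-perturbation hill-climbing policy stands in for the DDPG actor-critic; all function names, parameters, and reward definitions here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def surrogate_curve(params, t):
    # Toy stand-in for the LSTM surrogate: a 2-parameter analytic curve.
    k, s = params
    return k * np.log1p(t) + s

def reward(params, target, t):
    # Negative mean relative mismatch between surrogate output and the
    # observed curve; the agent seeks to maximize this (perfect match = 0).
    pred = surrogate_curve(params, t)
    return -np.mean(np.abs(pred - target) / (np.abs(target) + 1e-8))

rng = np.random.default_rng(0)
t = np.linspace(0.1, 100.0, 50)
true_params = np.array([2.0, 1.5])          # "unknown" reservoir parameters
observed = surrogate_curve(true_params, t)  # curve to be matched

# Interaction loop: the agent proposes parameter adjustments (actions),
# the surrogate instantly returns the resulting curve, and the reward
# drives the policy. A greedy accept rule stands in for the DDPG update.
params = np.array([1.0, 0.0])
best_r = reward(params, observed, t)
for episode in range(2000):
    action = rng.normal(scale=0.1, size=2)  # exploratory adjustment
    candidate = params + action
    r = reward(candidate, observed, t)
    if r > best_r:
        params, best_r = candidate, r

rel_err = np.abs(params - true_params) / true_params
```

Because each episode queries only the cheap surrogate rather than a full physical simulator, thousands of matching episodes remain inexpensive, which is the training-efficiency argument the abstract makes for the surrogate model.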
