Degree Type

Dissertation

Date of Award

2020

Degree Name

Doctor of Philosophy

Department

Aerospace Engineering

Major

Aerospace Engineering

First Advisor

Christina L. Bloebaum

Abstract

Commercial airlines use revenue management systems to maximize revenue by making real-time decisions on the prices and booking limits of the different fare products and classes offered on each of their scheduled flights. Traditional approaches, such as mathematical programming, dynamic programming, and heuristic rule-based decision models, rely heavily on mathematical models of demand and of passenger arrival, choice, and cancellation behavior, making their performance sensitive to the accuracy of these model estimates. Moreover, many of these approaches scale poorly as problem dimensionality grows. They also lack the ability to explore and directly learn the true market dynamics from interactions with passengers, or to adapt to changes in market conditions on their own. To overcome these limitations, this research uses deep reinforcement learning (DRL), a model-free decision-making framework, to find optimal policies for the seat inventory control and dynamic pricing problems. The DRL framework employs a deep neural network to approximate the expected optimal revenue for every possible state-action combination, allowing it to handle the large state spaces of these problems. The problems consider multiple fare classes with stochastic demand, passenger arrivals, booking cancellations, and overbooking. An air travel market simulator was developed, based on the market dynamics and passenger behavior, for training and testing the agent. The results demonstrate that the DRL agent is capable of learning the optimal airline revenue management policy through interactions with the market, matching the performance of exact dynamic programming methods. The agent's performance in different simulated market scenarios was found to be close to the theoretical optimal revenues and superior to that of the expected marginal seat revenue-b (EMSRb) method. Furthermore, when faced with market perturbations, the DRL agent was observed to actively change its policy to maximize revenue in the new environment, demonstrating its ability to adapt to changing market conditions.
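The value-approximation idea in the abstract, a neural network estimating the expected optimal revenue of each state-action pair and a policy that acts greedily on those estimates, can be sketched in miniature. Everything below is illustrative and invented: a toy single-leg market simulator with made-up fare levels and purchase probabilities stands in for the dissertation's simulator, and a small two-layer numpy network trained by Q-learning stands in for its deep neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-leg market simulator (all parameters hypothetical, for
# illustration only -- not the dissertation's simulator).
# State: (seats remaining, booking periods remaining); action: a price level.
CAPACITY, HORIZON = 10, 20
PRICES = np.array([100.0, 150.0, 200.0])   # hypothetical fare levels
BUY_PROB = np.array([0.8, 0.5, 0.2])       # purchase prob. falls as price rises

def step(seats, t, action):
    """Advance one booking period: an arriving passenger may buy at the set price."""
    revenue = 0.0
    if seats > 0 and rng.random() < BUY_PROB[action]:
        seats -= 1
        revenue = PRICES[action]
    return seats, t - 1, revenue

# Small two-layer network approximating Q(state, action), trained by Q-learning.
HIDDEN = 16
W1 = rng.normal(0.0, 0.1, (HIDDEN, 2)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (len(PRICES), HIDDEN)); b2 = np.zeros(len(PRICES))

def q_values(seats, t):
    x = np.array([seats / CAPACITY, t / HORIZON])  # normalized state features
    h = np.maximum(0.0, W1 @ x + b1)               # ReLU hidden layer
    return W2 @ h + b2, h, x

ALPHA, GAMMA, EPS = 1e-3, 1.0, 0.1  # learning rate, discount, epsilon-greedy rate
for _ in range(2000):
    seats, t = CAPACITY, HORIZON
    while t > 0:
        q, h, x = q_values(seats, t)
        a = int(rng.integers(len(PRICES))) if rng.random() < EPS else int(np.argmax(q))
        seats, t2, r = step(seats, t, a)
        r_scaled = r / PRICES.max()                # scale rewards for stable training
        target = r_scaled + (GAMMA * np.max(q_values(seats, t2)[0]) if t2 > 0 else 0.0)
        # One semi-gradient SGD step on the squared TD error (manual backprop).
        g_out = np.zeros(len(PRICES)); g_out[a] = q[a] - target
        g_h = (W2.T @ g_out) * (h > 0)             # through ReLU, using pre-update W2
        W2 -= ALPHA * np.outer(g_out, h); b2 -= ALPHA * g_out
        W1 -= ALPHA * np.outer(g_h, x);  b1 -= ALPHA * g_h
        t = t2

# Roll out the learned greedy pricing policy once and total the revenue.
seats, t, revenue = CAPACITY, HORIZON, 0.0
while t > 0:
    a = int(np.argmax(q_values(seats, t)[0]))
    seats, t, r = step(seats, t, a)
    revenue += r
print(f"greedy-policy revenue: {revenue:.0f}")
```

The dissertation's setting is far richer (multiple fare classes, cancellations, overbooking, and a deep network), but the structure is the same: the agent never sees the demand model, only the simulator's responses, which is what makes the approach model-free.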

DOI

https://doi.org/10.31274/etd-20200902-146

Copyright Owner

Syed Arbab Mohd Shihab

Language

en

File Format

application/pdf

Number of Pages

109 pages
