First Advisor

Avinash Unnikrishnan

Term of Graduation

Summer 2022

Date of Publication

7-28-2022

Document Type

Dissertation

Degree Name

Doctor of Philosophy (Ph.D.) in Civil & Environmental Engineering

Department

Civil and Environmental Engineering

Language

English

Subjects

Resource allocation, Business logistics, Reinforcement learning, Drone aircraft -- Industrial applications, Electric vehicles -- Industrial applications, Delivery of goods -- Technological innovations

DOI

10.15760/etd.7972

Physical Description

1 online resource (xv, 202 pages)

Abstract

Transportation is a key driver of any national economy. In the United States, the transportation sector contributes $1.3 trillion to the economy, of which freight transportation represents more than 50%. Trucks alone account for more than 70% of freight movements in the United States. In addition to worsening stress at ports of entry and traffic congestion in the system, freight also accounts for nearly one-third of the greenhouse gas emissions in the United States. Emerging transportation technologies like electric unmanned aerial vehicles (or drones) and electric vehicles can provide a more sustainable alternative to combat greenhouse gas emissions and reduce congestion in the transportation network.

This dissertation extends the frontier in planning and real-time resource allocation for logistics systems that use emerging transportation technologies to move freight. A common theme throughout the dissertation is uncertainty. In network planning problems, uncertainty stems from inherent variation in problem parameters or the potential unavailability of data. In real-time operations, uncertainty arises from the dynamic nature of the problem, as information is gradually revealed over time. The dissertation considers four application problems spanning both public-sector and corporate applications. Each problem involves a network planning component, a real-time operations component, or both. The real-time operations are modeled as online resource allocation problems, and multi-armed bandit-based reinforcement learning methodologies are proposed. The contributions include novel problem formulations for each application and two new multi-armed bandit problems, along with a performance regret bound for one of the proposed bandit problems.
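
To give a sense of the multi-armed bandit framework referenced above, the following is a minimal, generic UCB1 sketch in Python. It is only an illustration of the standard bandit setting (repeatedly choosing among options with unknown reward rates while balancing exploration and exploitation); it is not the dissertation's proposed bandit formulations, allocation models, or regret analysis, and the arm reward rates used here are hypothetical placeholders.

    # Generic UCB1 multi-armed bandit sketch -- an illustration of the standard
    # framework only, not the dissertation's proposed methods. Arm success
    # rates below are hypothetical placeholders.
    import math
    import random

    def ucb1(arm_means, horizon=10_000, seed=0):
        """Play `horizon` rounds of UCB1 against Bernoulli arms with the given means."""
        rng = random.Random(seed)
        n_arms = len(arm_means)
        counts = [0] * n_arms       # times each arm has been pulled
        totals = [0.0] * n_arms     # cumulative reward collected per arm
        reward_sum = 0.0

        for t in range(1, horizon + 1):
            if t <= n_arms:
                arm = t - 1         # pull each arm once to initialize estimates
            else:
                # Choose the arm maximizing empirical mean + exploration bonus.
                arm = max(
                    range(n_arms),
                    key=lambda a: totals[a] / counts[a]
                    + math.sqrt(2.0 * math.log(t) / counts[a]),
                )
            reward = 1.0 if rng.random() < arm_means[arm] else 0.0
            counts[arm] += 1
            totals[arm] += reward
            reward_sum += reward

        # Cumulative regret relative to always pulling the best arm.
        regret = horizon * max(arm_means) - reward_sum
        return counts, regret

    if __name__ == "__main__":
        # Three hypothetical "resources" (arms) with unknown success rates.
        pulls, regret = ucb1([0.3, 0.5, 0.7])
        print("pulls per arm:", pulls, "cumulative regret:", round(regret, 1))

In an online resource allocation setting, the arms would correspond to candidate allocation decisions whose payoffs are revealed only after they are tried, which is the connection the dissertation's proposed bandit problems build on.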

The four application problems are briefly described below. The first application considers a network planning problem for locating electric drones equipped with automated external defibrillators (AEDs) to combat out-of-hospital cardiac arrests in a service region. The second application considers a facility location and dynamic resource allocation problem applicable to a logistics company expanding to offer instant delivery using electric drones. The third application also considers a facility location and dynamic resource allocation problem, but in the context of prepositioning relief supplies and distributing them equitably after a disaster. Finally, the fourth application considers a dynamic truckload pickup and delivery problem in a service area using a fleet of electric trucks.

Rights

©2022 Darshan Rajesh Chauhan

In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

Persistent Identifier

https://archives.pdx.edu/ds/psu/38386
