Abstract
In this chapter, we study the Age of Information (AoI) when status updates of the underlying process of interest can be sampled at any time by the source node and are transmitted over an error-prone wireless channel. We assume the availability of perfect feedback that informs the transmitter about the success or failure of transmitted status updates, and consider various retransmission strategies. More specifically, we study the scheduling of sampling and transmission of status updates in order to minimize the long-term average AoI at the destination under resource constraints. We assume that the underlying statistics of the system are not known, and hence propose average-cost reinforcement learning algorithms suitable for practical applications. Extensions of the results to a multiuser setting with multiple receivers and to an energy-harvesting source node are also presented; different reinforcement learning methods, including the deep Q-network (DQN), are employed, and their performance is demonstrated.
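To illustrate the flavor of the average-cost reinforcement learning approach described above, the sketch below applies a simple R-learning-style update to a toy single-source model: the state is the current AoI at the destination, the action is whether to sample and transmit a fresh update over a channel that succeeds with some probability, and transmissions incur a cost weight standing in for the resource constraint. The environment model, all parameter values, and the specific R-learning variant are illustrative assumptions, not the exact formulation used in the chapter.

```python
import numpy as np

# --- Hypothetical environment (assumed for illustration) ---
# State: current AoI at the destination, capped at AOI_MAX.
# Action: 0 = stay idle, 1 = sample and transmit a fresh status update.
# With perfect feedback, a successful transmission (prob. P_SUCCESS)
# resets the AoI to 1 (one-slot delay); otherwise the AoI grows by one.
# LAMBDA weights the per-transmission cost (resource constraint surrogate).

AOI_MAX   = 50       # cap on the AoI state space (assumed)
P_SUCCESS = 0.6      # channel success probability (assumed)
LAMBDA    = 2.0      # per-transmission cost weight (assumed)
ALPHA     = 0.05     # Q-value step size
BETA      = 0.01     # average-cost (gain) step size
EPSILON   = 0.1      # exploration rate
STEPS     = 200_000

rng = np.random.default_rng(0)

def step(aoi, action):
    """One time slot of the toy environment; returns (cost, next_aoi)."""
    cost = aoi + LAMBDA * action
    if action == 1 and rng.random() < P_SUCCESS:
        next_aoi = 1                      # fresh update delivered
    else:
        next_aoi = min(aoi + 1, AOI_MAX)  # update lost or no transmission
    return cost, next_aoi

# --- Average-cost Q-learning (R-learning-style) ---
Q   = np.zeros((AOI_MAX + 1, 2))  # Q[aoi, action]
rho = 0.0                         # running estimate of the average cost
aoi = 1

for _ in range(STEPS):
    # epsilon-greedy action selection (minimizing cost)
    action = rng.integers(2) if rng.random() < EPSILON else int(np.argmin(Q[aoi]))
    cost, next_aoi = step(aoi, action)
    td = cost - rho + Q[next_aoi].min() - Q[aoi, action]
    Q[aoi, action] += ALPHA * td
    if action == int(np.argmin(Q[aoi])):   # update the gain on greedy actions
        rho += BETA * td
    aoi = next_aoi

policy = Q.argmin(axis=1)
transmit_states = np.flatnonzero(policy[1:] == 1) + 1
print(f"estimated average cost: {rho:.2f}")
if transmit_states.size:
    print("learned policy transmits once the AoI reaches", transmit_states[0])
```

Under these assumptions the learned policy tends to be threshold-type (transmit once the AoI exceeds some value); a DQN would replace the tabular Q array with a neural network approximator, which becomes necessary in the multiuser and energy-harvesting extensions where the state space is larger.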