Online learning for auction mechanism in bandit setting
Abstract
This paper is concerned with online learning of the optimal auction mechanism for sponsored search in a bandit setting. Previous works take the click-through rates of ads to be fixed and known to the search engine and use this information to design the optimal auction mechanism. However, this assumption is not practical, since ads can only receive clicks when they are shown to users. To tackle this problem, we propose to use online learning for auction mechanism design. Specifically, this task corresponds to a new type of bandit problem, which we call the armed bandit problem with shared information (AB-SI). In the AB-SI problem, the arm space (corresponding to the parameter space of the auction mechanism, which can be discrete or continuous) is partitioned into a finite number of clusters (corresponding to the finite number of rankings of the ads), and the arms in the same cluster share the explored information (i.e., the click-through rates of the ads in the same ranked list) when any arm from the cluster is pulled. We propose two upper-confidence-bound algorithms, called UCB-SI1 and UCB-SI2, to tackle this new problem in the discrete-armed and continuum-armed bandit settings, respectively. We show that when the total number of arms is finite, the regret bound obtained by the UCB-SI1 algorithm is tighter than that of the classical UCB1 algorithm. In the continuum-armed bandit setting, our proposed UCB-SI2 algorithm can handle a larger class of reward functions and achieves a regret bound of $O(T^{2/3}(d\ln T)^{1/3})$, where $d$ is the pseudo-dimension of the real-valued reward function class. Experimental results show that the proposed algorithms significantly outperform several classical online learning methods on synthetic data.
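To make the shared-information idea concrete, here is a minimal sketch of a UCB1-style learner in which pulling any arm updates the statistics of its whole cluster. This is an illustrative simplification, not the paper's UCB-SI1 algorithm: it collapses each cluster to a single shared reward estimate, and the class name, cluster layout, and update rule are assumptions made for the example.

```python
import math
import random

class SharedInfoUCB:
    """Hypothetical UCB1-style bandit where arms in a cluster share statistics.

    Simplifying assumption: all arms in a cluster share one reward estimate,
    so the index is computed at the cluster level. In the paper's AB-SI
    setting, what is shared is the explored click-through-rate information,
    not necessarily a single mean.
    """

    def __init__(self, clusters):
        # clusters: list of lists of arm ids, e.g. [[0, 1], [2, 3, 4]]
        self.clusters = clusters
        self.cluster_of = {a: c for c, arms in enumerate(clusters) for a in arms}
        self.counts = [0] * len(clusters)   # pulls credited to each cluster
        self.means = [0.0] * len(clusters)  # shared empirical mean per cluster
        self.t = 0                          # total number of rounds so far

    def select_arm(self):
        self.t += 1
        # Play each cluster once before using confidence bounds.
        for c, n in enumerate(self.counts):
            if n == 0:
                return self.clusters[c][0]
        # UCB1 index on clusters: any pull inside a cluster tightens the
        # confidence interval for every arm of that cluster.
        ucb = [m + math.sqrt(2.0 * math.log(self.t) / n)
               for m, n in zip(self.means, self.counts)]
        best = max(range(len(ucb)), key=ucb.__getitem__)
        return random.choice(self.clusters[best])

    def update(self, arm, reward):
        # The observed reward is credited to the arm's entire cluster.
        c = self.cluster_of[arm]
        self.counts[c] += 1
        self.means[c] += (reward - self.means[c]) / self.counts[c]
```

Because exploration is amortized across each cluster rather than spent per arm, the effective number of exploration targets drops from the number of arms to the number of clusters, which is the intuition behind the tighter regret bound claimed for UCB-SI1 relative to classical UCB1.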