
CVODM 2014 - Workshop on Customers Value Optimization in Digital Marketing

Date: 2014-06-25 - 2014-06-26

Deadline: 2014-04-25

Venue: Beijing, China

Website: https://sites.google.com/site/icml2014ltvworkshop

Topics/Call for Papers

ICML-2014 Workshop on Customers Value Optimization in Digital Marketing
June 25 - 26, 2014, Beijing, China.
https://sites.google.com/site/icml2014ltvworkshop/
Important dates:
Paper Submission: April 25, 2014
Notification of Acceptance: May 1, 2014
OVERVIEW
In many marketing applications, a company or an organization uses technology to interact with its end customers and make recommendations. For example, a department store might offer customers discount coupons or promotions; an online store might serve targeted "on sale now" offers; or a bank might email appropriate customers new loan or mortgage offers. Today, these marketing decisions are mostly made in a myopic, best-opportunity-right-now fashion that optimizes short-term gains. In this workshop we will explore new ways of structuring marketing interactions to optimize the lifetime value (LTV) of customers. LTV captures long-term objectives such as revenue, customer satisfaction, or customer loyalty, and can be represented as the sum of an appropriate reward function. This sum of rewards is accumulated over a stream of interactions between the company and each customer, including both actions by the company (e.g., promotions, advertisements, or emails) and actions by the customer (e.g., purchases, clicks on a website, or signing up for a newsletter).
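For illustration only (this sketch is ours, not part of the original call), the LTV objective can be computed as a discounted sum of per-interaction rewards; the short Python snippet below assumes a single customer's reward stream and an arbitrary discount factor gamma:

# Minimal sketch: LTV as a discounted sum of rewards over one customer's
# interaction stream. The reward values and discount factor are illustrative.
def lifetime_value(rewards, gamma=0.95):
    """Discounted sum of per-interaction rewards for one customer."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Example: revenue observed after each marketing interaction (in dollars).
purchases = [0.0, 25.0, 0.0, 60.0, 10.0]
print(lifetime_value(purchases))  # an LTV estimate for this customer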
In this workshop we will explore technology for computing interactive company strategies that maximize this sum of rewards. In particular, we will explore reinforcement learning (RL) and Markov decision processes (MDPs), powerful paradigms for sequential decision-making under uncertainty. In the RL formulation of marketing, the agent is an algorithm that takes actions such as showing an ad or offering a promotion; the environment can be thought of as features describing customer demographics, the web content, and the customer's behavior, such as recency (when the webpage was last visited), frequency (how often the page has been visited), and monetary value (how much has been spent so far); the reward can be thought of as the price of products purchased by the customer in response to an action taken by the marketing algorithm; and finally, the goal of the marketing agent is to maximize its long-term revenue.
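As a minimal sketch of this formulation (the state discretization, action set, and parameter values below are illustrative assumptions, not material from the workshop), a customer could be represented by coarse recency/frequency/monetary-value (RFM) features and action values learned with tabular Q-learning:

# Minimal sketch (assumptions throughout): an RFM-style customer state, a
# small marketing action set, and a one-step tabular Q-learning update.
import random
from collections import defaultdict

ACTIONS = ["email_promo", "show_ad", "do_nothing"]  # assumed action set

def rfm_state(days_since_visit, visit_count, total_spend):
    """Discretize recency, frequency, and monetary value into coarse bins."""
    return (min(days_since_visit // 7, 4),   # recency, in weeks, capped at 4
            min(visit_count // 5, 4),        # frequency bucket
            min(int(total_spend // 50), 4))  # monetary-value bucket

Q = defaultdict(float)                  # Q[(state, action)] -> long-term revenue estimate
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # illustrative learning parameters

def choose_action(state):
    """Epsilon-greedy action selection over the assumed action set."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update; reward is, e.g., revenue from a purchase."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

Here a myopic strategy would simply pick the action with the highest immediate expected reward, whereas the Q-values fold the effect of future interactions into each decision.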
Using MDPs and RL to develop algorithms for LTV marketing is still in its infancy. Most of the related work has used toy examples and appeared in marketing conferences and venues. In this workshop we will discuss the major research challenges of this problem, such as:
- Evaluating a policy off-line (without interaction with the real system, using only historical data generated by a different policy); a small importance-sampling sketch of this challenge appears after this list.
- Policy visualizations.
- Scaling up the computation of LTV strategies to high-dimensional "big data".
- On-line versus batch algorithms.
- Modeling progressively more engaging interactions, using hierarchical techniques and elicitation of the sales-funnel process.
- Model selection and validation from batch data.
- Uncertainty estimation.
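To make the first challenge above concrete, here is a minimal Python sketch of off-line (off-policy) evaluation using standard per-trajectory importance sampling; the trajectory format, the logged behavior probabilities, and the target-policy interface are assumptions for illustration:

# Minimal sketch (assumptions throughout): importance-sampling estimate of a
# target policy's value from logs collected under a different behavior policy.
# Each trajectory is a list of (state, action, reward, behavior_prob) tuples.
def is_estimate(trajectories, target_prob, gamma=0.95):
    """Average importance-weighted discounted return over logged trajectories.

    target_prob(state, action) gives the probability that the *new* policy
    takes `action` in `state`; behavior_prob is the logged probability under
    the policy that generated the data.
    """
    values = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (state, action, reward, behavior_prob) in enumerate(traj):
            weight *= target_prob(state, action) / behavior_prob
            ret += (gamma ** t) * reward
        values.append(weight * ret)
    return sum(values) / len(values)

Such estimators are unbiased but can have very high variance on long interaction streams, which is one reason off-line evaluation and uncertainty estimation appear together in the list above.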
Solutions to such challenges will benefit not only marketing problems but also the RL community at large, since these are the challenges that appear in many real-world RL applications, from clinical trials to energy consumption.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PAPER SUBMISSIONS
We encourage submissions addressing the (non-exhaustive) list of research challenges given above.
Submission:
We invite researchers from different fields of machine learning (e.g., reinforcement learning, online learning, active learning), optimization, operations research, and management science, as well as application-domain experts (e.g., in digital marketing, recommendation systems, or personalized medicine) to submit an extended abstract or a paper (2 to 8 pages in ICML format) describing their work to theochar-AT-adobe.com. Accepted papers will be presented as posters or contributed oral presentations.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ORGANIZERS:
- Shie Mannor (Technion)
- Georgios Theocharous (Adobe Research)
- Mohammad Ghavamzadeh (Adobe Research & INRIA)
