27. AAAI 2013: Bellevue, Washington, USA

Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, July 14-18, 2013, Bellevue, Washington, USA. AAAI Press. 【DBLP Link】

Paper Num: 243 || Session Num: 7

Main Technical Papers 150

1. Sensitivity of Diffusion Dynamics to Network Uncertainty.

【Paper Link】 【Pages】:

【Authors】: Abhijin Adiga ; Chris J. Kuhlman ; Henning S. Mortveit ; Anil Kumar S. Vullikanti

【Abstract】: Simple diffusion processes on networks have been used to model, analyze and predict diverse phenomena such as spread of diseases, information and memes. More often than not, the underlying network data is noisy and sampled. This prompts the following natural question: how sensitive are the diffusion dynamics and subsequent conclusions to uncertainty in the network structure? In this paper, we consider two popular diffusion models: the Independent cascades (IC) model and the Linear threshold (LT) model. We study how the expected number of vertices that are influenced/infected, given some initial conditions, is affected by network perturbation. By rigorous analysis under the assumption of a reasonable perturbation model we establish the following main results. (1) For the IC model, we characterize the susceptibility to network perturbation in terms of the critical probability for phase transition of the network. We find the expected number of infections is quite stable, unless the transmission probability is close to the critical probability. (2) We show that the standard LT model with uniform edge weights is relatively stable under network perturbations. (3) Empirically, the transient behavior, i.e., the time series of the number of infections, in both models appears to be more sensitive to network perturbations. We also study these questions using extensive simulations on diverse real world networks, and find that our theoretical predictions for both models match the empirical observations quite closely.

【Keywords】: Diffusion on graphs; Independent cascades model; Linear threshold model; Network sampling; Network perturbation; Submodularity

2. A Morphogenetically Assisted Design Variation Tool.

【Paper Link】 【Pages】:

【Authors】: Aaron Adler ; Fusun Yaman ; Jacob Beal ; Jeffrey Cleveland ; Hala Mostafa ; Annan Mozeika

【Abstract】: The complexity and tight integration of electromechanical systems often makes them "brittle" and hard to modify in response to changing requirements. We aim to remedy this by capturing expert knowledge as functional blueprints, an idea inspired by regulatory processes that occur in natural morphogenesis. We then apply this knowledge in an intelligent design variation tool. When a user modifies a design, our tool uses functional blueprints to modify other components in response, thereby maintaining integration and reducing the need for costly search or constraint solving. In this paper, we refine the functional blueprint concept and discuss practical issues in applying it to electromechanical systems. We then validate our approach with a case study applying our prototype tool to create variants of a miniDroid robot and by empirical evaluation of convergence dynamics of networks of functional blueprints.

【Keywords】: Morphogenetic Engineering; functional blueprints

3. Truncated Incremental Search: Faster Replanning by Exploiting Suboptimality.

【Paper Link】 【Pages】:

【Authors】: Sandip Aine ; Maxim Likhachev

【Abstract】: Incremental heuristic searches try to reuse their previous search efforts whenever these are available. As a result, they can often solve a sequence of similar planning problems much faster than planning from scratch. State-of-the-art incremental heuristic searches such as LPA*, D* and D* Lite all work by propagating cost changes to all the states on the search tree whose g-values (the costs of computed paths from the start) are no longer optimal. While such a complete propagation of cost changes is required to ensure optimality, the propagations can be stopped much earlier if we are looking for solutions within a given suboptimality bound. We present a framework called Truncated Incremental Search that builds on this observation, and uses a target suboptimality bound to efficiently restrict the cost propagations. Using this framework, we develop two algorithms, Truncated LPA* (TLPA*) and Truncated D* Lite (TD* Lite). We discuss their analytical properties and present experimental results for 2D and 3D (x, y, heading) path planning that show significant improvement in runtime over existing incremental heuristic searches when searching for close-to-optimal solutions. In addition, unlike typical incremental searches, Truncated Incremental Search is much less dependent on the proximity of the cost changes to the goal of the search due to the early termination of the cost change propagation.

【Keywords】: Incremental Search; Heuristic Search; Path Planning

4. Interdependent Multi-Issue Negotiation for Energy Exchange in Remote Communities.

【Paper Link】 【Pages】:

【Authors】: Muddasser Alam ; Alex Rogers ; Sarvapali D. Ramchurn

【Abstract】: We present a novel negotiation protocol to facilitate energy exchange between off-grid homes that are equipped with renewable energy generation and electricity storage. Our protocol imposes restrictions over negotiation such that it reduces the complex interdependent multi-issue negotiation to one where agents have a strategy profile in subgame perfect Nash equilibrium. We show that our negotiation protocol is tractable, concurrent, scalable and leads to Pareto-optimal outcomes in a decentralised manner. We empirically evaluate our protocol and show that, in this instance, a society of agents can (i) improve the overall utilities by 14% and (ii) reduce their overall use of the batteries by 37%.

【Keywords】: Complex Negotiation; Interdependent Issues; Energy Exchange; Remote communities; Smart Home

5. Multi-Cycle Query Caching in Agent Programming.

【Paper Link】 【Pages】:

【Authors】: Natasha Alechina ; Tristan M. Behrens ; Mehdi Dastani ; Koen V. Hindriks ; Jomi Fred Hübner ; Brian Logan ; Hai H. Nguyen ; Marc van Zee

【Abstract】: In many logic-based BDI agent programming languages, plan selection involves inferencing over some underlying knowledge representation. While context-sensitive plan selection facilitates the development of flexible, declarative programs, the overhead of evaluating repeated queries to the agent's beliefs and goals can result in poor run time performance. In this paper we present an approach to multi-cycle query caching for logic-based BDI agent programming languages. We extend the abstract performance model presented in (Alechina et al. 2012) to quantify the costs and benefits of caching query results over multiple deliberation cycles. We also present results of experiments with prototype implementations of both single- and multi-cycle caching in three logic-based BDI agent platforms, which demonstrate that significant performance improvements are achievable in practice.

【Keywords】: BDI agent programming languages

6. Bundling Attacks in Judgment Aggregation.

【Paper Link】 【Pages】:

【Authors】: Noga Alon ; Dvir Falik ; Reshef Meir ; Moshe Tennenholtz

【Abstract】: We consider judgment aggregation over multiple independent issues, where the chairperson has her own opinion, and can try to bias the outcome by bundling several issues together. Since for each bundle judges must give a uniform answer on all issues, different partitions of the issues may result in an outcome that significantly differs from the "true," issue-wise, decision. We prove that the bundling problem faced by the chairperson, i.e. trying to bias the outcome towards her own opinion, is computationally difficult in the worst case. Then we study the probability that an effective bundling attack exists as the disparity between the opinions of the judges and the chair varies. We show that if every judge initially agrees with the chair on every issue with probability of at least 1/2, then there is almost always a bundling attack (i.e. a partition) where the opinion of the chair on all issues is approved. Moreover, such a partition can be found efficiently. In contrast, when the probability is lower than 1/2 then the chair cannot force her opinion using bundling even on a single issue.

【Keywords】: lobbying; bundling; manipulation

7. A Pattern Matching Based Model for Implicit Opinion Question Identification.

【Paper Link】 【Pages】:

【Authors】: Hadi Amiri ; Zheng-Jun Zha ; Tat-Seng Chua

【Abstract】: This paper presents the results of developing subjectivity classifiers for Implicit Opinion Question (IOQ) identification. IOQs are defined as opinion questions with no opinion words. An IOQ example is "will the U.S. government pay more attention to the Pacific Rim?" Our analysis on community questions of Yahoo! Answers shows that a large proportion of opinion questions are IOQs. It is thus important to develop techniques to identify such questions. In this research, we first propose an effective framework based on mutual information and sequential pattern mining to construct an opinion lexicon that not only contains opinion words but also patterns. The discovered words and patterns are then combined with a machine learning technique to identify opinion questions. The experimental results on two datasets demonstrate the effectiveness of our approach.

【Keywords】: sentiment analysis, opinion question, implicit opinion question, subjectivity

8. Fast Equilibrium Computation for Infinitely Repeated Games.

【Paper Link】 【Pages】:

【Authors】: Garrett Andersen ; Vincent Conitzer

【Abstract】: It is known that an equilibrium of an infinitely repeated two-player game (with limit average payoffs) can be computed in polynomial time, as follows: according to the folk theorem, we compute minimax strategies for both players to calculate the punishment values, and subsequently find a mixture over outcomes that exceeds these punishment values. However, for very large games, even computing minimax strategies can be prohibitive. In this paper, we propose an algorithmic framework for computing equilibria of repeated games that does not require linear programming and that does not necessarily need to inspect all payoffs of the game. This algorithm necessarily sometimes fails to compute an equilibrium, but we mathematically demonstrate that most of the time it succeeds quickly on uniformly random games, and experimentally demonstrate this for other classes of games. This also holds for games with more than two players, for which no efficient general algorithms are known.

【Keywords】: game theory; nash equilibrium; folk theorem; repeated games

9. On the Social Welfare of Mechanisms for Repeated Batch Matching.

【Paper Link】 【Pages】:

【Authors】: Elliot Anshelevich ; Meenal Chhabra ; Sanmay Das ; Matthew Gerrior

【Abstract】: We study hybrid online-batch matching problems, where agents arrive continuously, but are only matched in periodic rounds, when many of them can be considered simultaneously. Agents not getting matched in a given round remain in the market for the next round. This setting models several scenarios of interest, including many job markets as well as kidney exchange mechanisms. We consider the social utility of two commonly used mechanisms for such markets: one that aims for stability in each round (greedy), and one that attempts to maximize social utility in each round (max-weight). Surprisingly, we find that in the long term, the social utility of the greedy mechanism can be higher than that of the max-weight mechanism. We hypothesize that this is because the greedy mechanism behaves similarly to a soft threshold mechanism, where all connections below a certain threshold are rejected by the participants in favor of waiting until the next round. Motivated by this observation, we propose a method to approximately calculate the optimal threshold for an individual agent to use based on characteristics of the other agents participating, and demonstrate experimentally that social utility is high when all agents use this strategy. Thresholding can also be applied by the mechanism itself to improve social welfare; we demonstrate this with an example on graphs that model pairwise kidney exchange.

【Keywords】: Kidney-exchange; Online batch matching; Greedy matching; Stable matching; Threshold mechanisms

10. Equilibria of Online Scheduling Algorithms.

【Paper Link】 【Pages】:

【Authors】: Itai Ashlagi ; Brendan Lucier ; Moshe Tennenholtz

【Abstract】: We describe a model for competitive online scheduling algorithms. Two servers, each with a single observable queue, compete for customers. Upon arrival, each customer strategically chooses the queue with minimal expected wait time. Each scheduler wishes to maximize its number of customers, and can strategically select which scheduling algorithm, such as First-Come-First-Served (FCFS), to use for its queue. This induces a game played by the servers and the customers. We consider a non-Bayesian setting, where servers and customers play to maximize worst-case payoffs. We show that there is a unique subgame perfect safety-level equilibrium and we describe the associated scheduling algorithm (which is not FCFS). The uniqueness result holds for both randomized and deterministic algorithms, with a different equilibrium algorithm in each case. When the goal of the servers is to minimize competitive ratio, we prove that it is an equilibrium for each server to apply FCFS: each server obtains the optimal competitive ratio of 2.

【Keywords】: Online Games; Equilibrium Analysis; Scheduling Algorithms; Competitive Queues; Non-Bayesian Equilibria

11. Ties Matter: Complexity of Manipulation when Tie-Breaking with a Random Vote.

【Paper Link】 【Pages】:

【Authors】: Haris Aziz ; Serge Gaspers ; Nicholas Mattei ; Nina Narodytska ; Toby Walsh

【Abstract】: We study the impact on strategic voting of tie-breaking by means of considering the order of tied candidates within a random vote. We compare this to another non-deterministic tie-breaking rule where we simply choose a candidate uniformly at random. In general, we demonstrate that there is no connection between the computational complexity of computing a manipulating vote under the two different types of tie-breaking. However, we prove that for some scoring rules, the computational complexity of computing a manipulation can increase from polynomial to NP-hard. We also discuss the relationship with the computational complexity of computing a manipulating vote when we ask for a candidate to be the unique winner, or to be among the set of co-winners.

【Keywords】: voting; social choice; manipulation; strategic voting; tie-breaking

12. Optimal Coalition Structure Generation in Cooperative Graph Games.

【Paper Link】 【Pages】:

【Authors】: Yoram Bachrach ; Pushmeet Kohli ; Vladimir Kolmogorov ; Morteza Zadimoghaddam

【Abstract】: Representation languages for coalitional games are a key research area in algorithmic game theory. There is an inherent tradeoff between how general a language is, allowing it to capture more elaborate games, and how hard it is computationally to optimize and solve such games. One prominent such language is the simple yet expressive Weighted Graph Games (WGGs) representation (Deng and Papadimitriou, 1994), which maintains knowledge about synergies between agents in the form of an edge-weighted graph. We consider the problem of finding the optimal coalition structure in WGGs. The agents in such games are vertices in a graph, and the value of a coalition is the sum of the weights of the edges present between coalition members. The optimal coalition structure is a partition of the agents into coalitions that maximizes the sum of utilities obtained by the coalitions. We show that finding the optimal coalition structure is not only hard for general graphs, but is also intractable for restricted families such as planar graphs, which are amenable to many other combinatorial problems. We then provide algorithms with constant-factor approximations for planar, minor-free and bounded-degree graphs.

【Keywords】: Cooperative Games; Graph Games; Coalition Structure Generation; Approximation Algorithms

13. Pruning for Monte Carlo Distributed Reinforcement Learning in Decentralized POMDPs.

【Paper Link】 【Pages】:

【Authors】: Bikramjit Banerjee

【Abstract】: Decentralized partially observable Markov decision processes (Dec-POMDPs) offer a powerful modeling technique for realistic multi-agent coordination problems under uncertainty. Prevalent solution techniques are centralized and assume prior knowledge of the model. Recently a Monte Carlo based distributed reinforcement learning approach was proposed, where agents take turns to learn best responses to each other’s policies. This promotes decentralization of the policy computation problem, and relaxes reliance on the full knowledge of the problem parameters. However, this Monte Carlo approach has a large sample complexity, which we address in this paper. In particular, we propose and analyze a modified version of the previous algorithm that adaptively eliminates parts of the experience tree from further exploration, thus requiring fewer samples while ensuring unchanged confidence in the learned value function. Experiments demonstrate significant reduction in sample complexity – the maximum reductions ranging from 61% to 91% over different benchmark Dec-POMDP problems – with the final policies being often better due to more focused exploration.

【Keywords】: Decentralized Planning; Reinforcement Learning; Monte Carlo Tree Search

14. Causal Transportability with Limited Experiments.

【Paper Link】 【Pages】:

【Authors】: Elias Bareinboim ; Judea Pearl

【Abstract】: We address the problem of transferring causal knowledge learned in one environment to another, potentially different environment, when only limited experiments may be conducted at the source. This generalizes the treatment of transportability introduced in [Pearl and Bareinboim, 2011; Bareinboim and Pearl, 2012b], which deals with transferring causal information when any experiment can be conducted at the source. Given that it is not always feasible to conduct certain controlled experiments, we consider the decision problem of whether experiments on a selected subset Z of variables together with qualitative assumptions encoded in a diagram may render causal effects in the target environment computable from the available data. This problem, which we call z-transportability, reduces to ordinary transportability when Z is all-inclusive, and, like the latter, can be given a syntactic characterization using the do-calculus [Pearl, 1995; 2000]. This paper establishes a necessary and sufficient condition for causal effects in the target domain to be estimable from both the non-experimental information available and the limited experimental information transferred from the source. We further provide a complete algorithm for computing the transport formula, that is, a way of fusing experimental and observational information to synthesize an unbiased estimate of the desired causal relation.

【Keywords】: causal relations; transfer knowledge; transportability; completeness; meta-analysis; experimental design

15. Teamwork with Limited Knowledge of Teammates.

【Paper Link】 【Pages】:

【Authors】: Samuel Barrett ; Peter Stone ; Sarit Kraus ; Avi Rosenfeld

【Abstract】: While great strides have been made in multiagent teamwork, existing approaches typically assume extensive information exists about teammates and how to coordinate actions. This paper addresses how robust teamwork can still be created even if limited or no information exists about a specific group of teammates, as in the ad hoc teamwork scenario. The main contribution of this paper is the first empirical evaluation of an agent cooperating with teammates not created by the authors, where the agent is not provided expert knowledge of its teammates. For this purpose, we develop a general-purpose teammate modeling method and test the resulting ad hoc team agent's ability to collaborate with more than 40 unknown teams of agents to accomplish a benchmark task. These agents were designed by people other than the authors without these designers planning for the ad hoc teamwork setting. A secondary contribution of the paper is a new transfer learning algorithm, TwoStageTransfer, that can improve results when the ad hoc team agent does have some limited observations of its current teammates.

【Keywords】: Ad Hoc Teams, Multiagent Systems, Teamwork

16. Teaching Classification Boundaries to Humans.

【Paper Link】 【Pages】:

【Authors】: Sumit Basu ; Janara Christensen

【Abstract】: Given a classification task, what is the best way to teach the resulting boundary to a human? While machine learning techniques can provide excellent methods for finding the boundary, including the selection of examples in an online setting, they tell us little about how we would teach a human the same task. We propose to investigate the problem of example selection and presentation in the context of teaching humans, and explore a variety of mechanisms in the interests of finding what may work best. In particular, we begin with the baseline of random presentation and then examine combinations of several mechanisms: the indication of an example’s relative difficulty, the use of the shaping heuristic from the cognitive science literature (moving from easier examples to harder ones), and a novel kernel-based “coverage model” of the subject’s mastery of the task. From our experiments on 54 human subjects learning and performing a pair of synthetic classification tasks via our teaching system, we found that we can achieve the greatest gains with a combination of shaping and the coverage model.

【Keywords】: classification; teaching classification; human learning; curriculum learning

17. Social Rankings in Human-Computer Committees.

【Paper Link】 【Pages】:

【Authors】: Moshe Bitan ; Ya'akov Gal ; Sarit Kraus ; Elad Dokow ; Amos Azaria

【Abstract】: Despite committees and elections being widespread in the real world, the design of agents for operating in human-computer committees has received far less attention than the theoretical analysis of voting strategies. We address this gap by providing an agent design that outperforms other voters in groups comprising both people and computer agents. In our setting participants vote by simultaneously submitting a ranking over a set of candidates, and the election system uses a social welfare rule to select a ranking that minimizes disagreements with participants' votes. We ran an extensive study in which hundreds of people participated in repeated voting rounds with other people as well as computer agents that differed in how they employ strategic reasoning in their voting behavior. Our results show that over time, people learn to deviate from truthful voting strategies, and use heuristics to guide their play, such as repeating their vote from the previous round. We show that a computer agent using a best-response voting strategy was able to outperform people in the game. Our study has implications for agent designers, highlighting the types of strategies that enable agents to succeed in committees comprising both human and computer participants. This is the first work to study the role of computer agents in voting settings involving both human and agent participants.

【Keywords】: Human Computer Interaction; AI and Social Science; Social Choice

18. Decoupling the Multiagent Disjunctive Temporal Problem.

【Paper Link】 【Pages】:

【Authors】: James C. Boerkoel Jr. ; Edmund H. Durfee

【Abstract】: The Multiagent Disjunctive Temporal Problem (MaDTP) is a general constraint-based formulation for scheduling problems that involve interdependent agents. Decoupling agents' interdependent scheduling problems, so that each agent can manage its schedule independently, requires agents to adopt additional local constraints that effectively subsume their interdependencies. In this paper, we present the first algorithm for decoupling MaDTPs. Our distributed algorithm is provably sound and complete. Our experiments show that the relative efficiency of using temporal decoupling to find solution spaces for MaDTPs, compared to algorithms that find complete solution spaces, improves with the interconnectedness between agents' schedules, leading to orders of magnitude relative speedup. However, decoupling by its nature restricts agents' scheduling flexibility; we define novel flexibility metrics for MaDTPs, and show empirically how the flexibility sacrificed depends on the degree of coupling between agents' schedules.

【Keywords】: Multiagent Scheduling; Distributed Scheduling; Temporal Decoupling; Disjunctive Temporal Problem

19. Qualitative Planning under Partial Observability in Multi-Agent Domains.

【Paper Link】 【Pages】:

【Authors】: Ronen I. Brafman ; Guy Shani ; Shlomo Zilberstein

【Abstract】: Decentralized POMDPs (Dec-POMDPs) provide a rich, attractive model for planning under uncertainty and partial observability in cooperative multi-agent domains with a growing body of research. In this paper we formulate a qualitative, propositional model for multi-agent planning under uncertainty with partial observability, which we call Qualitative Dec-POMDP (QDec-POMDP). We show that the worst-case complexity of planning in QDec-POMDPs is similar to that of Dec-POMDPs. Still, because the model is more “classical” in nature, it is more compact and easier to specify. Furthermore, it eases the adaptation of methods used in classical and contingent planning to solve problems that challenge current Dec-POMDPs solvers. In particular, in this paper we describe a method based on compilation to classical planning, which handles multi-agent planning problems significantly larger than those handled by current Dec-POMDP algorithms.

【Keywords】: Contingent Planning; Dec-POMDP; Partial Observability

20. How Bad Is Selfish Voting?

【Paper Link】 【Pages】:

【Authors】: Simina Brânzei ; Ioannis Caragiannis ; Jamie Morgenstern ; Ariel D. Procaccia

【Abstract】: It is well known that strategic behavior in elections is essentially unavoidable; we therefore ask: how bad can the rational outcome be? We answer this question via the notion of the price of anarchy, using the scores of alternatives as a proxy for their quality and bounding the ratio between the score of the optimal alternative and the score of the winning alternative in Nash equilibrium. Specifically, we are interested in Nash equilibria that are obtained via sequences of rational strategic moves. Focusing on three common voting rules — plurality, veto, and Borda — we provide very positive results for plurality and very negative results for Borda, and place veto in the middle of this spectrum.

【Keywords】: Social choice; Price of anarchy

21. Improving WalkSAT for Random k-Satisfiability Problem with k > 3.

【Paper Link】 【Pages】:

【Authors】: Shaowei Cai ; Kaile Su ; Chuan Luo

【Abstract】: Stochastic local search (SLS) algorithms are well known for their ability to efficiently find models of random instances of the Boolean satisfiability (SAT) problem. One of the most famous SLS algorithms for SAT is WalkSAT, an early algorithm that has had wide influence on modern SLS algorithms. Recently, there has been increasing interest in WalkSAT, due to the discovery of its great power on large random 3-SAT instances. However, the performance of WalkSAT on random k-SAT instances with k > 3 lags far behind. Indeed, there has been little work on improving SLS algorithms for such instances. This work takes a large step in that direction. We propose a novel concept, namely multilevel make. Based on this concept, we design a scoring function called linear make, which is utilized to break ties in WalkSAT, leading to a new algorithm called WalkSATlm. Our experimental results on random 5-SAT and 7-SAT instances show that WalkSATlm improves WalkSAT by orders of magnitude. Moreover, WalkSATlm significantly outperforms state-of-the-art SLS solvers on random 5-SAT instances, while competing well on random 7-SAT ones. Additionally, WalkSATlm performs very well on random instances from SAT Challenge 2012, indicating its robustness.

【Keywords】: Local Search; Satisfiability; Multi-level make

22. A Kernel Density Estimate-Based Approach to Component Goodness Modeling.

【Paper Link】 【Pages】:

【Authors】: Nuno Cardoso ; Rui Abreu

【Abstract】: Intermittent fault localization approaches account for the fact that faulty components may fail intermittently by considering a parameter (known as goodness) that quantifies the probability that faulty components may still exhibit correct behavior. Current, state-of-the-art approaches (1) assume that this goodness probability is context independent and (2) do not provide means for integrating past diagnosis experience in the diagnostic mechanism. In this paper, we present a novel approach, coined Non-linear Feedback-based Goodness Estimate (NFGE), that uses kernel density estimations (KDE) to address such limitations. We evaluated the approach with both synthetic and real data, yielding lower estimation errors, thus increasing the diagnosis performance.

【Keywords】: Intermittent Fault Localization; Kernel Density Estimation; Bayesian Reasoning

23. Instructor Rating Markets.

【Paper Link】 【Pages】:

【Authors】: Mithun Chakraborty ; Sanmay Das ; Allen Lavoie ; Malik Magdon-Ismail ; Yonatan Naamad

【Abstract】: We describe the design of Instructor Rating Markets (IRMs) where human participants interact through intelligent automated market-makers in order to provide dynamic collective feedback to instructors on the progress of their classes. The markets are among the first to enable the empirical study of prediction markets where traders can affect the very outcomes they are trading on. More than 200 students across the Rensselaer campus participated in markets for ten classes in the Fall 2010 semester. In this paper, we describe how we designed these markets in order to elicit useful information, and analyze data from the deployment. We show that market prices convey useful information on future instructor ratings and contain significantly more information than do past ratings. The bulk of useful information contained in the price of a particular class is provided by students who are in that class, showing that the markets are serving to disseminate insider information. At the same time, we find little evidence of attempted manipulation by raters. The markets are also a laboratory for comparing different market designs and the resulting price dynamics, and we show how they can be used to compare market making algorithms.

【Keywords】: prediction markets; market making

24. Uncorrelated Lasso.

【Paper Link】 【Pages】:

【Authors】: Sibao Chen ; Chris H. Q. Ding ; Bin Luo ; Ying Xie

【Abstract】: Lasso-type variable selection has increasingly expanded its machine learning applications. In this paper, uncorrelated Lasso is proposed for variable selection, where variable de-correlation is considered simultaneously with variable selection, so that selected variables are uncorrelated as much as possible. An effective iterative algorithm, with the proof of convergence, is presented to solve the sparse optimization problem. Experiments on benchmark data sets show that the proposed method has better classification performance than many state-of-the-art variable selection methods.

【Keywords】: Lasso; de-correlation; feature selection

25. Goal-Oriented Euclidean Heuristics with Manifold Learning.

【Paper Link】 【Pages】:

【Authors】: Wenlin Chen ; Yixin Chen ; Kilian Q. Weinberger ; Qiang Lu ; Xiaoping Chen

【Abstract】: Recently, a Euclidean heuristic (EH) has been proposed for A* search. EH exploits manifold learning methods to construct an embedding of the state space graph, and derives an admissible heuristic distance between two states from the Euclidean distance between their respective embedded points. EH has shown good performance and memory efficiency in comparison to other existing heuristics such as differential heuristics. However, its potential has not been fully explored. In this paper, we propose a number of techniques that can significantly improve the quality of EH. We propose a goal-oriented manifold learning scheme that optimizes the Euclidean distance to goals in the embedding while maintaining admissibility and consistency. We also propose a state heuristic enhancement technique to reduce the gap between heuristic and true distances. The enhanced heuristic is admissible but no longer consistent. We then employ a modified search algorithm, known as the B' algorithm, that achieves optimality with inconsistent heuristics using consistency check and propagation. We demonstrate the effectiveness of the above techniques and report unmatched reductions in search costs across several non-trivial benchmark search problems.

【Keywords】: Heuristic search; Manifold learning; Euclidean heuristic

26. From Interest to Function: Location Estimation in Social Media.

Paper Link】 【Pages】:

【Authors】: Yan Chen ; Jichang Zhao ; Xia Hu ; Xiaoming Zhang ; Zhoujun Li ; Tat-Seng Chua

【Abstract】: Recent years have witnessed the tremendous development of social media, which attracts a vast number of Internet users. The high-dimensional content generated by these users provides a unique opportunity to understand their behavior deeply. As one of the most fundamental topics, location estimation has attracted more and more research effort. Different from the previous literature, we find that a user's location is strongly related to user interest. Based on this, we first build a detection model to mine user interest from short text. We then establish the mapping between location function and user interest before presenting an efficient framework to predict the user's location with convincing fidelity. Thorough evaluations and comparisons on an authentic data set show that our proposed model significantly outperforms state-of-the-art approaches. Moreover, the high efficiency of our model also guarantees its applicability in real-world scenarios.

【Keywords】: location estimation;user interest;social media

27. Dynamic Minimization of Sentential Decision Diagrams.

Paper Link】 【Pages】:

【Authors】: Arthur Choi ; Adnan Darwiche

【Abstract】: The Sentential Decision Diagram (SDD) is a recently proposed representation of Boolean functions, containing Ordered Binary Decision Diagrams (OBDDs) as a distinguished subclass. While OBDDs are characterized by total variable orders, SDDs are characterized more generally by vtrees. As both OBDDs and SDDs have canonical representations, searching for OBDDs and SDDs of minimal size simplifies to searching for variable orders and vtrees, respectively. For OBDDs, there are effective heuristics for dynamic reordering, based on locally swapping variables. In this paper, we propose an analogous approach for SDDs which navigates the space of vtrees via two operations: one based on tree rotations and a second based on swapping children in a vtree. We propose a particular heuristic for dynamically searching the space of vtrees, showing that it can find SDDs that are an order-of-magnitude more succinct than OBDDs found by dynamic reordering.

【Keywords】: knowledge compilation; decision diagrams

28. Timelines with Temporal Uncertainty.

Paper Link】 【Pages】:

【Authors】: Alessandro Cimatti ; Andrea Micheli ; Marco Roveri

【Abstract】: Timelines are a formalism to model planning domains where the temporal aspects are predominant, and have been used in many real-world applications. Despite their practical success, a major limitation is the inability to model temporal uncertainty, i.e., the plan executor cannot decide the duration of some activities. In this paper we make two key contributions. First, we propose a comprehensive, semantically well-founded framework that (conservatively) extends the state-of-the-art timeline approach with temporal uncertainty. Second, we focus on the problem of producing time-triggered plans that are robust with respect to temporal uncertainty, under a bounded horizon. In this setting, we present the first complete algorithm, and we show how it can be made practical by leveraging the power of Satisfiability Modulo Theories.

【Keywords】: Timelines; Temporal Planning; Temporal Uncertainty

29. Online Lazy Updates for Portfolio Selection with Transaction Costs.

Paper Link】 【Pages】:

【Authors】: Puja Das ; Nicholas Johnson ; Arindam Banerjee

【Abstract】: A major challenge for stochastic optimization is the cost of updating model parameters especially when the number of parameters is large. Updating parameters frequently can prove to be computationally or monetarily expensive. In this paper, we introduce an efficient primal-dual based online algorithm that performs lazy updates to the parameter vector and show that its performance is competitive with reasonable strategies which have the benefit of hindsight. We demonstrate the effectiveness of our algorithm in the online portfolio selection domain where a trader has to pay proportional transaction costs every time his portfolio is updated. Our Online Lazy Updates (OLU) algorithm takes into account the transaction costs while computing an optimal portfolio which results in sparse updates to the portfolio vector. We successfully establish the robustness and scalability of our lazy portfolio selection algorithm with extensive theoretical and experimental results on two real-world datasets.
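The sparsity mechanism can be illustrated with the standard soft-thresholding (proximal) operator applied to the change of the portfolio vector; this is a generic sketch under assumed names and update form, not the paper's OLU algorithm:

```python
import numpy as np

def lazy_portfolio_update(w_prev, grad, step, tc):
    """Sketch of a lazy update: take a gradient step, then soft-threshold
    the *change* relative to w_prev, so small changes (not worth their
    proportional transaction cost tc) are suppressed entirely."""
    proposed = w_prev - step * grad
    delta = proposed - w_prev
    # soft-thresholding: components of the change smaller than tc become zero
    delta = np.sign(delta) * np.maximum(np.abs(delta) - tc, 0.0)
    w = np.maximum(w_prev + delta, 0.0)   # no short positions
    return w / w.sum()                    # renormalize to a valid portfolio
```

When the transaction cost dominates the proposed change, the update is exactly zero and the previous portfolio is kept, which is the source of the sparse updates described above.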

【Keywords】: online portfolio selection; transaction costs; lazy updates; online convex optimization;

30. Assumption-Based Planning: Generating Plans and Explanations under Incomplete Knowledge.

Paper Link】 【Pages】:

【Authors】: Sammy Davis-Mendelow ; Jorge A. Baier ; Sheila A. McIlraith

【Abstract】: Many practical planning problems necessitate the generation of a plan under incomplete information about the state of the world. In this paper we propose the notion of Assumption-Based Planning. Unlike conformant planning, which attempts to find a plan under all possible completions of the initial state, an assumption-based plan supports the assertion of additional assumptions about the state of the world, often resulting in high quality plans where no conformant plan exists. We are interested in this paradigm of planning for two reasons: 1) it captures a compelling form of commonsense planning, and 2) it is of great utility in the generation of explanations, diagnoses, and counter-examples -- tasks which share a computational core with assumption-based planning. We formalize the notion of assumption-based planning, establishing a relationship between assumption-based and conformant planning, and prove properties of such plans. We further provide for the scenario where some assumptions are more preferred than others. Exploiting the correspondence with conformant planning, we propose a means of computing assumption-based plans via a translation to classical planning. Our translation is an extension of the popular approach proposed by Palacios and Geffner and realized in their T0 planner. We have implemented our planner, A0, as a variant of T0 and tested it on a number of expository domains drawn from the International Planning Competition. Our results illustrate the utility of this new planning paradigm.

【Keywords】: Conformant Planning, Assumptions, Diagnosis, Verification

31. Complexity of Inferences in Polytree-shaped Semi-Qualitative Probabilistic Networks.

Paper Link】 【Pages】:

【Authors】: Cassio Polpo de Campos ; Fábio Gagliardi Cozman

【Abstract】: Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of joint probability distributions that is behind Bayesian networks. This paper explores the computational complexity of semi-qualitative probabilistic networks, and takes polytree-shaped networks as its main target. We show that the inference problem is coNP-complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in time linear in the number of nodes if there is a single observed node. Because our proof is constructive, we obtain an efficient linear-time algorithm for SQPNs under such assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together these results provide a clear picture of the inferential complexity in polytree-shaped SQPNs.

【Keywords】:

32. Parameterized Complexity Results for Plan Reuse.

Paper Link】 【Pages】:

【Authors】: Ronald de Haan ; Anna Roubícková ; Stefan Szeider

【Abstract】: Planning is a notoriously difficult computational problem of high worst-case complexity. Researchers have been investing significant efforts to develop heuristics or restrictions to make planning practically feasible. Case-based planning is a heuristic approach where one tries to reuse previous experience when solving similar problems in order to avoid some of the planning effort. Plan reuse may offer an interesting alternative to plan generation in some settings. We provide theoretical results that identify situations in which plan reuse is provably tractable. We perform our analysis in the framework of parameterized complexity, which supports a rigorous worst-case complexity analysis that takes structural properties of the input into account in terms of parameters. A central notion of parameterized complexity is fixed-parameter tractability which extends the classical notion of polynomial-time tractability by utilizing the effect of parameters. We draw a detailed map of the parameterized complexity landscape of several variants of problems that arise in the context of case-based planning. In particular, we consider the problem of reusing an existing plan, imposing various restrictions in terms of parameters, such as the number of steps that can be added to the existing plan to turn it into a solution of the planning instance at hand.

【Keywords】: planning; plan reuse; case-based planning; fixed-parameter tractability; parameterized complexity

33. Multi-Armed Bandit with Budget Constraint and Variable Costs.

Paper Link】 【Pages】:

【Authors】: Wenkui Ding ; Tao Qin ; Xu-Dong Zhang ; Tie-Yan Liu

【Abstract】: We study the multi-armed bandit problems with budget constraint and variable costs (MAB-BV). In this setting, pulling an arm will receive a random reward together with a random cost, and the objective of an algorithm is to pull a sequence of arms in order to maximize the expected total reward with the costs of pulling those arms complying with a budget constraint. This new setting models many Internet applications (e.g., ad exchange, sponsored search, and cloud computing) in a more accurate manner than previous settings where the pulling of arms is either costless or with a fixed cost. We propose two UCB based algorithms for the new setting. The first algorithm needs prior knowledge about the lower bound of the expected costs when computing the exploration term. The second algorithm eliminates this need by estimating the minimal expected costs from empirical observations, and therefore can be applied to more real-world applications where prior knowledge is not available. We prove that both algorithms have nice learning abilities, with regret bounds of O(ln B). Furthermore, we show that when applying our proposed algorithms to a previous setting with fixed costs (which can be regarded as our special case), one can improve the previously obtained regret bound. Our simulation results on real-time bidding in ad exchange verify the effectiveness of the algorithms and are consistent with our theoretical analysis.
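As a hedged sketch of this family of policies (not the paper's exact algorithms), a UCB-style index on the estimated reward-to-cost ratio can be combined with a hard budget stop; the arm distributions and the index form below are illustrative assumptions:

```python
import math
import random

def budgeted_ucb(reward_p, cost_p, budget, seed=0):
    """Sketch of a budget-constrained UCB policy: pull the arm with the
    highest (empirical reward / empirical cost) plus an exploration bonus,
    stopping once the next pull would exceed the budget. Rewards are
    Bernoulli(reward_p[i]); costs are cost_p[i] plus Bernoulli noise."""
    rng = random.Random(seed)
    n = len(reward_p)
    pulls, reward_sum, cost_sum = [0] * n, [0.0] * n, [0.0] * n
    spent, total_reward, t = 0.0, 0.0, 0
    while True:
        t += 1
        if t <= n:
            arm = t - 1                    # pull each arm once to initialize
        else:
            def index(i):
                bonus = math.sqrt(2 * math.log(t) / pulls[i])
                return reward_sum[i] / cost_sum[i] + bonus
            arm = max(range(n), key=index)
        reward = 1.0 if rng.random() < reward_p[arm] else 0.0
        cost = cost_p[arm] + (0.5 if rng.random() < 0.5 else 0.0)
        if spent + cost > budget:          # budget constraint: stop here
            return total_reward, spent
        pulls[arm] += 1
        reward_sum[arm] += reward
        cost_sum[arm] += cost
        spent += cost
        total_reward += reward
```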

【Keywords】: multi-armed bandit; online learning; budget constraint; variable costs

34. The Automated Acquisition of Suggestions from Tweets.

Paper Link】 【Pages】:

【Authors】: Li Dong ; Furu Wei ; Yajuan Duan ; Xiaohua Liu ; Ming Zhou ; Ke Xu

【Abstract】: This paper aims at automatically detecting and classifying users' suggestions from tweets. The short and informal nature of tweets, along with the imbalanced characteristics of suggestion tweets, makes the task extremely challenging. To this end, we develop a classification framework based on Factorization Machines, which is effective and efficient especially in classification tasks with feature sparsity. Moreover, we tackle the imbalance problem by introducing cost-sensitive learning techniques into Factorization Machines. Extensive experimental studies on a manually annotated real-life data set show that the proposed approach significantly improves over the baseline approach, yielding a precision of 71.06% and a recall of 67.86%. We also investigate the reason why Factorization Machines perform better. Finally, we introduce the first manually annotated dataset for suggestion classification.
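For context, a Factorization Machine scores an instance with a linear term plus factorized pairwise interactions, computable in O(kn) time via a standard identity; this sketch shows the model equation only, not the paper's cost-sensitive extension:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Factorization Machine score: w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j,
    computed with the standard O(k*n) identity
    sum_{i<j} <V_i,V_j> x_i x_j = 0.5 * sum_f ((sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2).
    x: (n,) features; w: (n,) linear weights; V: (n, k) latent factors."""
    linear = w0 + w @ x
    s = V.T @ x                       # (k,): per-factor weighted sums
    s2 = (V ** 2).T @ (x ** 2)        # (k,): per-factor squared sums
    pairwise = 0.5 * float(np.sum(s ** 2 - s2))
    return linear + pairwise
```

Because the pairwise term never materializes the n-by-n interaction matrix, the model stays efficient under exactly the feature-sparsity conditions the abstract mentions.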

【Keywords】: Suggestion Classification; Factorization Machines; Feature Sparsity; Imbalance Classification; Tweets

35. A Maximum K-Min Approach for Classification.

Paper Link】 【Pages】:

【Authors】: Mingzhi Dong ; Liang Yin ; Weihong Deng ; Li Shang ; Jun Guo ; Honggang Zhang

【Abstract】: In this paper, a general Maximum K-Min approach for classification is proposed. With the physical meaning of optimizing the classification confidence of the K worst instances, the Maximum K-Min Gain/Minimum K-Max Loss (MKM) criterion is introduced. To make the original optimization problem with combinatorial constraints computationally tractable, optimization techniques are adopted and a general compact representation lemma for the MKM criterion is summarized. Based on the lemma, a Nonlinear Maximum K-Min (NMKM) classifier and a Semi-supervised Maximum K-Min (SMKM) classifier are presented for the traditional classification task and the semi-supervised classification task, respectively. Experimental results on publicly available datasets show that our Maximum K-Min methods achieve competitive performance compared with Hinge Loss classifiers.
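The quantity being optimized can be sketched directly: the sum of the K smallest classification margins over the training instances (objective only; the paper's contribution is making its optimization tractable):

```python
import numpy as np

def k_min_gain(margins, k):
    """Sum of the k smallest classification margins, i.e. the gain the
    Maximum K-Min criterion maximizes over the k worst instances
    (a sketch of the objective, not the paper's optimization algorithm)."""
    m = np.sort(np.asarray(margins, dtype=float))
    return float(m[:k].sum())
```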

【Keywords】: Classification; Maximum K Min

36. HC-Search: Learning Heuristics and Cost Functions for Structured Prediction.

Paper Link】 【Pages】:

【Authors】: Janardhan Rao Doppa ; Alan Fern ; Prasad Tadepalli

【Abstract】: Structured prediction is the problem of learning a function from structured inputs to structured outputs with prototypical examples being part-of-speech tagging and image labeling. Inspired by the recent successes of search-based structured prediction, we introduce a new framework for structured prediction called {\em HC-Search}. Given a structured input, the framework uses a search procedure guided by a learned heuristic H to uncover high quality candidate outputs and then uses a separate learned cost function C to select a final prediction among those outputs. We can decompose the regret of the overall approach into the loss due to H not leading to high quality outputs, and the loss due to C not selecting the best among the generated outputs. Guided by this decomposition, we minimize the overall regret in a greedy stage-wise manner by first training H to quickly uncover high quality outputs via imitation learning, and then training C to correctly rank the outputs generated via H according to their true losses. Experiments on several benchmark domains show that our approach significantly outperforms the state-of-the-art methods.

【Keywords】: structured prediction, imitation learning, rank learning

37. SMILe: Shuffled Multiple-Instance Learning.

Paper Link】 【Pages】:

【Authors】: Gary Doran ; Soumya Ray

【Abstract】: Resampling techniques such as bagging are often used in supervised learning to produce more accurate classifiers. In this work, we show that multiple-instance learning admits a different form of resampling, which we call "shuffling." In shuffling, we resample instances in such a way that the resulting bags are likely to be correctly labeled. We show that resampling results in both a reduction of bag label noise and a propagation of additional informative constraints to a multiple-instance classifier. We empirically evaluate shuffling in the context of multiple-instance classification and multiple-instance active learning and show that the approach leads to significant improvements in accuracy.
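A minimal sketch of the resampling idea, assuming new positive bags are drawn from the pooled instances of positively labeled bags (the exact sampling scheme in the paper may differ):

```python
import random

def shuffle_bags(positive_bags, n_new, bag_size, seed=0):
    """Sketch of MIL 'shuffling': pool all instances from positively labeled
    bags and resample new bags of a fixed size. If bag_size is large relative
    to the pool's fraction of negative instances, a resampled bag is likely
    to contain at least one positive instance, so labeling it positive is
    usually correct (this is the intuition, not a guarantee)."""
    rng = random.Random(seed)
    pool = [inst for bag in positive_bags for inst in bag]
    return [rng.sample(pool, bag_size) for _ in range(n_new)]
```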

【Keywords】: multiple-instance learning; resampling; active learning

38. Liberal Safety for Answer Set Programs with External Sources.

Paper Link】 【Pages】:

【Authors】: Thomas Eiter ; Michael Fink ; Thomas Krennwallner ; Christoph Redl

【Abstract】: Answer set programs with external source access may introduce new constants that are not present in the program, which is known as value invention. As naive value invention leads to programs with infinite grounding and answer sets, syntactic safety criteria are imposed on programs. However, traditional criteria are in many cases unnecessarily strong and limit expressiveness. We present liberal domain-expansion (de-) safe programs, a novel generic class of answer set programs with external source access that has a finite grounding and allows for value invention. De-safe programs use so-called term bounding functions as a parameter for modular instantiation with concrete safety criteria (e.g., syntactic, semantic, or both). This ensures extensibility of the approach in the future. We provide concrete instances of the framework and develop an operator that can be used for computing a finite grounding. Finally, we discuss related notions of safety from the literature, and show that our approach is strictly more expressive.

【Keywords】: Knowledge Representation, Logic Programming, Nonmonotonic Reasoning, Answer Set Programming, Value Invention, Grounding

39. Posted Prices Exchange for Display Advertising Contracts.

Paper Link】 【Pages】:

【Authors】: Yagil Engel ; Moshe Tennenholtz

【Abstract】: We propose a new market design for display advertising contracts, based on posted prices. Our model and algorithmic framework address several major challenges: (i) the space of possible impression types is exponential in the number of attributes, which is typically large, therefore a complete price space cannot be maintained; (ii) advertisers are usually unable or reluctant to provide extensive demand (willingness-to-pay) functions, (iii) the levels of detail with which supply and demand are specified are often not identical.

【Keywords】: electronic commerce; display advertising; market design

40. Computational Aspects of Nearly Single-Peaked Electorates.

Paper Link】 【Pages】:

【Authors】: Gábor Erdélyi ; Martin Lackner ; Andreas Pfandler

【Abstract】: Manipulation, bribery, and control are well-studied ways of changing the outcome of an election. Many voting systems are, in the general case, computationally resistant to some of these manipulative actions. However when restricted to single-peaked electorates, these systems suddenly become easy to manipulate. Recently, Faliszewski, Hemaspaandra, and Hemaspaandra studied the complexity of dishonest behavior in nearly single-peaked electorates. These are electorates that are not single-peaked but close to it according to some distance measure. In this paper we introduce several new distance measures regarding single-peakedness. We prove that determining whether a given profile is nearly single-peaked is NP-complete in many cases. For one case we present a polynomial-time algorithm. Furthermore, we explore the relations between several notions of nearly single-peakedness.
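For concreteness, a ranking is single-peaked with respect to a left-to-right societal axis exactly when every top-i prefix of the ranking occupies a contiguous interval of the axis; a sketch of that standard per-vote check (not one of the paper's new distance measures):

```python
def single_peaked(vote, axis):
    """Check whether one ranking (best candidate first) is single-peaked
    with respect to a left-to-right axis: every prefix of the ranking
    must occupy a contiguous interval of axis positions."""
    pos = {c: i for i, c in enumerate(axis)}
    taken = set()
    for c in vote:
        taken.add(pos[c])
        # a set of positions is an interval iff its span equals its size
        if max(taken) - min(taken) + 1 != len(taken):
            return False
    return True
```

A nearly single-peaked profile is then one where, under some distance measure, few such per-vote violations (or small modifications) separate the profile from a fully single-peaked one.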

【Keywords】: Computational social choice; (Nearly) Single-peaked profiles; Voting; Computational complexity; Algorithms

41. A General Formal Framework for Pathfinding Problems with Multiple Agents.

Paper Link】 【Pages】:

【Authors】: Esra Erdem ; Doga Gizem Kisa ; Umut Öztok ; Peter Schüller

【Abstract】: Pathfinding for a single agent is the problem of planning a route from an initial location to a goal location in an environment, going around obstacles. Pathfinding for multiple agents also aims to plan such routes for each agent, subject to different constraints, such as restrictions on the length of each path or on the total length of paths, no self-intersecting paths, no intersection of paths/plans, and no crossing/meeting each other. It also has variations for finding optimal solutions, e.g., with respect to the maximum path length, or the sum of plan lengths. These problems are important for many real-life applications, such as motion planning, vehicle routing, environmental monitoring, patrolling, and computer games. Motivated by such applications, we introduce a formal framework that is general enough to address all these problems: we use the expressive high-level representation formalism and efficient solvers of the declarative programming paradigm Answer Set Programming. We also introduce heuristics to improve the computational efficiency and/or solution quality. We show the applicability and usefulness of our framework by experiments, with randomly generated problem instances on a grid, on a real-world road network, and on a real computer game terrain.

【Keywords】: Pathfinding;Multiple Agents;Answer Set Programming;Logic Programming;Knowledge Representation and Reasoning

42. Abstract Preference Frameworks - a Unifying Perspective on Separability and Strong Equivalence.

Paper Link】 【Pages】:

【Authors】: Wolfgang Faber ; Miroslaw Truszczynski ; Stefan Woltran

【Abstract】: We introduce abstract preference frameworks to study general properties common across a variety of preference formalisms. In particular, we study strong equivalence in preference formalisms and their separability. We identify abstract postulates on preference frameworks, satisfied by most of the currently studied preference formalisms, that lead to characterizations of both properties of interest.

【Keywords】: Reasoning with Preferences; Strong Equivalence

43. Multiagent Knowledge and Belief Change in the Situation Calculus.

Paper Link】 【Pages】:

【Authors】: Liangda Fang ; Yongmei Liu

【Abstract】: Belief change is an important research topic in AI. It becomes more perplexing in multi-agent settings, since the action of an agent may be partially observable to other agents. In this paper, we present a general approach to reasoning about actions and belief change in multi-agent settings. Our approach is based on a multi-agent extension to the situation calculus, augmented by a plausibility relation over situations and another one over actions, which is used to represent agents' different perspectives on actions. When an action is performed, we update the agents' plausibility order on situations by giving priority to the plausibility order on actions, in line with the AGM approach of giving priority to new information. We show that our notion of belief satisfies KD45 properties. As to the special case of belief change of a single agent, we show that our framework satisfies most of the classical AGM, KM, and DP postulates. We also present properties concerning the change of common knowledge and belief of a group of agents.

【Keywords】: Action, Change, and Causality; Belief Change; Reasoning with Beliefs

44. The Cascade Auction - A Mechanism for Deterring Collusion in Auctions.

Paper Link】 【Pages】:

【Authors】: Uriel Feige ; Gil Kalai ; Moshe Tennenholtz

【Abstract】: We introduce a sealed bid auction of a single item in which the winner is chosen at random among the highest k bidders according to a fixed probability distribution, and the price for the chosen winner is the Vickrey-Clarke-Groves price. We call such an auction a cascade auction. Our analysis suggests that this type of auction may give higher revenues compared to a second price auction in cases of collusion.

【Keywords】:

45. Backdoors to Normality for Disjunctive Logic Programs.

Paper Link】 【Pages】:

【Authors】: Johannes Klaus Fichte ; Stefan Szeider

【Abstract】: Over the last two decades, propositional satisfiability (SAT) has become one of the most successful and widely applied techniques for the solution of NP-complete problems. The aim of this paper is to investigate theoretically how SAT can be utilized for the efficient solution of problems that are harder than NP or co-NP. In particular, we consider the fundamental reasoning problems in propositional disjunctive answer set programming (ASP), Brave Reasoning and Skeptical Reasoning, which ask whether a given atom is contained in at least one or in all answer sets, respectively. Both problems are located at the second level of the Polynomial Hierarchy and thus assumed to be harder than NP or co-NP. One cannot transform these two reasoning problems into SAT in polynomial time, unless the Polynomial Hierarchy collapses. We show that certain structural aspects of disjunctive logic programs can be utilized to break through this complexity barrier, using new techniques from parameterized complexity. In particular, we exhibit transformations from Brave and Skeptical Reasoning to SAT that run in time O(2^k n^2), where k is a structural parameter of the instance and n the input size. In other words, the reduction is fixed-parameter tractable for parameter k. As the parameter k we take the size of a smallest backdoor with respect to the class of normal (i.e., disjunction-free) programs. Such a backdoor is a set of atoms that when deleted makes the program normal. In consequence, the combinatorial explosion, which is expected when transforming a problem from the second level of the Polynomial Hierarchy to the first level, can now be confined to the parameter k, while the running time of the reduction is polynomial in the input size n, where the order of the polynomial is independent of k. We show that such a transformation is not possible if we consider backdoors with respect to tightness instead of normality.
We think that our approach is applicable to many other hard combinatorial problems that lie beyond NP or co-NP, and thus significantly enlarge the applicability of SAT.

【Keywords】: answer-set programming; backdoors; complexity barrier breaking reduction; nonmonotonic reasoning; parameterized complexity; propositional satisfiability

46. Automatic Identification of Conceptual Metaphors With Limited Knowledge.

Paper Link】 【Pages】:

【Authors】: Lisa Gandy ; Nadji Allan ; Mark Atallah ; Ophir Frieder ; Newton Howard ; Sergey Kanareykin ; Moshe Koppel ; Mark Last ; Yair Neuman ; Shlomo Argamon

【Abstract】: Full natural language understanding requires identifying and analyzing the meanings of metaphors, which are ubiquitous in both text and speech. Over the last thirty years, linguistic metaphors have been shown to be based on more general conceptual metaphors, partial semantic mappings between disparate conceptual domains. Though some achievements have been made in identifying linguistic metaphors over the last decade or so, little work has been done to date on automatically identifying conceptual metaphors. This paper describes research on identifying conceptual metaphors based on corpus data. Our method uses as little background knowledge as possible, to ease transfer to new languages and to minimize any bias introduced by the knowledge base construction process. The method relies on general heuristics for identifying linguistic metaphors and statistical clustering (guided by WordNet) to form conceptual metaphor candidates. Human experiments show the system effectively finds meaningful conceptual metaphors.

【Keywords】: linguistic metaphor; conceptual metaphor; corpus linguistics; analogy;

47. Efficient Evolutionary Dynamics with Extensive-Form Games.

Paper Link】 【Pages】:

【Authors】: Nicola Gatti ; Fabio Panozzo ; Marcello Restelli

【Abstract】: Evolutionary game theory combines game theory and dynamical systems and is customarily adopted to describe evolutionary dynamics in multi-agent systems. In particular, it has been proven to be a successful tool to describe multi-agent learning dynamics. To the best of our knowledge, we provide in this paper the first replicator dynamics applicable to the sequence form of an extensive-form game, allowing an exponential reduction of time and space w.r.t. the currently adopted replicator dynamics for normal form. Furthermore, our replicator dynamics is realization equivalent to the standard replicator dynamics for normal form. We prove our results for both discrete-time and continuous-time cases. Finally, we extend standard tools to study the stability of a strategy profile to our replicator dynamics.
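For context, the standard discrete-time replicator dynamics for a symmetric normal-form game can be sketched as follows; the paper's contribution is the sequence-form analogue, which this sketch does not implement:

```python
import numpy as np

def replicator_step(x, A):
    """One discrete-time replicator update for a symmetric game with payoff
    matrix A: x_i' = x_i * (A x)_i / (x^T A x), so strategies with
    above-average payoff grow in the population. Assumes positive payoffs
    so the denominator stays positive."""
    fitness = A @ x          # expected payoff of each pure strategy
    avg = x @ fitness        # population-average payoff
    return x * fitness / avg
```

Iterating this map on a game with a strictly dominant strategy drives the population onto that strategy.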

【Keywords】: non-cooperative games; evolutionary game theory

48. Algorithms for Strong Nash Equilibrium with More than Two Agents.

Paper Link】 【Pages】:

【Authors】: Nicola Gatti ; Marco Rocco ; Tuomas Sandholm

【Abstract】: Strong Nash equilibrium (SNE) is an appealing solution concept when rational agents can form coalitions. A strategy profile is an SNE if no coalition of agents can benefit by deviating. We present the first general-purpose algorithms for SNE finding in games with more than two agents. An SNE must simultaneously be a Nash equilibrium (NE) and the optimal solution of multiple non-convex optimization problems. This makes even the derivation of necessary and sufficient mathematical equilibrium constraints difficult. We show that forcing an SNE to be resilient only to pure-strategy deviations by coalitions, unlike for NEs, is only a necessary condition here. Second, we show that the application of Karush-Kuhn-Tucker conditions leads to another set of necessary conditions that are not sufficient. Third, we show that forcing the Pareto efficiency of an SNE for each coalition with respect to coalition correlated strategies is sufficient but not necessary. We then develop a tree search algorithm for SNE finding. At each node, it calls an oracle to suggest a candidate SNE and then verifies the candidate. We show that our new necessary conditions can be leveraged to make the oracle more powerful. Experiments validate the overall approach and show that the new conditions significantly reduce search tree size compared to using NE conditions alone.

【Keywords】: Equilibrium computation, Strong Nash

49. Domain-Specific Heuristics in Answer Set Programming.

Paper Link】 【Pages】:

【Authors】: Martin Gebser ; Benjamin Kaufmann ; Javier Romero ; Ramón Otero ; Torsten Schaub ; Philipp Wanko

【Abstract】: We introduce a general declarative framework for incorporating domain-specific heuristics into ASP solving. We accomplish this by extending the first-order modeling language of ASP with a distinguished heuristic predicate. The resulting heuristic information is processed as an equitable part of the logic program and subsequently exploited by the solver when it comes to non-deterministically assigning a truth value to an atom. We implemented our approach as a dedicated heuristic in the ASP solver clasp and demonstrate its promise in an empirical evaluation.

【Keywords】: answer set programming; answer set solving; heuristics; conflict-driven clause learning

50. Vesselness Features and the Inverse Compositional AAM for Robust Face Recognition Using Thermal IR.

Paper Link】 【Pages】:

【Authors】: Reza Shoja Ghiass ; Ognjen Arandjelovic ; Hakim Bendada ; Xavier Maldague

【Abstract】: Over the course of the last decade, infrared (IR) and particularly thermal IR imaging based face recognition has emerged as a promising complement to conventional, visible spectrum based approaches which continue to struggle when applied in the real world. While inherently insensitive to visible spectrum illumination changes, IR images introduce specific challenges of their own, most notably sensitivity to factors which affect facial heat emission patterns, e.g. emotional state, ambient temperature, and alcohol intake. In addition, facial expression and pose changes are more difficult to correct in IR images because they are less rich in high frequency detail which is an important cue for fitting any deformable model. In this paper we describe a novel method which addresses these major challenges. Specifically, to normalize for pose and facial expression changes we generate a synthetic frontal image of a face in a canonical, neutral facial expression from an image of the face in an arbitrary pose and facial expression. This is achieved by piecewise affine warping which follows active appearance model (AAM) fitting. This is the first publication which explores the use of an AAM on thermal IR images; we propose a pre-processing step which enhances detail in thermal images, making AAM convergence faster and more accurate. To overcome the problem of thermal IR image sensitivity to the exact pattern of facial temperature emissions we describe a representation based on reliable anatomical features. In contrast to previous approaches, our representation is not binary; rather, our method accounts for the reliability of the extracted features. This makes the proposed representation much more robust both to pose and scale changes. The effectiveness of the proposed approach is demonstrated on the largest public database of thermal IR images of faces on which it achieved 100% identification rate, significantly outperforming previously described methods.

【Keywords】:

51. On Power-Law Kernels, Corresponding Reproducing Kernel Hilbert Space and Applications.

Paper Link】 【Pages】:

【Authors】: Debarghya Ghoshdastidar ; Ambedkar Dukkipati

【Abstract】: The role of kernels is central to machine learning. Motivated by the importance of power-law distributions in statistical modeling, in this paper we propose the notion of power-law kernels to investigate power-laws in learning problems. We propose two power-law kernels by generalizing the Gaussian and Laplacian kernels. This generalization is based on distributions arising from the maximization of a generalized information measure known as nonextensive entropy, which is well studied in statistical mechanics. We prove that the proposed kernels are positive definite, and provide some insights regarding the corresponding Reproducing Kernel Hilbert Space (RKHS). We also study the practical significance of both kernels in classification and regression, and present some simulation results.

【Keywords】: Kernels; Tsallis distributions
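The abstract does not reproduce the kernels' functional form, but the nonextensive-entropy generalization it refers to is commonly expressed through the Tsallis q-exponential. A minimal sketch under that assumption (the function names and the exact parameterization are illustrative, not the paper's):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: [1 + (1 - q) x]_+ ** (1 / (1 - q)); plain exp as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def gaussian_kernel(x, y, sigma=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def power_law_kernel(x, y, sigma=1.0, q=1.5):
    """Hypothetical q-Gaussian kernel: heavier (power-law) tails for q > 1."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return q_exp(-d2 / (2.0 * sigma ** 2), q)
```

As q approaches 1 the q-exponential collapses to the ordinary exponential, so the kernel degrades gracefully to the Gaussian; positive definiteness over a range of q is what the paper proves and is not checked here.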

52. Formalizing Hierarchical Clustering as Integer Linear Programming.

Paper Link】 【Pages】:

【Authors】: Sean Gilpin ; Siegfried Nijssen ; Ian N. Davidson

【Abstract】: Hierarchical clustering is typically implemented as a greedy heuristic algorithm with no explicit objective function. In this work we formalize hierarchical clustering as an integer linear programming (ILP) problem with a natural objective function and the dendrogram properties enforced as linear constraints. Though exact solvers exist for ILP, we show that a simple randomized algorithm and a linear programming (LP) relaxation can be used to provide approximate solutions faster. Formalizing hierarchical clustering also has the benefit that relaxing the constraints can produce novel problem variations such as overlapping clusterings. Our experiments show that our formulation is capable of outperforming standard agglomerative clustering algorithms in a variety of settings, including traditional hierarchical clustering as well as learning overlapping clusterings.

【Keywords】: Hierarchical Clustering
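The dendrogram properties such a formulation enforces can be understood through the ultrametric view of hierarchical clustering: pairwise merge heights must satisfy the strong triangle inequality. A small sketch of that validity check (an illustration of the constraint family, not the paper's exact ILP):

```python
from itertools import permutations

def is_ultrametric(M, tol=1e-9):
    """Dendrogram validity check: the matrix of pairwise merge heights must
    satisfy the strong triangle inequality M[i][k] <= max(M[i][j], M[j][k])."""
    n = len(M)
    return all(
        M[i][k] <= max(M[i][j], M[j][k]) + tol
        for i, j, k in permutations(range(n), 3)
    )

# Leaves 0 and 1 merge at height 1; both meet leaf 2 at height 3 -> a valid tree.
valid = [[0, 1, 3], [1, 0, 3], [3, 3, 0]]
# Height 3 for (0,2) but only 2 for (1,2): no consistent dendrogram exists.
invalid = [[0, 1, 3], [1, 0, 2], [3, 2, 0]]
```

In an ILP encoding, each such inequality becomes a linear constraint over merge-height variables; relaxing a subset of them is one way to obtain the overlapping-clustering variants the abstract mentions.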

53. Radial Restraint: A Semantically Clean Approach to Bounded Rationality for Logic Programs.

Paper Link】 【Pages】:

【Authors】: Benjamin Nathan Grosof ; Terrance Swift

【Abstract】: Declarative logic programs (LP) based on the well-founded semantics (WFS) are widely used for knowledge representation (KR). Logical functions are expressively desirable in KR, but when present they make LP inferencing undecidable. In this paper, we present radial restraint: a novel approach to bounded rationality in LP. Radial restraint is parameterized by a norm that measures the syntactic complexity of a term, along with an abstraction function based on that norm. When a term exceeds a bound for the norm, the term is assigned the WFS's third truth-value of undefined. If the norm is finitary, radial restraint guarantees finiteness of models and decidability of inferencing, even when logical functions are present. It further guarantees soundness, even when non-monotonicity is present. We give a fixed-point semantics for radially restrained well-founded models, which soundly approximate well-founded models. We also show how to perform correct inferencing relative to such models, via SLG_ABS, an extension of tabled SLG resolution that uses norm-based abstraction functions. Finally we discuss how SLG_ABS is implemented in the engine of XSB Prolog, and scales to knowledge bases with more than 10^8 rules and facts.

【Keywords】: knowledge representation, declarative logic programs, termination, decidability, finiteness, bounded rationality, model theory, proof theory

54. Convex Subspace Representation Learning from Multi-View Data.

Paper Link】 【Pages】:

【Authors】: Yuhong Guo

【Abstract】: Learning from multi-view data is important in many applications. In this paper, we propose a novel convex subspace representation learning method for unsupervised multi-view clustering. We first formulate the subspace learning with multiple views as a joint optimization problem with a common subspace representation matrix and a group sparsity inducing norm. By exploiting the properties of dual norms, we then show a convex min-max dual formulation with a sparsity inducing trace norm can be obtained. We develop a proximal bundle optimization algorithm to globally solve the min-max optimization problem. Our empirical study shows the proposed subspace representation learning method can effectively facilitate multi-view clustering and produce superior clustering results compared to alternative multi-view clustering methods.

【Keywords】: Multi-view Learning; Convex Optimization; Subspace Representation Learning

55. Reduce and Re-Lift: Bootstrapped Lifted Likelihood Maximization for MAP.

Paper Link】 【Pages】:

【Authors】: Fabian Hadiji ; Kristian Kersting

【Abstract】: By handling whole sets of indistinguishable objects together, lifted belief propagation approaches have rendered large, previously intractable, probabilistic inference problems quickly solvable. In this paper, we show that Kumar and Zilberstein's likelihood maximization (LM) approach to MAP inference is liftable, too, and actually provides additional structure for optimization. Specifically, it has been recognized that some pseudo marginals may converge quickly, turning intuitively into pseudo evidence. This additional evidence typically changes the structure of the lifted network: it may expand or reduce it. The current lifted network, however, can be viewed as an upper bound on the size of the lifted network required to finish likelihood maximization. Consequently, we re-lift the network only if the pseudo evidence yields a reduced network, which can efficiently be computed on the current lifted network. Our experimental results on Ising models, image segmentation and relational entity resolution demonstrate that this bootstrapped LM via "reduce and re-lift" finds MAP assignments comparable to those found by the original LM approach, but in a fraction of the time.

【Keywords】: MAP inference, Likelihood Maximization, Lifted Inference

56. Search More, Disclose Less.

Paper Link】 【Pages】:

【Authors】: Chen Hajaj ; Noam Hazon ; David Sarne ; Avshalom Elmalech

【Abstract】: The blooming of comparison shopping agents (CSAs) in recent years enables buyers in today's markets to query more than a single CSA while shopping, thus substantially expanding the list of sellers whose prices they obtain. From the individual CSA point of view, however, multi-CSA querying is definitely non-favorable, as the benefit of most of today's CSAs depends on payments they receive from sellers upon transferring buyers to their websites (and making a purchase). The most straightforward way for a CSA to improve its competence is to spend more resources on obtaining more sellers' prices, potentially resulting in a more attractive "best price". In this paper we suggest a complementary approach that improves the attractiveness of the best price returned to the buyer without having to extend the CSA's price database. This approach, which we term "selective price disclosure", relies on removing some of the prices known to the CSA from the list of results returned to the buyer. The advantage of this approach is in the ability to affect the buyer's beliefs regarding the probability of obtaining more attractive prices if querying additional CSAs. The paper presents two methods for choosing the subset of prices to be presented to a fully-rational buyer, attempting to overcome the computational complexity associated with evaluating all possible subsets. The effectiveness and efficiency of the methods are demonstrated using real data, collected from five CSAs for four products. Furthermore, since people are known to have an inherently bounded rationality, the two methods are also evaluated with human buyers, demonstrating that selective price disclosure can be highly effective with people; however, the subset of prices to be used should then be extracted in a different (and more simplistic) manner.

【Keywords】: comparison shopping agents, information disclosure, experimentation

57. External Memory Best-First Search for Multiple Sequence Alignment.

Paper Link】 【Pages】:

【Authors】: Matthew Hatem ; Wheeler Ruml

【Abstract】: Multiple sequence alignment (MSA) is a central problem in computational biology. It is well known that MSA can be formulated as a shortest path problem and solved using heuristic search, but the memory requirement of A* makes it impractical for all but the smallest problems. Partial Expansion A* (PEA*) reduces the space complexity of A* by generating only the most promising successor nodes. However, even PEA* exhausts available memory on many problems. Another alternative is Iterative Deepening Dynamic Programming, which uses an uninformed search order but stores only the nodes along the search frontier. However, it too cannot scale to the largest problems. In this paper, we propose storing nodes on cheap and plentiful secondary storage. We present a new general-purpose algorithm, Parallel External PEA* (PE2A*), that combines PEA* with Delayed Duplicate Detection to take advantage of external memory and multiple processors to solve large MSA problems. In our experiments, PE2A* is the first algorithm capable of solving the entire Reference Set 1 of the standard BAliBASE benchmark using a biologically accurate cost function. This work suggests that external best-first search can effectively use heuristic information to surpass methods that rely on uninformed search orders.

【Keywords】: heuristic search; multiple sequence alignment; distributed; parallel; external memory;
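As a rough illustration of the Partial Expansion A* idea this entry builds on (not the external, parallel algorithm itself), the following sketch admits only successors whose f-value is within a threshold of the node's stored F, and otherwise re-inserts the node with a raised F; all names and the grid domain are illustrative:

```python
import heapq

def pea_star(start, goal, neighbors, h, c=0):
    """Partial Expansion A* (sketch): on expansion, only successors with
    f <= F(n) + c enter the open list; the node itself is re-inserted with
    F raised to the smallest withheld f-value, keeping the open list small."""
    g = {start: 0}
    open_heap = [(h(start), start)]          # entries are (F, node)
    while open_heap:
        F, n = heapq.heappop(open_heap)
        if n == goal:
            return g[n]
        withheld = []
        for m, cost in neighbors(n):
            gm = g[n] + cost
            if m in g and g[m] <= gm:        # no improvement over known path
                continue
            f = gm + h(m)
            if f <= F + c:
                g[m] = gm
                heapq.heappush(open_heap, (f, m))
            else:
                withheld.append(f)
        if withheld:                         # node stays open, with raised F
            heapq.heappush(open_heap, (min(withheld), n))
    return None

def grid_neighbors(walls, width, height):
    """Unit-cost 4-connected grid used only to exercise the sketch."""
    def nbrs(p):
        x, y = p
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (x + dx, y + dy)
            if 0 <= q[0] < width and 0 <= q[1] < height and q not in walls:
                yield q, 1
    return nbrs
```

With a consistent heuristic this returns the same cost as plain A*; the paper's contribution is making this scheme work out of core and in parallel via Delayed Duplicate Detection.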

58. Gradient Networks: Explicit Shape Matching Without Extracting Edges.

Paper Link】 【Pages】:

【Authors】: Edward Hsiao ; Martial Hebert

【Abstract】: We present a novel framework for shape-based template matching in images. While previous approaches required brittle contour extraction, considered only local information, or used coarse statistics, we propose to match the shape explicitly on low-level gradients by formulating the problem as traversing paths in a gradient network. We evaluate our algorithm on a challenging dataset of objects in cluttered environments and demonstrate significant improvement over state-of-the-art methods for shape matching and object detection.

【Keywords】: gradient networks; shape matching; object detection; edges; gradients

59. Robust Discrete Matrix Completion.

Paper Link】 【Pages】:

【Authors】: Jin Huang ; Feiping Nie ; Heng Huang

【Abstract】: Most existing matrix completion methods seek the matrix's global structure in the real number domain and produce predictions that are inappropriate for applications retaining discrete structure, where an additional step is required to post-process prediction results with either heuristic threshold parameters or complicated mappings. Such an ad-hoc process is inefficient and impractical. In this paper, we propose a novel robust discrete matrix completion algorithm that produces predictions from a collection of user-specified discrete values by introducing a new discrete constraint to the matrix completion model. Our method achieves high prediction accuracy, very close to the best values obtained by competitive methods with tuned thresholds. We solve the difficult integer programming problem by incorporating the augmented Lagrangian method in an elegant way, which greatly accelerates the convergence of our method and provides asymptotic convergence guarantees in theory. The proposed discrete matrix completion model is applied to solve three real-world applications, and all empirical results demonstrate the effectiveness of our method.

【Keywords】: Discrete Matrix Completion; Social Network Link Prediction; SNP Missing Value Inference; Protein Interaction Prediction

60. Spectral Rotation versus K-Means in Spectral Clustering.

Paper Link】 【Pages】:

【Authors】: Jin Huang ; Feiping Nie ; Heng Huang

【Abstract】: Spectral clustering has been a popular data clustering algorithm. This category of approaches often resorts to other clustering methods, such as K-Means, to obtain the final clusters. The potential flaw of such common practice is that the obtained relaxed continuous spectral solution could severely deviate from the true discrete solution. In this paper, we propose to impose an additional orthonormal constraint to better approximate the optimal continuous solution to the graph cut objective functions. Such a method, called spectral rotation in the literature, optimizes the spectral clustering objective functions better than K-Means, and improves the clustering accuracy. We provide an efficient algorithm to solve the new problem rigorously, which is not significantly more costly than K-Means. We also establish the connection between our method and K-Means to provide theoretical motivation for our method. Experimental results show that our algorithm consistently achieves better cuts and outperforms classic spectral clustering methods on clustering metrics.

【Keywords】: Spectral Clustering; Spectral Rotation

61. Supervised and Projected Sparse Coding for Image Classification.

Paper Link】 【Pages】:

【Authors】: Jin Huang ; Feiping Nie ; Heng Huang ; Chris H. Q. Ding

【Abstract】: The classic sparse representation for classification (SRC) method fails to incorporate the label information of training images, and meanwhile has poor scalability due to the expensive computation of the l_1 norm. In this paper, we propose a novel subspace sparse coding method that utilizes label information to effectively classify images in the subspace. Our new approach unifies the tasks of dimension reduction and supervised sparse vector learning, by simultaneously preserving the sparse structure of the data and seeking the optimal projection direction in the training stage, thereby accelerating the classification process in the test stage. Our method achieves both flat and structured sparsity for the vector representations, making our framework more discriminative during subspace learning and the subsequent classification. The empirical results on 4 benchmark data sets demonstrate the effectiveness of our method.

【Keywords】: Supervised Sparse Representation; Projected Sparse Representation; Sparse Learning

62. Unsupervised Cluster Matching via Probabilistic Latent Variable Models.

Paper Link】 【Pages】:

【Authors】: Tomoharu Iwata ; Tsutomu Hirao ; Naonori Ueda

【Abstract】: We propose a probabilistic latent variable model for unsupervised cluster matching, which is the task of finding correspondences between clusters of objects in different domains. Existing object matching methods find one-to-one matching. The proposed model finds many-to-many matching, and can handle multiple domains with different numbers of objects. The proposed model assumes that there are an infinite number of latent vectors that are shared by all domains, and that each object is generated using one of the latent vectors and a domain-specific linear projection. By inferring a latent vector to be used for generating each object, objects in different domains are clustered in shared groups, and thus we can find matching between clusters in an unsupervised manner. We present efficient inference procedures for the proposed model based on a stochastic EM algorithm. The effectiveness of the proposed model is demonstrated with experiments using synthetic and real data sets.

【Keywords】: Clustering; Object matching; Unsupervised learning

63. Strategic Behavior when Allocating Indivisible Goods Sequentially.

Paper Link】 【Pages】:

【Authors】: Thomas Kalinowski ; Nina Narodytska ; Toby Walsh ; Lirong Xia

【Abstract】: We study a simple sequential allocation mechanism for allocating indivisible goods between agents in which agents take turns to pick items. We focus on agents behaving strategically. We view the allocation procedure as a finite repeated game with perfect information. We show that with just two agents, we can compute the unique subgame perfect Nash equilibrium in linear time. With more agents, computing the subgame perfect Nash equilibria is more difficult. There can be an exponential number of equilibria and computing even one of them is PSPACE-hard. We identify a special case, when agents value many of the items identically, where we can efficiently compute the subgame perfect Nash equilibria. We also consider the effect of externalities and modifications to the mechanism that make it strategy proof.

【Keywords】: fair division; indivisible goods; Nash equilibrium
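For intuition, the subgame perfect equilibrium of the picking game can be computed by brute-force backward induction (exponential in general, which is why the paper's linear-time two-agent result matters); this sketch assumes additive utilities and as many turns as items:

```python
from functools import lru_cache

def spne(utils, turn_order):
    """Subgame perfect equilibrium of sequential picking, by brute-force
    backward induction: at each turn the mover takes the item maximizing
    her final utility, assuming optimal play afterwards.  Assumes additive
    utilities and len(turn_order) == number of items."""
    n_agents, n_items = len(utils), len(utils[0])

    @lru_cache(maxsize=None)
    def solve(t, remaining):
        if t == len(turn_order):
            return (0,) * n_agents, ()
        mover = turn_order[t]
        best = None
        for i in range(n_items):
            if remaining & (1 << i):          # item i still available
                vals, picks = solve(t + 1, remaining & ~(1 << i))
                vals = tuple(v + (utils[mover][i] if a == mover else 0)
                             for a, v in enumerate(vals))
                if best is None or vals[mover] > best[0][mover]:
                    best = (vals, (i,) + picks)
        return best

    return solve(0, (1 << n_items) - 1)
```

With utilities [[3, 2, 1], [1, 3, 2]] and turn order (0, 1, 0), sincere picking gives agent 0 only utility 4, but in equilibrium she first takes item 1 (denying it to agent 1) and ends with 5.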

64. On the Subexponential Time Complexity of CSP.

Paper Link】 【Pages】:

【Authors】: Iyad A. Kanj ; Stefan Szeider

【Abstract】: A Constraint Satisfaction Problem (CSP) with n variables ranging over a domain of d values can be solved by brute-force in d^n steps (omitting a polynomial factor). With a more careful approach, this trivial upper bound can be improved for certain natural restrictions of the CSP. In this paper we establish theoretical limits to such improvements, and draw a detailed landscape of the subexponential-time complexity of CSP. We first establish relations between the subexponential-time complexity of CSP and that of other problems, including CNF-Sat. We exploit this connection to provide tight characterizations of the subexponential-time complexity of CSP under common assumptions in complexity theory. For several natural CSP parameters, we obtain threshold functions that precisely dictate the subexponential-time complexity of CSP with respect to the parameters under consideration. Our analysis provides fundamental results indicating whether and when one can significantly improve on the brute-force search approach for solving CSP.

【Keywords】: Constraint Satisfaction; Subexponential Time; Exponential Time Hypothesis; Treewidth

65. Improving the Performance of Consistency Algorithms by Localizing and Bolstering Propagation in a Tree Decomposition.

Paper Link】 【Pages】:

【Authors】: Shant Karakashian ; Robert J. Woodward ; Berthe Y. Choueiry

【Abstract】: The tractability of a Constraint Satisfaction Problem (CSP) is guaranteed by a direct relationship between its consistency level and a structural parameter of its constraint network such as the treewidth. This result is not widely exploited in practice because enforcing higher-level consistencies can be costly and can change the structure of the constraint network and increase its width. Recently, R(*,m)C was proposed as a relational consistency property that does not modify the structure of the graph and, thus, does not affect its width. In this paper, we explore two main strategies, based on a tree decomposition of the CSP, for improving the performance of enforcing R(*,m)C and getting closer to the above tractability condition. Those strategies are: a) localizing the application of the consistency algorithm to the clusters of the tree decomposition, and b) bolstering constraint propagation between clusters by adding redundant constraints at their separators, for which we propose three new schemes. We characterize the resulting consistency properties by comparing them, theoretically and empirically, to the original R(*,m)C and the popular GAC and maxRPWC, and establish the benefits of our approach for solving difficult problems.

【Keywords】: Constraint satisfaction; CSP; consistency property; constraint propagation; tree decomposition; relational consistency

66. Data-Parallel Computing Meets STRIPS.

Paper Link】 【Pages】:

【Authors】: Erez Karpas ; Tomer Sagi ; Carmel Domshlak ; Avigdor Gal ; Avi Mendelson ; Moshe Tennenholtz

【Abstract】: The increased demand for distributed computations on “big data” has led to solutions such as SCOPE, DryadLINQ, Pig, and Hive, which allow the user to specify queries in an SQL-like language, enriched with sets of user-defined operators. The lack of exact semantics for user-defined operators interferes with the query optimization process, thus putting the burden of suggesting, at least partial, query plans on the user. In an attempt to ease this burden, we propose a formal model that allows for data-parallel program synthesis (DPPS) in a semantically well-defined manner. We show that this model generalizes existing frameworks for data-parallel computation, while providing the flexibility of query plan generation that is currently absent from these frameworks. In particular, we show how existing, off-the-shelf, AI planning tools can be used for solving DPPS tasks.

【Keywords】: planning; query optimization; data-parallel computing

67. Resolution and Parallelizability: Barriers to the Efficient Parallelization of SAT Solvers.

Paper Link】 【Pages】:

【Authors】: George Katsirelos ; Ashish Sabharwal ; Horst Samulowitz ; Laurent Simon

【Abstract】: Recent attempts to create versions of Satisfiability (SAT) solvers that exploit parallel hardware and information sharing have met with limited success. In fact, the most successful parallel solvers in recent competitions were based on portfolio approaches with little to no exchange of information between processors. This experience contradicts the apparent parallelizability of exploring a combinatorial search space. We present evidence that this discrepancy can be explained by studying SAT solvers through a proof complexity lens, as resolution refutation engines. Starting with the observation that a recently studied measure of resolution proofs, namely depth, provides a (weak) upper bound to the best possible speedup achievable by such solvers, we empirically show the existence of bottlenecks to parallelizability that resolution proofs typically generated by SAT solvers exhibit. Further, we propose a new measure of parallelizability based on the best-case makespan of an offline resource-constrained scheduling problem. This measure explicitly accounts for a bounded number of parallel processors and appears to empirically correlate with parallel speedups observed in practice. Our findings suggest that efficient parallelization of SAT solvers is not simply a matter of designing the right clause sharing heuristics; even in the best case, it can be --- and indeed is --- hindered by the structure of the resolution proofs current SAT solvers typically produce.

【Keywords】: Satisfiability; Parallelization; CDCL Solvers; Proof Complexity; Clause Learning; Resolution Depth

68. Red-Black Relaxed Plan Heuristics.

Paper Link】 【Pages】:

【Authors】: Michael Katz ; Jörg Hoffmann ; Carmel Domshlak

【Abstract】: Despite its success, the delete relaxation has significant pitfalls. Recent work has devised the red-black planning framework, where red variables take the relaxed semantics (accumulating their values), while black variables take the regular semantics. Provided the red variables are chosen so that red-black plan generation is tractable, one can generate such a plan for every search state, and take its length as the heuristic distance estimate. Previous results were not suitable for this purpose because they identified tractable fragments for red-black plan existence, as opposed to red-black plan generation. We identify a new fragment of red-black planning, that fixes this issue. We devise machinery to efficiently generate red-black plans, and to automatically select the red variables. Experiments show that the resulting heuristics can significantly improve over standard delete relaxation heuristics.

【Keywords】: Classical Planning; Heuristic Search; Delete Relaxation; Red-Black Planning

69. Incremental Learning Framework for Indoor Scene Recognition.

Paper Link】 【Pages】:

【Authors】: Aram Kawewong ; Rapeeporn Pimup ; Osamu Hasegawa

【Abstract】: This paper presents a novel framework for online incremental place recognition in an indoor environment. The framework addresses the scenario in which scene images are gradually obtained during long-term operation in the real-world indoor environment. Multiple users may interact with the classification system and confirm either current or past prediction results; the system then immediately updates itself to improve the classification system. This framework is based on the proposed n-value self-organizing and incremental neural network (n-SOINN), which has been derived by modifying the original SOINN to be appropriate for use in scene recognition. The evaluation was performed on the standard MIT 67-category indoor scene dataset and shows that the proposed framework achieves the same accuracy as that of the state-of-the-art offline method, while the computation time of the proposed framework is significantly faster and fully incremental update is allowed. Additionally, a small extra set of training samples is incrementally given to the system to simulate the incremental learning situation. The result shows that the proposed framework can leverage such additional samples and achieve the state-of-the-art result.

【Keywords】: scene recognition; incremental learning; human robot interaction

70. A Fast Pairwise Heuristic for Planning under Uncertainty.

Paper Link】 【Pages】:

【Authors】: Koosha Khalvati ; Alan K. Mackworth

【Abstract】: POMDP (Partially Observable Markov Decision Process) is a mathematical framework that models planning under uncertainty. Solving a POMDP is an intractable problem and even the state of the art POMDP solvers are too computationally expensive for large domains. This is a major bottleneck. In this paper, we propose a new heuristic, called the pairwise heuristic, that can be used in a one-step greedy strategy to find a near optimal solution for POMDP problems very quickly. This approach is a good candidate for large problems where real-time solution is a necessity but exact optimality of the solution is not vital. The pairwise heuristic uses the optimal solutions for pairs of states. For each pair of states in the POMDP, we find the optimal sequence of actions to resolve the uncertainty and to maximize the reward, given that the agent is uncertain about which state of the pair it is in. Then we use these sequences as a heuristic and find the optimal action in each step of the greedy strategy using this heuristic. We have tested our method on the available large classical test benchmarks in various domains. The resulting total reward is close to, if not greater than, the total reward obtained by other state of the art POMDP solvers, while the time required to find the solution is always much less.

【Keywords】: Decision Making, Planning under Uncertainty

71. Joint Extraction and Labeling via Graph Propagation for Dictionary Construction.

Paper Link】 【Pages】:

【Authors】: Doo Soon Kim ; Kunal Verma ; Peter Z. Yeh

【Abstract】: In this paper, we present an approach that jointly infers the boundaries of tokens and their labels to construct dictionaries for Information Extraction. Our approach for joint-inference is based on graph propagation, and extends it in two novel ways. First, we extend the graph representation to capture ambiguities that occur during the token extraction phase. Second, we modify the labeling phase (i.e., label propagation) to utilize this new representation, allowing evidence from labeling to be used for token extraction. Our evaluation shows these extensions (and hence our approach) significantly improve the performance of the outcome dictionaries over pipeline-based approaches by preventing aggressive commitment. Our evaluation also shows that our extensions over a base graph-propagation framework improve the precision without hurting the recall.

【Keywords】: Information Extraction; Joint Inference; Graph Label Propagation

72. Walking on Minimax Paths for k-NN Search.

Paper Link】 【Pages】:

【Authors】: Kye-Hyeon Kim ; Seungjin Choi

【Abstract】: Link-based dissimilarity measures, such as shortest path or Euclidean commute time distance, base their distance on paths between nodes of a weighted graph. These measures are known to be better suited than Euclidean distance to data manifolds with nonconvex-shaped clusters, so that k-nearest neighbor (NN) search is improved in such metric spaces. In this paper we present a new link-based dissimilarity measure based on minimax paths between nodes. Two main benefits of the minimax path-based dissimilarity measure are: (1) only a subset of paths is considered, making it scalable, while Euclidean commute time distance considers all possible paths; (2) it better captures nonconvex-shaped cluster structure, compared to shortest path distance. We define the total cost assigned to a path between nodes as the L_p norm of the intermediate costs of the edges along the path, showing that the minimax path emerges from our L_p-norm-over-paths framework. We also define the minimax distance as the intermediate cost of the longest edge on the minimax path, and then present a greedy algorithm to compute the k smallest minimax distances between a query and N data points in O(log N + k log k) time. Numerical experiments demonstrate that our minimax k-NN algorithm reduces the search time by several orders of magnitude, compared to existing methods, while the quality of k-NN search is significantly improved over Euclidean distance.

【Keywords】: k-Nearest Neighbor Search; Link-based Dissimilarity; Minimax Distance; Minimax Path
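A well-known fact underlying minimax (bottleneck) distances is that they can be read off a minimum spanning tree: the minimax distance between two nodes equals the largest edge on their MST path. A small sketch of that computation (Kruskal plus a tree walk; background machinery, not the paper's O(log N + k log k) algorithm):

```python
def minimax_distances(n, edges, queries):
    """Minimax (bottleneck) distance between node pairs: the smallest, over
    all paths, of the largest edge weight on the path.  It equals the largest
    edge on the minimum-spanning-tree path, computed via Kruskal + union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    tree = {i: [] for i in range(n)}
    for w, u, v in sorted((w, u, v) for u, v, w in edges):
        ru, rv = find(u), find(v)
        if ru != rv:                         # edge joins two components: keep it
            parent[ru] = rv
            tree[u].append((v, w))
            tree[v].append((u, w))
    def bottleneck(s, t):
        stack = [(s, -1, 0)]                 # (node, parent, max edge so far)
        while stack:
            node, par, mx = stack.pop()
            if node == t:
                return mx
            for nxt, w in tree[node]:
                if nxt != par:
                    stack.append((nxt, node, max(mx, w)))
        return float("inf")
    return [bottleneck(s, t) for s, t in queries]
```

For example, on a chain 0-1-2 of weight-1 edges plus edges (2,3) of weight 10 and (0,3) of weight 9, the minimax distance from 0 to 3 is 9 (the direct edge beats the chain through the weight-10 edge).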

73. A Hierarchical Aspect-Sentiment Model for Online Reviews.

Paper Link】 【Pages】:

【Authors】: Suin Kim ; Jianwen Zhang ; Zheng Chen ; Alice H. Oh ; Shixia Liu

【Abstract】: To help users quickly understand the major opinions from massive online reviews, it is important to automatically reveal the latent structure of the aspects, sentiment polarities, and the association between them. However, there is little work available to do this effectively. In this paper, we propose a hierarchical aspect sentiment model (HASM) to discover a hierarchical structure of aspect-based sentiments from unlabeled online reviews. In HASM, the whole structure is a tree. Each node itself is a two-level tree, whose root represents an aspect and the children represent the sentiment polarities associated with it. Each aspect or sentiment polarity is modeled as a distribution of words. To automatically extract both the structure and parameters of the tree, we use a Bayesian nonparametric model, recursive Chinese Restaurant Process (rCRP), as the prior and jointly infer the aspect-sentiment tree from the review texts. Experiments on two real datasets show that our model is comparable to two other hierarchical topic models in terms of quantitative measures of topic trees. It is also shown that our model achieves better sentence-level classification accuracy than previously proposed aspect-sentiment joint models.

【Keywords】: Sentiment Analysis; Aspect-Sentiment Tree; Hierarchical Model; Bayesian Nonparametric Model

74. Answering Counting Aggregate Queries over Ontologies of the DL-Lite Family.

Paper Link】 【Pages】:

【Authors】: Egor V. Kostylev ; Juan L. Reutter

【Abstract】: One of the main applications of description logics is the ontology-based data access model, which requires algorithms for query answering over ontologies. In fact, some description logics, like those in the DL-Lite family, are designed so that simple queries, such as conjunctive queries, are efficiently computable. In this paper we study counting aggregate queries over ontologies, i.e. queries which use aggregate functions COUNT and COUNT DISTINCT. We propose an intuitive semantics for certain answers for these queries, which conforms to the open world assumption. We compare our semantics with other approaches that have been proposed in different contexts. We establish data and combined computational complexity for the problems of answering counting aggregate queries over ontologies for several variants of DL-Lite.

【Keywords】: Query Answering; Description Logics

75. Generating Natural-Language Video Descriptions Using Text-Mined Knowledge.

Paper Link】 【Pages】:

【Authors】: Niveda Krishnamoorthy ; Girish Malkarnenkar ; Raymond J. Mooney ; Kate Saenko ; Sergio Guadarrama

【Abstract】: We present a holistic data-driven technique that generates natural-language descriptions for videos. We combine the output of state-of-the-art object and activity detectors with "real-world" knowledge to select the most probable subject-verb-object triplet for describing a video. We show that this knowledge, automatically mined from web-scale text corpora, enhances the triplet selection algorithm by providing it contextual information and leads to a four-fold increase in activity identification. Unlike previous methods, our approach can annotate arbitrary videos without requiring the expensive collection and annotation of a similar training video corpus. We evaluate our technique against a baseline that does not use text-mined knowledge and show that humans prefer our descriptions 61% of the time.

【Keywords】: video description; text mining; grounding;
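
The triplet-selection step can be pictured as a log-linear reranking. This is a hypothetical sketch (the detector confidences, co-occurrence priors, and weighting are all invented, not the paper's model):

```python
import math

def score_triplet(vision_conf, lm_prior, alpha=0.5):
    """Log-linear combination of a visual confidence and a text-mined prior."""
    return alpha * math.log(vision_conf) + (1 - alpha) * math.log(lm_prior)

# Candidate (subject, verb, object) triplets mapped to
# (detector confidence, web-mined co-occurrence prior); values made up.
candidates = {
    ("person", "ride", "bicycle"): (0.6, 0.020),
    ("person", "lift", "bicycle"): (0.7, 0.001),
}
best = max(candidates, key=lambda t: score_triplet(*candidates[t]))
```

The point of the text-mined prior is visible here: the visually stronger but implausible "lift" triplet loses to the plausible "ride" triplet.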

76. Simple Temporal Problems with Taboo Regions.

【Authors】: T. K. Satish Kumar ; Marcello Cirillo ; Sven Koenig

【Abstract】: In this paper, we define and study the general framework of Simple Temporal Problems with Taboo regions (STPTs) and show how these problems capture metric temporal reasoning aspects which are common to many real-world applications. STPTs encode simple temporal constraints between events and user-defined taboo regions on the timeline, during which no event is allowed to take place. We discuss two different variants of STPTs. The first one deals with (instantaneous) events, while the second one allows for (durative) processes. We also provide polynomial-time algorithms for solving them. If not all events or processes can be scheduled outside the taboo regions, one needs to define and reason about "soft" STPTs. We show that even "soft" STPTs can be solved in polynomial time, using reductions to max-flow problems. The resulting algorithms allow for incremental computations, which is important for the successful application of our approach in real-time domains.

【Keywords】: temporal reasoning; algorithms and complexity
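
The core scheduling constraint, keeping events out of taboo regions, can be sketched as a small helper (a minimal sketch assuming sorted, disjoint intervals; the paper's polynomial-time algorithms and max-flow reduction are not reproduced here):

```python
def earliest_allowed(t, taboos):
    """Shift time t to the earliest allowed instant at or after it.

    taboos: half-open [a, b) intervals, sorted and pairwise disjoint.
    """
    for a, b in taboos:
        if a <= t < b:   # t falls inside a taboo region: jump past it
            t = b
    return t

taboos = [(2, 4), (6, 9)]
```

Because the intervals are scanned in order, a jump that lands inside a later taboo region is handled by the subsequent iterations.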

77. How to Cut a Cake Before the Party Ends.

【Authors】: David Kurokawa ; John K. Lai ; Ariel D. Procaccia

【Abstract】: For decades researchers have struggled with the problem of envy-free cake cutting: how to divide a divisible good between multiple agents so that each agent likes his own allocation best. Although an envy-free cake cutting protocol was ultimately devised, it is unbounded, in the sense that the number of operations can be arbitrarily large, depending on the preferences of the agents. We ask whether bounded protocols exist when the agents' preferences are restricted. Our main result is an envy-free cake cutting protocol for agents with piecewise linear valuations, which requires a number of operations that is polynomial in natural parameters of the given instance.

【Keywords】: Cake cutting; Fair division
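
The envy-freeness condition itself is easy to state in code. A minimal sketch, using piecewise-constant (rather than piecewise-linear) valuations for brevity; the valuations and allocation are invented:

```python
def value(density, intervals):
    """Integrate a piecewise-constant density over a set of intervals."""
    total = 0.0
    for lo, hi in intervals:
        for a, b, h in density:            # density piece [a, b) of height h
            overlap = max(0.0, min(hi, b) - max(lo, a))
            total += h * overlap
    return total

def is_envy_free(densities, allocation):
    """Every agent values its own pieces at least as much as anyone else's."""
    n = len(densities)
    return all(
        value(densities[i], allocation[i]) >= value(densities[i], allocation[j])
        for i in range(n) for j in range(n)
    )

# Two agents on the cake [0, 1]: agent 0 only values the left half.
densities = [[(0.0, 0.5, 2.0)], [(0.0, 1.0, 1.0)]]
allocation = [[(0.0, 0.5)], [(0.5, 1.0)]]
```

The hard part the paper addresses is *finding* such an allocation with a bounded number of queries; checking one, as above, is straightforward.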

78. Composition Games for Distributed Systems: The EU Grant Games.

【Authors】: Shay Kutten ; Ron Lavi ; Amitabh Trehan

【Abstract】: We analyze ways by which people decompose into groups in distributed systems. We are interested in systems in which an agent can increase its utility by connecting to other agents, but must also pay a cost that increases with the size of the system. The right balance is achieved by the right size group of agents. We formulate and analyze three intuitive and realistic games and show how simple changes in the protocol can drastically improve the price of anarchy of these games. In particular, we identify two important properties for a low price of anarchy: agreement in joining the system, and the possibility of appealing a rejection from a system. We show that the latter property is especially important if there are some pre-existing constraints regarding who may collaborate (or communicate) with whom.

【Keywords】: Game Theory, Grant Schemes, Group Decomposition

79. Structured Kernel-Based Reinforcement Learning.

【Authors】: Branislav Kveton ; Georgios Theocharous

【Abstract】: Kernel-based reinforcement learning (KBRL) is a popular approach to learning non-parametric value function approximations. In this paper, we present structured KBRL, a paradigm for kernel-based RL that allows for modeling independencies in the transition and reward models of problems. Real-world problems often exhibit this structure and can be solved more efficiently when it is modeled. We make three contributions. First, we motivate our work, define a structured backup operator, and prove that it is a contraction. Second, we show how to evaluate our operator efficiently. Our analysis reveals that the fixed point of the operator is the optimal value function in a special factored MDP. Finally, we evaluate our method on a synthetic problem and compare it to two KBRL baselines. In most experiments, we learn better policies than the baselines from an order of magnitude less training data.

【Keywords】: Reinforcement learning, kernels, Markov decision processes

80. Extending STR to a Higher-Order Consistency.

【Authors】: Christophe Lecoutre ; Anastasia Paparrizou ; Kostas Stergiou

【Abstract】: One of the most widely studied classes of constraints in constraint programming (CP) is that of table constraints. Numerous specialized filtering algorithms, enforcing the well-known property called generalized arc consistency (GAC), have been developed for such constraints. Among the most successful GAC algorithms for table constraints, we find variants of simple tabular reduction (STR), like STR2. In this paper, we propose an extension of STR-based algorithms that achieves full pairwise consistency (FPWC), a consistency stronger than GAC and max restricted pairwise consistency (maxRPWC). Our approach involves counting the number of occurrences of specific combinations of values in constraint intersections. Importantly, the worst-case time complexity of one call to the basic filtering procedure at the heart of our new algorithm is quite close to that of STR algorithms. Experiments demonstrate that our method can outperform STR2 in many classes of problems, being significantly faster in some cases. Also, it is clearly superior to maxRPWC+, an algorithm that has been recently proposed.

【Keywords】: Constraint Propagation; Strong Local Consistencies; Table Constraints
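
For background, the simple tabular reduction idea that STR-style algorithms build on can be sketched as a single filtering pass (a plain GAC sketch, not the FPWC extension proposed in the paper; the domains and table are invented):

```python
def str_filter(domains, table):
    """One STR-style pass over a table constraint.

    domains: list of sets, one per variable; table: list of allowed tuples.
    Drops rows containing a removed value, then shrinks each domain to the
    values that still appear in some surviving row.
    """
    alive = [row for row in table
             if all(v in domains[i] for i, v in enumerate(row))]
    new_domains = [set(row[i] for row in alive) for i in range(len(domains))]
    return new_domains, alive

domains = [{1, 2, 3}, {1, 2}]
table = [(1, 1), (2, 2), (3, 3)]
```

In a real solver this pass is run to fixpoint across constraints and made incremental; the paper's contribution is strengthening what each pass enforces.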

81. m-Transportability: Transportability of a Causal Effect from Multiple Environments.

【Authors】: Sanghack Lee ; Vasant Honavar

【Abstract】: We study m-transportability, a generalization of transportability, which offers a license to use causal information elicited from experiments and observations in m ≥ 1 source environments to estimate a causal effect in a given target environment. We provide a novel characterization of m-transportability that directly exploits the completeness of do-calculus to obtain the necessary and sufficient conditions for m-transportability. We provide an algorithm for deciding m-transportability that determines whether a causal relation is m-transportable and, if it is, produces a transport formula, that is, a recipe for estimating the desired causal effect by combining experimental information from the m source environments with observational information from the target environment.

【Keywords】: causal relations; transfer knowledge; domain adaptation; transportability

82. Solving Security Games on Graphs via Marginal Probabilities.

【Authors】: Joshua Letchford ; Vincent Conitzer

【Abstract】: Security games involving the allocation of multiple security resources to defend multiple targets generally have an exponential number of pure strategies for the defender. One method that has been successful in addressing this computational issue is to instead directly compute the marginal probabilities with which the individual resources are assigned (first pursued by Kiekintveld et al. (2009)). However, in sufficiently general settings, there exist games where these marginal solutions are not implementable, that is, they do not correspond to any mixed strategy of the defender. In this paper, we examine security games where the defender tries to monitor the vertices of a graph, and we show how the type of graph, the type of schedules, and the type of defender resources affect the applicability of this approach. In some settings, we show the approach is applicable and give a polynomial-time algorithm for computing an optimal defender strategy; in other settings, we give counterexample games that demonstrate that the approach does not work, and prove NP-hardness results for computing an optimal defender strategy.

【Keywords】: Game Theory; Stackelberg game
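
The easy direction of the marginal view, from a mixed strategy to per-target coverage probabilities, is a one-liner (a hedged sketch with an invented strategy; the paper's subtle question is the converse, whether given marginals are implementable by any mixed strategy):

```python
def marginals(mixed_strategy, targets):
    """Per-target coverage probability induced by a mixed strategy.

    mixed_strategy: list of (probability, set-of-covered-targets) pairs.
    """
    return {t: sum(p for p, cover in mixed_strategy if t in cover)
            for t in targets}

# One resource, three targets: each pure strategy covers one target.
strategy = [(0.5, {"a"}), (0.3, {"b"}), (0.2, {"c"})]
m = marginals(strategy, ["a", "b", "c"])
```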

83. Story Generation with Crowdsourced Plot Graphs.

【Authors】: Boyang Li ; Stephen Lee-Urban ; George Johnston ; Mark Riedl

【Abstract】: Story generation is the problem of automatically selecting a sequence of events that meet a set of criteria and can be told as a story. Story generation is knowledge-intensive; traditional story generators rely on a priori defined domain models about fictional worlds, including characters, places, and actions that can be performed. Manually authoring the domain models is costly and thus not scalable. We present a novel class of story generation system that can generate stories in an unknown domain. Our system (a) automatically learns a domain model by crowdsourcing a corpus of narrative examples and (b) generates stories by sampling from the space defined by the domain model. A large-scale evaluation shows that stories generated by our system for a previously unknown topic are comparable in quality to simple stories authored by untrained humans.

【Keywords】: Story Generation; Crowdsourcing
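
Sampling a story from a plot graph amounts to drawing a random linear extension of the precedence DAG. A minimal sketch under that reading (event names invented; the learned mutual-exclusion and optional-event structure is omitted):

```python
import random

def sample_story(events, before):
    """Sample a random linear extension of a precedence DAG.

    before: set of (x, y) pairs meaning event x must precede event y.
    """
    story, remaining = [], set(events)
    while remaining:
        # Events whose predecessors have all already occurred.
        ready = [e for e in remaining
                 if all(x not in remaining for (x, y) in before if y == e)]
        e = random.choice(ready)
        story.append(e)
        remaining.remove(e)
    return story

events = ["enter", "order", "eat", "pay", "leave"]
before = {("enter", "order"), ("order", "eat"), ("eat", "pay"), ("pay", "leave")}
```

With a chain-shaped graph, as here, only one ordering exists; branching graphs yield genuinely different stories on each call.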

84. An Extended GHKM Algorithm for Inducing Lambda-SCFG.

【Authors】: Peng Li ; Yang Liu ; Maosong Sun

【Abstract】: Semantic parsing, which aims at mapping a natural language (NL) sentence into its formal meaning representation (e.g., logical form), has received increasing attention in recent years. While synchronous context-free grammar (SCFG) augmented with lambda calculus (lambda-SCFG) provides an effective mechanism for semantic parsing, how to learn such lambda-SCFG rules still remains a challenge because of the difficulty in determining the correspondence between NL sentences and logical forms. To alleviate this structural divergence problem, we extend the GHKM algorithm, which is a state-of-the-art algorithm for learning synchronous grammars in statistical machine translation, to induce lambda-SCFG from pairs of NL sentences and logical forms. By treating logical forms as trees, we reformulate the theory behind GHKM that gives formal semantics to the alignment between NL words and logical form tokens. Experiments on the GEOQUERY dataset show that our semantic parser achieves an F-measure of 90.2%, the best result published to date.

【Keywords】: semantic parsing; GHKM algorithm; synchronous context-free grammar; lambda calculus; rule extraction

85. Reasoning about Saturated Conditional Independence Under Uncertainty: Axioms, Algorithms, and Levesque's Situations to the Rescue.

【Authors】: Sebastian Link

【Abstract】: The implication problem of probabilistic conditional independencies is investigated in the presence of missing data. Here, graph separation axioms fail to hold for saturated conditional independencies, unlike the known idealized case with no missing data. Several axiomatic, algorithmic, and logical characterizations of the implication problem for saturated conditional independencies are established. In particular, equivalences are shown to the implication problem of a propositional fragment under Levesque's situations, and that of Lien's class of multivalued database dependencies under null values.

【Keywords】: Axiomatization; Algorithm; Conditional independence; Data dependency; Implication problem; Logic; Missing data; Saturation; Situation

86. Large-Scale Hierarchical Classification via Stochastic Perceptron.

【Authors】: Dehua Liu ; Bojun Tu ; Hui Qian ; Zhihua Zhang

【Abstract】: Hierarchical classification (HC) plays a significant role in machine learning and data mining. However, most state-of-the-art HC algorithms suffer from high computational costs. To improve efficiency, we propose a stochastic perceptron (SP) algorithm in the large margin framework. In particular, a stochastic choice procedure is devised to decide the direction of the next iteration. We prove that after finitely many iterations the SP algorithm yields a sub-optimal solution with high probability if the input instances are separable. For large-scale and high-dimensional data sets, we extend SP to a kernel version (KSP), which dramatically reduces the memory space needed. The KSP algorithm has the merit of low space complexity as well as low time complexity. The experimental results show that our KSP approach achieves almost the same accuracy as contemporary algorithms on real-world data sets, but with much less CPU running time.

【Keywords】: Large Margin, Hierarchical Classification, Stochastic Perceptron
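
For reference, the classical mistake-driven perceptron update that margin-based variants start from (a flat, two-feature sketch with invented data; the paper's stochastic choice procedure and hierarchy handling are not reproduced):

```python
def perceptron(samples, epochs=10, margin=0.0):
    """Train a linear separator with the margin perceptron rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:                         # labels y in {-1, +1}
            if y * (w[0] * x[0] + w[1] * x[1] + b) <= margin:
                # Mistake (or margin violation): move toward the example.
                w = [w[0] + y * x[0], w[1] + y * x[1]]
                b += y
    return w, b

samples = [((2.0, 1.0), 1), ((-1.0, -2.0), -1)]
w, b = perceptron(samples)
```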

87. Reciprocal Hash Tables for Nearest Neighbor Search.

【Authors】: Xianglong Liu ; Junfeng He ; Bo Lang

【Abstract】: Recent years have witnessed the success of hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually employed to retrieve more desired results from all hit buckets of each table. However, few works have studied a unified approach to constructing multiple informative hash tables beyond the widely used random way. In this paper, we regard table construction as a selection problem over a set of candidate hash functions. With the graph representation of the function set, we propose an efficient solution that sequentially applies normalized dominant set to finding the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. The construction method is general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings. Extensive experiments on two large-scale benchmarks demonstrate that the proposed method outperforms both the naive construction method and state-of-the-art hashing algorithms, with up to 65.93% accuracy gains.

【Keywords】: Locality Sensitive Hashing; Multiple Hash Tables; Nearest Neighbor Search
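
For context, the "widely used random way" of building multiple tables can be sketched with random-hyperplane LSH (a standard construction, not the paper's dominant-set selection; the points and parameters are invented):

```python
import random

def make_table(points, num_bits, dim, rng):
    """One hash table: bucket points by signs of random hyperplane projections."""
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(num_bits)]
    def key(p):
        return tuple(sum(a * b for a, b in zip(p, h)) >= 0 for h in planes)
    table = {}
    for i, p in enumerate(points):
        table.setdefault(key(p), []).append(i)
    return table, key

def query(tables, q):
    """Union of hit buckets across all tables."""
    hits = set()
    for table, key in tables:
        hits.update(table.get(key(q), []))
    return hits

rng = random.Random(0)
points = [(1.0, 0.0), (0.9, 0.1), (-1.0, 0.0)]
tables = [make_table(points, num_bits=2, dim=2, rng=rng) for _ in range(3)]
near = query(tables, (1.0, 0.05))
```

Because each table is drawn independently at random, tables can be redundant; reducing that redundancy is precisely what the paper's construction targets.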

88. A Generalized Student-t Based Approach to Mixed-Type Anomaly Detection.

【Authors】: Yen-Cheng Lu ; Feng Chen ; Yang Chen ; Chang-Tien Lu

【Abstract】: Anomaly detection for mixed-type data is an important problem that has not been well addressed in the machine learning field. There are two challenging issues for mixed-type datasets, namely modeling mutual correlations between mixed-type attributes and capturing large variations due to anomalies. This paper presents BuffDetect, a robust error buffering approach for anomaly detection in mixed-type datasets. A new variant of the generalized linear model is proposed to model the dependency between mixed-type attributes. The model incorporates an error buffering component based on Student-t distribution to absorb the variations caused by anomalies. However, because of the non-Gaussian design, the problem becomes analytically intractable. We propose a novel Bayesian inference approach, which integrates Laplace approximation and several computational optimizations, and is able to efficiently approximate the posterior of high-dimensional latent variables by iteratively updating the latent variables in groups. Extensive experimental evaluations based on 13 benchmark datasets demonstrate the effectiveness and efficiency of BuffDetect.

【Keywords】: Anomaly Detection, Outlier Detection, Mixed-type data; Robust estimation

89. Unified Constraint Propagation on Multi-View Data.

【Authors】: Zhiwu Lu ; Yuxin Peng

【Abstract】: This paper presents a unified framework for intra-view and inter-view constraint propagation on multi-view data. Pairwise constraint propagation has been studied extensively, where each pairwise constraint is defined over a pair of data points from a single view. In contrast, very little attention has been paid to inter-view constraint propagation, which is more challenging since each pairwise constraint is now defined over a pair of data points from different views. Although both intra-view and inter-view constraint propagation are crucial for multi-view tasks, most previous methods cannot handle them simultaneously. To address this challenging issue, we propose to decompose these two types of constraint propagation into semi-supervised learning subproblems so that they can be uniformly solved based on the traditional label propagation techniques. To further integrate them into a unified framework, we utilize the results of intra-view constraint propagation to adjust the similarity matrix of each view and then perform inter-view constraint propagation with the adjusted similarity matrices. The experimental results in cross-view retrieval have shown the superior performance of our unified constraint propagation.

【Keywords】:

90. Vector-Valued Multi-View Semi-Supervised Learning for Multi-Label Image Classification.

【Authors】: Yong Luo ; Dacheng Tao ; Chang Xu ; Dongchen Li ; Chao Xu

【Abstract】: Images are usually associated with multiple labels and comprised of multiple views, due to each image containing several objects (e.g. a pedestrian, bicycle and tree) and multiple visual features (e.g. color, texture and shape). Currently available tools tend to use either labels or features for classification, but both are necessary to describe the image properly. There have been recent successes in using vector-valued functions, which construct matrix-valued kernels, to explore the multi-label structure in the output space. This has motivated us to develop multi-view vector-valued manifold regularization (MV³MR) in order to integrate multiple features. MV³MR exploits the complementary properties of different features, and discovers the intrinsic local geometry of the compact support shared by different features, under the theme of manifold regularization. We validate the effectiveness of the proposed MV³MR methodology for image classification by conducting extensive experiments on two challenging datasets, PASCAL VOC'07 and MIR Flickr.

【Keywords】: multi-view; vector-valued; semi-supervised; manifold regularization; multi-label

91. Basis Adaptation for Sparse Nonlinear Reinforcement Learning.

【Authors】: Sridhar Mahadevan ; Stephen Giguere ; Nicholas Jacek

【Abstract】: This paper presents a new approach to representation discovery in reinforcement learning (RL) using basis adaptation. We introduce a general framework for basis adaptation as nonlinear separable least-squares value function approximation, based on finding Fréchet gradients of an error function using variable projection functionals. We then present a scalable proximal gradient-based approach for basis adaptation using the recently proposed mirror-descent framework for RL. Unlike traditional temporal-difference (TD) methods for RL, mirror-descent-based RL methods undertake proximal gradient updates of weights in a dual space, which is linked to the primal space using a Legendre transform involving the gradient of a strongly convex function. Mirror descent RL can be viewed as a proximal TD algorithm using Bregman divergence as the distance generating function. We present a new class of regularized proximal-gradient based TD methods, which combine feature selection through sparse L1 regularization and basis adaptation. Experimental results are provided to illustrate and validate the approach.

【Keywords】: Reinforcement learning, basis adaptation, mirror descent
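
The sparse-L1 ingredient of such proximal-gradient TD methods is the standard soft-thresholding operator, shown here in isolation (this is the generic proximal operator of the L1 norm, not the paper's full mirror-descent update):

```python
def soft_threshold(x, lam):
    """Proximal operator of lam * ||.||_1: shrink each coordinate toward zero.

    Coordinates with magnitude below lam are zeroed out, which is what
    produces sparse weight vectors.
    """
    return [max(abs(v) - lam, 0.0) * (1 if v > 0 else -1) for v in x]
```

In a proximal TD step this would be applied right after each gradient (or mirror-descent) update of the weights.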

92. Integrating Programming by Example and Natural Language Programming.

【Authors】: Mehdi Hafezi Manshadi ; Daniel Gildea ; James F. Allen

【Abstract】: We motivate the integration of programming by example and natural language programming by developing a system for specifying programs for simple text editing operations based on regular expressions. The programs are described with unconstrained natural language instructions together with one or more examples of input/output. We show that natural language allows the system to deduce the correct program much more often and much faster than is possible with the input/output example(s) alone. This demonstrates that natural language programming and programming by example can be combined in a way that overcomes the ambiguities each method suffers from individually, while providing a more natural interface to the user.

【Keywords】: Programming by Example; Natural Language Programming; Programming by Demonstration; Program Induction

93. A Framework for Aggregating Influenced CP-Nets and its Resistance to Bribery.

【Authors】: Alberto Maran ; Nicolas Maudet ; Maria Silvia Pini ; Francesca Rossi ; Kristen Brent Venable

【Abstract】: We consider multi-agent settings where a set of agents want to take a collective decision, based on their preferences over the possible candidate options. While agents have their initial inclination, they may interact and influence each other, and therefore modify their preferences, until hopefully they reach a stable state and declare their final inclination. At that point, a voting rule is used to aggregate the agents’ preferences and generate the collective decision. Recent work has modeled the influence phenomenon in the case of voting over a single issue. Here we generalize this model to account for preferences over combinatorially structured domains including several issues. We propose a way to model influence when agents express their preferences as CP-nets. We define two procedures for aggregating preferences in this scenario, by interleaving voting and influence convergence, and study their resistance to bribery.

【Keywords】: influence, bribery, preferences, CP-nets

94. Automating Collusion Detection in Sequential Games.

【Authors】: Parisa Mazrooei ; Christopher Archibald ; Michael Bowling

【Abstract】: Collusion is the practice of two or more parties deliberately cooperating to the detriment of others. While such behavior may be desirable in certain circumstances, in many it is considered dishonest and unfair. If agents otherwise hold strictly to the established rules, though, collusion can be challenging to police. In this paper, we introduce an automatic method for collusion detection in sequential games. We achieve this through a novel object, called a collusion table, that captures the effects of collusive behavior, i.e., advantage to the colluding parties, without assuming any particular pattern of behavior. We show the effectiveness of this method in the domain of poker, a popular game where collusion is prohibited.

【Keywords】: Sequential Games; Collusion; Agent Evaluation

95. On the Value of Using Group Discounts under Price Competition.

【Authors】: Reshef Meir ; Tyler Lu ; Moshe Tennenholtz ; Craig Boutilier

【Abstract】: The increasing use of group discounts has provided opportunities for buying groups with diverse preferences to coordinate their behavior in order to exploit the best offers from multiple vendors. We analyze this problem from the viewpoint of the vendors, asking under what conditions a vendor should adopt a volume-based price schedule rather than posting a fixed price, either as a monopolist or when competing with other vendors. When vendors have uncertainty about buyers' valuations specified by a known distribution, we show that a vendor is always better off posting a fixed price, provided that buyers' types are i.i.d. and that other vendors also use fixed prices. We also show that these assumptions cannot be relaxed: if buyers are not i.i.d., or other vendors post discount schedules, then posting a schedule may yield higher profit for the vendor. We provide similar results under a distribution-free uncertainty model, where vendors minimize their maximum regret over all type realizations.

【Keywords】: discount schedules; competition; fixed prices

96. Bounding the Cost of Stability in Games over Interaction Networks.

【Authors】: Reshef Meir ; Yair Zick ; Edith Elkind ; Jeffrey S. Rosenschein

【Abstract】: We study the stability of cooperative games played over an interaction network, in a model that was introduced by Myerson (1977). We show that the cost of stability of such games (i.e., the subsidy required to stabilize the game) can be bounded in terms of natural parameters of their underlying interaction networks. Specifically, we prove that if the treewidth of the interaction network H is k, then the relative cost of stability of any game played over H is at most k + 1, and if the pathwidth of H is k′, then the relative cost of stability is at most k′. We show that these bounds are tight for all k ≥ 2 and all k′ ≥ 1, respectively.

【Keywords】: treewidth; cooperative games; graph; subsidy

97. A First-Order Formalization of Commitments and Goals for Planning.

【Authors】: Felipe Meneguzzi ; Pankaj R. Telang ; Munindar P. Singh

【Abstract】: Commitments help model interactions in multiagent systems in a computationally realizable yet high-level manner without compromising the autonomy and heterogeneity of the member agents. Recent work shows how to combine commitments with goals and apply planning methods to enable agents to determine their actions. However, previous approaches to modeling commitments are confined to propositional representations, which limits their applicability in practical cases. We propose a first-order representation and reasoning technique that accommodates templatic commitments and goals that may be applied repeatedly with differing bindings for domain objects. Doing so not only leads to a more perspicuous modeling, but also supports many practical patterns.

【Keywords】: Goals; Commitments; HTN Planning

98. A Cyclic Weighted Median Method for L1 Low-Rank Matrix Factorization with Missing Entries.

【Authors】: Deyu Meng ; Zongben Xu ; Lei Zhang ; Ji Zhao

【Abstract】: A challenging problem in machine learning, information retrieval and computer vision research is how to recover a low-rank representation of the given data in the presence of outliers and missing entries. The L1-norm low-rank matrix factorization (LRMF) has been a popular approach to solving this problem. However, L1-norm LRMF is difficult to achieve due to its non-convexity and non-smoothness, and existing methods are often inefficient and fail to converge to a desired solution. In this paper we propose a novel cyclic weighted median (CWM) method, which is intrinsically a coordinate decent algorithm, for L1-norm LRMF. The CWM method minimizes the objective by solving a sequence of scalar minimization sub-problems, each of which is convex and can be easily solved by the weighted median filter. The extensive experimental results validate that the CWM method outperforms state-of-the-arts in terms of both accuracy and computational efficiency.

【Keywords】: matrix factorization; weighted median filter; face reconstruction
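
The scalar subproblem at the heart of the method is classical: minimizing sum_i w_i |x − a_i| over x is solved exactly by a weighted median of the a_i. A minimal sketch (inputs invented; the cyclic coordinate-descent wrapper over the factor matrices is omitted):

```python
def weighted_median(values, weights):
    """Return a minimizer of sum_i weights[i] * |x - values[i]|.

    Walk the sorted values until the accumulated weight reaches half
    of the total weight.
    """
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v

a = [1.0, 2.0, 10.0]
w = [1.0, 1.0, 1.0]
```

Note how a heavy weight drags the solution toward its value, which is exactly why the L1 objective resists outliers only on the *residuals*, not on heavily weighted terms.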

99. From Semantic to Emotional Space in Probabilistic Sense Sentiment Analysis.

【Authors】: Mitra Mohtarami ; Man Lan ; Chew Lim Tan

【Abstract】: This paper proposes an effective approach to model the emotional space of words to infer their Sense Sentiment Similarity (SSS). SSS reflects the distance between the words regarding their senses and underlying sentiments. We propose a probabilistic approach that is built on a hidden emotional model in which the basic human emotions are considered as hidden. This leads to predict a vector of emotions for each sense of the words, and then to infer the sense sentiment similarity. The effectiveness of the proposed approach is investigated in two Natural Language Processing tasks: Indirect yes/no Question Answer Pairs Inference and Sentiment Orientation Prediction.

【Keywords】: Sentiment Analysis; Sense Sentiment Similarity; Emotional Vectors

100. Analyzing the Effectiveness of Adversary Modeling in Security Games.

【Authors】: Thanh Hong Nguyen ; Rong Yang ; Amos Azaria ; Sarit Kraus ; Milind Tambe

【Abstract】: Recent deployments of Stackelberg security games (SSG) have led to two competing approaches to handle boundedly rational human adversaries: (1) integrating models of human (adversary) decision-making into the game-theoretic algorithms, and (2) applying robust optimization techniques that avoid adversary modeling. A recent algorithm (MATCH) based on the second approach was shown to outperform the leading modeling-based algorithm even in the presence of a significant amount of data. Is there then any value in using human behavior models in solving SSGs? Through extensive experiments with 547 human subjects playing 11,102 games in total, we emphatically answer the question in the affirmative, while providing the following key contributions: (i) we show that our algorithm, SU-BRQR, based on a novel integration of a human behavior model with a subjective utility function, significantly outperforms both MATCH and its improvements; (ii) we are the first to present experimental results with security intelligence experts, and find that even though the experts are more rational than the Amazon Turk workers, SU-BRQR still outperforms an approach assuming perfect rationality (and, to a more limited extent, MATCH); (iii) we show the advantage of SU-BRQR in a new, large game setting and demonstrate that sufficient data enables it to improve its performance over MATCH.

【Keywords】: security games; bounded rationality; human decision making; Stackelberg games

101. Symmetry-Aware Marginal Density Estimation.

【Authors】: Mathias Niepert

【Abstract】: The Rao-Blackwell theorem is utilized to analyze and improve the scalability of inference in large probabilistic models that exhibit symmetries. A novel marginal density estimator is introduced and shown both analytically and empirically to outperform standard estimators by several orders of magnitude. The developed theory and algorithms apply to a broad class of probabilistic models including statistical relational models considered not susceptible to lifted probabilistic inference.

【Keywords】: probabilistic inference; graphical models; symmetry-aware inference; symmetry; lifted inference
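
The Rao-Blackwell principle the estimator rests on can be demonstrated on a toy marginal (a generic illustration with invented probabilities, not the paper's symmetry-aware estimator): replacing sampled indicators by their known conditional expectations never increases variance.

```python
import random

random.seed(0)
p_given_y = {0: 0.9, 1: 0.1}          # P(X = 1 | Y = y), assumed known

# Draw Y uniformly, so the true marginal is P(X = 1) = 0.5.
ys = [random.randint(0, 1) for _ in range(1000)]

# Crude Monte Carlo estimator: also sample X given Y, average indicators.
crude = sum(1 if random.random() < p_given_y[y] else 0 for y in ys) / len(ys)

# Rao-Blackwellized estimator: average the conditionals E[X | Y] directly.
rb = sum(p_given_y[y] for y in ys) / len(ys)
```

Both estimators are unbiased for 0.5, but the Rao-Blackwellized one averages out the inner sampling noise, which is the lever the paper pulls at scale via symmetry.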

102. Cost-Optimal Planning by Self-Interested Agents.

【Authors】: Raz Nissim ; Ronen I. Brafman

【Abstract】: As our world becomes better connected and autonomous agents no longer appear to be science fiction, a natural need arises for enabling groups of selfish agents to cooperate in generating plans for diverse tasks that none of them can perform alone in a cost-effective manner. While most work on planning for/by selfish agents revolves around finding stable solutions (e.g., Nash Equilibrium), this work combines techniques from mechanism design with a recently introduced method for distributed planning, in order to find cost optimal (and, thus, social welfare maximizing) solutions. Based on the Vickrey-Clarke-Groves mechanisms, we present both a centralized, and a privacy-preserving distributed mechanism.

【Keywords】: Planning; Multiagent Systems; Mechanism Design; Distributed Problem Solving

103. RockIt: Exploiting Parallelism and Symmetry for MAP Inference in Statistical Relational Models.

【Authors】: Jan Noessner ; Mathias Niepert ; Heiner Stuckenschmidt

【Abstract】: RockIt is a maximum a-posteriori (MAP) query engine for statistical relational models. MAP inference in graphical models is an optimization problem which can be compiled to integer linear programs (ILPs). We describe several advances in translating MAP queries to ILP instances and present the novel meta-algorithm cutting plane aggregation (CPA). CPA exploits local context-specific symmetries and bundles up sets of linear constraints. The resulting counting constraints lead to more compact ILPs and make the symmetry of the ground model more explicit to state-of-the-art ILP solvers. Moreover, RockIt parallelizes most parts of the MAP inference pipeline taking advantage of ubiquitous shared-memory multi-core architectures. We report on extensive experiments with Markov logic network (MLN) benchmarks showing that RockIt outperforms the state-of-the-art systems Alchemy, Markov TheBeast, and Tuffy both in terms of efficiency and quality of results.

【Keywords】: Statistical Relational Models; Markov Logic; Cutting Plane Aggregation; Cutting Plane Inference; Counting Constraints; Maximum A-Posteriori Query; Integer Linear Programming; Symmetry Detection; Parallelization

104. Mixed Observability Predictive State Representations.

【Paper Link】 【Pages】:

【Authors】: Sylvie C. W. Ong ; Yuri Grinberg ; Joelle Pineau

【Abstract】: Learning accurate models of agent behaviours is crucial for the purpose of controlling systems where the agents' and environment's dynamics are unknown. This is a challenging problem, but structural assumptions can be leveraged to tackle it effectively. In particular, many systems exhibit mixed observability, when observations of some system components are essentially perfect and noiseless, while observations of other components are imperfect, aliased or noisy. In this paper we present a new model learning framework, the mixed observability predictive state representation (MO-PSR), which extends the previously known predictive state representations to the case of mixed observability systems. We present a learning algorithm that is scalable to large amounts of data and to large mixed observability domains, and show theoretical analysis of the learning consistency and computational complexity. Empirical results demonstrate that our algorithm is capable of learning accurate models, at a larger scale than with the generic predictive state representation, by leveraging the mixed observability properties.

【Keywords】: Machine Learning; Model Learning; Predictive State Representations; Mixed Observability

105. Discovering Hierarchical Structure for Sources and Entities.

【Paper Link】 【Pages】:

【Authors】: Aditya Pal ; Nilesh N. Dalvi ; Kedar Bellare

【Abstract】: In this paper, we consider the problem of jointly learning hierarchies over a set of sources and entities based on their containment relationship. We model the concept of hierarchy using a set of latent binary features and propose a generative model that assigns those latent features to sources and entities in order to maximize the probability of the observed containment. To avoid fixing the number of features beforehand, we consider a non-parametric approach based on the Indian Buffet Process. The hierarchies produced by our algorithm can be used for completing missing associations and discovering structural bindings in the data. Using simulated and real datasets we provide empirical evidence of the effectiveness of the proposed approach in comparison to the existing hierarchy agnostic approaches.

【Keywords】: Infinite Features; Dyadic Topologies; Probabilistic Model; Gibbs Sampling

106. Rank Aggregation via Low-Rank and Structured-Sparse Decomposition.

【Paper Link】 【Pages】:

【Authors】: Yan Pan ; Hanjiang Lai ; Cong Liu ; Yong Tang ; Shuicheng Yan

【Abstract】: Rank aggregation, which combines multiple individual rank lists to obtain a better one, is a fundamental technique in various applications such as meta-search and recommendation systems. Most existing rank aggregation methods blindly combine multiple rank lists with possibly considerable noise, which often degrades their performance. In this paper, we propose a new model for robust rank aggregation (RRA) via matrix learning, which recovers a latent rank list from the possibly incomplete and noisy input rank lists. In our model, we construct a pairwise comparison matrix to encode the order information in each input rank list. Based on our observations, each comparison matrix can be naturally decomposed into a shared low-rank matrix, combined with a deviation error matrix which is the sum of a column-sparse matrix and a row-sparse one. The latent rank list can be easily extracted from the learned low-rank matrix. The optimization formulation of RRA has an element-wise multiplication operator to handle missing values, a symmetric constraint on the noise structure, and a factorization trick to restrict the maximum rank of the low-rank matrix. To solve this challenging optimization problem, we propose a novel procedure based on the Augmented Lagrangian Multiplier scheme. We conduct extensive experiments on meta-search and collaborative filtering benchmark datasets. The results show that the proposed RRA achieves superior performance over several state-of-the-art algorithms for rank aggregation.

【Keywords】: rank aggregation; low-rank matrices; structured sparsity; meta-search
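The pairwise comparison encoding described in the abstract can be illustrated with a minimal sketch. The function name, the 1/-1/0 encoding, and the treatment of unobserved items are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def comparison_matrix(rank_list, n_items):
    """Encode a (possibly partial) rank list as a pairwise comparison matrix.

    C[i, j] = 1 if item i is ranked above item j, -1 if below, and 0 if
    either item is missing from the list (treated as a missing value,
    which the RRA formulation handles via element-wise multiplication).
    """
    pos = {item: r for r, item in enumerate(rank_list)}  # rank position per item
    C = np.zeros((n_items, n_items))
    for i in range(n_items):
        for j in range(n_items):
            if i in pos and j in pos and i != j:
                C[i, j] = 1.0 if pos[i] < pos[j] else -1.0
    return C

# Item 3 never appears in the input list, so its row/column stay unobserved.
C = comparison_matrix([2, 0, 1], 4)
```

One such matrix per input rank list would then feed the low-rank plus structured-sparse decomposition.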

107. Dynamic Social Choice with Evolving Preferences.

【Paper Link】 【Pages】:

【Authors】: David C. Parkes ; Ariel D. Procaccia

【Abstract】: Social choice theory provides insights into a variety of collective decision making settings, but nowadays some of its tenets are challenged by internet environments, which call for dynamic decision making under constantly changing preferences. In this paper we model the problem via Markov decision processes (MDP), where the states of the MDP coincide with preference profiles and a (deterministic, stationary) policy corresponds to a social choice function. We can therefore employ the axioms studied in the social choice literature as guidelines in the design of socially desirable policies. We present tractable algorithms that compute optimal policies under different prominent social choice constraints. Our machinery relies on techniques for exploiting symmetries and isomorphisms between MDPs.

【Keywords】: Voting; Markov decision processes; Symmetries

108. PAC Optimal Exploration in Continuous Space Markov Decision Processes.

【Paper Link】 【Pages】:

【Authors】: Jason Pazis ; Ronald Parr

【Abstract】: Current exploration algorithms can be classified into two broad categories: heuristic and PAC optimal. While numerous researchers have used heuristic approaches such as epsilon-greedy exploration successfully, such approaches lack formal, finite sample guarantees and may need a significant amount of fine-tuning to produce good results. PAC optimal exploration algorithms, on the other hand, offer strong theoretical guarantees but are inapplicable in domains of realistic size. The goal of this paper is to bridge the gap between theory and practice, by introducing C-PACE, an algorithm which offers strong theoretical guarantees and can be applied to interesting, continuous space problems.

【Keywords】: Reinforcement Learning; Exploration
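The epsilon-greedy heuristic that the abstract contrasts with PAC optimal methods can be sketched in a few lines; this is a generic bandit-style sketch, and the function name and parameters are assumptions:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """Pick a uniformly random action with probability epsilon, otherwise
    the greedy (highest-value) action. Simple and effective in practice,
    but it carries no finite-sample (PAC) guarantee."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

The single `epsilon` knob is exactly the kind of fine-tuning burden the abstract mentions.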

109. Sample Complexity and Performance Bounds for Non-Parametric Approximate Linear Programming.

【Paper Link】 【Pages】:

【Authors】: Jason Pazis ; Ronald Parr

【Abstract】: One of the most difficult tasks in value function approximation for Markov Decision Processes is finding an approximation architecture that is expressive enough to capture the important structure in the value function, while at the same time not overfitting the training samples. Recent results in non-parametric approximate linear programming (NP-ALP) have demonstrated that this can be done effectively using nothing more than a smoothness assumption on the value function. In this paper we extend these results to the case where samples come from real world transitions instead of the full Bellman equation, adding robustness to noise. In addition, we provide the first max-norm, finite sample performance guarantees for any form of ALP. NP-ALP is amenable to problems with large (multidimensional) or even infinite (continuous) action spaces, and does not require a model to select actions using the resulting approximate solution.

【Keywords】: Reinforcement Learning; Approximate Linear Programming

110. An Agent Design for Repeated Negotiation and Information Revelation with People.

【Paper Link】 【Pages】:

【Authors】: Noam Peled ; Ya'akov (Kobi) Gal ; Sarit Kraus

【Abstract】: Many negotiations in the real world are characterized by incomplete information, and participants' success depends on their ability to reveal information in a way that facilitates agreement without compromising the individual gains of agents. This paper presents a novel agent design for repeated negotiation in incomplete information settings that learns to reveal information strategically during the negotiation process. The agent uses classical machine learning techniques to predict how people make and respond to offers during the negotiation, how they reveal information, and their response to potential revelation actions by the agent. The agent was evaluated in an extensive empirical study spanning hundreds of human subjects. Results show that the agent was able to outperform people. In particular, it learned (1) to make offers that were beneficial to people while not compromising its own benefit; (2) to incrementally reveal information to people in a way that increased its expected performance. The approach generalizes to new settings without the need to acquire additional data. This work demonstrates the efficacy of combining machine learning with opponent modeling techniques towards the design of computer agents for negotiating with people in settings of incomplete information.

【Keywords】: Negotiation; Revelation; Decision-Making

111. Salient Object Detection via Low-Rank and Structured Sparse Matrix Decomposition.

【Paper Link】 【Pages】:

【Authors】: Houwen Peng ; Bing Li ; Rongrong Ji ; Weiming Hu ; Weihua Xiong ; Congyan Lang

【Abstract】: Salient object detection provides an alternative solution to various image semantic understanding tasks such as object recognition, adaptive compression and image retrieval. Recently, low-rank matrix recovery (LR) theory has been introduced into saliency detection, and achieves impressive results. However, the existing LR-based models neglect the underlying structure of images, and inevitably degrade the associated performance. In this paper, we propose a Low-rank and Structured sparse Matrix Decomposition (LSMD) model for salient object detection. In the model, a tree-structured sparsity-inducing norm regularization is first introduced to provide a hierarchical description of the image structure to ensure the completeness of the extracted salient object. The similarity of saliency values within the salient object is then guaranteed by the $\ell _\infty$-norm. Finally, high-level priors are integrated to guide the matrix decomposition and enhance the saliency detection. Experimental results on the largest public benchmark database show that our model outperforms existing LR-based approaches and other state-of-the-art methods, which verifies the effectiveness and robustness of the structure cues in our model.

【Keywords】: Saliency Detection; Low Rank; Structured Sparsity; Matrix Decomposition

112. Bribery in Voting With Soft Constraints.

【Paper Link】 【Pages】:

【Authors】: Maria Silvia Pini ; Francesca Rossi ; Kristen Brent Venable

【Abstract】: We consider a multi-agent scenario where a collection of agents needs to select a common decision from a large set of decisions over which they express their preferences. This decision set has a combinatorial structure, that is, each decision is an element of the Cartesian product of the domains of some variables. Agents express their preferences over the decisions via soft constraints. We consider both sequential preference aggregation methods (they aggregate the preferences over one variable at a time) and one-step methods and we study the computational complexity of influencing them through bribery. We prove that bribery is NP-complete for the sequential aggregation methods (based on Plurality, Approval, and Borda) for most of the cost schemes we defined, while it is polynomial for one-step Plurality.

【Keywords】: bribery, preferences, soft constraints

113. Progression of Decomposed Situation Calculus Theories.

【Paper Link】 【Pages】:

【Authors】: Denis K. Ponomaryov ; Mikhail Soutchanski

【Abstract】: In many tasks related to reasoning about consequences of a logical theory, it is desirable to decompose the theory into a number of components with weakly-related or independent signatures. This facilitates reasoning when the signature of a query formula belongs to only one of the components. However, an initial theory may be subject to change due to execution of actions affecting features mentioned in the theory. Having once computed a decomposition of a theory, one would like to know whether it must be recomputed for the theory that results from taking into account the changes caused by executing an action. In this paper, we address this problem in the scope of the situation calculus, where change of an initial theory is related to the well-studied notion of progression. Progression provides a form of forward reasoning; it relies on forgetting values of those features which are subject to change and computing new values for them. We prove new results about properties of decomposition components under forgetting and show when a decomposition can be preserved in progression of an initial theory.

【Keywords】: decomposition, inseparability, forgetting, progression, basic action theory, the projection problem, situation calculus

114. Partial MUS Enumeration.

【Paper Link】 【Pages】:

【Authors】: Alessandro Previti ; João Marques-Silva

【Abstract】: Minimal explanations of infeasibility find a wide range of uses. In the Boolean domain, these are referred to as Minimal Unsatisfiable Subsets (MUSes). In some settings, one needs to enumerate MUSes of a Boolean formula. Most often the goal is to enumerate all MUSes. In cases where this is computationally infeasible, an alternative is to enumerate some MUSes. This paper develops a novel approach for partial enumeration of MUSes, that complements existing alternatives. If the enumeration of all MUSes is viable, then existing alternatives represent the best option. However, for formulas where the enumeration of all MUSes is unrealistic, our approach provides a solution for enumerating some MUSes within a given time bound. The experimental results focus on formulas for which existing solutions are unable to enumerate MUSes, and show that the new approach can in most cases enumerate a non-negligible number of MUSes within a given time bound.

【Keywords】: Minimal Unsatisfiable Subsets; Minimal Conflict Sets; MUS Enumeration

115. Multiagent Learning with a Noisy Global Reward Signal.

【Paper Link】 【Pages】:

【Authors】: Scott Proper ; Kagan Tumer

【Abstract】: Scaling multiagent reinforcement learning to domains with many agents is a complex problem. In particular, multiagent credit assignment becomes a key issue as the system size increases. Some multiagent systems suffer from a global reward signal that is very noisy or difficult to analyze. This makes deriving a learnable local reward signal very difficult. Difference rewards (a particular instance of reward shaping) have been used to alleviate this concern, but they remain difficult to compute in many domains. In this paper we present an approach to modeling the global reward using function approximation that allows the quick computation of local rewards. We demonstrate how this model can result in significant improvements in behavior for three congestion problems: a multiagent "bar problem", a complex simulation of the United States airspace, and a generic air traffic domain. We show how the model of the global reward may be either learned on- or off-line using either linear functions or neural networks. For the bar problem, we show an increase in reward of nearly 200% over learning using the global reward directly. For the air traffic problem, we show a decrease in costs of 25% over learning using the global reward directly.

【Keywords】: Multiagent Learning; Congestion Problems; Difference Rewards
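Difference rewards take the form D_i = G(z) - G(z_{-i}): the global reward minus the counterfactual reward with agent i's contribution removed. A minimal sketch for a bar-problem-style congestion domain follows; the capacity parameter and the exact reward shape are illustrative assumptions, not the paper's model:

```python
import math

def global_reward(attendance, capacity=6.0):
    # Congestion-style reward: grows with attendance, then decays past capacity.
    return attendance * math.exp(-attendance / capacity)

def difference_reward(actions, i, capacity=6.0):
    """D_i = G(z) - G(z_{-i}): agent i's marginal contribution to the
    global reward. actions[j] is 1 if agent j attends, else 0."""
    attendance = sum(actions)
    counterfactual = attendance - actions[i]  # remove agent i's contribution
    return (global_reward(attendance, capacity)
            - global_reward(counterfactual, capacity))
```

An agent that stayed home receives D_i = 0, which is the alignment property that makes difference rewards learnable; the paper's contribution is approximating G when it cannot be evaluated counterfactually like this.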

116. A Robust Bayesian Truth Serum for Non-Binary Signals.

【Paper Link】 【Pages】:

【Authors】: Goran Radanovic ; Boi Faltings

【Abstract】: Several mechanisms have been proposed for incentivizing truthful reports of private signals owned by rational agents, among them the peer prediction method and the Bayesian truth serum. The robust Bayesian truth serum (RBTS) for small populations and binary signals is particularly interesting since it does not require a common prior to be known to the mechanism. We further analyze the problem of the common prior not known to the mechanism and give several results regarding the restrictions that need to be placed in order to have an incentive-compatible mechanism. Moreover, we construct a Bayes-Nash incentive-compatible scheme called multi-valued RBTS that generalizes RBTS to operate on both small populations and non-binary signals.

【Keywords】: Incentive Compatibility, Truth Elicitation, Mechanism Design

117. Continuous Conditional Random Fields for Efficient Regression in Large Fully Connected Graphs.

【Paper Link】 【Pages】:

【Authors】: Kosta Ristovski ; Vladan Radosavljevic ; Slobodan Vucetic ; Zoran Obradovic

【Abstract】: When used for structured regression, powerful Conditional Random Fields (CRFs) are typically restricted to modeling effects of interactions among examples in local neighborhoods. Using more expressive representation would result in dense graphs, making these methods impractical for large-scale applications. To address this issue, we propose an effective CRF model with linear scale-up properties regarding approximate learning and inference for structured regression on large, fully connected graphs. The proposed method is validated on real-world large-scale problems of image de-noising and remote sensing. In conducted experiments, we demonstrated that dense connectivity provides an improvement in prediction accuracy. Inference time of less than ten seconds on graphs with millions of nodes and trillions of edges makes the proposed model an attractive tool for large-scale, structured regression problems.

【Keywords】: regression; conditional random fields; large scale data; fully connected graph

118. Information Sharing Under Costly Communication in Joint Exploration.

【Paper Link】 【Pages】:

【Authors】: Igor Rochlin ; David Sarne

【Abstract】: This paper studies distributed cooperative multi-agent exploration methods in settings where the exploration is costly and the overall performance measure is determined by the minimum performance achieved by any of the individual agents. Such an exploration setting is applicable to various multi-agent systems, e.g., in Dynamic Spectrum Access exploration. The goal in such problems is to optimize the process as a whole, considering the tradeoffs between the quality of the solution obtained and the cost associated with the exploration and coordination between the agents. Through the analysis of the two extreme cases where coordination is completely free and when entirely disabled, we manage to extract the solution for the general case where coordination is taken to be costly, modeled as a fee that needs to be paid for each additional coordinated agent. The strategy structure for the general case is shown to be threshold-based, and the thresholds which are analytically derived in this paper can be calculated offline, resulting in a very low online computational load.

【Keywords】: Multi-Agent Exploration; Cooperation; Coordination; Dynamic spectrum access networks

119. Enforcing Meter in Finite-Length Markov Sequences.

【Paper Link】 【Pages】:

【Authors】: Pierre Roy ; François Pachet

【Abstract】: Markov processes are increasingly used to generate finite-length sequences that imitate a given style. However, Markov processes are notoriously difficult to control. Recently, Markov constraints have been introduced to give users some control on generated sequences. Markov constraints reformulate finite-length Markov sequence generation in the framework of constraint satisfaction (CSP). However, in practice, this approach is limited to local constraints and its performance is low for global constraints, such as cardinality or arithmetic constraints. This limitation prevents generated sequences from following structural properties which are independent of the style, but inherent to the domain, such as meter. In this article, we introduce meter, a constraint that ensures a sequence is 1) Markovian with regard to a given corpus and 2) follows metrical rules expressed as cumulative cost functions. Additionally, meter can simultaneously enforce cardinality constraints. We propose a domain consistency algorithm whose complexity is pseudo-polynomial. This result is obtained thanks to a theorem on the growth of sumsets by Khovanskii. We illustrate our constraint on meter-constrained music generation problems that were so far not solvable by any other technique.

【Keywords】: Constraint; Markov chains; Meter; Music

120. Active Task Selection for Lifelong Machine Learning.

【Paper Link】 【Pages】:

【Authors】: Paul Ruvolo ; Eric Eaton

【Abstract】: In a lifelong learning framework, an agent acquires knowledge incrementally over consecutive learning tasks, continually building upon its experience. Recent lifelong learning algorithms have achieved nearly identical performance to batch multi-task learning methods while reducing learning time by three orders of magnitude. In this paper, we further improve the scalability of lifelong learning by developing curriculum selection methods that enable an agent to actively select the next task to learn in order to maximize performance on future learning tasks. We demonstrate that active task selection is highly reliable and effective, allowing an agent to learn high performance models using up to 50% fewer tasks than when the agent has no control over the task order. We also explore a variant of transfer learning in the lifelong learning setting in which the agent can focus knowledge acquisition toward a particular target task.

【Keywords】: lifelong learning; multi-task learning; active learning; curriculum selection

121. Optimizing Objective Function Parameters for Strength in Computer Game-Playing.

【Paper Link】 【Pages】:

【Authors】: Yoshikuni Sato ; Makoto Miwa ; Shogo Takeuchi ; Daisuke Takahashi

【Abstract】: The learning of evaluation functions from game records has been widely studied in the field of computer game-playing. Conventional learning methods optimize the evaluation function parameters by using the game records of expert players in order to imitate their plays. Such conventional methods utilize objective functions to increase the agreement between the moves selected by game-playing programs and the moves in the records of actual games. The methods, however, have a problem in that increasing the agreement does not always improve the strength of a program. Indeed, it is not clear how this agreement relates to the strength of a trained program. To address this problem, this paper presents a learning method to optimize objective function parameters for strength in game-playing. The proposed method employs an evolutionary learning algorithm with the strengths (Elo ratings) of programs as their fitness scores. Experimental results show that the proposed method is effective since programs using the objective function produced by the proposed method are superior to those using conventional objective functions.

【Keywords】:

【Paper Link】 【Pages】:

【Authors】: Swakkhar Shatabda ; M. A. Hakim Newton ; Abdul Sattar

【Abstract】: Protein structure prediction is an unsolved problem in computational biology. One great difficulty is due to the unknown factors in the actual energy function. Moreover, the energy models available are often not very informative particularly when spatially similar structures are compared during search. We introduce several novel heuristics to augment the energy model and present a new local search algorithm that exploits these heuristics in a mixed fashion. Although the heuristics individually are weaker in performance than the energy function, their combination interestingly produces stronger results. For standard benchmark proteins on the face centered cubic lattice and a realistic 20x20 energy model, we obtain structures with significantly lower energy than those obtained by the state-of-the-art algorithms. We also report results for these proteins using the same energy model on the cubic lattice.

【Keywords】: Protein Structure Prediction; Simplified Models; Lattice Fit; Cubic Lattice; FCC Lattice; Energy Models; Local Search; Constraints; Constraint-Based Local Search

123. Hypothesis Exploration for Malware Detection Using Planning.

【Paper Link】 【Pages】:

【Authors】: Shirin Sohrabi ; Octavian Udrea ; Anton Riabov

【Abstract】: In this paper we apply AI planning to address the hypothesis exploration problem and provide assistance to network administrators in detecting malware based on unreliable observations derived from network traffic. Building on the already established characterization and use of AI planning for similar problems, we propose a formulation of the hypothesis generation problem for malware detection as an AI planning problem with temporally extended goals and action costs. Furthermore, we propose a notion of "hypothesis plausibility" under unreliable observations, which we model as plan quality. We then show that in the presence of unreliable observations, simply finding one "most plausible" hypothesis, although challenging, is not sufficient for effective malware detection. To that end, we propose a method for applying a state-of-the-art planner within a principled exploration process, to generate multiple distinct high-quality plans. We experimentally evaluate this approach by generating random problems of varying hardness both with respect to the number of observations, as well as the degree of unreliability. Based on these experiments, we argue that our approach presents a significant improvement over prior work that is focused on finding a single optimal plan, and that our hypothesis exploration application can motivate the development of new planners capable of generating the top high-quality plans.

【Keywords】: Planning; Application; Reasoning about actions; Model-based reasoning

124. Filtering With Logic Programs and Its Application to General Game Playing.

【Paper Link】 【Pages】:

【Authors】: Michael Thielscher

【Abstract】: Motivated by the problem of building a basic reasoner for general game playing with imperfect information, we address the problem of filtering with logic programs, whereby an agent updates its incomplete knowledge of a program by observations. We develop a filtering method by adapting an existing backward-chaining and abduction method for so-called open logic programs. Experimental results show that this provides a basic effective and efficient "legal" player for general imperfect-information games.

【Keywords】: General game playing; Logical Filtering

125. GiSS: Combining Gibbs Sampling and SampleSearch for Inference in Mixed Probabilistic and Deterministic Graphical Models.

【Paper Link】 【Pages】:

【Authors】: Deepak Venugopal ; Vibhav Gogate

【Abstract】: Mixed probabilistic and deterministic graphical models are ubiquitous in real-world applications. Unfortunately, Gibbs sampling, a popular MCMC technique, does not converge to the correct answers in presence of determinism and therefore cannot be used for inference in such models. In this paper, we propose to remedy this problem by combining Gibbs sampling with SampleSearch, an advanced importance sampling technique which leverages complete SAT/CSP solvers to generate high quality samples from hard deterministic spaces. We call the resulting algorithm GiSS. Unlike Gibbs sampling which yields unweighted samples, GiSS yields weighted samples. Computing these weights exactly can be computationally expensive and therefore we propose several approximations. We show that our approximate weighting schemes yield consistent estimates and demonstrate experimentally that GiSS is competitive in terms of accuracy with state-of-the-art algorithms such as SampleSearch, MC-SAT and Belief propagation.

【Keywords】: Probabilistic Graphical Models; Markov Chain Monte Carlo; Gibbs Sampling; Importance Sampling; Machine Learning; Statistical Models
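Plain Gibbs sampling, the component that breaks down under hard determinism, can be sketched on a tiny two-variable model. The pairwise model and all names here are illustrative assumptions; the SampleSearch combination itself is beyond this sketch:

```python
import math
import random

def gibbs_ising_pair(w=1.5, n_samples=5000, burn_in=500, seed=0):
    """Plain Gibbs sampling for p(x, y) proportional to exp(w * 1[x == y]),
    with x, y in {0, 1}. As w grows toward a hard constraint (w -> inf for
    forbidden states) the chain mixes arbitrarily slowly and can get stuck,
    which is the failure mode GiSS addresses via SampleSearch."""
    rng = random.Random(seed)
    x, y = 0, 0
    samples = []
    for t in range(burn_in + n_samples):
        # Resample x from p(x | y), then y from p(y | x).
        p_x1 = math.exp(w * (y == 1)) / (math.exp(w * (y == 0)) + math.exp(w * (y == 1)))
        x = 1 if rng.random() < p_x1 else 0
        p_y1 = math.exp(w * (x == 1)) / (math.exp(w * (x == 0)) + math.exp(w * (x == 1)))
        y = 1 if rng.random() < p_y1 else 0
        if t >= burn_in:
            samples.append((x, y))
    return samples
```

For finite w the empirical agreement rate P(x == y) converges to exp(w) / (exp(w) + 1), which gives a quick sanity check on the sampler.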

126. Guiding Scientific Discovery with Explanations Using DEMUD.

【Paper Link】 【Pages】:

【Authors】: Kiri L. Wagstaff ; Nina L. Lanza ; David R. Thompson ; Thomas G. Dietterich ; Martha S. Gilmore

【Abstract】: In the era of large scientific data sets, there is an urgent need for methods to automatically prioritize data for review. At the same time, for any automated method to be adopted by scientists, it must make decisions that they can understand and trust. In this paper, we propose Discovery through Eigenbasis Modeling of Uninteresting Data (DEMUD), which uses principal components modeling and reconstruction error to prioritize data. DEMUD’s major advance is to offer domain-specific explanations for its prioritizations. We evaluated DEMUD’s ability to quickly identify diverse items of interest and the value of the explanations it provides. We found that DEMUD performs as well or better than existing class discovery methods and provides, uniquely, the first explanations for why those items are of interest. Further, in collaborations with planetary scientists, we found that DEMUD (1) quickly identifies very rare items of scientific value, (2) maintains high diversity in its selections, and (3) provides explanations that greatly improve human classification accuracy.

【Keywords】: discovery; explanations
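The reconstruction-error prioritization at the core of DEMUD can be sketched statically; the real method updates its eigenbasis incrementally as items are selected, and the function name and rank parameter here are assumptions:

```python
import numpy as np

def prioritize_by_reconstruction_error(X, k=2):
    """Rank the rows of X by reconstruction error under a rank-k principal
    components model: items the model explains worst are deemed most
    interesting. The residual vector itself is what supplies DEMUD-style
    explanations (which features were unexpected, and by how much)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Top-k principal directions via SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T
    recon = Xc @ V @ V.T
    errors = np.linalg.norm(Xc - recon, axis=1)
    return np.argsort(-errors), errors  # most anomalous first
```

On data lying near a line with a single off-line outlier, the outlier surfaces first under a rank-1 model.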

127. Multiscale Manifold Learning.

【Paper Link】 【Pages】:

【Authors】: Chang Wang ; Sridhar Mahadevan

【Abstract】: Many high-dimensional data sets that lie on a low-dimensional manifold exhibit nontrivial regularities at multiple scales. Most work in manifold learning ignores this multiscale structure. In this paper, we propose approaches to explore the deep structure of manifolds. The proposed approaches are based on the diffusion wavelets framework, are data-driven, and are able to directly process directional neighborhood relationships without ad-hoc symmetrization. The proposed multiscale algorithms are evaluated using both synthetic and real-world data sets, and shown to outperform previous manifold learning methods.

【Keywords】: multiscale analysis; manifold learning; dimensionality reduction

128. Effective Bilingual Constraints for Semi-Supervised Learning of Named Entity Recognizers.

【Paper Link】 【Pages】:

【Authors】: Mengqiu Wang ; Wanxiang Che ; Christopher D. Manning

【Abstract】: Most semi-supervised methods in Natural Language Processing capitalize on unannotated resources in a single language; however, information can be gained from using parallel resources in more than one language, since translations of the same utterance in different languages can help to disambiguate each other. We demonstrate a method that makes effective use of vast amounts of bilingual text (a.k.a. bitext) to improve monolingual systems. We propose a factored probabilistic sequence model that encourages both cross-language and intra-document consistency. A simple Gibbs sampling algorithm is introduced for performing approximate inference. Experiments on English-Chinese Named Entity Recognition (NER) using the OntoNotes dataset demonstrate that our method is significantly more accurate than state-of-the-art monolingual CRF models in a bilingual test setting. Our model also improves on previous work by Burkett et al. (2010), achieving a relative error reduction of 10.8% and 4.5% in Chinese and English, respectively. Furthermore, by annotating a moderate amount of unlabeled bitext with our bilingual model, and using the tagged data for uptraining, we achieve a 9.2% error reduction in Chinese over the state-of-the-art Stanford monolingual NER system.

【Keywords】: Information Extraction, Semi-supervised Learning

129. Sparse Multi-Task Learning for Detecting Influential Nodes in an Implicit Diffusion Network.

【Paper Link】 【Pages】:

【Authors】: Yingze Wang ; Guang Xiang ; Shi-Kuo Chang

【Abstract】: How to identify influential nodes is a central research topic in information diffusion analysis. Many existing methods rely on the assumption that the network structure is completely known by the model. However, in many applications, such a network is either unavailable or insufficient to explain the underlying information diffusion phenomena. To address this challenge, we develop a multi-task sparse linear influence model (MSLIM), which can simultaneously predict the volume for each contagion and automatically identify sets of the most influential nodes for different contagions. Our method is based on the linear influence model with two main advantages: 1) it does not require the network structure; 2) it can detect different sets of the most influential nodes for different contagions. To solve the corresponding convex optimization problem for learning the model, we adopt the accelerated gradient method (AGM) framework and show that there is an exact closed-form solution for the proximal mapping. Therefore, the optimization procedure achieves the optimal first-order convergence rate and can be scaled to very large datasets. The proposed model is validated on a set of 2.6 million tweets from 1000 Twitter users. We show that MSLIM can efficiently select the most influential users for specific contagions. We also present several interesting patterns of the selected influential users.

【Keywords】:

130. Ranking Scientific Articles by Exploiting Citations, Authors, Journals, and Time Information.

Paper Link】 【Pages】:

【Authors】: Yujing Wang ; Yunhai Tong ; Ming Zeng

【Abstract】: Ranking scientific articles is an important but challenging task, partly due to the dynamic nature of the evolving publication network. In this paper, we mainly focus on two problems: (1) how to rank articles in the heterogeneous network; and (2) how to use time information in the dynamic network in order to obtain a better ranking result. To tackle the problems, we propose a graph-based ranking method, which utilizes citations, authors, journals/conferences and the publication time information collaboratively. The experiments were carried out on two public datasets. The result shows that our approach is practical and ranks scientific articles more accurately than the state-of-the-art methods.

【Keywords】:

Paper Link】 【Pages】:

【Authors】: Darrell Whitley ; Adele E. Howe ; Doug Hains

【Abstract】: Stochastic local search (SLS) is the dominant paradigm for incomplete SAT and MAXSAT solvers. Early studies on small 3SAT instances found that the use of “best improving” moves did not improve search compared to using an arbitrary “first improving” move. Yet SLS algorithms continue to use best improving moves. We revisit this issue by studying very large random and industrial MAXSAT problems. Because locating best improving moves is more expensive than first improving moves, we designed an “approximate best” improving move algorithm and prove that it is as efficient as first improving move SLS. For industrial problems the first local optima found using best improving moves are statistically significantly better than local optima found using first improving moves. However, this advantage reverses as search continues and algorithms must explore equal moves on plateaus. This reversal appears to be associated with critical variables that are in many clauses and that also yield large improving moves.

【Keywords】: satisfiability, local search

132. Grounding Natural Language References to Unvisited and Hypothetical Locations.

Paper Link】 【Pages】:

【Authors】: Thomas Emrys Williams ; Rehj Cantrell ; Gordon Briggs ; Paul W. Schermerhorn ; Matthias Scheutz

【Abstract】: While much research exists on resolving spatial natural language references to known locations, little work deals with handling references to unknown locations. In this paper we introduce and evaluate algorithms integrated into a cognitive architecture which allow an agent to learn about its environ-ment while resolving references to both known and unknown locations. We also describe how multiple components in the architecture jointly facilitate these capabilities.

【Keywords】: spatial reference resolution; human-robot interaction; natural language processing

Paper Link】 【Pages】:

【Authors】: Christopher Makoto Wilt ; Wheeler Ruml

【Abstract】: Although the heuristic search algorithm A* is well-known to be optimally efficient, this result explicitly assumes forward search. Bidirectional search has long held promise for surpassing A*'s efficiency, and many varieties have been proposed, but it has proven difficult to achieve robust performance across multiple domains in practice. We introduce a simple bidirectional search technique called Incremental KKAdd that judiciously performs backward search to improve the accuracy of the forward heuristic function for any search algorithm. We integrate this technique with A*, assess its theoretical properties, and empirically evaluate its performance across seven benchmark domains. In the best case, it yields a factor of six reduction in node expansions and CPU time compared to A*, and in the worst case, its overhead is provably bounded by a user-supplied parameter, such as 1%. Viewing performance across all domains, it also surpasses previously proposed bidirectional search algorithms. These results indicate that Incremental KKAdd is a robust way to leverage bidirectional search in practice.

【Keywords】: bidirectional search; heuristic improvement;

134. Supervised Nonnegative Tensor Factorization with Maximum-Margin Constraint.

Paper Link】 【Pages】:

【Authors】: Fei Wu ; Xu Tan ; Yi Yang ; Dacheng Tao ; Siliang Tang ; Yueting Zhuang

【Abstract】: Non-negative tensor factorization (NTF) has attracted great attention in the machine learning community. In this paper, we extend traditional non-negative tensor factorization into a supervised discriminative decomposition, referred to as Supervised Non-negative Tensor Factorization with Maximum-Margin Constraint (SNTFM2). SNTFM2 formulates the optimal discriminative factorization of non-negative tensorial data as a coupled least-squares optimization problem via a maximum-margin method. As a result, SNTFM2 not only faithfully approximates the tensorial data by additive combinations of the basis, but also obtains strong generalization power for discriminative analysis (in particular for classification in this paper). The experimental results show the superiority of our proposed model over state-of-the-art techniques on both toy and real-world data sets.

【Keywords】: supervised; tensor factorization; maximum-margin

135. Lazy Gaussian Process Committee for Real-Time Online Regression.

Paper Link】 【Pages】:

【Authors】: Han Xiao ; Claudia Eckert

【Abstract】: A significant problem with Gaussian processes (GPs) is their unfavorable scaling with large amounts of data. To overcome this issue, we present a novel GP approximation scheme for online regression. Our model is based on a combination of multiple GPs with random hyperparameters. The model is trained by incrementally allocating new examples to a selected subset of GPs. The selection is carried out efficiently by optimizing a submodular function. Experiments on real-world data sets showed that our method outperforms existing online GP regression methods in both accuracy and efficiency. The applicability of the proposed method is demonstrated by mouse-trajectory prediction in an Internet banking scenario.

【Keywords】: Gaussian process, kernel method, regression, stream data, online learning, approximation, scalability

136. A Topic-Based Coherence Model for Statistical Machine Translation.

Paper Link】 【Pages】:

【Authors】: Deyi Xiong ; Min Zhang

【Abstract】: Coherence that ties sentences of a text into a meaningfully connected structure is of great importance to text generation and translation. In this paper, we propose a topic-based coherence model to produce coherence for document translation, in terms of the continuity of sentence topics in a text. We automatically extract a coherence chain for each source text to be translated. Based on the extracted source coherence chain, we adopt a maximum entropy classifier to predict the target coherence chain that defines a linear topic structure for the target document. The proposed topic-based coherence model then uses the predicted target coherence chain to help the decoder select coherent word/phrase translations. Our experiments show that incorporating the topic-based coherence model into machine translation achieves substantial improvement over both the baseline and previous methods that integrate document topics rather than coherence chains into machine translation.

【Keywords】: Statistical Machine Translation; Coherence; Topic

137. Towards Cohesive Anomaly Mining.

Paper Link】 【Pages】:

【Authors】: Yun Xiong ; Yangyong Zhu ; Philip S. Yu ; Jian Pei

【Abstract】: In some applications, such as bioinformatics, social network analysis, and computational criminology, it is desirable to find compact clusters formed by a (very) small portion of objects in a large data set. Since such clusters are comprised of a small number of objects, they are extraordinary and anomalous with respect to the entire data set. This specific type of clustering task cannot be solved well by the conventional clustering methods since generally those methods try to assign most of the data objects into clusters. In this paper, we model this novel and application-inspired task as the problem of mining cohesive anomalies. We propose a general framework and a principled approach to tackle the problem. The experimental results on both synthetic and real data sets verify the effectiveness and efficiency of our approach.

【Keywords】: Data Mining; Clustering; Rare Pattern

138. Learning Integrated Symbolic and Continuous Action Models for Continuous Domains.

Paper Link】 【Pages】:

【Authors】: Joseph Zhen Ying Xu ; John E. Laird

【Abstract】: Long-living autonomous agents must be able to learn to perform competently in novel environments. One important aspect of competence is the ability to plan, which entails the ability to learn models of the agent’s own actions and their effects on the environment. In this paper we describe an approach to learn action models of environments with continuous-valued spatial states and realistic physics consisting of multiple interacting rigid objects. In such environments, we hypothesize that objects exhibit multiple qualitatively distinct behaviors we call modes, conditioned on their spatial relationships to each other. We argue that action models that explicitly represent these modes using a combination of symbolic spatial relationships and continuous metric information learn faster, generalize better, and make more accurate predictions than models that only use metric information. We present a method to learn action models with piecewise linear modes conditioned on a combination of first order Horn clauses that test symbolic spatial predicates and continuous classifiers. We empirically demonstrate that our method learns more accurate and more general models of a physics simulation than a method that learns a single function (locally weighted regression).

【Keywords】: Action Modeling

139. Multi-Label Learning with PRO Loss.

Paper Link】 【Pages】:

【Authors】: Miao Xu ; Yu-Feng Li ; Zhi-Hua Zhou

【Abstract】: Multi-label learning methods assign multiple labels to one object. In practice, in addition to differentiating relevant labels from irrelevant ones, it is often desired to rank the relevant labels for an object, whereas the rankings of irrelevant labels are not important. Such a requirement, however, cannot be met because most existing methods were designed to optimize existing criteria, yet there is no criterion which encodes the aforementioned requirement. In this paper, we present a new criterion, Pro Loss, concerning the prediction on all labels as well as the rankings of only relevant labels. We then propose ProSVM, which optimizes Pro Loss efficiently using the alternating direction method of multipliers. We further improve its efficiency with an upper approximation that reduces the number of constraints from O(T^2) to O(T), where T is the number of labels. Experiments show that our proposals are not only superior on Pro Loss, but also highly competitive on existing evaluation criteria.

【Keywords】: Multi-Label Learning

140. Temporal Milestones in HTNs.

Paper Link】 【Pages】:

【Authors】: Fusun Yaman ; Brett Benyo ; Alice M. Mulvehill

【Abstract】: We present temporal milestones for hierarchical task networks to enable the complex synchronization of tasks. A temporal milestone of a task is an intermediate event that occurs during the execution of a complex task, e.g., the start time, the end time or a milestone of any of its subtasks. Unlike landmark variables, introduced in existing work, temporal milestones respect the task abstraction boundaries and preserve structural properties enabling much more efficient reasoning. Furthermore, temporal milestones are as expressive as landmark variables. We provide analytical and empirical evidence to support these claims.

【Keywords】: temporal reasoning; HTN planning

141. Joint Object and Pose Recognition Using Homeomorphic Manifold Analysis.

Paper Link】 【Pages】:

【Authors】: Haopeng Zhang ; Tarek El-Gaaly ; Ahmed M. Elgammal ; Zhiguo Jiang

【Abstract】: Object recognition is a key precursory challenge in the fields of object manipulation and robotic/AI visual reasoning in general. Recognizing object categories, particular instances of objects and viewpoints/poses of objects are three critical subproblems robots must solve in order to accurately grasp/manipulate objects and reason about their environments. Multi-view images of the same object lie on intrinsic low-dimensional manifolds in descriptor spaces (e.g. visual/depth descriptor spaces). These object manifolds share the same topology despite being geometrically different. Each object manifold can be represented as a deformed version of a unified manifold. The object manifolds can thus be parametrized by their homeomorphic mappings/reconstructions from the unified manifold. In this work, we construct a manifold descriptor from this mapping between homeomorphic manifolds and use it to jointly solve the three challenging recognition sub-problems. We extensively experiment on a challenging multi-modal (i.e. RGBD) dataset and other object pose datasets and achieve state-of-the-art results.

【Keywords】: vision; kinect; object recognition; machine learning, multi-modal visual learning

142. Automated Workflow Synthesis.

Paper Link】 【Pages】:

【Authors】: Haoqi Zhang ; Eric Horvitz ; David C. Parkes

【Abstract】: By coordinating efforts from humans and machines, human computation systems can solve problems that machines cannot tackle alone. A general challenge is to design efficient human computation algorithms or workflows with which to coordinate the work of the crowd. We introduce a method for automated workflow synthesis aimed at ideally harnessing human efforts by learning about the crowd's performance on tasks and synthesizing an optimal workflow for solving a problem. We present experimental results for human sorting tasks, which demonstrate both the benefit of understanding and optimizing the structure of workflows based on observations. Results also demonstrate the benefits of using value of information to guide experiments for identifying efficient workflows with fewer experiments.

【Keywords】: human computation, crowd computing, workflows, synthesis, active indirect elicitation

143. A Concave Conjugate Approach for Nonconvex Penalized Regression with the MCP Penalty.

Paper Link】 【Pages】:

【Authors】: Shubao Zhang ; Hui Qian ; Wei Chen ; Zhihua Zhang

【Abstract】: The minimax concave plus penalty (MCP) has been demonstrated to be effective in nonconvex penalization for feature selection. In this paper we propose a novel construction approach for MCP. In particular, we show that MCP can be derived from a concave conjugate of the Euclidean distance function. This construction approach in turn leads us to an augmented Lagrange multiplier method for solving the penalized regression problem with MCP. In our method each tuning parameter corresponds to a feature, and these tuning parameters can be automatically updated. We also develop a d.c. (difference of convex functions) programming approach for the penalized regression problem. We find that the augmented Lagrange multiplier method degenerates into the d.c. programming method under specific conditions. Experimental analysis is conducted on a set of simulated data. The result is encouraging.

【Keywords】: MCP, Concave Conjugate, Regression

144. Smart Multi-Task Bregman Clustering and Multi-Task Kernel Clustering.

Paper Link】 【Pages】:

【Authors】: Xianchao Zhang ; Xiaotong Zhang

【Abstract】: Multi-task Bregman Clustering (MBC) alternately updates clusters and learns the relationships between clusters of different tasks, and the two phases boost each other. However, the boosting does not always have a positive effect; it may also cause a negative effect. Another issue with MBC is that it cannot deal with nonlinearly separable data. In this paper, we show that MBC's use of cluster relationships to boost the cluster-updating phase may cause a negative effect, i.e., cluster centroids may be skewed under some conditions. We propose a smart multi-task Bregman clustering (S-MBC) algorithm which identifies the negative effect of the boosting and avoids it if it occurs. We then extend the framework of S-MBC to a smart multi-task kernel clustering (S-MKC) framework to deal with nonlinearly separable data. We also propose a specific implementation of the framework which can be applied with any Mercer kernel. Experimental results confirm our analysis, and demonstrate the superiority of our proposed methods.

【Keywords】: multi-task clustering; multi-task learning; kernel

145. A Tensor-Variate Gaussian Process for Classification of Multidimensional Structured Data.

Paper Link】 【Pages】:

【Authors】: Qibin Zhao ; Liqing Zhang ; Andrzej Cichocki

【Abstract】: As tensors provide a natural and efficient representation of multidimensional structured data, in this paper, we consider probabilistic multinomial probit classification for tensor-variate inputs with Gaussian process (GP) priors placed over the latent function. In order to take into account the underlying multi-mode structure information within the model, we propose a framework of probabilistic product kernels for tensorial data based on a generative model assumption. More specifically, it can be interpreted as mapping tensors to probability density function space and measuring similarity by an information divergence. Since tensor kernels enable us to model input tensor observations, the proposed tensor-variate GP is considered as both a generative and discriminative model. Furthermore, a fully variational Bayesian treatment for multiclass GP classification with multinomial probit likelihood is employed to estimate the hyperparameters and infer the predictive distributions. Simulation results on both synthetic data and a real-world application of human action recognition in videos demonstrate the effectiveness and advantages of the proposed approach for classification of multiway tensor data, especially in the case that the underlying structure information among multiple modes is discriminative for the classification task.

【Keywords】: Tensor; Gaussian processes; Classification; Kernel

146. Time-Dependent Trajectory Regression on Road Networks via Multi-Task Learning.

Paper Link】 【Pages】:

【Authors】: Jiangchuan Zheng ; Lionel M. Ni

【Abstract】: Road travel costs are important knowledge hidden in large-scale GPS trajectory data sets, the discovery of which can benefit many applications such as intelligent route planning and automatic driving navigation. While there are previous studies which tackled this task by modeling it as a regression problem with spatial smoothness taken into account, they unreasonably assumed that the latent cost of each road remains unchanged over time. Other works on route planning and recommendation that have considered temporal factors simply assumed that the temporal dynamics be known in advance as a parametric function over time, which is not faithful to reality. To overcome these limitations, in this paper, we propose an extension to a previous static trajectory regression framework by learning the temporal dynamics of road travel costs in an innovative non-parametric manner which can effectively overcome the temporal sparsity problem. In particular, we unify multiple different trajectory regression problems in a multi-task framework by introducing a novel cross-task regularization which encourages temporal smoothness on the change of road travel costs. We then propose an efficient block coordinate descent method to solve the resulting problem by exploiting its separable structures and prove its convergence to global optimum. Experiments conducted on both synthetic and real data sets demonstrate the effectiveness of our method and its improved accuracy on travel time prediction.

【Keywords】: trajectory mining; temporal smoothness; multi-task learning; fused lasso

147. Clustering with Complex Constraints - Algorithms and Applications.

Paper Link】 【Pages】:

【Authors】: Weifeng Zhi ; Xiang Wang ; Buyue Qian ; Patrick Butler ; Naren Ramakrishnan ; Ian Davidson

【Abstract】: Clustering with constraints is an important and developing area. However, most work is confined to conjunctions of simple together and apart constraints, which limits their usability. In this paper, we propose a new formulation of constrained clustering that is able to incorporate not only existing types of constraints but also more complex logical combinations beyond conjunctions. We first show how any statement in conjunctive normal form (CNF) can be represented as a linear inequality. Since existing clustering formulations such as spectral clustering cannot easily incorporate these linear inequalities, we propose a quadratic programming (QP) clustering formulation to accommodate them. This new formulation allows us to have much more complex guidance in clustering. We demonstrate the effectiveness of our approach in two applications on text and personal information management. We also compare our algorithm against an existing constrained spectral clustering algorithm to show its efficiency in computational time.

【Keywords】:

148. Video Saliency Detection via Dynamic Consistent Spatio-Temporal Attention Modelling.

Paper Link】 【Pages】:

【Authors】: Sheng-hua Zhong ; Yan Liu ; Feifei Ren ; Jinghuan Zhang ; Tongwei Ren

【Abstract】: The human vision system actively seeks salient regions and movements in video sequences to reduce the search effort. Modeling a computational visual saliency map provides important information for semantic understanding in many real-world applications. In this paper, we propose a novel video saliency detection model for detecting the attended regions that correspond to both interesting objects and dominant motions in video sequences. In the spatial saliency map, we inherit the classical bottom-up spatial saliency map. In the temporal saliency map, a novel optical flow model is proposed based on the dynamic consistency of motion. The spatial and the temporal saliency maps are constructed and further fused together to create a novel attention model. The proposed attention model is evaluated on three video datasets. Empirical validations demonstrate that the salient regions detected by our dynamic consistent saliency map highlight the interesting objects effectively and efficiently. More importantly, the video attended regions automatically detected by the proposed attention model are consistent with the ground-truth saliency maps of eye movement data.

【Keywords】: Video Saliency Map;Spatio-Temporal Attention Model;Optical Flow

149. Supervised Coupled Dictionary Learning with Group Structures for Multi-modal Retrieval.

Paper Link】 【Pages】:

【Authors】: Yueting Zhuang ; Yanfei Wang ; Fei Wu ; Yin Zhang ; Weiming Lu

【Abstract】: A better similarity mapping function across heterogeneous high-dimensional features is very desirable for many applications involving multi-modal data. In this paper, we introduce coupled dictionary learning (DL) into supervised sparse coding for multi-modal (cross-media) retrieval. We call this Supervised coupled dictionary learning with group structures for Multi-Modal retrieval (SliM2). SliM2 formulates the multi-modal mapping as a constrained dictionary learning problem. By utilizing the intrinsic power of DL to deal with the heterogeneous features, SliM2 extends unimodal DL to multi-modal DL. Moreover, the label information is employed in SliM2 to discover the shared structure inside intra-modality within the same class by a mixed norm (i.e., l1/l2-norm). As a result, the multimodal retrieval is conducted via a set of jointly learned mapping functions across multi-modal data. The experimental results show the effectiveness of our proposed model when applied to cross-media retrieval.

【Keywords】: Multi-modal Retrieval; Cross-media Retrieval; Dictionary Learning;Supervised Learning

150. Model-Lite Case-Based Planning.

Paper Link】 【Pages】:

【Authors】: Hankz Hankui Zhuo ; Tuan Anh Nguyen ; Subbarao Kambhampati

【Abstract】: There is increasing awareness in the planning community that depending on complete models impedes the applicability of planning technology in many real world domains where the burden of specifying complete domain models is too high. In this paper, we consider a novel solution for this challenge that combines generative planning on incomplete domain models with a library of plan cases that are known to be correct. While this was arguably the original motivation for case-based planning, most existing case-based planners assume (and depend on) from-scratch planners that work on complete domain models. In contrast, our approach views the plan generated with respect to the incomplete model as a ``skeletal plan'' and augments it with directed mining of plan fragments from library cases. We will present the details of our approach and present an empirical evaluation of our method in comparison to a state-of-the-art case-based planner that depends on complete domain models.

【Keywords】:

Artificial Intelligence and the Web Special Track 18

151. Preventing Unraveling in Social Networks Gets Harder.

Paper Link】 【Pages】:

【Authors】: Rajesh Hemant Chitnis ; Fedor V. Fomin ; Petr A. Golovach

【Abstract】: The behavior of users in social networks is often observed to be affected by the actions of their friends. Bhawalkar et al. (ICALP '12) introduced a formal mathematical model for user engagement in social networks where each individual derives a benefit proportional to the number of its friends which are engaged. Given a threshold degree k, the equilibrium for this model is a maximal subgraph whose minimum degree is at least k. However, the dropping out of individuals with degrees less than k might lead to a cascading effect of iterated withdrawals such that the size of the equilibrium subgraph becomes very small. To overcome this, some special vertices called "anchors" are introduced: these vertices need not have large degree. Bhawalkar et al. considered the Anchored k-Core problem: given a graph G and integers b, k and p, do there exist sets of vertices B and H such that B is a subset of H, the size of B is at most b, the size of H is at least p, and every vertex v which is in H but not in B has degree at least k in the induced subgraph G[H]? They showed that the problem is NP-hard for all k ≥ 2, and gave some inapproximability and fixed-parameter intractability results. In this paper we give improved hardness results for this problem. In particular, we show that the Anchored k-Core problem is W[1]-hard parameterized by p, even for k = 3. This improves the result of Bhawalkar et al. (who show W[2]-hardness parameterized by b), as our parameter is always bigger, since p ≥ b. Then we answer a question of Bhawalkar et al. by showing that the Anchored k-Core problem remains NP-hard on planar graphs for all k ≥ 3, even if the maximum degree of the graph is k + 2. Finally, we show that the problem is FPT on planar graphs parameterized by b for all k ≥ 7.

【Keywords】: Social networks, parameterized complexity, planar graphs
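The unraveling dynamic underlying this model — iteratively dropping every vertex whose degree falls below k, whose fixed point is the classical k-core — can be sketched as follows (a minimal illustration only; `adj` and `k_core` are our own hypothetical names, `adj` is an undirected adjacency dict, and the anchor vertices studied in the paper are not modeled):

```python
def k_core(adj, k):
    """Return the vertex set of the k-core: the maximal subgraph
    in which every remaining vertex has degree at least k."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # defensive copy
    removed_some = True
    while removed_some:  # cascade of iterated withdrawals
        removed_some = False
        for v in list(adj):
            if len(adj[v]) < k:
                for u in adj[v]:
                    adj[u].discard(v)  # v drops out; its friends lose a neighbor
                del adj[v]
                removed_some = True
    return set(adj)

# A triangle {a, b, c} with a pendant vertex d: for k = 2,
# d unravels, but the triangle survives as the 2-core.
graph = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}, 'd': {'a'}}
print(k_core(graph, 2))  # the 2-core: {'a', 'b', 'c'} (set print order may vary)
```

The Anchored k-Core problem asks how to place at most b anchors (vertices exempt from the degree test) so that this cascade leaves at least p vertices; the hardness results above concern that extension, not the plain cascade shown here.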

152. Not Quite the Same: Identity Constraints for the Web of Linked Data.

Paper Link】 【Pages】:

【Authors】: Gerard de Melo

【Abstract】: Linked Data is based on the idea that information from different sources can flexibly be connected to enable novel applications that individual datasets do not support on their own. This hinges upon the existence of links between datasets that would otherwise be isolated. The most notable form, sameAs links, are intended to express that two identifiers are equivalent in all respects. Unfortunately, many existing ones do not reflect such genuine identity. This study provides a novel method to analyse this phenomenon, based on a thorough theoretical analysis, as well as a novel graph-based method to resolve such issues to some extent. Our experiments on a representative Web-scale set of sameAs links from the Web of Data show that our method can identify and remove hundreds of thousands of constraint violations.

【Keywords】: Linked Data, Identity

153. Learning to Rank Effective Paraphrases from Query Logs for Community Question Answering.

Paper Link】 【Pages】:

【Authors】: Alejandro Figueroa ; Guenter Neumann

【Abstract】: We present a novel method for ranking query paraphrases for effective search in community question answering (cQA). The method uses query logs from Yahoo! Search and Yahoo! Answers for automatically extracting a corpus of paraphrases of queries and questions using the query-question click history. Elements of this corpus are automatically ranked according to recall and mean reciprocal rank, and then used for learning two independent learning to rank models (SVMRank), whereby a set of new query paraphrases can be scored according to recall and MRR. We perform several automatic evaluation procedures using cross-validation for analyzing the behavior of various aspects of our learned ranking functions, which show that our method is useful and effective for search in cQA.

【Keywords】: Question Answering on the Web; Community Question Answering; Learning to Rank; Machine Learning; Paraphrases; Ranking; Effective Paraphrases;

154. Fast and Exact Top-k Algorithm for PageRank.

Paper Link】 【Pages】:

【Authors】: Yasuhiro Fujiwara ; Makoto Nakatsuji ; Hiroaki Shiokawa ; Takeshi Mishima ; Makoto Onizuka

【Abstract】: To obtain high PageRank score nodes, the original approach iteratively computes the PageRank score of each node until convergence by using the whole graph. If the graph is large, this approach is infeasible due to its high computational cost. The goal of this study is to find top-k PageRank score nodes efficiently for a given graph without sacrificing accuracy. Our solution, F-Rank, is based on two ideas: (1) it iteratively estimates lower/upper bounds of PageRank scores, and (2) it constructs subgraphs in each iteration by pruning unnecessary nodes and edges to identify top-k nodes. Our theoretical analysis shows that F-Rank guarantees result exactness. Experiments show that F-Rank finds top-k nodes much faster than the original approach.

【Keywords】:
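For context, the exhaustive baseline that F-Rank improves on — running power iteration over the whole graph to convergence and then taking the k highest-scoring nodes — can be sketched as follows (a toy illustration of the standard damped PageRank formulation; the function and parameter names are our own, and this is not the authors' bound-and-prune algorithm):

```python
def pagerank_topk(adj, k, d=0.85, max_iters=100, tol=1e-10):
    """Exhaustive top-k PageRank: full power iteration, then sort.
    `adj` maps each node to the list of nodes it links to."""
    nodes = list(adj)
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(max_iters):
        new = {v: (1.0 - d) / n for v in nodes}  # teleportation term
        for v in nodes:
            out = adj[v]
            if out:  # spread v's score over its out-links
                share = d * pr[v] / len(out)
                for u in out:
                    new[u] += share
            else:  # dangling node: spread its score uniformly
                share = d * pr[v] / n
                for u in nodes:
                    new[u] += share
        converged = sum(abs(new[v] - pr[v]) for v in nodes) < tol
        pr = new
        if converged:
            break
    return sorted(pr, key=pr.get, reverse=True)[:k]

# Tiny graph: a <-> b, c -> a; node a collects the most score.
print(pagerank_topk({'a': ['b'], 'b': ['a'], 'c': ['a']}, 2))  # -> ['a', 'b']
```

Even on this tiny graph the baseline computes every node's score before discarding all but k of them; F-Rank's lower/upper bounds and subgraph pruning avoid exactly that wasted work on large graphs.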

155. Understanding and Predicting Interestingness of Videos.

Paper Link】 【Pages】:

【Authors】: Yu-Gang Jiang ; Yanran Wang ; Rui Feng ; Xiangyang Xue ; Yingbin Zheng ; Hanfang Yang

【Abstract】: The amount of videos available on the Web is growing explosively. While some videos are very interesting and receive high rating from viewers, many of them are less interesting or even boring. This paper conducts a pilot study on the understanding of human perception of video interestingness, and demonstrates a simple computational method to identify more interesting videos. To this end we first construct two datasets of Flickr and YouTube videos respectively. Human judgements of interestingness are collected and used as the ground-truth for training computational models. We evaluate several off-the-shelf visual and audio features that are potentially useful for predicting interestingness on both datasets. Results indicate that audio and visual features are equally important and the combination of both modalities shows very promising results.

【Keywords】:

156. Clustering Crowds.

【Paper Link】 【Pages】:

【Authors】: Hiroshi Kajino ; Yuta Tsuboi ; Hisashi Kashima

【Abstract】: We present a clustered personal classifier method (CPC method) that jointly estimates a classifier and clusters of workers in order to address the learning-from-crowds problem. Crowdsourcing allows us to create a large but low-quality data set at very low cost; the learning-from-crowds problem is to learn a classifier from such a data set. From our observations, we notice that workers form clusters according to their abilities. Although this fact has been pointed out several times, no method has applied it to the learning-from-crowds problem. Our CPC method utilizes the clusters of workers to improve the performance of the obtained classifier, estimating both the classifier and the clusters of workers. The proposed method has two advantages. First, it provides robust estimation of the classifier because it utilizes prior knowledge that workers tend to form clusters. Second, we obtain the clusters of workers themselves, which help us analyze their properties. Experimental results on synthetic and real data sets indicate that the proposed method can estimate the classifier robustly. In addition, clustering workers is shown to work well; in particular, applying the proposed method to the real data set revealed an outlier worker.

【Keywords】: Crowdsourcing; Supervised Learning; Clustering

157. LA-CTR: A Limited Attention Collaborative Topic Regression for Social Media.

【Paper Link】 【Pages】:

【Authors】: Jeon-Hyung Kang ; Kristina Lerman

【Abstract】: Probabilistic models can learn users' preferences from the history of their item adoptions on a social media site and, in turn, recommend new items to users based on learned preferences. However, current models ignore psychological factors that play an important role in shaping online social behavior. One such factor is attention, the mechanism that integrates perceptual and cognitive features to select the items the user will consciously process and may eventually adopt. Recent research has shown that people have finite attention, which constrains their online interactions, and that they divide their limited attention non-uniformly over other people. We propose a collaborative topic regression model that incorporates limited, non-uniformly divided attention. We show that the proposed model learns more accurate user preferences than state-of-the-art models, which do not take human cognitive factors into account. Specifically, we analyze voting on news items on a social news aggregator and show that our model better predicts held-out votes than alternate models. Our study demonstrates that psycho-socially motivated models describe and predict observed behavior better than models that consider only latent social structure and content.

【Keywords】:

158. A Fast Bandit Algorithm for Recommendation to Users With Heterogenous Tastes.

【Paper Link】 【Pages】:

【Authors】: Pushmeet Kohli ; Mahyar Salek ; Greg Stoddard

【Abstract】: We study recommendation in scenarios where there is no prior information about the quality of content in the system. We present an online algorithm that continually optimizes recommendation relevance based on the behavior of past users. Our method trades weaker asymptotic guarantees than the state of the art for stronger theoretical guarantees in the online setting. We test our algorithm on real-world data collected from previous recommender systems and show that it learns faster than existing methods and performs equally well in the long run.

【Keywords】: Recommender Systems; Multi-armed Bandits; Online Algorithms

159. Better Human Computation Through Principled Voting.

【Paper Link】 【Pages】:

【Authors】: Andrew Mao ; Ariel D. Procaccia ; Yiling Chen

【Abstract】: Designers of human computation systems often face the need to aggregate noisy information provided by multiple people. While voting is often used for this purpose, the choice of voting method is typically not principled. We conduct extensive experiments on Amazon Mechanical Turk to better understand how different voting rules perform in practice. Our empirical conclusions show that noisy human voting can differ from what popular theoretical models would predict. Our short-term goal is to motivate the design of better human computation systems; our long-term goal is to spark an interaction between researchers in (computational) social choice and human computation.

【Keywords】: social choice; voting; human computation

160. Exploring the Contribution of Unlabeled Data in Financial Sentiment Analysis.

【Paper Link】 【Pages】:

【Authors】: Jimmy S. J. Ren ; Wei Wang ; Jiawei Wang ; Stephen S. Y. Liao

【Abstract】: With the proliferation of its applications in various industries, sentiment analysis using publicly available web data has become an active research area in text classification in recent years. Researchers argue that semi-supervised learning is an effective approach to this problem because it can mitigate the manual labeling effort, which is usually expensive and time-consuming. However, there has been a long-standing debate on the effectiveness of unlabeled data in text classification, caused in part by the fact that many assumptions in theoretical analyses often do not hold in practice. We argue that this problem may be further understood by adding an additional dimension to the experiment, which allows us to address it from the broader perspective of bias and variance. We show that the well-known performance degradation caused by unlabeled data can be reproduced as a subset of the whole scenario. We argue that if the bias-variance trade-off is better balanced by a more effective feature selection method, unlabeled data is very likely to boost classification performance. We then propose a feature selection framework in which both labeled and unlabeled training samples are considered, and discuss its potential for achieving such a balance. We choose the application of financial sentiment analysis because it not only exemplifies an important application but also provides data with better illustrative power. The implications of this study for both text classification and financial sentiment analysis are discussed.

【Keywords】: Text Classification;Sentiment Analysis;Machine Learning;Financial Data

161. Hotspotting - A Probabilistic Graphical Model For Image Object Localization Through Crowdsourcing.

【Paper Link】 【Pages】:

【Authors】: Mahyar Salek ; Yoram Bachrach ; Peter Key

【Abstract】: Object localization is an image annotation task which consists of finding the location of a target object in an image. It is common to crowdsource annotation tasks and aggregate responses to estimate the true annotation. While for other kinds of annotations consensus is simple and powerful, it cannot be applied to object localization as effectively due to the task's rich answer space and inherent noise in responses. We propose a probabilistic graphical model to localize objects in images based on responses from the crowd. We improve upon natural aggregation methods such as the mean and the median by simultaneously estimating the difficulty level of each question and skill level of every participant. We empirically evaluate our model on crowdsourced data and show that our method outperforms simple aggregators both in estimating the true locations and in ranking participants by their ability. We also propose a simple adaptive sourcing scheme that works well for very sparse datasets.

【Keywords】: Crowdsourcing; Image Object Localization; Probabilistic Graphical Models; Machine Learning

162. OpenEval: Web Information Query Evaluation.

【Paper Link】 【Pages】:

【Authors】: Mehdi Samadi ; Manuela M. Veloso ; Manuel Blum

【Abstract】: In this paper, we investigate information validation tasks that are initiated as queries from either automated agents or humans. We introduce OpenEval, a new online information validation technique that uses information on the web to automatically evaluate the truth of queries stated as multi-argument predicate instances (e.g., DrugHasSideEffect(Aspirin, GI Bleeding)). OpenEval takes a small number of instances of a predicate as seed positive examples and automatically learns how to evaluate the truth of a new predicate instance by querying the web and processing the retrieved unstructured web pages. We show that OpenEval is able to respond to queries within a limited amount of time while also achieving a high F1 score. In addition, we show that the accuracy of the responses provided by OpenEval increases as more time is given for evaluation. We have extensively tested our model and present empirical results that illustrate the effectiveness of our approach compared to related techniques.

【Keywords】:

163. Fast Algorithm for Modularity-Based Graph Clustering.

【Paper Link】 【Pages】:

【Authors】: Hiroaki Shiokawa ; Yasuhiro Fujiwara ; Makoto Onizuka

【Abstract】: In the AI and Web communities, modularity-based graph clustering algorithms are being applied to various applications. However, existing algorithms cannot be applied to large graphs because they must scan all vertices/edges iteratively. The goal of this paper is to efficiently compute clusters with high modularity from extremely large graphs with more than a few billion edges. The heart of our solution is to compute clusters by incrementally pruning unnecessary vertices/edges and optimizing the order of vertex selections. Our experiments show that our proposal outperforms all other modularity-based algorithms in terms of computation time, and that it finds clusters with high modularity.

【Keywords】: Graph; Clustering; Modularity
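For context, the modularity objective that such algorithms maximize can be computed as follows for a given partition; this is a generic sketch of Newman's quality measure, not the paper's incremental pruning algorithm:

```python
# Newman modularity Q of a partition of an undirected graph.
# edges: list of (u, v) pairs; community: dict mapping node -> community label.
def modularity(edges, community):
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # count edges that fall inside each community
    inside = {}
    for u, v in edges:
        if community[u] == community[v]:
            inside[community[u]] = inside.get(community[u], 0) + 1
    q = 0.0
    for c in set(community.values()):
        e_c = inside.get(c, 0) / m                 # fraction of intra-community edges
        d_c = sum(d for v, d in degree.items() if community[v] == c)
        q += e_c - (d_c / (2.0 * m)) ** 2          # observed minus expected fraction
    return q
```

Maximizing Q over partitions is what the paper speeds up; the sketch above only scores a fixed partition, which is the inner computation such algorithms repeat.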

164. Introducing Nominals to the Combined Query Answering Approaches for EL.

【Paper Link】 【Pages】:

【Authors】: Giorgio Stefanoni ; Boris Motik ; Ian Horrocks

【Abstract】: So-called combined approaches answer a conjunctive query over a description logic ontology in three steps: first, they materialise certain consequences of the ontology and the data; second, they evaluate the query over the data; and third, they filter the result of the second phase to eliminate unsound answers. Such approaches were developed for various members of the DL-Lite and EL families of languages, but none of them can handle ontologies containing nominals. In our work, we bridge this gap and present a combined query answering approach for ELHO, a logic that contains all features of the OWL 2 EL standard apart from transitive roles and complex role inclusions. This extension is nontrivial because nominals require equality reasoning, which introduces complexity into the first and third steps. Our empirical evaluation suggests that our technique is suitable for practical application, and so it provides a practical basis for conjunctive query answering in a large fragment of OWL 2 EL.

【Keywords】: Description Logics; Conjunctive Query Answering; Web Ontology Language; Query Answering under Constraints

165. TONIC: Target Oriented Network Intelligence Collection for the Social Web.

【Paper Link】 【Pages】:

【Authors】: Roni Tzvi Stern ; Liron Samama ; Rami Puzis ; Tal Beja ; Zahy Bnaya ; Ariel Felner

【Abstract】: In this paper we introduce the Target Oriented Network Intelligence Collection (TONIC) problem: the problem of finding profiles in a social network that contain information about a given target via automated crawling. We formalize TONIC as a search problem and propose a best-first approach for solving it. Several heuristics are presented to guide this search; these heuristics are based on the topology of the currently known part of the social network. The efficiency of the proposed heuristics and the effect of graph topology on their performance are experimentally evaluated on the Google+ social network.

【Keywords】: Artificial Intelligence, Social Networks, Crawling, Heuristic Search

166. The Effects of Performance-Contingent Financial Incentives in Online Labor Markets.

【Paper Link】 【Pages】:

【Authors】: Ming Yin ; Yiling Chen ; Yu-An Sun

【Abstract】: Online labor markets such as Amazon Mechanical Turk (MTurk) have emerged as platforms that facilitate the allocation of productive effort across global economies. Many of these markets compensate workers with monetary payments. We study the effects of performance-contingent financial rewards on work quality and worker effort in MTurk via two experiments. We find that the magnitude of performance-contingent financial rewards alone affects neither quality nor effort. However, when workers work on two tasks of the same type in sequence, the change in the magnitude of the reward across the two tasks affects both. In particular, both work quality and worker effort increase (respectively, decrease) as the reward increases (respectively, decreases) for the second task. This suggests the existence of an anchoring effect on workers' perception of incentives in MTurk, and that this effect can be leveraged in workflow design to increase the effectiveness of financial incentives.

【Keywords】:

167. Heterogeneous Metric Learning with Joint Graph Regularization for Cross-Media Retrieval.

【Paper Link】 【Pages】:

【Authors】: Xiaohua Zhai ; Yuxin Peng ; Jianguo Xiao

【Abstract】: As a major component of big data, unstructured heterogeneous multimedia content such as text, images, audio, video, and 3D is increasing rapidly on the Internet. Users demand a new type of cross-media retrieval in which they can search across various media by submitting a query in any medium. Since the query and the retrieved results can be of different media, learning a heterogeneous metric is the key challenge. Most existing metric learning algorithms focus only on a single medium, where all media objects share the same data representation. In this paper, we propose a joint graph regularized heterogeneous metric learning (JGRHML) algorithm, which integrates the structure of different media into a joint graph regularization. In JGRHML, different media are complementary to each other, and optimizing them simultaneously makes the solution smoother for both media and further improves the accuracy of the final metric. Based on the heterogeneous metric, we further learn a high-level semantic metric through label propagation. JGRHML is effective at exploring the semantic relationships hidden across different modalities. Experimental results on two datasets with up to five media types show the effectiveness of our proposed approach.

【Keywords】: Heterogeneous Metric Learning; Joint Graph Regularization; Cross-Media Retrieval

168. Active Transfer Learning for Cross-System Recommendation.

【Paper Link】 【Pages】:

【Authors】: Lili Zhao ; Sinno Jialin Pan ; Evan Wei Xiang ; ErHeng Zhong ; Zhongqi Lu ; Qiang Yang

【Abstract】: Recommender systems, especially newly launched ones, have to deal with the data-sparsity issue, where little existing rating information is available. Recently, transfer learning has been proposed to address this problem by leveraging knowledge from related recommender systems where rich collaborative data are available. However, most previous transfer learning models assume that entity correspondences across different systems are given as input, which means that for any entity (e.g., a user or an item) in a target system, its corresponding entity in a source system is known. This assumption can hardly be satisfied in real-world scenarios, where entity correspondences across systems are usually unknown and the cost of identifying them can be expensive. For example, it is extremely difficult to identify whether a user A from Facebook and a user B from Twitter are the same person. In this paper, we propose a framework to construct entity correspondences with a limited budget by using active learning to facilitate knowledge transfer across recommender systems. Specifically, to maximize knowledge transfer, we first iteratively select entities in the target system based on our proposed criterion and query their correspondences in the source system. We then plug the actively constructed entity-correspondence mapping into a general transferred collaborative-filtering model to improve recommendation quality. We perform extensive experiments on real-world datasets to verify the effectiveness of our proposed framework for this cross-system recommendation problem.

【Keywords】:

Cognitive Systems Special Track 7

169. A Hybrid Architectural Approach to Understanding and Appropriately Generating Indirect Speech Acts.

【Paper Link】 【Pages】:

【Authors】: Gordon Michael Briggs ; Matthias Scheutz

【Abstract】: Current approaches to handling indirect speech acts (ISAs) do not account for their sociolinguistic underpinnings (i.e., politeness strategies). Deeper understanding and appropriate generation of indirect acts will require mechanisms that integrate natural language (NL) understanding and generation with social information about agent roles and obligations, which we introduce in this paper. Additionally, we tackle the problem of understanding and handling indirect answers that take the form of either speech acts or physical actions, which requires an inferential, plan-reasoning approach. In order to enable artificial agents to handle an even wider variety of ISAs, we present a hybrid approach that utilizes both the idiomatic and inferential strategies. We then demonstrate our system successfully generating indirect requests and handling indirect answers, and discuss avenues of future research.

【Keywords】: Indirect Speech Acts; Human-Robot Interaction

170. An Agent Model for the Appraisal of Normative Events Based in In-Group and Out-Group Relations.

【Paper Link】 【Pages】:

【Authors】: Nuno Ferreira ; Samuel Mascarenhas ; Ana Paiva ; Gennaro di Tosto ; Frank Dignum ; John McBreen ; Nick Degens ; Gert Jan Hofstede ; Giulia Andrighetto ; Rosaria Conte

【Abstract】: Emotional synthetic characters are able to evaluate (appraise) events as positive or negative, with their emotional states triggered by several factors. Currently, the vast majority of appraisal models for synthetic characters consider factors related to the goals and preferences of the characters. We argue that appraisals that take only these "personal" factors into consideration are incomplete: other, more social factors, such as the normative and social context, including in-group and out-group relations, should be considered as well. Without them, moral emotions such as shame cannot be appraised, limiting the believability of the characters in certain situations. We present a model for the appraisal of characters' actions that evaluates whether actions by in-group and out-group members that conform, or fail to conform, to social norms generate different emotions depending on the social relations between the characters. The model was implemented in an architecture for virtual agents and evaluated with humans. Results suggest that the emotions generated by our model are perceived by the participants in light of the social context, and that participants experienced emotions very similar, in both type and intensity, to those appraised and generated by the characters.

【Keywords】: Appraisal; Social Relations; Norms; Normative Events; Emotions

171. Learning to Efficiently Pursue Communication Goals on the Web with the GOSMR Architecture.

【Paper Link】 【Pages】:

【Authors】: Kevin Gold

【Abstract】: We present GOSMR ("goal oriented scenario modeling robots"), a cognitive architecture designed to show coordinated, goal-directed behavior over the Internet, focusing on the web browser as a case study. The architecture combines a variety of artificial intelligence techniques, including planning, temporal difference learning, elementary reasoning over uncertainty, and natural language parsing, but is designed to be computationally lightweight. Its intended use is to be deployed on virtual machines in large-scale network experiments in which simulated users' adaptation in the face of resource denial should be intelligent but varied. The planning system performs temporal difference learning of action times, discounts goals according to hyperbolic discounting of time-to-completion and chance of success, takes into account the assertions of other agents, and separates abstract action from site-specific affordances. Our experiment, in which agents learn to prefer a social networking style site for sending and receiving messages, shows that utility-proportional goal selection is a reasonable alternative to Boltzmann goal selection for producing a rational mix of behavior.

【Keywords】: cognitive architecture; web; execution monitoring; affordances; utility-proportional payoff selection; Boltzmann selection

172. Preemptive Strategies for Overcoming the Forgetting of Goals.

【Paper Link】 【Pages】:

【Authors】: Justin Li ; John E. Laird

【Abstract】: Maintaining and pursuing multiple goals over varying time scales is an important ability for artificial agents in many cognitive architectures. Goals that remain suspended for long periods, however, are prone to be forgotten. This paper presents a class of preemptive strategies that allow agents to selectively retain goals in memory and to recover forgotten goals. Preemptive strategies work by retrieving and rehearsing goals at triggers, which are either periodic or are predictive of the opportunity to act. Since cognitive architectures contain common hierarchies of memory systems and share similar forgetting mechanisms, these strategies work across multiple architectures. We evaluate their effectiveness in a simulated mobile robot controlled by Soar, and demonstrate how preemptive strategies can be adapted to different environments and agents.

【Keywords】: cognitive architectures; prospective memory; forgetting;

173. SALL-E: Situated Agent for Language Learning.

【Paper Link】 【Pages】:

【Authors】: Ian E. Perera ; James F. Allen

【Abstract】: We describe ongoing research towards building a cognitively plausible system for near one-shot learning of the meanings of attribute words and object names, by grounding them in a sensory model. The system learns incrementally from human demonstrations recorded with the Microsoft Kinect, in which the demonstrator can use unrestricted natural language descriptions. We achieve near one-shot learning of simple objects and attributes by focusing solely on examples where the learning agent is confident, ignoring the rest of the data. We evaluate the system's learning ability by having it generate descriptions of presented objects, including objects it has never seen before, and comparing the system's responses against collected human descriptions of the same objects. We propose that our method of retrieving object examples with a k-nearest neighbor classifier using Mahalanobis distance corresponds to a cognitively plausible representation of objects. Our initial results show promise for achieving rapid, near one-shot, incremental learning of word meanings.

【Keywords】: Symbol grounding; Natural language understanding; Language Learning
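The retrieval rule the abstract refers to, k-nearest-neighbor classification under Mahalanobis distance, might be sketched as follows; the feature vectors and labels are invented for illustration, and this is not the paper's implementation:

```python
import numpy as np

def mahalanobis_knn(query, examples, labels, k=3):
    """Return the majority label among the k examples nearest to `query`
    under a Mahalanobis distance estimated from the example set."""
    X = np.asarray(examples, dtype=float)
    # pseudo-inverse guards against a singular covariance estimate
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diffs = X - np.asarray(query, dtype=float)
    # squared Mahalanobis distance of each example to the query
    d2 = np.einsum('ij,jk,ik->i', diffs, cov_inv, diffs)
    nearest = np.argsort(d2)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

Unlike Euclidean distance, the Mahalanobis metric rescales feature dimensions by the data's covariance, so correlated or high-variance attributes do not dominate the vote.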

174. Automatic Extraction of Efficient Axiom Sets from Large Knowledge Bases.

【Paper Link】 【Pages】:

【Authors】: Abhishek B. Sharma ; Kenneth D. Forbus

【Abstract】: Efficient reasoning in large knowledge bases is an important problem for AI systems. Hand-optimization of reasoning becomes impractical as KBs grow, and impossible as knowledge is automatically added via knowledge capture or machine learning. This paper describes a method for automatically extracting axioms for efficient inference over large knowledge bases, given a set of query types and information about the types of facts currently in the KB as well as what might be learned. We use the highly right-skewed distribution of predicate connectivity in large knowledge bases to prune intractable regions of the search space. We show the efficacy of these techniques via experiments using queries from a learning-by-reading system. Results show that these methods lead to an order-of-magnitude improvement in time with minimal loss in coverage.

【Keywords】: knowledge-based systems, efficient reasoning

175. Graph Traversal Methods for Reasoning in Large Knowledge-Based Systems.

【Paper Link】 【Pages】:

【Authors】: Abhishek B. Sharma ; Kenneth D. Forbus

【Abstract】: Commonsense reasoning at scale is a core problem for cognitive systems. In this paper, we discuss two ways in which heuristic graph traversal methods can be used to generate plausible inference chains. First, we discuss how Cyc’s predicate-type hierarchy can be used to get reasonable answers to queries. Second, we explain how connection graph-based techniques can be used to identify script-like structures. Finally, we demonstrate through experiments that these methods lead to significant improvement in accuracy for both Q/A and script construction.

【Keywords】: commonsense reasoning, plausible Q/A

Computational Sustainability and AI Special Track 16

176. Agent Cooperatives for Effective Power Consumption Shifting.

【Paper Link】 【Pages】:

【Authors】: Charilaos Akasiadis ; Georgios Chalkiadakis

【Abstract】: In this paper, we present a directly applicable scheme for electricity consumption shifting and effective demand curve flattening. The scheme can employ the services of individual and cooperating consumer agents alike. Agents participating in the scheme, however, are motivated to form cooperatives in order to reduce their electricity bills via lower group prices granted for sizable consumption shifting from high- to low-demand time intervals. The scheme takes individual costs into account and uses a strictly proper scoring rule to reward contributors according to efficiency. Cooperative members, in particular, can attain variable reduced electricity price rates, given their different load-shifting capabilities. This allows even agents with initially forbidding shifting costs to participate in the scheme, and is achieved by a weakly budget-balanced, truthful reward-sharing mechanism. We provide four variants of this approach and evaluate it experimentally.

【Keywords】: Smart Grid; peak trimming; energy; incentives for cooperation; coalition formation; scoring rules; group buying

177. PAC Optimal Planning for Invasive Species Management: Improved Exploration for Reinforcement Learning from Simulator-Defined MDPs.

【Paper Link】 【Pages】:

【Authors】: Thomas G. Dietterich ; Majid Alkaee Taleghan ; Mark Crowley

【Abstract】: Often the most practical way to define a Markov Decision Process (MDP) is as a simulator that, given a state and an action, produces a resulting state and immediate reward sampled from the corresponding distributions. Simulators in natural resource management can be very expensive to execute, so that the time required to solve such MDPs is dominated by the number of calls to the simulator. This paper presents an algorithm, DDV, that combines improved confidence intervals on the Q values (as in interval estimation) with a novel upper bound on the discounted state occupancy probabilities to intelligently choose state-action pairs to explore. We prove that this algorithm terminates with a policy whose value is within epsilon of the optimal policy (with probability 1-delta) after making only polynomially-many calls to the simulator. Experiments on one benchmark MDP and on an MDP for invasive species management show very large reductions in the number of simulator calls required.

【Keywords】: MDP Planning; Reinforcement Learning; Simulator-Defined MDPs

178. Multiple Hypothesis Object Tracking For Unsupervised Self-Learning: An Ocean Eddy Tracking Application.

【Paper Link】 【Pages】:

【Authors】: James H. Faghmous ; Muhammed Uluyol ; Luke Styles ; Matthew Le ; Varun Mithal ; Shyam Boriah ; Vipin Kumar

【Abstract】: Mesoscale ocean eddies transport heat, salt, energy, and nutrients across oceans. As a result, accurately identifying and tracking such phenomena is crucial for understanding ocean dynamics and marine ecosystem sustainability. Traditionally, ocean eddies are monitored in two phases: identification and tracking. A major challenge for such an approach is that the tracking phase depends on the performance of the identification scheme, which can be susceptible to noise and sampling errors. In this paper, we focus on tracking and introduce the concept of multiple hypothesis assignment (MHA), which extends traditional multiple hypothesis tracking to cases where the features tracked are noisy or uncertain. Under this scheme, features are assigned to multiple potential tracks, and the final assignment is deferred until more data are available to make a relatively unambiguous decision. Unlike the most widely used methods in the eddy tracking literature, MHA uses contextual spatio-temporal information to autonomously take corrective measures on the detection step a posteriori, and performs significantly better in the presence of noise. This study is also the first to empirically analyze the relative robustness of eddy tracking algorithms.

【Keywords】:

179. Adaptive Spatio-Temporal Exploratory Models: Hemisphere-wide species distributions from massively crowdsourced eBird data.

【Paper Link】 【Pages】:

【Authors】: Daniel Fink ; Theodoros Damoulas ; Jaimin Dave

【Abstract】: Broad-scale spatiotemporal processes in conservation and sustainability science, such as continent-wide animal movement, occur across a range of spatial and temporal scales. Understanding these processes at multiple scales is crucial for developing and coordinating conservation strategies across national boundaries. In this paper we propose a general class of models we call AdaSTEM, for Adaptive Spatio-Temporal Exploratory Models, that are able to exploit variation in the density of observations while adapting to multiple scales in space and time. We show that this framework is able to efficiently discover multiscale structure when it is present, while retaining predictive performance when absent. We provide an empirical comparison and analysis, offer theoretical insights from the ensemble loss decomposition, and deploy AdaSTEM to estimate the spatiotemporal distribution of Barn Swallow (Hirundo rustica) across the Western Hemisphere using massively crowdsourced eBird data.

【Keywords】: Species Distribution Models; Big Data; Crowdsourcing; Ensemble Models; Mixture Models

180. Online Optimization with Dynamic Temporal Uncertainty: Incorporating Short Term Predictions for Renewable Integration in Intelligent Energy Systems.

【Paper Link】 【Pages】:

【Authors】: Vikas K. Garg ; T. S. Jayram ; Balakrishnan Narayanaswamy

【Abstract】: Growing costs, environmental awareness and government directives have set the stage for an increase in the fraction of electricity supplied using intermittent renewable sources such as solar and wind energy. To compensate for the increased variability in supply and demand, we need algorithms for online energy resource allocation under temporal uncertainty of future consumption and availability. Recent advances in prediction algorithms offer hope that a reduction in future uncertainty, through short term predictions, will increase the worth of the renewables. Predictive information is then revealed incrementally in an online manner, leading to what we call dynamic temporal uncertainty. We demonstrate the non-triviality of this problem and provide online algorithms, both randomized and deterministic, to handle time varying uncertainty in future rewards for non-stationary MDPs in general and for energy resource allocation in particular. We derive theoretical upper and lower bounds that hold even for a finite horizon, and establish that, in the deterministic case, discounting future rewards can be used as a strategy to maximize the total (undiscounted) reward. We also corroborate the efficacy of our methodology using wind and demand traces.

【Keywords】: Reinforcement Learning; Markov Decision Process (MDP); Robust Optimization; Smart grid; Storage management; Online learning; Optimization; Control systems; Imprecise MDPs; Energy; Uncertainty; Decision making; Non-stationary policies; MPC; Regret

181. Autonomous Agents in Future Energy Markets: The 2012 Power Trading Agent Competition.

Paper Link】 【Pages】:

【Authors】: Wolfgang Ketter ; Markus Peters ; John Collins

【Abstract】: Sustainable energy systems of the future will need more than efficient, clean, and low-cost energy sources. They will also need efficient price signals that motivate sustainable energy consumption behaviors and a tight real-time alignment of energy demand with supply from renewable and traditional sources. The Power Trading Agent Competition (Power TAC) is a rich, competitive, open-source simulation platform for future retail power markets built on real-world data and state-of-the-art customer models. Its purpose is to help researchers understand the dynamics of customer and retailer decision-making as well as the robustness of proposed market designs. Power TAC invites researchers to develop autonomous electricity broker agents and to pit them against best-in-class strategies in global competitions, the first of which will be held at AAAI 2013. Power TAC competitions provide compelling, actionable information for policy makers and industry leaders. We describe the competition scenario, demonstrate the realism of the Power TAC platform, and analyze key characteristics of successful brokers in one of our 2012 pilot competitions between seven research groups from five different countries.

【Keywords】: Smart Grid, Economic Simulation

182. Robust Network Design For Multispecies Conservation.

Paper Link】 【Pages】:

【Authors】: Ronan LeBras ; Bistra N. Dilkina ; Yexiang Xue ; Carla P. Gomes ; Kevin S. McKelvey ; Michael K. Schwartz ; Claire A. Montgomery

【Abstract】: Our work is motivated by an important network design application in computational sustainability concerning wildlife conservation. In the face of human development and climate change, it is important that conservation plans for protecting landscape connectivity exhibit a certain level of robustness. While previous work has focused on conservation strategies that result in a connected network of habitat reserves, the robustness of the proposed solutions has not been taken into account. In order to address this important aspect, we formalize the problem as a node-weighted bi-criteria network design problem with connectivity requirements on the number of disjoint paths between pairs of nodes. While in most previous work on survivable network design the objective is to minimize the cost of the selected network, our goal is to optimize the quality of the selected paths within a specified budget, while meeting the connectivity requirements. We characterize the complexity of the problem under different restrictions. We provide a mixed-integer programming encoding that allows for finding solutions with optimality guarantees, as well as a hybrid local search method with better scaling behavior but no guarantees. We evaluate the typical-case performance of our approaches using a synthetic benchmark, and apply them to a large-scale real-world network design problem concerning the conservation of wolverine and lynx populations in the U.S. Rocky Mountains (Montana).

【Keywords】:

183. Negotiated Learning for Smart Grid Agents: Entity Selection based on Dynamic Partially Observable Features.

Paper Link】 【Pages】:

【Authors】: Prashant P. Reddy ; Manuela M. Veloso

【Abstract】: An attractive approach to managing electricity demand in the Smart Grid relies on real-time pricing (RTP) tariffs, where customers are incentivized to quickly adapt to changes in the cost of supply. However, choosing amongst competitive RTP tariffs is difficult when tariff prices change rapidly. The problem is further complicated when we assume that the price changes for a tariff are published in real-time only to those customers who are currently subscribed to that tariff, thus making the prices partially observable. We present models and learning algorithms for autonomous agents that can address the tariff selection problem on behalf of customers. We introduce 'Negotiated Learning', a general algorithm that enables a self-interested sequential decision-making agent to periodically select amongst a variable set of 'entities' (e.g., tariffs) by negotiating with other agents in the environment to gather information about dynamic partially observable entity 'features' (e.g., tariff prices) that affect the entity selection decision. We also contribute a formulation of the tariff selection problem as a 'Negotiable Entity Selection Process', a novel representation. We support our contributions with intuitive justification and simulation experiments based on real data on an open Smart Grid simulation platform.

【Keywords】: multiagent systems, semi-cooperative agents, negotiated learning

184. A Tractable Leader-Follower MDP Model for Animal Disease Management.

Paper Link】 【Pages】:

【Authors】: Régis Sabbadin ; Anne-France Viet

【Abstract】: Sustainable animal disease management requires designing and implementing control policies at the regional scale. However, for diseases which are not regulated, individual farmers are responsible for the adoption and successful application of control policies at the farm scale. Organizations (groups of farmers, health institutions...) may try to influence farmers' control actions through financial incentives, in order to ensure sustainable (from the health and economic points of view) disease management policies. Economics / Operations Research frameworks have been proposed for modeling the effect of incentives on agents. The Leader-Follower Markov Decision Processes (LF-MDP) framework is one such framework, combining the Markov Decision Process (MDP) and stochastic game frameworks. However, since finding equilibrium policies in stochastic games is hard when the number of players is large, LF-MDP problems are intractable. Our contribution, in this article, is to propose a tractable model of the animal disease management problem. The tractable model is obtained through a few simple modeling approximations which are acceptable when the problem is viewed from the organization side. As a result, we design a polynomial-time algorithm for animal disease management, which we evaluate on a case study inspired by the problem of controlling the spread of the Porcine Reproductive and Respiratory Syndrome (PRRS).

【Keywords】: MDP
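
The abstract's polynomial-time LF-MDP algorithm is not reproduced here, but the MDP machinery it builds on can be illustrated with a minimal finite-horizon value-iteration sketch for a toy two-state (healthy/infected) herd model. All transition probabilities and rewards below are made-up numbers for illustration, not values from the paper:

```python
# Toy two-state disease-control MDP solved by finite-horizon value iteration.
# States, actions, probabilities, and rewards are invented for illustration.

STATES = ["healthy", "infected"]
ACTIONS = ["control", "ignore"]

# P[s][a] -> list of (next_state, probability); R[s][a] -> immediate reward.
P = {
    "healthy":  {"control": [("healthy", 0.95), ("infected", 0.05)],
                 "ignore":  [("healthy", 0.70), ("infected", 0.30)]},
    "infected": {"control": [("healthy", 0.60), ("infected", 0.40)],
                 "ignore":  [("healthy", 0.10), ("infected", 0.90)]},
}
R = {
    "healthy":  {"control": 8.0,  "ignore": 10.0},   # control costs 2 up front
    "infected": {"control": -4.0, "ignore": -6.0},
}

def value_iteration(horizon=20, gamma=0.95):
    """Backward induction over a finite horizon; returns (policy, values)."""
    V = {s: 0.0 for s in STATES}
    policy = {}
    for _ in range(horizon):
        newV = {}
        for s in STATES:
            q = {a: R[s][a] + gamma * sum(p * V[ns] for ns, p in P[s][a])
                 for a in ACTIONS}
            best = max(q, key=q.get)
            policy[s], newV[s] = best, q[best]
        V = newV
    return policy, V
```

With these numbers, "control" is optimal in the infected state because it both improves the immediate reward relative to ignoring and raises the chance of returning to the more valuable healthy state.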

185. A Temporal Motif Mining Approach to Unsupervised Energy Disaggregation: Applications to Residential and Commercial Buildings.

Paper Link】 【Pages】:

【Authors】: Huijuan Shao ; Manish Marwah ; Naren Ramakrishnan

【Abstract】: Non-intrusive appliance load monitoring has emerged as an attractive approach to study energy consumption patterns without instrumenting every device in a building. The ensuing computational problem is to disaggregate total energy usage into usage by specific devices, to gain insight into consumption patterns. We exploit the temporal ordering implicit in on/off events of devices to uncover motifs (episodes) corresponding to the operation of individual devices. Extracted motifs are then subjected to a sequence of constraint checks to ensure that the resulting episodes are interpretable. Our results reveal that motif mining is adept at distinguishing devices with multiple power levels and at disentangling the combinatorial operation of devices. With suitably configured processing steps, we demonstrate the applicability of our method to both residential and commercial buildings.

【Keywords】: energy disaggregation;motif mining;computational sustainability
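
As a rough illustration of the first processing step the abstract mentions, on/off events can be obtained by thresholding step changes in the aggregate power trace; recurring event sequences (motifs) would then be mined from the resulting event stream. The threshold and data below are illustrative, not the paper's:

```python
# Turn a total-power time series into on/off events by thresholding the
# step change between consecutive samples. Units and threshold are made up.

def extract_events(power, threshold=50):
    """Return (time, 'on'/'off', magnitude) tuples for large power steps."""
    events = []
    for t in range(1, len(power)):
        delta = power[t] - power[t - 1]
        if abs(delta) >= threshold:
            events.append((t, "on" if delta > 0 else "off", abs(delta)))
    return events
```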

186. Approximate Bayesian Inference for Reconstructing Velocities of Migrating Birds from Weather Radar.

Paper Link】 【Pages】:

【Authors】: Daniel R. Sheldon ; Andrew Farnsworth ; Jed Irvine ; Benjamin Van Doren ; Kevin F. Webb ; Thomas G. Dietterich ; Steve Kelling

【Abstract】: Archived data from the WSR-88D network of weather radars in the US hold detailed information about the continent-scale migratory movements of birds over the last 20 years. However, significant technical challenges must be overcome to understand this information and harness its potential for science and conservation. We present an approximate Bayesian inference algorithm to reconstruct the velocity fields of birds migrating in the vicinity of a radar station. This is part of a larger project to quantify bird migration at large scales using weather radar data.

【Keywords】: Bird Migration; Bayesian Inference; BirdCast; WSR-88D; Doppler Radar; Velocity Profiling

187. Enabling E-Mobility: Facility Location for Battery Loading Stations.

Paper Link】 【Pages】:

【Authors】: Sabine Storandt ; Stefan Funke

【Abstract】: The short cruising range due to the limited battery supply of current Electric Vehicles (EVs) is one of the main obstacles for a complete transition to E-mobility. Until batteries of higher energy storage density have been developed, it is of utmost importance to deliberately plan the locations of new loading stations for best possible coverage. Ideally the network of loading stations should allow driving from anywhere to anywhere (and back) without running out of energy. We show that minimizing the number of necessary loading stations to achieve this goal is NP-hard and, even worse, we can rule out polynomial-time constant-factor approximation algorithms. Hence algorithms with better approximation guarantees have to make use of the special structure of road networks (though it is not obvious how to do so). On the positive side, we show with instance-based lower bounds that our heuristic algorithms achieve provably good solutions on real-world problem instances.

【Keywords】: E-Mobility, Inapproximability, Heuristic Search
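
To give a feel for the kind of coverage heuristic involved, here is a generic greedy set-cover sketch that repeatedly opens the candidate station covering the most still-uncovered nodes. This is a textbook heuristic used purely for illustration; it is not the paper's algorithm, which exploits road-network structure and comes with instance-based lower bounds:

```python
# Greedy set-cover heuristic: open stations until every node is covered.
# `coverage` maps a candidate station to the set of nodes it would serve.

def greedy_stations(coverage):
    uncovered = set().union(*coverage.values())
    opened = []
    while uncovered:
        # Pick the station that newly covers the most nodes.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining nodes cannot be covered by any candidate
        opened.append(best)
        uncovered -= coverage[best]
    return opened
```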

188. Model Predictive Control with Uncertainty in Human Driven Systems.

Paper Link】 【Pages】:

【Authors】: Alexander David Styler ; Illah Reza Nourbakhsh

【Abstract】: Human driven systems present a unique optimization challenge for robot control. Generally, operators of these systems behave rationally given environmental factors and desired goals. However, information available to subsystem controllers is often incomplete, and the operator becomes more difficult to model without this input information. In this work we present a data-driven, nonparametric model to capture both the expectation and the uncertainty of the upcoming duty for a subsystem controller. This model is a modified k-nearest neighbor regressor used to generate weighted samples from a distribution of upcoming duty, which are then exploited to generate an optimal control. We test the model on a simulated heterogeneous energy pack manager in an Electric Vehicle operated by a human driver. For this domain, the upcoming load on the energy pack strongly affects the optimal use and charging strategy of the pack. Given incomplete information, there is a natural uncertainty in upcoming duty due to traffic, destination, signage, and other factors. We test against a dataset of real driving data gathered from volunteers, and compare the results to other models and to the optimal upper bound.

【Keywords】: Energy; Uncertainty; Prediction; Human-built systems and land use

189. Resource Sharing for Control of Wildland Fires.

Paper Link】 【Pages】:

【Authors】: Alan Tsang ; Kate Larson ; Rob McAlpine

【Abstract】: Wildland fires (or wildfires) occur on all continents except for Antarctica. These fires threaten communities, change ecosystems, and destroy vast quantities of natural resources, and cost estimates of the damage done annually are in the billions of dollars. Controlling wildland fires is resource-intensive and there are numerous examples where the resource demand has outstripped resource availability. Trends in changing climates, fire occurrence and the expansion of the wildland-urban interface all point to increased resource shortages in the future. One approach for coping with these shortages has been the sharing of resources across different wildland-fire agencies. This introduces new issues as agencies have to balance their own needs and risk-management with their desire to help fellow agencies in need. Using ideas from the field of multiagent systems, we conduct the first analysis of strategic issues arising in resource-sharing for wildland-fire control. We also argue that the wildland-fire domain has numerous features that make it attractive to researchers in artificial intelligence and computational sustainability.

【Keywords】: Wildland Fire; Resource Allocation; Multiagent Systems

190. Multiagent Coordination for Energy Consumption Scheduling in Consumer Cooperatives.

Paper Link】 【Pages】:

【Authors】: Andreas Veit ; Ying Xu ; Ronghuo Zheng ; Nilanjan Chakraborty ; Katia P. Sycara

【Abstract】: A key challenge to create a sustainable and energy-efficient society is in making consumer demand adaptive to energy supply, especially renewable supply. In this paper, we propose a partially-centralized organization of consumers, namely, a consumer cooperative for purchasing electricity from the market. We propose a novel multiagent coordination algorithm to shape the energy consumption of the cooperative. In the cooperative, a central coordinator buys the electricity for the whole group and consumers make their own consumption decisions based on their private consumption constraints and preferences. To coordinate individual consumers under incomplete information, we propose an iterative algorithm in which a virtual price signal is sent by the coordinator to induce consumers to shift demand. We prove that our algorithm converges to the central optimal solution. Additionally we analyze the convergence rate of the algorithm via simulations on randomly generated instances. The results indicate scalability with respect to the number of agents and consumption slots.

【Keywords】: Demand-Side Energy Management, Multi-Agent Systems, Consumer Cooperative

191. Large Landscape Conservation - Synthetic and Real-World Datasets.

Paper Link】 【Pages】:

【Authors】: Bistra N. Dilkina ; Katherine J. Lai ; Ronan LeBras ; Yexiang Xue ; Carla P. Gomes ; Ashish Sabharwal ; Jordan Suter ; Kevin S. McKelvey ; Michael K. Schwartz ; Claire A. Montgomery

【Abstract】: Biodiversity underpins ecosystem goods and services and hence protecting it is key to achieving sustainability. However, the persistence of many species is threatened by habitat loss and fragmentation due to human land use and climate change. Conservation efforts are implemented under very limited economic resources, and therefore designing scalable, cost-efficient and systematic approaches for conservation planning is an important and challenging computational task. In particular, preserving landscape connectivity between areas of good habitat has become a key conservation priority in recent years. We give an overview of landscape connectivity conservation and some of the underlying graph-theoretic optimization problems. We present a synthetic generator capable of creating families of randomized structured problems, capturing the essential features of real-world instances but allowing for a thorough typical-case performance evaluation of different solution methods. We also present two large-scale real-world datasets, including economic data on land cost, and species data for grizzly bears, wolverines and lynx.

【Keywords】: dataset; conservation planning; sustainability; landscape; network design;

Robotics Special Track 11

Paper Link】 【Pages】:

【Authors】: Danilo Bruno ; Sylvain Calinon ; Darwin G. Caldwell

【Abstract】: Skills can often be performed in many different ways. In order to provide robots with human-like adaptation capabilities, it is of great interest to learn several ways of achieving the same skills in parallel, since changes in the environment or in the robot can make some solutions infeasible. In this case, the knowledge of multiple solutions can avoid relearning the task. This problem is addressed in this paper within the framework of Reinforcement Learning, as the automatic determination of multiple optimal parameterized policies. For this purpose, a model handling a variable number of policies is built using a Bayesian non-parametric approach. The algorithm is first compared to single-policy algorithms on known benchmarks. It is then applied to a typical robotic problem presenting multiple solutions.

【Keywords】: Bayesian non-parametrics; Reinforcement Learning; Multi-optima Policy search

193. Multi-Target Detection and Recognition by UAVs Using Online POMDPs.

Paper Link】 【Pages】:

【Authors】: Caroline Ponzoni Carvalho Chanel ; Florent Teichteil-Königsbuch ; Charles Lesire

【Abstract】: This paper tackles high-level decision-making techniques for robotic missions, which involve both active sensing and symbolic goal reaching, under uncertain probabilistic environments and strong time constraints. Our case study is a POMDP model of an online multi-target detection and recognition mission by an autonomous UAV. The POMDP model of the multi-target detection and recognition problem is generated online from a list of areas of interest, which are automatically extracted at the beginning of the flight from a coarse-grained high altitude observation of the scene. The POMDP observation model relies on a statistical abstraction of an image processing algorithm's output used to detect targets. As the POMDP problem cannot be known and thus optimized before the beginning of the flight, our main contribution is an "optimize-while-execute" algorithmic framework: it drives a POMDP sub-planner to optimize and execute the POMDP policy in parallel under action duration constraints. We present new results from real outdoor flights and SAIL simulations, which highlight both the benefits of using POMDPs in multi-target detection and recognition missions, and of our "optimize-while-execute" paradigm.

【Keywords】: Perception and Mission Planning under Uncertainty; Partially Observable Markov Decision Processes; Parallel Anticipatory Planning and Execution; Offline Supervised Observation Model Learning

194. A Simple, but NP-Hard, Motion Planning Problem.

Paper Link】 【Pages】:

【Authors】: Lawrence H. Erickson ; Steven M. LaValle

【Abstract】: Determining the existence of a collision-free path between two points is one of the most fundamental questions in robotics. However, in situations where crossing an obstacle is costly but not impossible, it may be more appropriate to ask for the path that crosses the fewest obstacles. This arises in both autonomous outdoor navigation (where the obstacles are rough but not completely impassable terrain) and indoor navigation (where the obstacles are doors that can be opened if necessary). This problem, the minimum constraint removal problem, is at least as hard as the underlying path existence problem. In this paper, we demonstrate that the minimum constraint removal problem is NP-hard for navigation in the plane even when the obstacles are all convex polygons, a case where the path existence problem is very easy.

【Keywords】: motion planning; robotics; complexity
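
To make the objective concrete, here is a toy discretized version in which entering an obstacle cell on a grid costs 1 and free cells cost 0, so Dijkstra's algorithm finds the cheapest path. Note that this additive grid variant is easy; the paper's NP-hardness result concerns counting each distinct convex polygonal obstacle crossed (at most once) in the continuous plane, which this sketch does not capture:

```python
import heapq

# Minimum-crossing path on a toy grid: grid[r][c] == 1 marks an obstacle
# cell, and stepping into one costs 1. Dijkstra over 4-connected moves.

def min_crossings(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None  # goal unreachable
```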

195. Inferring Robot Task Plans from Human Team Meetings: A Generative Modeling Approach with Logic-Based Prior.

Paper Link】 【Pages】:

【Authors】: Been Kim ; Caleb M. Chacha ; Julie A. Shah

【Abstract】: We aim to reduce the burden of programming and deploying autonomous systems to work in concert with people in time-critical domains, such as military field operations and disaster response. Deployment plans for these operations are frequently negotiated on-the-fly by teams of human planners. A human operator then translates the agreed upon plan into machine instructions for the robots. We present an algorithm that reduces this translation burden by inferring the final plan from a processed form of the human team's planning conversation. Our approach combines probabilistic generative modeling with logical plan validation used to compute a highly structured prior over possible plans. This hybrid approach enables us to overcome the challenge of performing inference over the large solution space with only a small amount of noisy data from the team planning session. We validate the algorithm through human subject experimentation and show we are able to infer a human team's final plan with 83% accuracy on average. We also describe a robot demonstration in which two people plan and execute a first-response collaborative task with a PR2 robot. To the best of our knowledge, this is the first work that integrates a logical planning technique within a generative model to perform plan inference.

【Keywords】: generative modeling; robotics; rescue mission; machine learning

196. Data-Efficient Generalization of Robot Skills with Contextual Policy Search.

Paper Link】 【Pages】:

【Authors】: Andras Gabor Kupcsik ; Marc Peter Deisenroth ; Jan Peters ; Gerhard Neumann

【Abstract】: In robotics, controllers make the robot solve a task within a specific context. The context can describe the objectives of the robot or physical properties of the environment and is always specified before task execution. To generalize the controller to multiple contexts, we follow a hierarchical approach for policy learning: A lower-level policy controls the robot for a given context and an upper-level policy generalizes among contexts. Current approaches for learning such upper-level policies are based on model-free policy search, which requires an excessive number of interactions of the robot with its environment. More data-efficient policy search approaches are model based but, thus far, without the capability of learning hierarchical policies. We propose a new model-based policy search approach that can also learn contextual upper-level policies. Our approach is based on learning probabilistic forward models for long-term predictions. Using these predictions, we use information-theoretic insights to improve the upper-level policy. Our method achieves a substantial improvement in learning speed compared to existing methods on simulated and real robotic tasks.

【Keywords】: robotics; policy search; gaussian process; movement skill learning; generalization of skills

197. GSMDPs for Multi-Robot Sequential Decision-Making.

Paper Link】 【Pages】:

【Authors】: João Vicente Messias ; Matthijs T. J. Spaan ; Pedro U. Lima

【Abstract】: Markov Decision Processes (MDPs) provide an extensive theoretical background for problems of decision-making under uncertainty. In order to maintain computational tractability, however, real-world problems are typically discretized in states and actions as well as in time. Assuming synchronous state transitions and actions at fixed rates may result in models which are not strictly Markovian, or where agents are forced to idle between actions, losing their ability to react to sudden changes in the environment. In this work, we explore the application of Generalized Semi-Markov Decision Processes (GSMDPs) to a realistic multi-robot scenario. A case study will be presented in the domain of cooperative robotics, where real-time reactivity must be preserved, and synchronous discrete-time approaches are therefore sub-optimal. This case study is tested on a team of real robots, and also in realistic simulation. By allowing asynchronous events to be modeled over continuous time, the GSMDP approach is shown to provide greater solution quality than its discrete-time counterparts, while still being approximately solvable by existing methods.

【Keywords】: Markov Decision Processes; Sequential Decision Making; Multi-Robot Systems

198. Robot Motion Planning with Dynamics as Hybrid Search.

Paper Link】 【Pages】:

【Authors】: Erion Plaku

【Abstract】: This paper presents a framework for motion planning with dynamics as hybrid search over the continuous space of feasible motions and the discrete space of a low-dimensional workspace decomposition. Each step of the hybrid search consists of expanding a frontier of regions in the discrete space using cost heuristics as guide followed by sampling-based motion planning to expand a tree of feasible motions in the continuous space to reach the frontier. The approach is geared towards robots with many degrees-of-freedom (DOFs), nonlinear dynamics, and nonholonomic constraints, which make it difficult to follow discrete-search paths to the goal, and hence require a tight coupling of motion planning and discrete search. Comparisons to related work show significant computational speedups.

【Keywords】: Motion planning; discrete search; dynamics

199. Learning Collaborative Impedance-Based Robot Behaviors.

Paper Link】 【Pages】:

【Authors】: Leonel Dario Rozo ; Sylvain Calinon ; Darwin G. Caldwell ; Pablo Jiménez ; Carme Torras

【Abstract】: Research in learning from demonstration has focused on transferring movements from humans to robots. However, a need is arising for robots that do not just replicate the task on their own, but that also interact with humans in a safe and natural way to accomplish tasks cooperatively. Robots with variable impedance capabilities open the door to new challenging applications, where the learning algorithms must be extended by encapsulating force and vision information. In this paper we propose a framework to transfer impedance-based behaviors to a torque-controlled robot by kinesthetic teaching. The proposed model encodes the examples as a task-parameterized statistical dynamical system, where the robot impedance is shaped by estimating virtual stiffness matrices from the set of demonstrations. A collaborative assembly task is used as testbed. The results show that the model can be used to modify the robot impedance along task execution to facilitate the collaboration, by triggering stiff and compliant behaviors in an on-line manner to adapt to the user's actions.

【Keywords】: Physical human-robot interaction; collaborative manipulation; impedance learning; task-parameterized model

200. Compact RGBD Surface Models Based on Sparse Coding.

Paper Link】 【Pages】:

【Authors】: Michael Ruhnke ; Liefeng Bo ; Dieter Fox ; Wolfram Burgard

【Abstract】: In this paper, we describe a novel approach to construct compact colored 3D environment models representing local surface attributes via sparse coding. Our method decomposes a set of colored point clouds into local surface patches and encodes them based on an overcomplete dictionary. Instead of storing the entire point cloud, we store a dictionary, surface patch positions, and a sparse code description of the depth and RGB attributes for every patch. The dictionary is learned in an unsupervised way from surface patches sampled from indoor maps. We show that better dictionaries can be learned by extending the K-SVD method with a binary weighting scheme that ignores undefined surface cells. Through experimental evaluation on real world laser and RGBD datasets we demonstrate that our method produces compact and accurate models. Furthermore, we clearly outperform an existing state of the art method in terms of compactness, accuracy, and computation time. Additionally, we demonstrate that our sparse code descriptions can be utilized for other important tasks such as object detection.

【Keywords】: RGBD, surface models, sparse coding

201. Open-Loop Planning in Large-Scale Stochastic Domains.

Paper Link】 【Pages】:

【Authors】: Ari Weinstein ; Michael L. Littman

【Abstract】: We focus on effective sample-based planning in the face of underactuation, high-dimensionality, drift, discrete system changes, and stochasticity. These are hallmark challenges for important problems, such as humanoid locomotion. In order to ensure broad applicability, we assume domain expertise is minimal and limited to a generative model. In order to make the method responsive, computational costs that scale linearly with the amount of samples taken from the generative model are required. We bring to bear a concrete method that satisfies all these requirements; it is a receding-horizon open-loop planner that employs cross-entropy optimization for policy construction. In simulation, we empirically demonstrate near-optimal decisions in a small domain and effective locomotion in several challenging humanoid control tasks.

【Keywords】: reinforcement learning; receding horizon control; continuous Markov decision processes
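
The cross-entropy optimization at the core of such a receding-horizon open-loop planner can be sketched in a few lines: sample open-loop action sequences from a Gaussian, keep an elite fraction under a rollout cost, and refit the distribution. The toy cost function and all parameters below are illustrative, not the paper's humanoid setup:

```python
import random

# Cross-entropy method (CEM) over an open-loop action sequence. The
# "rollout cost" stands in for scoring a sequence with a generative model;
# here it is simply the squared distance of each action to a target value.

def rollout_cost(actions, target=0.7):
    return sum((a - target) ** 2 for a in actions)

def cem_plan(horizon=5, iters=30, pop=60, elite=10, seed=0):
    rng = random.Random(seed)
    mean = [0.0] * horizon
    std = [1.0] * horizon
    for _ in range(iters):
        # Sample a population of action sequences from the current Gaussian.
        samples = [[rng.gauss(m, s) for m, s in zip(mean, std)]
                   for _ in range(pop)]
        samples.sort(key=rollout_cost)
        elites = samples[:elite]
        # Refit mean and (floored) standard deviation to the elite set.
        mean = [sum(col) / elite for col in zip(*elites)]
        std = [max(1e-3, (sum((x - m) ** 2 for x in col) / elite) ** 0.5)
               for col, m in zip(zip(*elites), mean)]
    return mean
```

In a receding-horizon loop, only the first action of the returned sequence would be executed before replanning.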

202. Structure and Intractability of Optimal Multi-Robot Path Planning on Graphs.

Paper Link】 【Pages】:

【Authors】: Jingjin Yu ; Steven M. LaValle

【Abstract】: In this paper, we study the structure and computational complexity of optimal multi-robot path planning problems on graphs. Our results encompass three formulations of the discrete multi-robot path planning problem, including a variant that allows synchronous rotations of robots along fully occupied, disjoint cycles on the graph. Allowing rotation of robots provides a more natural model for multi-robot path planning because robots can communicate. Our optimality objectives are to minimize the total arrival time, the makespan (last arrival time), and the total distance. On the structure side, we show that, in general, these objectives demonstrate a pairwise Pareto optimal structure and cannot be simultaneously optimized. On the computational complexity side, we extend previous work and show that, regardless of the underlying multi-robot path planning problem, these objectives are all intractable to compute. In particular, our NP-hardness proof for the time optimal versions, based on a minimal and direct reduction from the 3-satisfiability problem, shows that these problems remain NP-hard even when there are only two groups of robots (i.e. robots within each group are interchangeable).

【Keywords】: multi-robot path planning; Pareto optimality; NP-hardness

Pre-PhD Student Abstracts 25

203. Advice Provision in Multiple Prospect Selection Problems.

Paper Link】 【Pages】:

【Authors】: Amos Azaria ; Sarit Kraus

【Abstract】: When humans face a broad spectrum of topics, where each topic consists of several options, they usually make a decision on each topic separately. A person will usually perform better by making a global decision; however, taking all consequences into account is extremely difficult. We present a novel computational method for advice generation in an environment where people need to decide among multiple selection problems. This method is based on prospect theory and uses machine learning techniques. We graphically present this advice to the users and compare it with advice that encourages the users to always select the option with the higher expected outcome. We show that our method outperforms the expected-outcome approach in terms of user happiness and satisfaction.

【Keywords】: Human-Advice Interaction; Advice provision; Prospect Theory
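
For context, the prospect-theory value function that such advice methods build on weighs losses more heavily than gains. Below is a minimal sketch using Tversky and Kahneman's commonly cited 1992 parameter estimates (curvature 0.88, loss aversion 2.25); this is standard prospect theory, not the paper's learned model:

```python
# Prospect-theory value function: concave for gains, convex and steeper
# for losses (loss aversion). Parameters are the 1992 Tversky-Kahneman
# estimates, used here purely as illustrative defaults.

def pt_value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)
```

The asymmetry is why advice based on expected outcome alone can diverge from what people actually prefer.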

204. A Maximum K-Min Approach for Classification.

Paper Link】 【Pages】:

【Authors】: Mingzhi Dong ; Liang Yin

【Abstract】: In this paper, a general Maximum K-Min approach for classification is proposed, which focuses on maximizing the gain obtained by the K worst-classified instances while ignoring the remaining ones. To make the original optimization problem with combinatorial constraints computationally tractable, optimization techniques are adopted and a general compact representation lemma is summarized. Based on the lemma, a Nonlinear Maximum K-Min (NMKM) classifier is presented and the experimental results demonstrate the superior performance of the Maximum K-Min approach.

【Keywords】: Classification; Maximum K-Min
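
The objective itself is easy to state in code: score a classifier by the summed margins of its K worst-classified instances and ignore the rest. A one-function sketch (the NMKM optimization itself is not shown):

```python
# Maximum K-Min objective: sum the K smallest margins, where
# margins[i] = y_i * f(x_i) and larger means better-classified.
# Maximizing this focuses learning on the K worst-classified instances.

def k_min_objective(margins, k):
    return sum(sorted(margins)[:k])
```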

205. Does One-Against-All or One-Against-One Improve the Performance of Multiclass Classifications?

Paper Link】 【Pages】:

【Authors】: Robert Kyle Eichelberger ; Victor S. Sheng

【Abstract】: One-against-all and one-against-one are two popular methodologies for reducing multiclass classification problems into a set of binary classifications. In this paper, we are interested in the performance of both one-against-all and one-against-one for classification algorithms, such as decision trees, naïve Bayes, support vector machines, and logistic regression. Since both one-against-all and one-against-one work like creating a classification committee, they are expected to improve the performance of classification algorithms. However, our experimental results surprisingly show that one-against-all worsens the performance of the algorithms on most datasets. One-against-one helps, but performs worse than the same number of iterations of bagging these algorithms. Thus, we conclude that both one-against-all and one-against-one should not be used for algorithms that can perform multiclass classification directly. Bagging is a better approach for improving their performance.

【Keywords】: multi-class classification; One-Against-All; One-Against-One; All-at-Once
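
The two reduction schemes compared above are standard and easy to sketch. The following is a minimal illustration, not the paper's experimental setup: `train_binary` is a hypothetical stand-in for any binary base learner (here it just predicts the majority label of its training split), and the dataset is invented.

```python
from collections import Counter
from itertools import combinations

def train_binary(X, y):
    # Hypothetical stand-in for any binary base learner:
    # it simply predicts the majority label of its training split.
    majority = Counter(y).most_common(1)[0][0]
    return lambda x: majority

def one_vs_all_tasks(classes):
    # One-against-all: K binary problems, each class vs. all others.
    return [(c, [d for d in classes if d != c]) for c in classes]

def one_vs_one_predict(X, y, x_new):
    # One-against-one: K*(K-1)/2 pairwise binary problems;
    # the final label is chosen by majority vote over the pairs.
    classes = sorted(set(y))
    votes = Counter()
    for a, b in combinations(classes, 2):
        sub = [(xi, yi) for xi, yi in zip(X, y) if yi in (a, b)]
        clf = train_binary([s[0] for s in sub], [s[1] for s in sub])
        votes[clf(x_new)] += 1
    return votes.most_common(1)[0][0]

X, y = [1, 2, 3, 4, 5], ["a", "a", "a", "b", "c"]
print(len(one_vs_all_tasks(sorted(set(y)))))  # 3 binary tasks for 3 classes
print(one_vs_one_predict(X, y, 2.5))
```

Either decomposition multiplies the number of trained models, which is one reason the abstract compares them against simply bagging a learner that handles multiple classes natively.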

206. Selecting the Appropriate Consistency Algorithm for CSPs Using Machine Learning Classifiers.

Paper Link】 【Pages】:

【Authors】: Daniel J. Geschwender ; Shant Karakashian ; Robert J. Woodward ; Berthe Y. Choueiry ; Stephen D. Scott

【Abstract】: Computing the minimal network of a Constraint Satisfaction Problem (CSP) is a useful and difficult task. Two algorithms, PerTuple and AllSol, were proposed to this end. The performances of these algorithms vary with the problem instance. We use Machine Learning techniques to build a classifier that predicts which of the two algorithms is likely to be more effective.

【Keywords】: Constraint Satisfaction; Machine Learning

207. A Mediation Mechanism for Automated Negotiating Agents Whose Utility Changes over Time.

Paper Link】 【Pages】:

【Authors】: Keisuke Hara ; Takayuki Ito

【Abstract】: Multi-issue negotiation protocols are an important field of study because real-world negotiation problems are often complex and involve multiple issues. Although much previous work has addressed only linear utility functions, that is, simple negotiations involving independent issues, non-linear utility functions for complex negotiations involving interdependent issues have recently gained attention. Most studies, however, do not focus on changes in the utility space over time. In economic theory, it is often assumed that the utility function changes dynamically over time. It is important to seek the Pareto front, the set of Pareto optimal points, in negotiation problems. Therefore, in this paper we propose a complex utility space that changes over time and a negotiation mechanism in which the mediator takes the lead in negotiation based on a genetic algorithm (GA). The experimental results show that our approach is suitable for utility that changes dynamically over time, and that it finds and follows the Pareto front effectively.

【Keywords】: Multi-Issue Negotiation, Interdependent Issues, change over time, GA

208. The Role of Complex Network Dynamics in the Emergence of Multiagent Coalition.

Paper Link】 【Pages】:

【Authors】: Mohammad Rashedul Hasan ; Anita Raja

【Abstract】: Emergence of a single coalition among self-interested agents operating on large scale-free networks is a challenging task. Many existing approaches assume a given static network platform and do not use the network dynamics to facilitate the dynamics of agent interactions. In this paper, we present a decentralized game-theoretic approach to this single coalition emergence problem in which agent communications are limited only to their immediate neighbors. Our coalition emergence algorithm is based on the heuristic that agents benefit by forming coalitions with wealthy (higher payoff) and influential (higher accumulated coupling strength) neighbors. Simulation results show that the emergence phenomenon is significantly enhanced when the topological insights, such as increasing degree-heterogeneity and clustering, are embedded into the agent partner selection strategy.

【Keywords】: Scale-free network, emergence, complex network dynamics, multiagent coalition, game theory

209. Phase Transition and Network Structure in Realistic SAT Problems.

Paper Link】 【Pages】:

【Authors】: Soumya C. Kambhampati ; Thomas Liu

【Abstract】: Previous research has shown that 3-SAT problems are easy to solve both when the “constrainedness” (the ratio of the number of clauses to the number of variables) is low and when it is high, abruptly transitioning from easy to hard in a very narrow region of constrainedness. Most of these “phase transition” studies were done on SAT instances that follow a uniform random distribution, in which variables take part in clauses with uniform probability and clauses are independent (uncorrelated). These assumptions are, however, not satisfied by SAT instances that result from real problems. Our project aims for a deeper understanding of the hardness of SAT problems that arise in practice. In particular, we study two key questions: (1) How does the phase transition behavior change under more realistic and natural distributions of SAT problems? and (2) Can we gain an understanding of the phase transition in terms of the network structure of these SAT problems? Our hypothesis is that network properties help predict and explain how the easy-to-hard transition for realistic SAT problems differs from that for the uniform random distribution.

【Keywords】: Boolean Satisfiability, Phase Transition, Social Networks

210. Locate the Hate: Detecting Tweets against Blacks.

Paper Link】 【Pages】:

【Authors】: Irene Kwok ; Yuzhou Wang

【Abstract】: Although the social medium Twitter grants users freedom of speech, its instantaneous nature and retweeting features also amplify hate speech. Because Twitter has a sizeable black constituency, racist tweets against blacks are especially detrimental in the Twitter community, though this effect may not be obvious against a backdrop of half a billion tweets a day. We apply a supervised machine learning approach, employing inexpensively acquired labeled data from diverse Twitter accounts to learn a binary classifier for the labels “racist” and “nonracist.” The classifier has a 76% average accuracy on individual tweets, suggesting that, with further improvements, our work can contribute data on the sources of anti-black hate speech.

【Keywords】: Machine Learning; Artificial Intelligence and the Web

211. Hybrid Model-Based Diagnosis of Web Service Compositions.

Paper Link】 【Pages】:

【Authors】: Zhichun Jia ; Rong Chen

【Abstract】: Fault diagnosis of web service compositions at run time is appealing for creating consolidated distributed applications. To this end, we propose a hybrid model-based diagnosis method that exploits the service process description or historical execution information to enhance the service composition model, and localizes faults by comparing the exceptional execution against the maximum-likelihood correct execution. Experiments are conducted to evaluate the effectiveness of our method in diagnosing faults in web service compositions.

【Keywords】: fault diagnosis; web service; BPEL process

212. Crowd Formalization of Action Conditions.

Paper Link】 【Pages】:

【Authors】: Walter Stephen Lasecki ; Leon Weingard ; Jeffrey Philip Bigham ; George Ferguson

【Abstract】: Training intelligent systems is a time-consuming and costly process that often limits their application to real-world problems. Prior work in crowdsourcing has attempted to compensate for this challenge by generating sets of labeled training data for machine learning algorithms. In this work, we seek to move beyond collecting just statistical data and explore how to gather structured, relational representations of a scenario using the crowd. We focus on activity recognition because of its broad applicability, high level of variation between individual instances, and the difficulty of training systems a priori. We present ARchitect, a system that uses the crowd to ascertain pre- and post-conditions for actions observed in a video and to find relations between actions. Our ultimate goal is to identify multiple valid execution paths from a single set of observations, which suggests that one-off learning from the crowd is possible.

【Keywords】: crowdsourcing, activity recognition, plan constraints

213. Trading Space for Time in Grid-Based Path Finding.

Paper Link】 【Pages】:

【Authors】: William Lee ; Ramon Lawrence

【Abstract】: Grid-based path finding is required in many games to move agents. We present an algorithm called DBA that uses a database of pre-computed paths to reduce the time needed to solve search problems. When evaluated on benchmark maps from Dragon Age™, DBA requires less search time and produces paths that are less suboptimal than those of the PRA* implementation used in Dragon Age™.

【Keywords】: path finding ; search ; games ; database ; real-time

214. Online Group Feature Selection from Feature Streams.

Paper Link】 【Pages】:

【Authors】: Hai-Guang Li ; Xindong Wu ; Zhao Li ; Wei Ding

【Abstract】: Standard feature selection algorithms deal with given candidate feature sets at the individual feature level. When features exhibit certain group structures, it is beneficial to conduct feature selection in a grouped manner. For high-dimensional data, it can be far preferable to generate and process features online, one at a time, rather than wait for all features to be generated before learning begins. In this paper, we discuss the new and interesting problem of online group feature selection from feature streams, operating at the group and individual feature levels simultaneously. Extensive experiments on both real-world and synthetic datasets demonstrate the superiority of the proposed algorithm.

【Keywords】:

215. Making Simple Tabular Reduction Works on Negative Table Constraints.

Paper Link】 【Pages】:

【Authors】: Hongbo Li ; Yanchun Liang ; Jinsong Guo ; Zhanshan Li

【Abstract】: Simple Tabular Reduction (STR) algorithms work well for establishing Generalized Arc Consistency (GAC) on positive table constraints. However, the existing STR algorithms are useless for negative table constraints. In this work, we propose a novel STR algorithm, and an improvement of it, that work on negative table constraints. Our preliminary experiments are performed on random instances and on a set of benchmark instances. The results show that the new algorithms outperform GAC-valid and the MDD-based GAC algorithm.

【Keywords】: Simple Tabular Reduction; Generalized Arc Consistency

216. Subchloroplast Location Prediction via Homolog Knowledge Transfer and Feature Selection.

Paper Link】 【Pages】:

【Authors】: Xiaomei Li ; Xindong Wu ; Gong-Qing Wu ; Xuegang Hu

【Abstract】: The accuracy of subchloroplast location prediction algorithms often depends on predictive and succinct features derived from proteins. Thus, to improve the prediction accuracy, this paper proposes a novel SubChloroplast location prediction method, called SCHOTS, which integrates the HOmolog knowledge Transfer and feature Selection methods. SCHOTS contains two stages. First, discriminating features are generated by WS-LCHI, a Weighted Gene Ontology (GO) transfer model based on bit-Score of proteins and Logarithmic transformation of CHI-square. Second, the more informative GO terms are selected from the features. Extensive studies conducted on three real datasets demonstrate that SCHOTS outperforms three off-the-shelf subchloroplast prediction methods.

【Keywords】: Subchloroplast location prediction; Bit-score; Term-selection method; Gene Ontology

217. An Effective Approach for Imbalanced Classification: Unevenly Balanced Bagging.

Paper Link】 【Pages】:

【Authors】: Guohua Liang ; Anthony G. Cohn

【Abstract】: Learning from imbalanced data is an important problem in data mining research. Much research has addressed the problem of imbalanced data by using sampling methods to generate an equally balanced training set to improve the performance of prediction models, but it is unclear what ratio of class distribution is best for training a prediction model. Bagging is one of the most popular and effective ensemble learning methods for improving the performance of prediction models; however, it has a major drawback on extremely imbalanced datasets, and it is unclear under which conditions bagging is outperformed by other sampling schemes for imbalanced classification. These issues motivate us to propose a novel approach, unevenly balanced bagging (UBagging), to boost the performance of prediction models for imbalanced binary classification. Our experimental results demonstrate that UBagging is effective and statistically significantly superior to a single J48 decision tree learner (SingleJ48), bagging, and equally balanced bagging (BBagging) on 32 imbalanced datasets.

【Keywords】: Imbalanced class distribution, Ensemble Learning
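
The core mechanism behind both BBagging and the paper's UBagging is resampling each bootstrap bag to a chosen class ratio. Below is a generic, hedged sketch of that resampling step (function name and data are invented, not taken from the paper): `pos_ratio=0.5` corresponds to the equally balanced scheme, while an unevenly balanced scheme would vary the ratio across bags.

```python
import random

def resample_to_ratio(data, labels, pos_ratio, n, rng):
    """Bootstrap a bag of size n with a chosen fraction of positive
    (minority) examples. pos_ratio=0.5 mimics equally balanced
    bagging; varying it per bag mimics an unevenly balanced scheme."""
    pos = [(x, y) for x, y in zip(data, labels) if y == 1]
    neg = [(x, y) for x, y in zip(data, labels) if y == 0]
    n_pos = round(n * pos_ratio)
    bag = ([rng.choice(pos) for _ in range(n_pos)] +
           [rng.choice(neg) for _ in range(n - n_pos)])
    rng.shuffle(bag)
    return bag

rng = random.Random(7)
data = list(range(100))
labels = [1] * 10 + [0] * 90      # 10% minority class
bag = resample_to_ratio(data, labels, 0.5, 20, rng)
print(sum(y for _, y in bag))     # exactly half the bag is minority-class
```

A base learner trained on each such bag, with predictions aggregated by voting, gives the bagging-style ensemble the abstract evaluates.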

218. A First-Order Logic Based Framework for Verifying Simulations.

Paper Link】 【Pages】:

【Authors】: Hui Meen Nyew ; Nilufer Onder ; Soner Önder ; Zhenlin Wang

【Abstract】: Modern science relies on simulation techniques for understanding phenomena, exploring design options, and evaluating models. Assuring the correctness of simulators is a key problem to which a multitude of solutions, ranging from manual inspection to formal verification, are applicable. Formal verification incorporates the necessary rigor, but not all simulators are generated from formal specifications. Manual inspection is readily available but lacks rigor and is prone to errors. In this paper, we describe an automated verification system (AVS) in which the constraints that the system must adhere to are specified by the user in general-purpose first-order logic. AVS translates these constraints into a verification program that scans the simulator trace and verifies that no constraints are violated. Computer microarchitecture simulations were successfully used to demonstrate the proposed approach. This paper describes the preliminary results and discusses how artificial intelligence techniques can be used to facilitate effective run-time verification of simulators.

【Keywords】: Runtime Verification; Constraint Satisfaction Problem; Data Stream Mining; First-Order Logic

219. Concurrent Reasoning with Inference Graphs.

Paper Link】 【Pages】:

【Authors】: Daniel R. Schlegel ; Stuart C. Shapiro

【Abstract】: Since their popularity began to rise in the mid-2000s, there has been significant growth in the number of multi-core and multi-processor computers available. Knowledge representation systems using logical inference have been slow to embrace this new technology. We present the concept of inference graphs, a natural deduction inference system which scales well on multi-core and multi-processor machines. Inference graphs enhance propositional graphs by treating propositional nodes as tasks which can be scheduled to operate upon messages sent between nodes via the arcs that already exist as part of the propositional graph representation. The use of scheduling heuristics within a prioritized message passing architecture allows inference graphs to perform very well in forward, backward, bi-directional, and focused reasoning. Tests demonstrate the usefulness of our scheduling heuristics, and show significant speedup in both best case and worst case inference scenarios as the number of processors increases.

【Keywords】: Automated Inference; Inference Graphs; SNePS

220. Simplified Lattice Models for Protein Structure Prediction: How Good Are They?

Paper Link】 【Pages】:

【Authors】: Swakkhar Shatabda ; M. A. Hakim Newton ; Abdul Sattar

【Abstract】: In this paper, we present a local search framework for the lattice fit problem for proteins. Our algorithm significantly improves on state-of-the-art results and justifies the significance of lattice models. In addition, our analysis reveals weaknesses in several of the energy functions used.

【Keywords】: Protein Structure Prediction; Simplified Models; Lattice Fit; Cubic Lattice; FCC Lattice; Energy Models; Local Search; Constraints

221. Graphical Model-Based Learning in High Dimensional Feature Spaces.

Paper Link】 【Pages】:

【Authors】: Zhao Song ; Yuke Zhu

【Abstract】: Digital media tend to combine text and images to express richer information, especially on image hosting and online shopping websites. This trend presents a challenge in understanding the contents from different forms of information. Features representing visual information are usually sparse in high dimensional space, which makes the learning process intractable. In order to understand text and its related visual information, we present a new graphical model-based approach to discover more meaningful information in rich media. We extend the standard Latent Dirichlet Allocation (LDA) framework to learn in high dimensional feature spaces.

【Keywords】: Data Mining; Machine Learning; Knowledge Discovery

222. On a Noun-Driven Syntactic Paradigm.

Paper Link】 【Pages】:

【Authors】: Lauren M. Stuart

【Abstract】: In addressing a verb bias in syntactic analysis, we present the beginnings of a noun-driven analysis paradigm. Such a paradigm may complement, and compensate for some weaknesses in, the existing verb-driven paradigm (in which the noun is subordinated) as applied to information sources or contexts in which the data is structured more around objects than around events.

【Keywords】: syntax; computational syntax; noun-driven syntax; natural language processing; computational linguistics

223. Empirical Comparison of Multi-Label Classification Algorithms.

Paper Link】 【Pages】:

【Authors】: Clifford A. Tawiah ; Victor S. Sheng

【Abstract】: Multi-label classification arises in many real-world applications. This paper empirically studies the performance of a variety of multi-label classification algorithms, some developed via problem transformation and some via algorithm adaptation. Our experimental results show that the adaptive Multi-Label K-Nearest Neighbor performs the best, followed by Random k-Label Set, followed by Classifier Chain and Binary Relevance. Adaboost.MH performs the worst, followed by Pruned Problem Transformation. Our experimental results also give us confidence about the correlations among labels. These insights shed light on future research directions for multi-label classification.

【Keywords】: Classification; Algorithms; Multi Label classification; Classification algorithms
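
Binary Relevance, the simplest problem-transformation method in the comparison above, can be sketched in a few lines. This is a generic illustration with invented labels, not the paper's code: each label becomes an independent binary target column, losing the inter-label correlations the abstract remarks on.

```python
def binary_relevance(Y, label_space):
    # One binary target column per label; each column can then be
    # handed to any ordinary single-label binary classifier.
    return {lab: [int(lab in ys) for ys in Y] for lab in label_space}

# Three instances with (possibly empty) label sets.
Y = [{"sports"}, {"sports", "news"}, set()]
cols = binary_relevance(Y, ["sports", "news"])
print(cols["sports"])  # [1, 1, 0]
print(cols["news"])    # [0, 1, 0]
```

Methods such as Classifier Chain and Random k-Label Set differ precisely in how they reintroduce the label correlations this transformation discards.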

224. WordNet Based Multi-Way Concept Hierarchy Construction from Text Corpus.

Paper Link】 【Pages】:

【Authors】: Ding Tu ; Ling Chen ; Gencai Chen

【Abstract】: In this paper, we propose an approach for building a multi-way concept hierarchy from a text corpus, based on WordNet and multi-way hierarchical clustering. In addition, a new evaluation metric is presented, and our approach is compared with four kinds of existing methods on the Amazon Customer Review dataset.

【Keywords】:

225. Personalized Recommendation Based on Co-Ranking and Query-Based Collaborative Diffusion.

Paper Link】 【Pages】:

【Authors】: Xiao Yang ; Zhaoxin Zhang ; Qiang Wang

【Abstract】: In this paper, we present an adaptive graph-based personalized recommendation method based on co-ranking and query-based collaborative diffusion. By utilizing the unique network structure of an n-partite heterogeneous graph, we address the problem of personalized recommendation in a two-layer ranking process, with the help of a reasonable measure of high- and low-order relationships and an analysis of how the user's preference is represented in the graph. The experiments show that this algorithm outperforms traditional CF methods, achieves competitive performance compared with many model-based and graph-based recommendation methods, and has better scalability and flexibility.

【Keywords】: Personalized Recommendation; Graph Ranking

226. Imbalanced Multiple Noisy Labeling for Supervised Learning.

Paper Link】 【Pages】:

【Authors】: Jing Zhang ; Xindong Wu ; Victor Shengli Sheng

【Abstract】: When labeling objects via Internet-based outsourcing systems, labelers may be biased because they lack expertise or dedication, or because of personal preference. These factors cause imbalanced multiple noisy labeling. To deal with this imbalanced labeling issue, we propose an agnostic algorithm, PLAT (Positive LAbel frequency Threshold), which does not need any information about the quality of the labelers or the underlying class distribution. Simulations on eight real-world datasets with different underlying class distributions demonstrate that PLAT not only effectively deals with the imbalanced multiple noisy labeling problem that off-the-shelf agnostic methods cannot cope with, but also performs nearly the same as majority voting when labelers have no bias.

【Keywords】: Crowdsourcing; Multiple Noisy Labeling; Supervised Learning

227. Planning with Multi-Valued Landmarks.

Paper Link】 【Pages】:

【Authors】: Lei Zhang ; Chong-Jun Wang ; Jun Wu ; Meilin Liu ; Jun-Yuan Xie

【Abstract】: Landmark heuristics are perhaps the most accurate currently known admissible heuristics for optimal planning. A disjunctive action landmark can be seen as a form of at-least-one constraint on the actions it contains. In many domains, some critical propositions have to be established a number of times. Propositional landmarks are too weak to express this kind of constraint. In this paper, we propose to generalize landmarks to multi-valued landmarks to represent more general cardinality constraints. We present a class of local multi-valued landmarks that can be efficiently extracted from propositional landmarks. By encoding multi-valued landmarks into CNF formulas, we can also use SAT solvers to systematically extract multi-valued landmarks. Experimental evaluations show that multi-valued landmark based heuristics are closer to $h^*$ and compete favorably with the state of the art in admissible landmark heuristics on benchmark domains.

【Keywords】: Landmarks; Heuristics Planning; SAT

AAAI-SIGART Doctoral Consortium 16

228. Understanding Descriptions of Visual Scenes Using Graph Grammars.

Paper Link】 【Pages】:

【Authors】: Daniel Bauer

【Abstract】: Automatic generation of 3D scenes from descriptions has applications in communication, education, and entertainment, but requires deep understanding of the input text. I propose thesis work on language understanding using graph-based meaning representations that can be decomposed into primitive spatial relations. The techniques used for analyzing text and transforming it into a scene representation are based on context-free graph grammars. The thesis develops methods for semantic parsing with graphs, acquisition of graph grammars, and satisfaction of spatial and world-knowledge constraints during parsing.

【Keywords】: Language Understanding; Deep Semantic Parsing; Visual Scenes; Graph Grammars; Text-to-Scene Generation

229. Artificial Conversational Companions.

Paper Link】 【Pages】:

【Authors】: Sviatlana Danilava

【Abstract】: This document describes the problem statement, the methodological framework, the current state of the work and the expected contribution of my doctoral dissertation. The main focus of my dissertation is long-term interaction with an Artificial Conversational Companion in the context of conversation training for second language acquisition. I use a data-driven approach and conversation analysis methods to build computational models for long-term interaction as a meaningful activity. I work on the concept of interaction profiles for human-agent interaction. The resulting models will be integrated in an AIML-based chatbot that helps to practice conversation in a foreign language.

【Keywords】: Artificial Companions, Instant Messaging, Interaction Profiles, Language Learning

230. Distribution Kernel Methods for Multiple-Instance Learning.

Paper Link】 【Pages】:

【Authors】: Gary Doran

【Abstract】: I propose to investigate learning in the multiple-instance (MI) framework as a problem of learning from distributions. In many MI applications, bags of instances can be thought of as samples from bag-generating distributions. Recent kernel approaches for learning from distributions have the potential to be successfully applied to these domains and other MI learning problems. Understanding when distribution-based techniques work for MI learning will lead to new theoretical insights, improved algorithms, and more accurate solutions for real-world problems.

【Keywords】: multiple-instance learning

231. Backdoors to Tractability of Answer-Set Programming.

Paper Link】 【Pages】:

【Authors】: Johannes Klaus Fichte

【Abstract】: The practical results of answer-set programming indicate that classical complexity theory is insufficient as a theoretical framework to explain why modern answer-set programming solvers work fast on industrial applications. Complexity analysis by means of parameterized complexity theory seems promising, because we think the reason for the gap between theory and practice is the presence of a "hidden structure" in real-world instances. Applying parameterized complexity theory to answer-set programming would give a crucial understanding of how solver heuristics work. This understanding can be used to improve the decision heuristics of modern solvers and yields new efficient algorithms for decision problems in the nonmonotonic setting. My research aims to explain the gap between theoretical upper bounds and the effort needed to solve real-world instances. I will further develop, by means of parameterized complexity, exact algorithms that work efficiently for real-world instances. The approach is based on backdoors, which are small sets of atoms that represent "clever reasoning shortcuts" through the search space. The concept of backdoors is widely used in the areas of propositional satisfiability and constraint satisfaction. I will show how this concept can be adapted to the nonmonotonic setting and how it can be utilized to improve common algorithms.

【Keywords】: knowledge representation; commonsense reasoning; answer-set programming; nonmonotonic reasoning; parameterized complexity theory; backdoors

232. An Optimal Task Assignment Policy and Performance Diagnosis Strategy for Heterogeneous Hadoop Cluster.

Paper Link】 【Pages】:

【Authors】: Shekhar Gupta

【Abstract】: The goal of the proposed research is to improve the performance of Hadoop-based software running on a heterogeneous cluster. My approach lies in the intersection of machine learning, scheduling and diagnosis. We mainly focus on heterogeneous Hadoop clusters and try to improve the performance by implementing a more efficient scheduler for this class of cluster.

【Keywords】:

233. Creating Model-Based Adaptive Environments Using Game-Specific and Game-Independent Analytics.

Paper Link】 【Pages】:

【Authors】: Brent Edward Harrison

【Abstract】: My research involves creating and evaluating adaptive game environments using player models created with data-driven techniques and algorithms. I hypothesize that I will be able to change parts of a game to elicit certain behaviors from players, and that these changes will also result in increased engagement and/or intrinsic motivation. Initial results in my testbed game, Scrabblesque, indicate that data-driven models and techniques can be used to influence player behavior and that these changes in play behavior manifest themselves as increased engagement and intrinsic motivation.

【Keywords】:

Paper Link】 【Pages】:

【Authors】: Matthew Hatem

【Abstract】: Heuristic search is a fundamental technique for solving problems in artificial intelligence. However, many heuristic search algorithms, such as A*, are limited by the amount of main memory available. External memory search overcomes the memory limitation of A* by taking advantage of cheap secondary storage, such as disk. Previous work in this area assumes that edge costs fall within a narrow range of integer values and relies on an uninformed search order. The goal of this dissertation research is to develop novel techniques that enable heuristic search algorithms to solve problems with real-valued costs using a best-first search order while exploiting external memory and multiple processors. This work will be organized into four components. The first component will discuss external memory search and present a novel technique for incorporating real-valued edge costs. The second component will present a novel algorithm for solving problems with large branching factors, with application to the challenging problem of Multiple Sequence Alignment (MSA). The third component will cover bounded suboptimal external search for practical MSA applications. The final component of this research will be the development of a novel distributed search framework, allowing parallel and external memory heuristic search algorithms to run cooperatively on a commodity computing cluster. Together these four components will enable heuristic search to scale to large problems in practical settings while exploiting modern hardware.

【Keywords】: heuristic search; multiple sequence alignment; distributed; parallel; external memory;

235. Crowdsourcing for Deployable Intelligent Systems.

Paper Link】 【Pages】:

【Authors】: Walter Stephen Lasecki

【Abstract】: My work aims to create a scaffold for deployable intelligent systems using crowdsourcing. Current approaches in artificial intelligence (AI) typically focus on solving a narrow subset of problems in a given space - for example: automatic speech recognition as part of a conversational assistant, machine vision as part of a question answering service for blind people, or planning as part of a home assistive robot. This approach is necessary to scope the solution, but often results in a large number of systems that are rarely deployed in real-world settings and instead operate in toy domains, or in situations where other parts of the problem are assumed to be solved. The framework I have developed aims to use the crowd in two ways: (i) to make it possible to use human intelligence to power parts of a system that automated approaches cannot or do not yet handle, and (ii) to enable more effective deployable systems by having people provide reliable training data on demand. This summary begins with a brief review of prior work, then outlines a number of different systems that I have developed to demonstrate the capabilities of this framework, and concludes with future work to be completed as part of my thesis.

【Keywords】: crowdsourcing, human computation, intelligent systems

236. Multiagent Stochastic Planning With Bayesian Policy Recognition.

Paper Link】 【Pages】:

【Authors】: Alessandro Panella

【Abstract】: When operating in stochastic, partially observable, multiagent settings, it is crucial to accurately predict the actions of other agents. In my thesis work, I propose methodologies for learning the policy of external agents from their observed behavior, in the form of finite state controllers. To perform this task, I adopt Bayesian learning algorithms based on nonparametric prior distributions, that provide the flexibility required to infer models of unknown complexity. These methods are to be embedded in decision making frameworks for autonomous planning in partially observable multiagent systems.

【Keywords】: Multiagent Systems, Stochastic Planning, Bayesian Learning, Nonparametric Bayesian Models, Dirichlet Process

237. Efficient Algorithms for Strong Local Consistencies in Constraint Satisfaction Problems.

Paper Link】 【Pages】:

【Authors】: Anastasia Paparrizou

【Abstract】: The existing complete methods for solving Constraint Satisfaction Problems (CSPs) are usually based on a combination of exhaustive search and constraint propagation techniques for the reduction of the search space. Such propagation techniques are the local consistency algorithms. Arc Consistency (AC) and Generalized Arc Consistency (GAC) are the most widely studied local consistencies that are predominantly used in constraint solvers. However, many stronger local consistencies than (G)AC have been proposed, even recently, but have been rather overlooked due to their prohibitive cost. This research proposes efficient algorithms for strong consistencies for both binary and non-binary constraints that can be easily adopted by standard CP solvers. Experimental results have so far demonstrated that the proposed algorithms are quite competitive and often more efficient than state-of-the-art methods, being orders of magnitude faster on various problem classes.

【Keywords】: Constraint Propagation; Strong Local Consistencies; Table Constraints

238. Optimization of Heterogeneous Computing Resources for Robotic Mapping.

【Paper Link】 【Pages】:

【Authors】: Adrian Ratter

【Abstract】: The efficient use of computing resources on a heterogeneous robotics platform, both in terms of run-time performance and power usage, presents an interesting research problem and is the focus of my research. It is envisaged that this will be achieved both by finding parallel approaches to algorithms commonly used in robotics and by investigating the use of a scheduler to efficiently allocate resources across a heterogeneous hardware platform. In particular, while there has been much research on using specialized hardware for image and video processing algorithms, work on areas specific to robotics, such as position tracking, mapping, and sensor fusion, is not as common.

【Keywords】: Robotics; mapping; GPU; parallel; position tracking; SLAM

239. The Wisdom of Crowds in Bioinformatics: What Can We Learn (and Gain) from Ensemble Predictions?

【Paper Link】 【Pages】:

【Authors】: Mariana Recamonde Mendoza ; Ana Lúcia C. Bazzan

【Abstract】: The combination of the expertise of distinct algorithms to improve prediction accuracy, inspired by the theory of the wisdom of crowds, has been increasingly discussed in the literature. However, its application to bioinformatics-related tasks is still in its infancy. This thesis aims to investigate the potential and limitations of ensemble-based solutions for two bioinformatics prediction tasks, namely inference of gene regulatory networks and prediction of microRNA targets, as well as to propose new integration methods. We approach this by considering heterogeneity in the contexts of data and methods, and by adopting machine learning methods and concepts from multiagent systems, such as social choice functions, for integration purposes.

【Keywords】: classification; ensemble predictions; wisdom of crowds; social choice functions; gene expression; gene regulation

240. Concurrent Inference Graphs.

【Paper Link】 【Pages】:

【Authors】: Daniel R. Schlegel

【Abstract】: Since their popularity began to rise in the mid-2000s, there has been significant growth in the number of multi-core and multi-processor computers available. Knowledge representation systems using logical inference have been slow to embrace this new technology. We present the concept of inference graphs, a natural deduction inference system which scales well on multi-core and multi-processor machines. Inference graphs enhance propositional graphs by treating propositional nodes as tasks which can be scheduled to operate upon messages sent between nodes via the arcs that already exist as part of the propositional graph representation. The use of scheduling heuristics within a prioritized message-passing architecture allows inference graphs to perform very well in forward, backward, bi-directional, and focused reasoning. Tests demonstrate the usefulness of our scheduling heuristics, and show significant speedup in both best-case and worst-case inference scenarios as the number of processors increases.

【Keywords】: Automated Inference; Inference Graphs; SNePS

241. Multi-Strategy Learning of Robotic Behaviours via Qualitative Reasoning.

【Paper Link】 【Pages】:

【Authors】: Timothy Wiley

【Abstract】: When given a task, an autonomous agent must plan a series of actions to perform in order to complete the goal. In robotics, planners face additional challenges, as the domain is typically large (even infinite), continuous, noisy, and non-deterministic. Stochastic planning has typically been used to solve robotic control tasks, and such planners have been very successful in their various domains. The downside of such approaches is that the models and planners are highly specialised to a single control task: changing the control task requires developing an entirely new planner. The research in my thesis addresses the problem of specialisation in continuous, noisy, and non-deterministic robotic domains by developing a more generic planner. It builds on previous research in the area, specifically the technique of Multi-Strategy Learning. Qualitative Modelling and Qualitative Reasoning are used to provide the generality, from which specific, quantitative controllers can be quickly learnt. The resulting system is applied to a real-world robotic platform for rough terrain navigation.

【Keywords】: Machine Learning;Robotics;Qualitative Modelling and Reasoning

242. Steps Towards a Science of Heuristic Search.

【Paper Link】 【Pages】:

【Authors】: Christopher Makoto Wilt

【Abstract】: There are many algorithms designed to solve the shortest path problem. Each of the published algorithms has a demonstrated use: a situation in which it is the clear choice. Unfortunately, when faced with a novel problem, there is no reliable, robust way to figure out which algorithm should be used to solve it. When classifying things, the first step is to identify relevant features for classification. In the context of heuristic search, it is not clear what pieces of information should be used to predict search algorithm performance, and the question of algorithm selection for a novel domain remains open. We first analyze which domain attributes common algorithms leverage, and discuss how to identify domains containing these attributes. In addition to discussing how to classify domains, we also discuss why the classifications matter for various algorithms. Ultimately, this will allow us to offer more accurate runtime predictions for the algorithms we analyze, allowing us to determine which algorithm will likely offer the best performance.

【Keywords】: heuristic search; search; algorithm performance;

243. Tools for Preference Reasoning.

【Paper Link】 【Pages】:

【Authors】: Ying Zhu

【Abstract】: The problem of computing solutions similar and dissimilar to a given one has received much attention in constraint satisfaction and answer set programming (ASP). In many practical applications involving product configuration or planning, there are often many valid solutions. To help the user see a small but representative sample, one needs algorithms that compute sets of dissimilar solutions. Once the user "zooms in" on one or two that she likes the most, it still makes sense to present several alternatives similar to the selected ones, so that the user can find one that truly corresponds to her needs.

【Keywords】: