@inbook{kersting2008,
  title        = {Relational Sequence Learning},
  author       = {Kristian Kersting and Luc {De Raedt} and Bernd Gutmann and Andreas Karwath and Niels Landwehr},
  url          = {http://dx.doi.org/10.1007/978-3-540-78652-8_2},
  doi          = {10.1007/978-3-540-78652-8_2},
  isbn         = {978-3-540-78651-1},
  year         = {2008},
  date         = {2008-01-01},
  booktitle    = {Probabilistic Inductive Logic Programming – Theory and Applications},
  volume       = {4911},
  pages        = {28--55},
  publisher    = {Springer-Verlag},
  address      = {Berlin Heidelberg, Germany},
  organization = {Springer-Verlag Berlin Heidelberg},
  crossref     = {DBLP:conf/ilp/2008p},
  abstract     = {Sequential behavior and sequence learning are essential to intelligence. Often the elements of sequences exhibit an internal structure that can elegantly be represented using relational atoms. Applying traditional sequential learning techniques to such relational sequences requires one either to ignore the internal structure or to live with a combinatorial explosion of the model complexity. This chapter briefly reviews relational sequence learning and describes several techniques tailored towards realizing this, such as local pattern mining techniques, (hidden) Markov models, conditional random fields, dynamic programming and reinforcement learning.},
  keywords     = {inductive logic programming, machine learning, relational learning, scientific knowledge},
  pubstate     = {published},
  tppubtype    = {inbook}
}

@inproceedings{karwath2008,
  title     = {Boosting Relational Sequence Alignments},
  author    = {Andreas Karwath and Kristian Kersting and Niels Landwehr},
  url       = {http://dx.doi.org/10.1109/ICDM.2008.127},
  doi       = {10.1109/ICDM.2008.127},
  isbn      = {978-0-7695-3502-9},
  year      = {2008},
  date      = {2008-12-15},
  booktitle = {The 8th IEEE International Conference on Data Mining, ICDM 2008},
  pages     = {857--862},
  publisher = {IEEE},
  crossref  = {DBLP:conf/icdm/2008},
  abstract  = {The task of aligning sequences arises in many applications. Classical dynamic programming approaches require the explicit state enumeration in the reward model. This is often impractical: the number of states grows very quickly with the number of domain objects and relations among these objects. Relational sequence alignment aims at exploiting symbolic structure to avoid the full enumeration. This comes at the expense of a more complex reward model selection problem: virtually infinitely many abstraction levels have to be explored. In this paper, we apply gradient-based boosting to leverage this problem. Specifically, we show how to reduce the learning problem to a series of relational regression problems. The main benefit of this is that interactions between state variables are introduced only as needed, so that the potentially infinite search space is not explicitly considered. As our experimental results show, this boosting approach can significantly improve upon established results in challenging applications.},
  keywords  = {inductive logic programming, machine learning, relational learning, scientific knowledge},
  pubstate  = {published},
  tppubtype = {inproceedings}
}

@inproceedings{karwath2007,
  title        = {Relational Sequence Alignments and Logos},
  author       = {Andreas Karwath and Kristian Kersting},
  url          = {http://dx.doi.org/10.1007/978-3-540-73847-3_29},
  doi          = {10.1007/978-3-540-73847-3_29},
  isbn         = {978-3-540-73846-6},
  year         = {2007},
  date         = {2007-01-01},
  booktitle    = {Inductive Logic Programming, 16th International Conference, ILP 2006},
  volume       = {4455},
  pages        = {290--304},
  publisher    = {Springer-Verlag},
  address      = {Berlin Heidelberg, Germany},
  organization = {Springer-Verlag Berlin Heidelberg},
  series       = {Lecture Notes in Computer Science},
  crossref     = {DBLP:conf/ilp/2006},
  abstract     = {The need to measure sequence similarity arises in many application domains and often coincides with sequence alignment: the more similar two sequences are, the better they can be aligned. Aligning sequences not only shows how similar sequences are, it also shows where there are differences and correspondences between the sequences. Traditionally, alignment has been considered for sequences of flat symbols only. Many real-world sequences such as natural language sentences and protein secondary structures, however, exhibit rich internal structures. This is akin to the problem of dealing with structured examples studied in the field of inductive logic programming (ILP). In this paper, we introduce Real, a powerful yet simple approach to align sequences of structured symbols using well-established ILP distance measures within traditional alignment methods. Although straightforward, experiments on protein data and Medline abstracts show that this approach works well in practice, that the resulting alignments can indeed provide more information than flat ones, and that they are meaningful to experts when represented graphically.},
  keywords     = {bioinformatics, inductive logic programming, relational learning, scientific knowledge},
  pubstate     = {published},
  tppubtype    = {inproceedings}
}