<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Péter Bodnár</style></author><author><style face="normal" font="default" size="100%">László Gábor Nyúl</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Mohamed Kamel</style></author><author><style face="normal" font="default" size="100%">Aurélio Campilho</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">QR Code Localization Using Boosted Cascade of Weak Classifiers</style></title><secondary-title><style face="normal" font="default" size="100%">Image Analysis and Recognition (ICIAR)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title><alt-title><style face="normal" font="default" size="100%">LNCS</style></alt-title></titles><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Oct 2014</style></date></pub-dates></dates><number><style face="normal" font="default" size="100%">8814</style></number><publisher><style face="normal" font="default" size="100%">Springer-Verlag</style></publisher><pub-location><style face="normal" font="default" size="100%">Vilamoura, Portugal</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Usage of computer-readable visual codes has become common in our everyday life, both in industrial environments and in private use. The reading process of visual codes consists of two steps: localization and data decoding. Unsupervised localization is desirable in industrial setups and for visually impaired people.
This paper examines the localization efficiency of cascade classifiers using Haar-like features, Local Binary Patterns and Histograms of Oriented Gradients, trained both for the finder patterns of QR codes and for the whole code region, and proposes improvements in post-processing.&lt;/p&gt;</style></abstract><work-type><style face="normal" font="default" size="100%">Conference paper</style></work-type><notes><style face="normal" font="default" size="100%">Art. No.: 225; Accepted for publication</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Péter Bodnár</style></author><author><style face="normal" font="default" size="100%">László Gábor Nyúl</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Mohamed Kamel</style></author><author><style face="normal" font="default" size="100%">Aurélio Campilho</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">A Novel Method for Barcode Localization in Image Domain</style></title><secondary-title><style face="normal" font="default" size="100%">Image Analysis and Recognition (ICIAR)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title><alt-title><style face="normal" font="default" size="100%">LNCS</style></alt-title></titles><dates><year><style  face="normal" font="default" size="100%">2013</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June 2013</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">Springer-Verlag</style></publisher><pub-location><style face="normal" font="default" size="100%">Berlin</style></pub-location><pages><style face="normal" font="default" size="100%">189 - 196</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Barcode localization is an essential step of the barcode reading process. For industrial environments, with high-resolution cameras and eventful scenarios, fast and reliable localization is crucial. Images acquired in those setups have limited parameters; however, these vary with each application. In earlier works we have already presented various barcode features to track for the localization process. In this paper, we present a novel approach for fast barcode localization using a limited set of pixels in the image domain.&lt;/p&gt;</style></abstract><work-type><style face="normal" font="default" size="100%">Conference paper</style></work-type><notes><style face="normal" font="default" size="100%">doi: 10.1007/978-3-642-39094-4_22</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Péter Balázs</style></author><author><style face="normal" font="default" size="100%">Joost K Batenburg</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Aurélio Campilho</style></author><author><style face="normal" font="default" size="100%">Mohamed Kamel</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">A central reconstruction based strategy for selecting projection angles in binary tomography</style></title><secondary-title><style face="normal" font="default" size="100%">Image Analysis and Recognition</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title><short-title><style face="normal" font="default" size="100%">LNCS</style></short-title></titles><dates><year><style  face="normal"
font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June 2012</style></date></pub-dates></dates><number><style face="normal" font="default" size="100%">7324</style></number><publisher><style face="normal" font="default" size="100%">Springer</style></publisher><pub-location><style face="normal" font="default" size="100%">Berlin; Heidelberg; New York; London; Paris; Tokyo</style></pub-location><pages><style face="normal" font="default" size="100%">382 - 391</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;In this paper we propose a novel strategy for selecting projection angles in binary tomography which yields significantly more accurate reconstructions than other approaches. In contrast with previous works, which are of an experimental nature, the method we present is based on theoretical observations. We report on experiments with different phantom images to show the effectiveness and robustness of our procedure. The practically important case of noisy projections is also studied.
© 2012 Springer-Verlag.&lt;/p&gt;</style></abstract><work-type><style face="normal" font="default" size="100%">Conference paper</style></work-type><notes><style face="normal" font="default" size="100%">ScopusID: 84864128031; doi: 10.1007/978-3-642-31295-3_45</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Zoltan Kato</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Aurélio Campilho</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">A Unifying Framework for Correspondence-less Linear Shape Alignment</style></title><secondary-title><style face="normal" font="default" size="100%">International Conference on Image Analysis and Recognition (ICIAR)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title><short-title><style face="normal" font="default" size="100%">LNCS</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June 2012</style></date></pub-dates></dates><number><style face="normal" font="default" size="100%">7324</style></number><publisher><style face="normal" font="default" size="100%">Springer Verlag</style></publisher><pub-location><style face="normal" font="default" size="100%">Aveiro, Portugal</style></pub-location><pages><style face="normal" font="default" size="100%">277 - 284</style></pages><isbn><style face="normal" font="default" size="100%">978-3-642-31294-6</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;We consider the estimation of linear transformations aligning a known binary shape and its distorted observation. The classical way to solve this registration problem is to find correspondences between the two images and then compute the transformation parameters from these landmarks. Here we propose a unified framework where the exact transformation is obtained as the solution of either a polynomial or a linear system of equations, without establishing correspondences. The advantages of the proposed solutions are that they are fast, easy to implement, have linear time complexity, work without landmark correspondences and are independent of the magnitude of the transformation.&lt;/p&gt;</style></abstract><work-type><style face="normal" font="default" size="100%">Conference paper</style></work-type><notes><style face="normal" font="default" size="100%">UT: 000323558000033</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Gábor Németh</style></author><author><style face="normal" font="default" size="100%">Péter Kardos</style></author><author><style face="normal" font="default" size="100%">Kálmán Palágyi</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Aurélio Campilho</style></author><author><style face="normal" font="default" size="100%">Mohamed Kamel</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Topology Preserving 3D Thinning Algorithms using Four and Eight Subfields</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings of the International Conference on Image Analysis and Recognition (ICIAR)</style></secondary-title><tertiary-title><style face="normal"
font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title><short-title><style face="normal" font="default" size="100%">LNCS</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2010</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June 2010</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">Springer Verlag</style></publisher><pub-location><style face="normal" font="default" size="100%">Póvoa de Varzim, Portugal</style></pub-location><volume><style face="normal" font="default" size="100%">6111</style></volume><pages><style face="normal" font="default" size="100%">316 - 325</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Thinning is a frequently applied technique for extracting skeleton-like shape features (i.e., centerline, medial surface, and topological kernel) from volumetric binary images. Subfield-based thinning algorithms partition the image into subsets which are alternately activated, and some points in the active subfield are deleted. This paper presents a set of new 3D parallel subfield-based thinning algorithms that use four and eight subfields. The three major contributions of this paper are: 1) The deletion rules of the presented algorithms are derived from sufficient conditions for topology preservation. 2) A novel thinning scheme is proposed that uses iteration-level endpoint checking. 3) Various characterizations of endpoints yield different algorithms.
© 2010 Springer-Verlag.&lt;/p&gt;</style></abstract><work-type><style face="normal" font="default" size="100%">Conference paper</style></work-type><notes><style face="normal" font="default" size="100%">ScopusID: 77955432947; doi: 10.1007/978-3-642-13772-3_32</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Csaba Domokos</style></author><author><style face="normal" font="default" size="100%">Zoltan Kato</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Aurélio Campilho</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Binary Image Registration Using Covariant Gaussian Densities</style></title><secondary-title><style face="normal" font="default" size="100%">Image Analysis and Recognition</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title><short-title><style face="normal" font="default" size="100%">LNCS</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June 2008</style></date></pub-dates></dates><number><style face="normal" font="default" size="100%">5112</style></number><publisher><style face="normal" font="default" size="100%">Springer</style></publisher><pub-location><style face="normal" font="default" size="100%">Póvoa de Varzim, Portugal</style></pub-location><pages><style face="normal" font="default" size="100%">455 - 464</style></pages><isbn><style face="normal" font="default" size="100%">978-3-540-69811-1</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;We consider the
estimation of 2D affine transformations aligning a known binary shape and its distorted observation. The classical way to solve this registration problem is to find correspondences between the two images and then compute the transformation parameters from these landmarks. In this paper, we propose a novel approach where the exact transformation is obtained as a least-squares solution of a linear system. The basic idea is to fit a Gaussian density to the shapes which preserves the effect of the unknown transformation. It can also be regarded as a consistent coloring of the shapes yielding two rich functions defined over the two shapes to be matched. The advantage of the proposed solution is that it is fast, easy to implement, works without established correspondences and provides a unique and exact solution regardless of the magnitude of transformation. © 2008 Springer-Verlag Berlin Heidelberg.&lt;/p&gt;</style></abstract><work-type><style face="normal" font="default" size="100%">Conference paper</style></work-type><notes><style face="normal" font="default" size="100%">UT: 000257302500045; ScopusID: 47749098390; doi: 10.1007/978-3-540-69812-8_45</style></notes></record></records></xml>