<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Péter Bodnár</style></author><author><style face="normal" font="default" size="100%">László Gábor Nyúl</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Mohamed Kamel</style></author><author><style face="normal" font="default" size="100%">Aurélio Campilho</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">QR Code Localization Using Boosted Cascade of Weak Classifiers</style></title><secondary-title><style face="normal" font="default" size="100%">Image Analysis and Recognition (ICIAR)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title><alt-title><style face="normal" font="default" size="100%">LNCS</style></alt-title></titles><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Oct 2014</style></date></pub-dates></dates><number><style face="normal" font="default" size="100%">8814</style></number><publisher><style face="normal" font="default" size="100%">Springer-Verlag</style></publisher><pub-location><style face="normal" font="default" size="100%">Vilamoura, Portugal</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Usage of computer-readable visual codes has become common in our everyday life, both in industrial environments and in private use. The reading process of visual codes consists of two steps: localization and data decoding. Unsupervised localization is desirable in industrial setups and for visually impaired people. 
This paper examines the localization efficiency of cascade classifiers using Haar-like features, Local Binary Patterns, and Histograms of Oriented Gradients, trained both for the finder patterns of QR codes and for the whole code region, and proposes improvements in post-processing.&lt;/p&gt;</style></abstract><work-type><style face="normal" font="default" size="100%">Conference paper</style></work-type><notes><style face="normal" font="default" size="100%">Art. No.: 225; Accepted for publication</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Péter Bodnár</style></author><author><style face="normal" font="default" size="100%">László Gábor Nyúl</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Mohamed Kamel</style></author><author><style face="normal" font="default" size="100%">Aurélio Campilho</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">A Novel Method for Barcode Localization in Image Domain</style></title><secondary-title><style face="normal" font="default" size="100%">Image Analysis and Recognition (ICIAR)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title><alt-title><style face="normal" font="default" size="100%">LNCS</style></alt-title></titles><dates><year><style  face="normal" font="default" size="100%">2013</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June 2013</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">Springer-Verlag</style></publisher><pub-location><style face="normal" font="default" size="100%">Berlin</style></pub-location><pages><style face="normal" font="default" size="100%">189 - 196</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Barcode localization is an essential step of the barcode reading process. In industrial environments, with high-resolution cameras and eventful scenarios, fast and reliable localization is crucial. Images acquired in such setups have limited parameters; however, these vary with each application. In earlier works we have presented various barcode features to track during the localization process. In this paper, we present a novel approach for fast barcode localization using a limited set of pixels in the image domain.&lt;/p&gt;</style></abstract><work-type><style face="normal" font="default" size="100%">Conference paper</style></work-type><notes><style face="normal" font="default" size="100%">doi: 10.1007/978-3-642-39094-4_22</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Péter Balázs</style></author><author><style face="normal" font="default" size="100%">Joost K Batenburg</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Aurélio Campilho</style></author><author><style face="normal" font="default" size="100%">Mohamed Kamel</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">A central reconstruction based strategy for selecting projection angles in binary tomography</style></title><secondary-title><style face="normal" font="default" size="100%">Image Analysis and Recognition</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title><short-title><style face="normal" font="default" size="100%">LNCS</style></short-title></titles><dates><year><style  face="normal" 
font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June 2012</style></date></pub-dates></dates><number><style face="normal" font="default" size="100%">7324</style></number><publisher><style face="normal" font="default" size="100%">Springer</style></publisher><pub-location><style face="normal" font="default" size="100%">Berlin; Heidelberg; New York; London; Paris; Tokyo</style></pub-location><pages><style face="normal" font="default" size="100%">382 - 391</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;In this paper we propose a novel strategy for selecting projection angles in binary tomography that yields significantly more accurate reconstructions than other selection strategies. In contrast to previous works, which are of an experimental nature, the method we present is based on theoretical observations. We report on experiments with different phantom images to show the effectiveness and robustness of our procedure. The practically important case of noisy projections is also studied. 
© 2012 Springer-Verlag.&lt;/p&gt;</style></abstract><work-type><style face="normal" font="default" size="100%">Conference paper</style></work-type><notes><style face="normal" font="default" size="100%">ScopusID: 84864128031; doi: 10.1007/978-3-642-31295-3_45</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Gábor Németh</style></author><author><style face="normal" font="default" size="100%">Péter Kardos</style></author><author><style face="normal" font="default" size="100%">Kálmán Palágyi</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Aurélio Campilho</style></author><author><style face="normal" font="default" size="100%">Mohamed Kamel</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Topology Preserving 3D Thinning Algorithms using Four and Eight Subfields</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings of the International Conference on Image Analysis and Recognition (ICIAR)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title><short-title><style face="normal" font="default" size="100%">LNCS</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2010</style></year><pub-dates><date><style  face="normal" font="default" size="100%">June 2010</style></date></pub-dates></dates><publisher><style face="normal" font="default" size="100%">Springer-Verlag</style></publisher><pub-location><style face="normal" font="default" size="100%">Póvoa de Varzim, Portugal</style></pub-location><volume><style face="normal" font="default" size="100%">6111</style></volume><pages><style face="normal" font="default" size="100%">316 - 325</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Thinning is a frequently applied technique for extracting skeleton-like shape features (i.e., centerline, medial surface, and topological kernel) from volumetric binary images. Subfield-based thinning algorithms partition the image into subsets that are alternately activated, and some points in the active subfield are deleted. This paper presents a set of new 3D parallel subfield-based thinning algorithms that use four and eight subfields. The three major contributions of this paper are: 1) The deletion rules of the presented algorithms are derived from sufficient conditions for topology preservation. 2) A novel thinning scheme is proposed that uses iteration-level endpoint checking. 3) Various characterizations of endpoints yield different algorithms. © 2010 Springer-Verlag.&lt;/p&gt;</style></abstract><work-type><style face="normal" font="default" size="100%">Conference paper</style></work-type><notes><style face="normal" font="default" size="100%">ScopusID: 77955432947; doi: 10.1007/978-3-642-13772-3_32</style></notes></record></records></xml>