IITM Tables of Contents: 10 13

Proceedings of the 2010 International Conference on Intelligent Interactive Technologies and Multimedia

Fullname:Proceedings of the First International Conference on Intelligent Interactive Technologies and Multimedia
Editors:M. D. Tiwari; Ramesh C. Tripathi; Anupam Agrawal
Location:Allahabad, India
Dates:2010-Dec-28 to 2010-Dec-30
Standard No:ISBN: 1-4503-0408-7, 978-1-4503-0408-5; ACM DL: Table of Contents hcibib: IITM10
Links:Conference Home Page
From human-computer interactions to human-companion relationships BIBAFull-Text 1-9
  David Benyon; Oli Mival
In this paper we introduce and explore the challenges of what we believe is the next generation of interface technology: companions. Companions are intelligent, persistent, personalized, multimodal interfaces. Companions change interactions into relationships. The paper describes the characteristics of companions and the changes that are needed for interaction designers to design for companion relationships. It provides a brief history of the development of commercial companions and presents three empirical studies of companions that illustrate many of the design issues. These are elaborated in some detail and the implications for interaction design are considered.
Information visualization and the arts-science-social science interface BIBAFull-Text 9-17
  J. Bown; K. Fee; A. Sampson; M. Shovman; R. Falconer; A. Goltsov; J. Issacs; P. Robertson; K. Scott-Brown; A. Szymkowiak
In a world of ever-increasing and newly discovered complexities, and rapidly expanding data sets describing man-made and natural phenomena, information visualization offers a means of structuring and enabling interpretation of these data in the context of that complexity. Advances in graphics hardware, art asset pipelines and parallelized computational platforms offer unprecedented potential. However, harnessing this potential to good effect is challenging and requires the integration of skills from the arts and social sciences to support scientific endeavor in the physical and life sciences. Here, we consider those skills and describe four case studies that highlight interoperation among disciplines at this arts-science-social science interface.
The smart, the intelligent and the wise: roles and values of interactive technologies BIBAFull-Text 17-26
  Michael Herczeg
Since the early days of computer-based interactive technologies it has been a challenge to make them work or behave according to their users' needs, capabilities and expectations. As an interdisciplinary challenge, researchers from computer science, psychology, engineering, work sciences, human factors, design, and architecture have discussed and implemented ideas and theories over many years for interactive systems and media that do, from their users' point of view, what they are supposed to do. Some of these interactive technologies have been called "smart" or "intelligent". Systems collecting and providing information from social groups have even been attributed a kind of "wisdom". Are we able to define and systematically implement interactive technologies that are smart, intelligent and even wise, as opposed to systems that are plain, dull or ignorant? If so, what are the proper domains and system paradigms in which to apply these technologies? How can their users be enabled to understand, apply and master these technologies by fostering the development of appropriate mental models and skills? How should the interaction methods be designed to let users work with these systems in an effective, efficient, engaging and satisfying way? This paper discusses these questions using examples of interactive systems designed for work, education, entertainment and daily life.
Segmentation of lines and words in handwritten Gurmukhi script documents BIBAFull-Text 25-28
  Munish Kumar; R. K. Sharma; M. K. Jindal
Optical Character Recognition (OCR) is an essential part of a Document Analysis System. Among the several phases of an OCR system, segmentation is an important one. After the preprocessing phase, it is necessary to segment the text into lines, words and characters before recognition. Segmentation is one of the most important and challenging tasks in a handwriting recognition system. Gurmukhi script can be segmented into paragraphs, lines, words and characters. This paper describes a technique for segmenting handwritten Gurmukhi script documents into lines using a strip-based projection profile technique, and for segmenting lines into words using the white space and pitch method.
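The projection-profile idea behind the line segmentation step can be illustrated with a minimal sketch. This is a simplified whole-page horizontal profile, not the authors' strip-based variant; the function name `segment_lines` and the 0/1 image encoding are illustrative assumptions.

```python
# Minimal sketch of line segmentation via a horizontal projection profile.
# A binary image is a list of rows; ink pixels are 1, background pixels 0.
# Rows whose ink count falls to zero separate consecutive text lines.

def segment_lines(binary_image):
    """Return (start_row, end_row) spans of text lines."""
    profile = [sum(row) for row in binary_image]  # ink pixels per row
    lines, start = [], None
    for i, count in enumerate(profile):
        if count > 0 and start is None:
            start = i                          # a text line begins here
        elif count == 0 and start is not None:
            lines.append((start, i - 1))       # line ended on previous row
            start = None
    if start is not None:                      # line touching bottom edge
        lines.append((start, len(profile) - 1))
    return lines
```

Strip-based variants apply the same profile to vertical strips of the page so that skewed lines are still separated; the word stage replaces rows with columns and zero-runs with white-space gaps.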
A GA-based approach to improve web page aesthetics BIBAFull-Text 29-32
  Nahar Singh; Samit Bhattacharya
The field of human-computer interaction traditionally deals with the problem of improving usability of interactive systems. The concept of usability is defined in terms of user's task performance. While task is undoubtedly one important factor in "usable" interface design, recent research shows that form (aesthetics) performs an equally important role in shaping the overall user experience of an interactive system, which in turn leads to increased usability. Keeping the form factor in focus, in this paper we present an approach to improve aesthetics of web interfaces using a genetic algorithm (GA). An existing computational model of aesthetics has been used to develop the fitness function of the GA. In order to ascertain the efficacy of the approach, an empirical study with 30 web pages and 10 subjects was carried out. Results of the study show that in certain situations, the proposed approach is able to improve perceived interface aesthetics.
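The GA loop described in the abstract can be sketched as follows. The `fitness` function here is a toy symmetry score standing in for the paper's computational aesthetics model, and all names and parameters are illustrative assumptions, not the authors' implementation.

```python
import random

# Toy GA sketch: each "layout" is a vector of element positions in [0, 1].
# The fitness below is a stand-in aesthetics score (how symmetric the
# layout is about the centre); higher, i.e. closer to 0, is better.

def fitness(layout):
    return -sum(abs((x + y) - 1.0) for x, y in zip(layout, reversed(layout)))

def evolve(pop, generations=50, mutation=0.1):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]            # truncation selection
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:        # point mutation
                child[random.randrange(len(child))] = random.random()
            children.append(child)
        pop = parents + children                  # elitist: parents survive
    return max(pop, key=fitness)
```

Because the best individuals survive each generation, the best fitness never decreases; in the paper's setting the fitness would instead be the existing computational model of aesthetics applied to a web-page layout.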
A dimensionality reduction based on feature quality measure BIBAFull-Text 33-36
  A Veerabhadrappa; Lalitha Rangarajan
This paper presents a novel feature selection method called the Feature Quality (FQ) measure, based on a quality measure of individual feature values, which has very low computational complexity. To evaluate the performance of the proposed method, several experiments are conducted on standard datasets, and the results obtained show the superiority of the proposed method over other dimensionality reduction techniques.
MIMO channel modeling using temporal artificial neural network (ANN) architectures BIBAFull-Text 37-44
  Kandarpa Kumar Sarma; Abhijit Mitra
The stochastic nature of wireless channels has continued to make channel estimation a challenging issue. This statistical nature can be tackled using an Artificial Neural Network (ANN), such as a multi-layer perceptron (MLP), which can provide channel estimation and symbol recovery to minimize the deficiencies of multi-user transmission under multipath fading. MLP-based MIMO modeling, however, does not consider the time-varying nature of the wireless channel. This work describes two MLP architectures with temporal characteristics which are found to be better suited to time-varying channel conditions, especially slow fading, in indoor networks combining Multiple-Input Multiple-Output (MIMO) systems with Orthogonal Frequency Division Multiplexing (OFDM), together called MIMO-OFDM.
Application of fractal parameters for unsupervised classification of SAR images: a simulation based study BIBAFull-Text 45-50
  Triloki Pant; Dharmendra Singh; Tanuja Srivastava
Classification of any satellite image with unsupervised or supervised techniques is still a very challenging task. Many researchers are working on this, but uncertainties still exist in labeling the different classes. Land cover classification with satellite images depends heavily on the type of satellite image. Nowadays, Synthetic Aperture Radar (SAR) images are giving very promising results in comparison to optical images. Therefore, in this paper a contextual classification has been performed in an unsupervised way for SAR images. For this purpose, the fractal parameters fractal dimension and lacunarity are used. To apply the methodology, a set of simulated SAR images is first generated and tested for classification, and the proposed methodology is then applied to satellite SAR images, i.e., ERS-2 SAR. The classification accuracy for simulated images reaches 85%, whereas for satellite SAR it reaches 76%.
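The fractal dimension feature used above is commonly estimated by box counting: cover the image at several box sizes, count occupied boxes, and fit the log-log slope. The sketch below shows that idea on a set of occupied cells; the function name and the cell-set encoding are illustrative assumptions, not the authors' code.

```python
import math

# Box-counting sketch of the fractal dimension used as a texture feature.
# The image is a set of "occupied" (row, col) cells; at each box size s we
# count how many s-by-s boxes contain at least one occupied cell, then fit
# the slope of log(count) against log(1/s) by least squares.

def fractal_dimension(cells, scales=(1, 2, 4, 8)):
    xs, ys = [], []
    for s in scales:
        boxes = {(r // s, c // s) for r, c in cells}   # occupied boxes
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Least-squares slope of ys against xs = estimated dimension.
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A filled region yields a dimension near 2 and a thin line near 1; lacunarity, the second parameter the paper uses, would additionally measure how box occupancy counts vary at each scale.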
An improved algorithm for face recognition using wavelet and facial parameters BIBAFull-Text 51-58
  Kanchan Singh; Ashok K. Sinha
In this paper, the problem of face recognition in still color images is addressed, and an improved algorithm for face recognition is proposed. The algorithm comprises designing a feature vector which contains discrete wavelet coefficients of the face and a coefficient representing facial parameters. Global features of the face are captured by the wavelet coefficients, and the local features are captured by the facial parameter. The coefficients of the feature vector are used as inputs to a back-propagation neural network architecture. The network is trained on different images in the database. The proposed algorithm has been tested on various real images, and its performance is found to be quite satisfactory when compared with conventional methods of face recognition such as the Eigenface method.
Users search trends on WWW and their analysis BIBAFull-Text 59-66
  Divakar Yadav; A. K. Sharma; J. P. Gupta
The World Wide Web (WWW) is a huge repository of interlinked hypertext documents known as web pages, spread all over the world on thousands of web servers. Users looking for information on the WWW use a search engine interface, where they provide their search queries. In response, search engines use their databases to find relevant documents and produce results ranked by relevance. It often happens that not all of the results produced by a search engine are relevant to the user. When users do not get the desired information, they modify the search query again and again until they find it or give up. The situation becomes more cumbersome when the results produced by the search tools are outdated, especially when a bad URL is reported. Owing to these problems, a survey was conducted to carry out quantitative studies that can help in better understanding users' behavior and requirements while using search engines, and consequently help to improve how they work.
Extraction of 3D coordinates from 2D coordinates for precise wireframe building: an approach for single view image based modeling BIBAFull-Text 67-72
  S. Mohan; S. Murali
Construction of 3D models from 2D images is still challenging, as achieving photo realism with accurate representation of objects is complex. Users navigating a 3D scene may not perceive the 3D models as realistic when the objects are not represented as in the real world. The main reason for such misrepresentation is improper computation of the 3D coordinates and construction of the wireframes, which are the building blocks of 3D scenes. In this paper, a method is proposed to compute the 3D coordinates more precisely, resulting in more photo-realistic models. The approach is carried out in two stages. First, each surface of a planar object is rectified for perspective distortion using plane homography. Then, based on the width and height of each surface, a wireframe is drawn. The orientations of the surfaces are assumed to be orthogonal to each other. The 3D coordinates of each surface are computed based on its orientation and dimensions (width and height). Keeping the very first coordinate of the scene as a reference, the coordinates of all other surfaces are computed precisely. This approach is compared with our previous method based on depth cueing, which constructs the wireframes only approximately. Experiments on objects with planar surfaces, such as buildings, floors and walls, have shown that this method is robust for image-based modeling.
An automated hybrid technique for detecting the stage of non-proliferative diabetic retinopathy BIBAFull-Text 73-80
  Neera Singh; Atul Kumar; Ramesh Chandra Tripathi
Diabetes is fast becoming a scourge of modern society, in both developing and developed countries. Diabetes-related complications lead to a great deal of morbidity, and Diabetic Retinopathy is fast becoming a leading cause of preventable blindness. Early detection and laser treatment will go a long way toward checking this disease.
   Non-proliferative Diabetic Retinopathy (NPDR) is the set of early changes that take place in the retina. It is divided into three categories: mild, moderate and severe. The initial changes occur when microaneurysms (MA) start appearing, followed by hemorrhages. Finally, the appearance of cotton wool spots and hard exudates characterizes severe NPDR. The stage of neovascularization (NV), when new blood vessels begin to appear (to compensate for the reduced blood supply and nutrition to the retina), finally qualifies as proliferative Diabetic Retinopathy.
   The idea is to extract the features of NPDR and, depending on their intensity and frequency, grade the condition as mild, moderate or severe. This automated grading can be matched against the specialist's assessment and its accuracy tested. In this work, we propose a computer-based automated hybrid technique for detecting the stage of Non-Proliferative Diabetic Retinopathy (NPDR) from color fundus images. Features are extracted from the sample images using image processing techniques and fed to a support vector machine (SVM). After a color normalization preprocessing stage, an evidence value for every pixel is calculated by the SVM. A mathematical morphological technique, a fuzzy c-means clustering technique, PCA, a support vector machine and a nearest-neighbor classifier are then used for further processing. The SVM classifier uses features extracted by combined 2DPCA instead of explicit image features as the input vector; combined 2DPCA is proposed and a virtual SVM is applied to achieve higher classification accuracy. We demonstrate a sensitivity of 97.1% for the classifier, with a specificity of 98.3%.
   Thus, an automated system for diagnosis of NPDR can be a useful tool for the specialist, supporting the screening and detection of early Diabetic Retinopathy changes, and hence enabling timely intervention and reduced DR (Diabetic Retinopathy) related blindness.
Cognitive processes underlying the meaning of complex predicates and serial verbs from the perspective of individuating and ordering situations in Banla BIBAFull-Text 81-87
  Samir Karmakar; Rajesh Kasturirangan
This paper presents a model of individuation and ordering of situations in discourse with special emphasis on complex predicate and serial verb constructions in Banla. We argue that individuation and ordering are a consequence of intending and contending functions underlying the act of languaging. Unlike earlier models that focus on the syntactic structure of serial verb and complex predicate constructions, this paper proposes the incorporation of syntagmatic and paradigmatic aspects of meaning construction in a processing model in order to come up with a cognitive account of situation individuation and situation ordering.
Evaluation of reinforcement learning techniques BIBAFull-Text 88-92
  Anil Kumar Yadav; Shaillendra Kumar Shrivastava
Reinforcement learning (RL) has become one of the most important approaches to machine intelligence. RL is now widely used in research fields such as intelligent control, robotics and neuroscience. It provides possible solutions within unknown environments, but at the same time we have to take care with its decisions, because RL learns independently, without prior knowledge or training, making decisions through trial-and-error interaction with its environment. In recent times much research has been done on RL, and researchers have proposed various algorithms and models, such as SARSA [2] and TDN [3], which try to solve sequential decision-making problems over continuous state and action spaces.
   In this paper we present a Q-learning algorithm and an evaluation of RL techniques (the reinforcement learning architecture and the algorithms for building a training matrix in the form of a state-action Q-table), involving a learner (a decision-making agent) that takes actions in an environment and receives rewards (or penalties) for its actions while trying to solve a problem. The learning agent, the fundamental element of reinforcement learning, is a decision maker that observes the system and selects an action for it.
   In reinforcement learning, especially in query-based self-learning, the learner (agent) requires many training execution cycles. In assessing and comparing Q-learning (QA) and TDN-based reinforcement learning, we found that QA is better in terms of discount rate, learning time and memory usage.
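A minimal tabular Q-learning loop of the kind evaluated in such work can be sketched as follows. This is a toy corridor task with illustrative names and parameters, not the authors' experimental setup.

```python
import random

# Minimal tabular Q-learning sketch: a 1-D corridor where the agent starts
# at cell 0 and is rewarded for reaching the rightmost cell. The Q-table
# maps (state, action) -> estimated return; actions are -1 (left), +1 (right).

def q_learn(n_states=5, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    q = {(s, a): 0.0 for s in range(n_states) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection: explore with prob. epsilon.
            a = (random.choice((-1, 1)) if random.random() < epsilon
                 else max((-1, 1), key=lambda act: q[(s, act)]))
            s2 = min(max(s + a, 0), n_states - 1)       # walls clamp moves
            r = 1.0 if s2 == n_states - 1 else 0.0      # reward at the goal
            # Temporal-difference update toward the greedy one-step target.
            target = r + gamma * max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q
```

After training, the Q-values for "move right" dominate those for "move left" in every non-terminal state, so the greedy policy walks straight to the goal.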
A family of multiple sub-filters based acoustic echo cancellers BIBAFull-Text 93-97
  Alaka Barik; Tarkeshwar P. Bhardwaj; Ravinder Nath
In this paper, a multiple sub-filter (MSF) approach is discussed in which, a single long filter (SLF) is partitioned into multiple subfilters to achieve fast convergence rate. The performance of the MSF parallel structure adaptation is studied for common error and different error modes using least mean square (LMS) adaptive algorithm. Simulation results show that the MSF structure provides better convergence over the SLF for both error signals. However, the steady state error performance of the different error adaptation algorithm (DEA) is poor as compared to that of common error adaptation algorithm (CEA) as well as that of the SLF adaptation algorithm. In order to achieve a trade-off between steady state error and convergence speed, a combination of both the algorithms is studied and is named as combined error adaptation algorithm (COMBEA). Further to reduce the computational load of updating all coefficients, called full update (FU) algorithm, of the MSF and SLF, a scheme named as selective coefficient update (SCU) algorithm is proposed in which only few coefficients are updated at each iteration. Finally the tracking performance of the MSF and SLF for time-varying acoustic channel is demonstrated.
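The single-long-filter LMS baseline that the MSF structures are compared against can be sketched as a system-identification toy: an adaptive FIR filter learns to mimic an unknown echo path so the error (the residual echo) shrinks. The function name, signal and parameters are illustrative assumptions.

```python
# Minimal LMS sketch: adapt an FIR filter to mimic an unknown echo path.
# In the multiple sub-filter (MSF) approach the taps would be split into
# blocks, each adapted from its own or a common error; shown here is the
# plain single-long-filter (SLF) update used as the baseline.

def lms_identify(x, unknown, taps=4, mu=0.05):
    w = [0.0] * taps                     # adaptive filter weights
    buf = [0.0] * taps                   # delay line of recent inputs
    errors = []
    for sample in x:
        buf = [sample] + buf[:-1]
        d = sum(h * u for h, u in zip(unknown, buf))   # desired (echo) signal
        y = sum(h * u for h, u in zip(w, buf))         # adaptive filter output
        e = d - y                                      # residual echo
        w = [h + mu * e * u for h, u in zip(w, buf)]   # LMS weight update
        errors.append(e)
    return w, errors
```

With a sufficiently exciting input and no measurement noise, the weights converge to the unknown impulse response and the error decays toward zero; partitioning `w` into sub-filters trades steady-state error against convergence speed, as the abstract describes.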
Speaker verification using combinational features and adaptive neuro-fuzzy inference systems BIBAFull-Text 98-103
  V. Srihari; R. Karthik; R. Anitha; S. D. Suganthi
A new, efficacious Speaker Verification System is proposed in this paper. A detailed study is made of different features, and finally a combination of them is used. These combinational features have been modeled with ANFIS and with an SVM classifier. The performance of both systems is evaluated with detection error trade-off curves and the Bayes risk function. Results show that the proposed system using combinational features with ANFIS is more efficient than the one using combinational features with the SVM classifier.
A symbolic approach for text classification based on dissimilarity measure BIBAFull-Text 104-108
  B. S. Harish; D. S. Guru; S. Manjunath; Bapu B. Kiranagi
In this paper, a simple and efficient symbolic text classification is presented. A text document is represented by the use of interval-valued symbolic features. Subsequently, a new feature selection method based on a new dissimilarity measure is also presented. The new feature selection method reduces the features in the representation phase for effective text classification. It keeps the best features for effective text representation and simultaneously reduces the time taken to classify a given document. To corroborate the efficacy of the proposed method, experiments have been conducted on four different datasets to evaluate its performance. Experimental results reveal that the proposed method gives better results when compared to state-of-the-art techniques. In addition, as it is based on a simple matching scheme, it achieves classification in negligible time and thus appears to be more effective for classification.
Relationship visualization between books and users based on mining library circulation data BIBAFull-Text 109-113
  Sumit Goswami; Susheel Verma; Nabanita R. Krishnan
A library has many databases from which information can be extracted. The data retrieved is superficial and describes the transactions of the library; the circulation data does not say much about relationships between users based on the books they have borrowed. The intention of this paper is to find a way to develop communities based on issue patterns, and to apply this in the library to learn how many similar users there are and which types of books a particular group of users prefers to read. The approach forms communities based on relationships among library users and books, and visualizes the relationships through a graph to present the extracted information more effectively.
Development of Hindi-Punjabi parallel corpus using existing Hindi-Punjabi machine translation system BIBAFull-Text 114-118
  Pardeep Kumar; Vishal Goyal
This paper describes the development of a Hindi-Punjabi sentence-aligned parallel corpus consisting of 50K sentences, built using an existing Hindi-Punjabi Machine Translation (MT) system (available at http://h2p.learnpunjabi.org). Such a parallel corpus is a most important resource for Natural Language Processing applications and research, so it was the need of the hour to develop it for work on newer and better techniques. The corpus has been sentence aligned and is available in both .doc and .xml formats. Shortly, this parallel corpus will be made freely available on the internet for researchers working in NLP. During the development of the parallel corpus, errors of different categories present in the Hindi-Punjabi MT system, such as transliteration, out-of-vocabulary and grammar-agreement errors, were found; a complete analysis of these errors is also presented. These errors were removed manually to make the parallel corpus clean and accurate. A list of new words was generated from the out-of-vocabulary words and added to the lexicon of the existing MT system; adding these words to the dictionary of the Hindi-Punjabi machine translation system increased its accuracy from 94% to 94.5%.
Interactive 3D rendering to assist the processing of distributed medical data BIBAFull-Text 119-126
  Hui Wei; Enjie Liu; Gordon Clapworthy
Medical data is often large in size, and there is an evident trend for it to become even larger. Such data is also being stored more frequently in distributed databases, which give users access to a much greater diversity of data than ever before. However, this implies that users now have to decide which data will be the most appropriate for their use, and whether interaction with the data should take place locally (so they will have to download the large datasets necessary for the task) or remotely (so the tools to be used must be present on the server).
   Web-based applications are restricted by bandwidth and other networking features; thus, for large 3D datasets, it is a challenge to provide, on the client side, real-time interactive operations to view the result that has been processed on the server. In this paper, we propose a generic, effective and secure approach to performing powerful interactive 3D visualisation operations in a web-based environment. The approach uses Java applets to achieve real-time interactive rendering of 3D graphics based on a two-layer server-side architecture. On the client side, users use a browser embedded with a Sun Java plug-in. A Java applet binds with VTK for visualising the data and VTK widgets are integrated into the Swing GUI. The rich set of widgets available in VTK provides a wide variety of interactive functions.
Fast hybrid rough-set theoretic fuzzy clustering technique with application to multispectral image segmentation BIBAFull-Text 126-129
  Arpit Srivastava; Abhinav Asati; Mahua Bhattacharya
Remotely sensed multispectral images are of high significance for the analysis of landscape change detection, land use/cover classification, etc. A novel method for segmentation of such images is presented in this paper. The proposed method combines a fuzzy clustering algorithm with a rough-set theoretic approach and a convergence-improving mechanism. Hybridization of fuzzy sets with rough sets results in an unsupervised framework which can handle the uncertainties associated with the process, while allowing overlapping of partitions at the same time. This process is time-consuming, however, due to the highly correlated nature of the dataset, which increases the computation time. Therefore, the fuzzy-rough hybrid clustering is supplemented with a suppression mechanism to enhance the speed of convergence, which eventually reduces the computation time. The experiments conducted demonstrate the merits of the proposed algorithm in segmenting small objects and sharp boundaries, which play an important role in remote-sensing image segmentation applications. A comparison of the proposed method with other similar models is presented in terms of computation time, to prove its suitability for real-time applications.
Video cut detection using dominant color features BIBAFull-Text 130-134
  G. G. Lakshmi Priya; S. Domnic
Video shot boundary detection has been an area of active research in recent years. It plays a major role in the digital video analysis domain: video compression, video indexing, content-based video retrieval, video scene detection and video object tracking. This paper approaches video cut transition detection based on block-wise histogram differences of the dominant color features in the HSV color space. Most cut identification techniques use a thresholding operation to discriminate among the inter-frame difference metric values and thus identify the video breakpoints; here, an automatic threshold calculation algorithm is used for the cut identification process. Experimental results show that the proposed method gives better results than existing methods.
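The thresholded histogram-difference idea can be sketched as below. This simplified version uses one global histogram per frame and a mean-plus-two-sigma rule standing in for the paper's automatic threshold algorithm; the function name and encoding are illustrative assumptions.

```python
# Sketch of cut detection via frame-to-frame histogram differences.
# Each frame is reduced to a colour histogram (a list of bin counts);
# a cut is declared where the difference jumps past an automatically
# derived threshold (here: mean + 2 * standard deviation of all diffs).

def detect_cuts(histograms):
    diffs = [sum(abs(a - b) for a, b in zip(h1, h2))
             for h1, h2 in zip(histograms, histograms[1:])]
    mean = sum(diffs) / len(diffs)
    std = (sum((d - mean) ** 2 for d in diffs) / len(diffs)) ** 0.5
    threshold = mean + 2 * std
    # diffs[i] compares frame i with frame i+1, so the cut is at i+1.
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]
```

The block-wise variant in the paper computes such differences per image block over the dominant HSV colours, which makes the measure less sensitive to local motion than a single global histogram.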
Pronunciation scoring for Indian English learners using a phone recognition system BIBAFull-Text 135-139
  Chitralekha Bhat; K. L. Srinivas; Preeti Rao
Feedback on pronunciation or articulation is an important component of spoken language teaching. Automating this aspect with speech recognition technology has been an active area of research in the context of computer-aided language-learning systems. Well-known limitations in the accuracy of automatic speech recognition (ASR) systems pose challenges to the reliable detection of pronunciation errors in the speech of non-native speakers. We present the design of a pronunciation scoring system using a phone recognizer developed with the popular HTK and CMU Sphinx HMM-based ASR toolkits. The system is evaluated on Indian English speech in the realistic situation where there is no matching database available for training the speech recognizer. Different approaches to the training of acoustic models and to constraining the phone recognition system are investigated.
Adaptive noise cancellation for system with multi channel modulation using BPNN BIBAFull-Text 140-144
  Pankaj Vyas; Paresh Rawat; Ankita Chouhan; Garima Upreti
Signals acquired through any modern sensor suffer from a variety of noises resulting from stochastic variations and deterministic distortions or shading. Hence it is desirable to smooth the noisy signal to obtain a signal of higher quality. This paper proposes a neural-network-based adaptive noise cancellation technique for a system with multichannel modulation. Noise cancellation is performed on the noisy signals using a back-propagation neural network, and its performance is compared with the ADALINE method; the evaluation of the results is based on the estimated error.
   The performance of the system is also checked by varying the learning rate, the momentum and the order of the filtering. The proposed method is tested on a large variety of multichannel signals. It is found that the performance of back-propagation is better than that of ADALINE in terms of mean square error.
An effective CBVR system based on motion, quantized color and edge density features BIBAFull-Text 145-149
  Kalpana S. Thakre; Archana M. Rajurkar; Ramchandra Manthalkar
Rapid development of multimedia and the associated technologies urges the processing of huge databases of video clips. The processing efficiency depends on the search methodologies utilized in the video processing system; use of inappropriate search methodologies may make the system ineffective. Hence, an effective video retrieval system is an essential pre-requisite for searching for relevant videos in a huge collection. In this paper, an effective content-based video retrieval system based on dominant features such as motion, color and edge is proposed. The system is evaluated using video clips in MPEG-2 format, and precision-recall is determined for each test clip.
Curved-straight lines-analysis (CSLA) algorithm for handwritten digit recognition enhancement BIBAFull-Text 150-154
  Young Suk Cho; Hyungsin Kim; Ellen Yi-Luen Do
In this paper, we propose a new recognition algorithm for handwritten digit recognition. This algorithm is designed to enhance the recognition accuracy of the current Microsoft SDK recognizer. The algorithm recognizes the unique signature of each number by comparing curved and straight lines and the writing sequences of the stroke. Through trial experiments, we achieved a positive recognition accuracy of 97.67%.
Gesture recognition by stereo vision BIBAFull-Text 155-162
  J. S. Prasad; Advitiya Saxena; Nilesh Javar; K. B. Kaushik; P. Chakraborty; G. C. Nandi
Recently many hand-held devices have been commercially deployed, but they suffer from problems such as unnatural interaction and discomfort in use. Computer-vision-based HCI (Human-Computer Interaction) is an effective and straightforward alternative. We have used a new, effective gesture recognition method with the stereo vision technique. The stereoscopic system is implemented using two standard (VGA) webcams. The webcams are calibrated using a chessboard pattern, and subsequently the rectified images, disparity and depth images are established. We applied two types of techniques and compared their outcomes on the basis of feature extraction and gesture recognition. The first technique is block matching; the second utilizes three-dimensional position, velocity, acceleration and orientation features with a Euclidean distance metric. Extracting features from three-dimensional information is a computationally complex process but provides better recognition results. Our experimentation with the above approaches provided far better results than gesture recognition techniques using a single webcam. Hence a stereo-vision-based system can be used for real-time applications in a simple and cost-effective way.
An intelligent prediction system for time series data using periodic pattern mining in temporal databases BIBAFull-Text 163-171
  S. Sridevi; S. Rajaram; C. Swadhikar
Data mining is concerned with analyzing large volumes of unstructured data to discover interesting regularities or relationships which in turn lead to better understanding of the underlying processes. Existing algorithms such as association rule mining, incremental mining and frequent pattern mining can be used to find valid periodic patterns, but they cannot be used to find peculiar data. In this paper, two algorithms, a peculiarity factor algorithm and a chi-square test algorithm, are used to find peculiar data in a temporal database presented in vertical format. If peculiar data are found in two different relations, there is a need to use a key value as a relevance factor in order to find the relevance between those relations. A new dataset is thus formed from the existing dataset after removal of the peculiar data. From the new dataset, periodic patterns are found by applying four-phase algorithms: singular periodic pattern mining, multi-event periodic pattern mining, complex periodic pattern mining and asynchronous sequence mining. Our proposed work focuses on prediction of time series data, done with the help of correlation estimation. After determining strong and weak attributes using correlation estimation, only strong attributes are considered to find out how each attribute is correlated with the others. Based on the correlation, we predict the required attribute values under given test conditions. From the prediction output, precision and recall are calculated, and hence accuracy is measured. Experimental results on real-life datasets demonstrate that the proposed algorithm is effective and efficient in predicting time series data.
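One common formulation of the peculiarity factor flags an attribute value as peculiar when its aggregate distance to all other values of that attribute is large. The sketch below illustrates that idea; the function names and the 1.5-times-mean threshold are illustrative assumptions, not the authors' implementation.

```python
# Sketch of peculiarity-factor-based outlier detection: the factor of a
# value is the sum of its distances (raised to a fractional power, here
# alpha = 0.5) to every other value of the same attribute. Values whose
# factor is well above the mean factor are flagged as peculiar data.

def peculiarity_factor(values, alpha=0.5):
    return [sum(abs(x - y) ** alpha for y in values) for x in values]

def peculiar_indices(values, threshold_scale=1.5):
    pf = peculiarity_factor(values)
    mean = sum(pf) / len(pf)
    return [i for i, p in enumerate(pf) if p > threshold_scale * mean]
```

In the paper's pipeline, the flagged values would be removed before the periodic-pattern mining phases are run on the cleaned dataset.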
Whorl identification in flower: a Gabor based approach BIBAFull-Text 172-178
  D. S. Guru; Y. H. Sharath Kumar; S. Manjunath
In this paper, a novel approach for identifying the whorl part of flowers, useful for flower classification, is presented. The problem is challenging because of the sheer variety of flower classes, intra-class variability, variation within a particular flower, and variability of imaging conditions such as lighting, pose, and foreshortening. A flower image is segmented using color information obtained with HYPE's color specifier. To identify the whorl, the Gabor response of the segmented flower image is extracted, and based on this response we present a method of identifying the whorl part of the flower. For experimentation, we have created our own dataset of 20 classes of flowers, each with 20 samples. To study the efficiency of the proposed method, we have compared the obtained results with those provided by two human experts, and the results are encouraging.
IPR protection in IC's using watermarks BIBAFull-Text 179-185
  R. Balamurugan; R. Radhakrishnan
Creation and innovation drive the world economy. The intellectual property rights system provides wealth for organizations and individuals through incentives for their inventions. Due to rapid proliferation and globalization, the importance of the intellectual property rights system has grown; at the same time, awareness regarding IP protection has also begun to rise. Integrated circuit reuse has resulted in the possibility of IP infringement by product vendors or users. The protection of integrated circuit IP rights using watermarking is discussed in this paper, along with an elaboration of various research activities related to the intellectual property rights protection of integrated circuits using watermarking.
Feature level fusion of multi-instance finger knuckle print for person identification BIBAFull-Text 186-190
  D. S. Guru; K. B. Nagasundara; S. Manjunath
The aim of this paper is to study the effect of feature level fusion of multiple instances of finger knuckle prints. Initially, Zernike moments are extracted for a single instance of a person's finger knuckle print and the identification accuracy is studied. Subsequently, the identification accuracy using feature level fusion of multiple knuckle-print instances of a person is studied. As the feature vectors of different knuckle-print instances have the same length, one can concatenate them to generate a new feature vector. This concatenation may, however, lead to the curse of dimensionality; to handle it, the feature dimensions are reduced both before and after feature-set fusion using Principal Component Analysis (PCA). Experiments are conducted on the PolyU finger knuckle print database to assess the actual advantage of multi-instance fusion performed at the feature extraction level, in comparison to a single knuckle-print instance. Further, extensive experiments are conducted to evaluate the performance of the proposed method against subspace methods.
Reversible data hiding by alternate shifting of peaks in the histogram BIBAFull-Text 191-197
  Arijit Sur; Udit Singh
Recently, reversible data hiding technology has been discussed extensively because of its major characteristic: it enables the exact reconstruction of the image from the watermarked image. In this paper, a high-capacity, high-quality reversible watermarking method based on alternate shifting of peaks in the histogram of the image is described. The embedded data can be extracted correctly, and the original image restored completely from the watermarked image without any distortion. Performance comparisons with other histogram-based schemes demonstrate the superiority of the proposed scheme.
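The basic mechanism behind histogram-shift reversible hiding can be sketched with a single peak (the paper alternates shifts between multiple peaks, which this minimal sketch does not do; the choice of `peak` and empty `zero` bin is an assumption):

```python
def embed(pixels, bits, peak, zero):
    """Single-peak histogram-shifting embedding. Assumes peak < zero and
    that no pixel has value `zero` (an empty histogram bin). Pixels between
    peak and zero are shifted up by 1 to free the bin peak+1; each pixel
    equal to `peak` then carries one bit (0 -> peak, 1 -> peak+1)."""
    out, i = [], 0
    for v in pixels:
        if peak < v < zero:
            out.append(v + 1)           # shift to free the bin peak+1
        elif v == peak and i < len(bits):
            out.append(peak + bits[i])  # embed one bit at a peak pixel
            i += 1
        else:
            out.append(v)
    return out

def extract_and_restore(marked, peak, zero, nbits):
    """Read the embedded bits back and undo the shift, recovering the
    original pixel values exactly (the 'reversible' property)."""
    bits, restored = [], []
    for v in marked:
        if v in (peak, peak + 1) and len(bits) < nbits:
            bits.append(v - peak)       # peak -> 0, peak+1 -> 1
            restored.append(peak)
        elif peak < v <= zero:
            restored.append(v - 1)      # undo the shift
        else:
            restored.append(v)
    return bits, restored
```

Capacity equals the count of pixels in the peak bin, which is why high-capacity variants shift several peaks.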
Spots and color based ripeness evaluation of tobacco leaves for automatic harvesting BIBAFull-Text 198-202
  D. S. Guru; P. B. Mallikarjuna
In this paper, we propose a model based on ripeness evaluation for the classification of tobacco leaves, useful for automatic harvesting in a complex agricultural environment. The CIELAB color space model is used to segment the leaf from the background. We propose a spot detection algorithm that estimates the density of maturity spots on a leaf using a Laplacian filter and a Sobel edge detector. The degree of ripeness of a leaf is computed from the density of mature spots and the greenness of the leaf. Leaves are then classified into three classes, viz. ripe, unripe, and over-ripe, based on the computed degree of ripeness. Experimentation is conducted on our own dataset of 274 images of tobacco leaves captured under both sunny and cloudy lighting conditions in a real tobacco field. The experimental results indicate that the proposed model achieves good average classification accuracy.
Structured testing using ant colony optimization BIBAFull-Text 203-207
  Praveen Ranjan Srivastava
Structural testing is one of the most widely used testing paradigms for software. The aim of this paper is to present a simple and efficient algorithm that can automatically generate all possible paths in a control flow graph for structural testing. The pheromone-releasing behavior of ants is used in this algorithm to extract optimal paths. The algorithm generates a number of paths equal to the cyclomatic complexity.
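The relationship the abstract relies on — the number of linearly independent paths equals McCabe's cyclomatic complexity V(G) = E − N + 2P — can be checked on a small control flow graph. A plain DFS enumeration stands in for the ant colony search here, which is an assumption on our part:

```python
def cyclomatic_complexity(num_edges, num_nodes, components=1):
    """McCabe's cyclomatic complexity: V(G) = E - N + 2P."""
    return num_edges - num_nodes + 2 * components

def all_paths(graph, start, end, path=None):
    """Enumerate all simple start-to-end paths in a CFG given as an
    adjacency dict (plain DFS; the paper uses ant colony optimization)."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    return [p for nxt in graph.get(start, ())
              if nxt not in path
              for p in all_paths(graph, nxt, end, path)]
```

For an if/else diamond (4 nodes, 4 edges), V(G) = 4 − 4 + 2 = 2, matching the two enumerated paths.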
DCT-based unique faces for face recognition using Mahalanobis distance BIBAFull-Text 208-212
  Vikas Maheshkar; Sushila Kamble; Suneeta Agarwal; Vinay Kumar Srivastava
In this paper, we propose a technique to generate DCT-based unique normalized faces using Principal Component Analysis (PCA). The idea of PCA is to decompose face images into a small set of characteristic feature images. In the proposed technique, we generate a feature image by finding the peak values in the absolute DCT matrix, followed by normalization. This maximizes the scatter across the training dataset to give more discriminating power. The feature images so generated are called unique normalized faces, as each is different from all other training faces. They yield high recognition performance since they capture the global features in a low-dimensional linear "face space" extracted from the individual faces of the training dataset. We use the Mahalanobis distance to measure the similarity between the original face and the test face. The algorithm is tested on the ORL face dataset. The proposed technique improves the face recognition rate compared to Eigenface, DCT-normalization and Wavelet-Denoising.
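The Mahalanobis distance used for matching is, in general, sqrt((x − μ)ᵀ S⁻¹ (x − μ)); a minimal sketch assuming a diagonal covariance matrix S (a simplification on our part — the paper does not state this) is:

```python
def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance sqrt((x-mu)^T S^-1 (x-mu)), simplified to a
    diagonal covariance matrix S given as per-feature variances."""
    return sum((a - m) ** 2 / v for a, m, v in zip(x, mean, var)) ** 0.5
```

Unlike plain Euclidean distance, this down-weights differences along high-variance feature dimensions, which is why it suits PCA-style feature spaces where component variances differ widely.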
Listing elements of rich information environments BIBAFull-Text 213-216
  Lakshmi Kumar
An information-rich environment creates a host of privacy-related issues that have been addressed through frameworks, principles and models [6, 4, 2, 10, 11, 12]. But when a product that handles rich information is created, users need to be informed about how their information is collected, stored and distributed. The author uses Weiser's ubiquitous computing scenario as a reference point to create and validate a list of elements to represent this information: 1) Cue Points, 2) Identity Recognition, 3) Data Share, 4) Data Storage and 5) Data Access.
Finding number of clusters using VAT image, PBM index and genetic algorithms BIBAFull-Text 217-221
  Malay K. Pakhira; Amrita Dutta
Determining the number of clusters present in a data set is an important problem in clustering. Very few techniques can solve this problem satisfactorily, and most of them are expensive in terms of computation time. This paper proposes an alternative solution that makes use of genetic algorithms, the PBM cluster validity index, and a recently developed visual mechanism for determining clustering tendency (VAT, Visual Assessment of Tendency for clustering). It is shown that the present approach is able to find the appropriate number of clusters very efficiently.
Patent classification of the new invention using PLSA BIBAFull-Text 222-225
  Ranjeet Kumar; Shrishail Math; R. C. Tripathi; M. D. Tiwari
In the current world scenario of research and development leading to patenting, classifying content in accordance with the subject areas to which it belongs is a challenging task. This is because today's R&D draws its novelty not from one technical area but from a unique combination of different technical areas. For example, a typical ICT patent may be a composite advance combining control engineering, electronic components, database technology, information retrieval methodology, Internet and wireless technology, and speech, signal, and image processing. In this paper, work is reported on the content classification of a newly drafted patent document using the Probabilistic Latent Semantic Analysis (PLSA) technique. PLSA is used for automated indexing of the document by creating an indexer that tokenizes the documents and creates a proper generative model. A singular value decomposition model is used to compact the size of the term-document matrix and the co-occurrences within it. The objective is to take the large document corpora generated from past patent documents and categorize new documents based on the concept model. The approach is illustrated and tested on an example classification of the content of two typical US patent classes, and has been found to work well for them.
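The indexing step — tokenizing documents into a term-document co-occurrence matrix, the input to PLSA and its SVD-based compaction — can be sketched as follows (whitespace tokenization is an assumption; the paper's tokenizer is not specified):

```python
from collections import Counter

def build_term_document_matrix(docs):
    """Tokenize documents and build a term-document count matrix:
    one row per vocabulary term, one column per document."""
    tokenized = [doc.lower().split() for doc in docs]
    vocab = sorted({tok for toks in tokenized for tok in toks})
    index = {tok: i for i, tok in enumerate(vocab)}
    matrix = [[0] * len(docs) for _ in vocab]
    for j, toks in enumerate(tokenized):
        for tok, count in Counter(toks).items():
            matrix[index[tok]][j] = count
    return vocab, matrix
```

PLSA (or a truncated SVD) then factors this sparse count matrix into a small number of latent "concepts" against which new patent documents are scored.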
Evidentiary usage of e-mail forensics: real life design of a case BIBAFull-Text 226-230
  Lokendra Kumar Tiwari; Shefalika Ghosh Samaddar; Arun Kumar Singh; C. K. Dwivedi
Computer forensics, the emerging branch of forensic science in which content processed electronically and stored digitally is acquired, preserved, retrieved and presented, is used for legal evidence in computer-related crimes or any other unethical practice involving the manipulation of digital content. Such digital content can take many forms, manifested by different file formats and digital artifacts.
   This paper concentrates on the evidential use of deleted e-mail recovered from off-line mailboxes to provide digital evidence in cases of non-repudiation by either the sender or the receiver. This is accomplished by using the digital forensic tool EnCase 6.0 and applying a capturing mechanism to prove the legitimacy of the evidence. The step-by-step procedure increases practical insight into the capture of deleted e-mail as digital evidence of non-repudiation, and provides an example of preparing evidentiary e-mail for presentation in a court of law or for any other legal procedure. Recovery of deleted e-mails in the form of digital evidence requires certain legal bindings, which may be provided under this mechanism. This paper contributes to the extent that the recovered files are ready digital evidence in a court of law.
iVolBrush: an intelligent interactive tool for efficient volume selection and editing BIBAFull-Text 231-235
  Youbing Zhao; Gordon J. Clapworthy; Feng Dong; Xiangrong Zhang; Wei Chen; Marco Viceconti
Volume rendering is frequently used for visualising 3D volume data. In practical applications, there may be a need not only to render the volume data but also to select and edit it directly, in a paradigm similar to image editing. As the basis of volume editing, efficient volume selection tools can help users locate and select the volume region of interest quickly and conveniently. However, semi-automatic volume selection has not been studied in depth in existing work. This short paper presents a status report on our current research on semi-automatic volume selection -- the intelligent volume brush iVolBrush -- as a segmentation tool for the EC-funded VPHOP project. The initial stage -- efficient 3D painting -- has been completed. Major improvements to VTK image selection, including the re-use of brush stencil data to avoid expensive stencil regeneration, have been made to enhance performance and meet the challenge of interactivity. Further intelligent selection features, such as region-based or learning-based selection, are being investigated and will be the focus of the next stage of the work.
Adaptive pragmatic analysis of natural language BIBAFull-Text 236-240
  Bhavesh Kumar; Hima Bindu Maringanti; Krishna Asawa
Natural language is the mechanism by which humans express themselves. In order to understand what has been expressed, language must be analyzed at different levels. Like formal languages, natural language has a definite structure and follows a grammar, though not necessarily an exhaustive one; it is sometimes dynamic, as natural language, especially English, is ever-evolving. All natural languages are inherently ambiguous, but in a given context a sentence has only a single meaning. So, in addition to syntactic and semantic analysis, a higher-level mechanism of understanding has to be developed, termed by linguists pragmatic analysis. The objective of the present paper is to produce a working model of pragmatic analysis over a sample set of sentences chosen from British English, which could be further extended. A machine learning approach of neuro-fuzzy interpretation is used to achieve adaptive language understanding, with a focus on the intentions of the speaker/writer.
Rhythm pattern representations for tempo detection in music BIBAFull-Text 241-244
  Sankalp Gulati; Preeti Rao
Detection of the perceived tempo of music is an important aspect of music information retrieval. Perceived tempo depends in a complex manner on the rhythm structure of the audio signal. Recently proposed machine learning approaches avoid peak picking and instead match rhythm patterns against stored tempo-annotated songs in a database. We investigate different signal processing methods for rhythm pattern extraction and evaluate them on the music tempo detection task. We also investigate the effect of using additional information about the rhythmic style on the performance of the tempo detection system. The different systems are comparatively evaluated on a standard Ballroom Dance music database and an Indian music database.
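One common rhythm-pattern representation is the autocorrelation of an onset-strength envelope; a minimal lag-picking tempo estimator along these lines (not necessarily the feature the authors evaluate, and the BPM range is an assumption) is:

```python
def autocorr_tempo(envelope, frame_rate, bpm_range=(60, 180)):
    """Estimate tempo (BPM) from an onset-strength envelope by picking the
    autocorrelation lag with maximal energy in a plausible BPM range."""
    n = len(envelope)
    lo = int(frame_rate * 60 / bpm_range[1])  # shortest beat period (frames)
    hi = int(frame_rate * 60 / bpm_range[0])  # longest beat period (frames)
    best_bpm, best_score = None, float("-inf")
    for lag in range(max(lo, 1), min(hi, n - 1) + 1):
        score = sum(envelope[i] * envelope[i - lag] for i in range(lag, n))
        if score > best_score:
            best_score, best_bpm = score, frame_rate * 60 / lag
    return best_bpm
```

The machine-learning approaches the abstract cites replace this single peak-pick with pattern matching of the whole autocorrelation curve against annotated songs.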
NEMO: the network environment for multimedia objects BIBAFull-Text 245-249
  Sebastian Lob; Jörg Cassens; Michael Herczeg; Jan Stoddart
In this article, we present the basic architecture of the Network Environment for Multimedia Objects (NEMO). NEMO is a smart media environment for contextualized, personalized, and device-specific interaction with multimedia objects. It provides its users access to interactive multimedia objects across a variety of computing platforms and devices, such as mobile phones, multi-touch tables, desktop computers and interactive whiteboards. NEMO Multimedia Objects are containers for metadata and media objects; such media objects can be, for example, images, texts, animations, videos, or audio files. Dedicated NEMO clients not only offer means for presenting media objects but also a runtime environment for applications on such objects. The system is suitable for application domains ranging from work environments to educational use and recreational activities.
Steps towards a system for inferring the interruptibility status of knowledge workers BIBAFull-Text 250-253
  Lukas Ruge; Jörg Cassens; Martin Christof Kindsmüller; Michael Herczeg
In teams working closely together, interruptions of coworkers are normal and necessary. One of the goals of the ambient intelligent computing framework MATe (Mate for Awareness in Teams) is to prevent unwanted interruptions and at the same time improve social interaction. By creating awareness of each other's situation, users are able to judge how interruptible colleagues are. We describe the concept of MATe and its components and present related work on interruption handling and ontology-based reasoning as well as outline our current and future research in the area of context-aware systems.
Feature-based tracking approach for detection of moving vehicle in traffic videos BIBAFull-Text 254-260
  Elham Dallalzadeh; D. S. Guru
In this paper, we present a novel approach for the detection of moving vehicles in traffic videos. We propose feature-based (corner-based) tracking to track and classify moving vehicles distinguished from the extracted ghost or cast shadow. The corner points of the vehicles are detected, labeled and grouped to generate a unique label per vehicle. This approach is able to deal with different types of deformation in the shape of the vehicles due to changes in size, direction and viewpoint, and the proposed method is entirely free from motion estimation. To demonstrate the robustness and accuracy of our system, experiments are conducted on traffic videos taken from outdoor boulevards and city roads, including different complex backgrounds, illumination, motion, camera positions, clutter and vehicle directions. We detect moving vehicles at an average rate of 98.8% per scene. The results show the robustness of our proposed algorithm.
A writer-independent off-line signature verification system based on signature morphology BIBAFull-Text 261-265
  Rajesh Kumar; Lopamudra Kundu; Bhabatosh Chanda; J. D. Sharma
In this work, we address off-line signature verification as a writer-independent system. We propose a set of morphological features extracted from off-line signature images. To examine the effectiveness of the features, a publicly available signature database, namely the CEDAR signature database, is used. A pair of signatures is fed to the system, which gives an inference about their (dis)similarity. To obtain a compact set of features, a multilayer perceptron based feature analysis technique is utilized. A 10-fold cross-validation framework based on a support vector machine is used for verification. Receiver operating characteristic (ROC) analysis gives an equal error rate (EER) of 11.59%, which is comparable to the state of the art reported on this database.
Demand based approach to control data load on email servers BIBAFull-Text 266-270
  Gaurav Kumar Tak; Anubhav Kakkar
E-mail is the most popular and widely used application on the Internet. E-mails are not only a popular means of electronic communication but also a fast, cheap, handy and reasonably secure one. With the growing popularity of e-mail, issues such as data load, data traffic and congestion have also grown and need to be controlled. We must identify modifications and extensions of features that can reduce the load on the network, because solving those issues by adding hardware may not always be a feasible option.
   This paper proposes methods that not only help solve these growing issues but also ease use for users in many ways. We introduce methods by which less-needed or unneeded mails can be deleted automatically or manually under a demand-based approach. We propose a novel algorithm by which the sender can track whether an e-mail has been read (if the receiver has no objection). Also, the same mail or multiple mails can be referenced in later mails through logical links to them. Our implementation shows improved accuracy in memory control and congestion control; the results represent an average saving of 12.8% of the total occupied memory over a period of one month.
SVM based context awareness using body area sensor network for pervasive healthcare monitoring BIBAFull-Text 271-278
  Sonali Agarwal; A Divya; G. N. Pandey
In the present era, advances in computer processing power, data communication capabilities, low-power microelectronic devices and micro sensors have increased the popularity of wireless sensor networks in real life. A body area sensor network is a group of sensor nodes inside and outside the human body for the continuous monitoring of health conditions, behavior and activities. Context awareness in pervasive health care is a proactive approach, different from the conventional event-driven model (for example, visiting a doctor when sick), in which a patient's health conditions are continuously monitored through a body area sensor network. This paper presents a layered architecture of a Wide Area Wireless Sensor Body Area Network (WA-WSBAN), along with data fusion techniques, standards and the sensor network hardware requirements for context awareness.
   A BodyMedia sensor dataset collected from 9 different sensor nodes has been used to classify user activities with reference to different sensor readings. The context information derived from the proposed WA-WSBAN architecture may be used in pervasive healthcare monitoring to detect various events, and accurate episodes and unusual patterns and activities obtained from the study can be marked for later review. In this research work, patient activity and gender classification has been done using one-vs-all and multi-kernel based support vector data classification. Similar practices may be utilized for the study of various observations in real-time health care applications, and proactive measures may be initiated based on the results obtained from data classification.
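The one-vs-all decomposition mentioned in the abstract trains one binary SVM per activity class and predicts the class whose decision function gives the largest margin. The decision rule can be sketched generically (the scorers below are stand-ins for trained SVM decision functions, and the labels are illustrative):

```python
def one_vs_all_predict(scorers, x):
    """One-vs-all multiclass decision: `scorers` maps each class label to a
    real-valued margin function (e.g. a trained binary SVM's decision
    function); predict the label with the largest margin on sample x."""
    return max(scorers, key=lambda label: scorers[label](x))
```

Each binary scorer is trained to separate its own class from all the others, so the class with the most confident positive margin wins the multiclass vote.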
A semantic search system using query definitions BIBAFull-Text 279-283
  A. K. Sharma; Neelam Duhan; Bharti Sharma
A Web search engine is designed to search for information over the World Wide Web. When a user submits a query, the information returned is often very large and inaccurate, which results in increased user-perceived latency. In this paper, a novel approach of definition-based search is introduced to solve this problem. The proposed system searches and displays results based on themes, definitions and synonyms of the query keywords, generally extracted from web resources and stored in a separate definition repository. It extends traditional keyword-based web search in order to provide semantic and context-based search. The system works as a layer above the keyword-based search engine, generating sub-queries based on different meanings of the query keywords, which in turn are sent to the next layer, i.e. the keyword-based search engine, to perform the Web search. Experiments show that this approach is efficient, as it returns relevant pages and reduces the search space to a large extent.
Rotated complex wavelet transform with vocabulary tree for content based image retrieval BIBAFull-Text 284-291
  Anil Balaji Gonde; R. P. Maheshwari; R. Balasubramanian
In this paper, we propose a new approach for image retrieval using texture features. The texture features are captured using a combination of the two-dimensional complex wavelet transform (CWT) and rotated complex wavelet filters (RCWF), together with a spatial orientation tree (SOT) and a vocabulary tree (VT). The parent-offspring relationships among the wavelet coefficients in the multi-resolution wavelet sub-bands are captured with the help of the SOT, which gives a set of descriptor vectors for each image; these are further indexed using the vocabulary tree. Directional information is captured more precisely with the CWT and RCWF than with the discrete wavelet transform. The proposed method is evaluated on a texture database, and a significant improvement in average recall rate is seen compared to the method using only the complex wavelet transform and rotated complex wavelet filters.
A novel human computer interface based on hand gesture recognition using computer vision techniques BIBAFull-Text 292-296
  Siddharth Swarup Rautaray; Anupam Agrawal
In daily life, human beings communicate with each other using a broad range of gestures. Apart from interpersonal communication, many hours are spent interacting with electronic devices. In the last decade, new classes of devices for accessing information have emerged along with increased connectivity, and in parallel with the proliferation of these devices, new interaction styles have been explored. The objective of this paper is to provide a gesture-based interface for controlling applications such as a media player using computer vision techniques. The human-computer interface application consists of a central computational module that applies Principal Component Analysis to gesture images, finds the feature vectors of the gesture and saves them into an XML file. Recognition of the gesture is done by the k-nearest-neighbour algorithm. The training images are made by cropping the hand gesture from a static background, detecting the hand motion using the Lucas-Kanade pyramidal optical flow algorithm. This hand gesture recognition technique will not only replace the use of the mouse to control the media player but also provide different gesture commands useful in controlling the application.
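The recognition step — k-nearest-neighbour matching of a PCA feature vector against stored, labelled gesture vectors — can be sketched as follows (the Euclidean metric, k = 3 and the gesture labels are illustrative assumptions):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest-neighbour classification: `train` is a list of
    (feature_vector, label) pairs (e.g. PCA-projected gestures); return
    the majority label among the k vectors closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Because k-NN stores the training vectors directly, adding a new gesture command only requires appending labelled examples, with no retraining step.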
Unusual activity detection for video surveillance BIBAFull-Text 297-305
  Rajat Singh; Sarvesh Vishwakarma; Anupam Agrawal; M. D. Tiwari
Video surveillance has gained importance in law enforcement, security and military applications. Such a system consists of processing steps such as object detection, movement tracking, and activity monitoring. The paper's contribution is a human activity analysis system that both detects a human carrying or abandoning an object and segments the object from the human so that it can be tracked. Objects are segmented from the background using an advanced Gaussian mixture model. The tracking algorithm considers the human as a whole from frame to frame; it does not track human parts such as limbs. Object features such as center of mass, size, and bounding box are used to estimate a matching between objects in consecutive frames. Once the object is segmented and tracked, a Bayesian inference framework is used for event analysis. The system uses a single camera view, and unusual activity is detected from the detected objects and the object tracking results; the operator is notified when an unusual activity is detected.
"Dinner Party" sociable interfaces in a tabletop art project BIBAFull-Text 306-310
  Hye Yeon Nam; Carl DiSalvo; Ellen Yi-Luen Do; Sam Mendenhall
This paper explores the topic of sociable interfaces, demonstrated in an embedded tabletop application and a psychological friendship framework called "Dinner Party," in which a user can have a dinner party with friendly virtual creatures while dining alone. In this project, we are interested in determining how everyday objects can be transformed into sociable creatures that interact with people on a psychological level.
Installation to teach energy conservation to kids through tangible objects in an outdoor environment BIBAFull-Text 311-315
  Samiksha Kothari; Arnab Chakravarty
This paper presents a design case of an interactive installation in a contextual outdoor environment for helping kids learn how to save energy in their day-to-day activities. Playful interaction, physical movement and the manipulation of tangible objects were combined to create a playful learning experience linked to daily behaviors. By eliminating traditional input devices such as buttons and mice, the underlying technology was made as invisible as possible to ensure an engaging and curiosity-arousing experience.
Towards tabbing aware recommendations BIBAFull-Text 316-323
  Geoffray Bonnin; Armelle Brun; Anne Boyer
Present-day web browsers possess several features that facilitate browsing tasks. Among these, one of the most useful is tabbed browsing: nowadays, it is very common for web users to open several tabs and switch from one to another while navigating. Taking parallel browsing into account is thus becoming very important in the frame of web usage mining. Although many studies of web users' navigational behavior have been conducted, few of them deal with parallel browsing. This paper is dedicated to such a study.
   Taking parallel browsing into account requires information about when tab switches are performed in user sessions. However, current browsers do not allow such information to be acquired explicitly, and the data available for web usage mining usually consists of raw navigation logs in which parallel sessions are mixed. Therefore, we propose to obtain this information in an implicit way.
   We thus propose the TABAKO model, which is able to detect tab switches in raw navigation logs and to benefit from this knowledge in order to improve the quality of web recommendations. Experimental studies are performed on an open browsing dataset. The results validate the ability of our algorithm to detect parallel sessions and to exploit them to improve results compared to a state-of-the-art recommendation model.