ARPN Journal of Engineering and Applied Sciences
March 2015 | Vol. 10 No. 5
Title:
Homomorphic encryption based data security on federated cloud computing
Author(s):
Anitha R. and Vijayakumar V.
Abstract:
Cloud computing usage has increased rapidly in both industry and research. As data volumes grow, federated clouds are increasingly adopted to meet business needs. In a federated cloud, where data is stored and processed away from both the user and the cloud service provider, the privacy and integrity of the data play a crucial role. This paper proposes a practical and efficient method for securing data stored in a federated cloud environment using homomorphic techniques. Security is provided by storing the data in encrypted form in the cloud, and the cipher key generated for encrypting the data plays a major role. The paper explores important aspects of this context and examines the role of metadata in data security, which improves performance in a secure manner. The proposed novel homomorphic key distribution protocol is the key area under focus. This work aims to promote the use of homomorphism in multi-clouds for its ability to reduce security risks, using an enhanced modified Feistel technique.
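The homomorphic property the abstract relies on can be illustrated with a toy example. This is not the paper's modified-Feistel scheme; it is textbook unpadded RSA, which happens to be multiplicatively homomorphic, shown here only to make the idea of computing on ciphertexts concrete (the tiny key is insecure by design):

```python
# Toy unpadded RSA key (textbook example, insecure): n = 61 * 53,
# e * d = 1 (mod 3120), so decrypt inverts encrypt.
n, e, d = 3233, 17, 2753

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

# Multiplicative homomorphism: the product of two ciphertexts decrypts
# to the product of the plaintexts, so the cloud can combine encrypted
# values without ever decrypting them.
combined = (encrypt(4) * encrypt(6)) % n
```

Here `decrypt(combined)` recovers 4 * 6 = 24 even though neither plaintext was ever exposed to the party doing the multiplication.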
Full Text
Title:
Intrusion detection model using integrated clustering and decision trees
Author(s):
Ranita Pal and Sumaiya Thaseen
Abstract:
This paper proposes a hybrid intrusion detection model using K-means clustering, attribute selection and decision trees. K-means clustering is a simple and convenient method for grouping anomalies and the different attack types in network traffic. An enhanced mechanism is developed using a cluster-center initialization algorithm for K-means and entropy-based decision trees. After clustering, attribute subset selection is performed using the entropy method, and final classification of attack categories is done using decision trees. The model works in two modes, online and offline. The offline mode works on sample data, which is processed to obtain the rule set of the decision tree. Data from the online mode is then compared against those rules to determine its category and identify intrusions in the packets.
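The cluster-then-classify pipeline described above can be sketched in miniature: a toy 1-D K-means separates normal from anomalous traffic, and a one-node decision rule is derived from the cluster boundary. The traffic values and the two-cluster setting are illustrative, not taken from the paper:

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Toy 1-D K-means over a single traffic feature (e.g. packets/sec)."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Illustrative data: low rates are normal traffic, high rates a flood attack.
traffic = [3, 4, 5, 120, 130, 125]
lo, hi = kmeans_1d(traffic)

# A one-node "decision tree" derived from the cluster boundary; the real
# model would build a full tree over entropy-selected attributes.
threshold = (lo + hi) / 2

def classify(rate):
    return "attack" if rate > threshold else "normal"
```

In the paper's offline mode this rule set would be learned from sample data; the online mode would run each incoming packet's features through `classify`.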
Full Text
Title:
An insight on reputation based incentive schemes and throughput feedback routing in MANETs
Author(s):
Christy Jackson, V. Vijayakumar, Subramaniyaswamy V. and Anusooya G.
Abstract:
A mobile ad-hoc network (MANET) is a dynamic wireless network composed without any fixed infrastructure. Every node in the network can act as a router. These mobile networks are much more vulnerable than wired networks because of their limited physical security, power constraints, dynamically changing network topology, and lack of proper centralized administration. This paper describes attacks at each layer of the OSI model and outlines some of the attacks faced by MANETs, including packet drop, flooding, black hole, link spoofing, and wormhole attacks. The intention of this paper is to survey attacks on mobile ad-hoc networks and their routing protocols.
Full Text
Title:
Capacity predictor with varying pricing scheme and interoperability decision in a bursting situation for a cloud computing environment
Author(s):
N. S. Gowri Ganesh and A. Rajiv Kannan
Abstract:
The usage of cloud computing services is increasing rapidly across all domains, inducing cloud service providers to increase their underlying resource capacity. Simply adding capacity, however, does not guarantee optimal utilization of resources. Since optimal resource utilization is one of the main characteristics of the cloud, we provide an analysis method that helps both the cloud service provider and the cloud service consumer benefit from the cloud. Our first step is to determine whether a user's resource request, compared with actual usage, is over-provisioned, under-provisioned or optimally provisioned. Based on the usage pattern, we propose a capacity predictor that estimates, for a specified period of time, the capacity requirement of the provider's entire cloud. This helps administrators decide whether to add more resources to the cloud, fetch resources from another cloud, or migrate workload to another cloud. Along with this decision, we also propose a dynamic pricing scheme in which the service cost varies with demand and supply, benefiting both service providers and service consumers. Our approach ensures optimal utilization of resources with increased service usage of the available resources.
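The first step of the analysis, comparing a resource request against actual usage, amounts to a simple three-way classification. A minimal sketch follows; the 20% headroom threshold is an illustrative assumption, not a value from the paper:

```python
def provisioning_status(requested, used, slack=0.2):
    """Classify a resource request against actual usage.
    `slack` is the tolerated headroom fraction (illustrative threshold)."""
    if used > requested:
        return "under-provisioned"
    if used < requested * (1 - slack):
        return "over-provisioned"
    return "optimally provisioned"
```

The capacity predictor would then aggregate these per-user verdicts over the prediction window to decide whether to augment, fetch or migrate.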
Full Text
Title:
Penalty-based PageRank algorithm
Author(s):
B. Jaganathan and Kalyani Desikan
Abstract:
In this paper we give a brief overview of the original PageRank algorithm used in the Google search engine. This algorithm exploits the link structure of the web and greatly improves the results of web search. We propose a new method for computing PageRank based on penalty scores assigned to web pages that are accessed through advertisement links. We compare the page ranks obtained using the original PageRank algorithm and our proposed penalty-based method.
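The penalty idea can be sketched as a standard power-iteration PageRank in which a page reached through an advertisement link has its incoming score scaled by a penalty factor. The graph, damping value and penalty scores below are illustrative assumptions, not the paper's, and penalized ranks are no longer normalized to sum to one:

```python
def pagerank(links, penalty, d=0.85, iters=100):
    """Power-iteration PageRank; a page's incoming score is scaled by its
    penalty factor (1.0 = normal page, < 1.0 = reached via an ad link)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                # distribute p's rank over its out-links, damped by d,
                # then apply the receiving page's penalty
                new[q] += d * rank[p] / len(outs) * penalty.get(q, 1.0)
        rank = new
    return rank
```

On a fully connected three-page graph where only page C carries a penalty of 0.5, C ends up ranked below the symmetric pages A and B, which is the intended effect of demoting ad-reached pages.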
Full Text
Title:
An efficient and fast brain CT image classification using a hybrid technique
Author(s):
A. Veeramuthu, S. Meenakshi and Yalavarthi Dharma Tejaswi
Abstract:
Nowadays, brain tumors stand out among the most hazardous maladies occurring regularly among people. The chances of survival can be increased if the tumor is located accurately at its initial stage. CT brain imaging is extensively used to visualize the anatomy and structure of the brain. The images created by CT have high tissue contrast and few artifacts. CT has several advantages over other imaging procedures, giving high differentiation between soft tissues. Nonetheless, the amount of information is far too great for manual examination, which has been one of the greatest obstacles to the effective use of CT images. The recognition and prediction of tumors requires several operations on the CT image, including image preprocessing, segmentation, feature extraction, feature selection and classification. The concluding classification step, using a hybrid technique, determines the status of the CT image as normal, benign, moderate or malignant. Finally, we show experimentally that our proposed framework predicts tumor disease more effectively and efficiently than other frameworks.
Full Text
Title:
Distributed cloud brokerage: solution to real world service provisioning problems
Author(s):
Prashant Khanna, Sonal Jain and B. V. Babu
Abstract:
This research analyzes the performance of a distributed cloud broker in a live cloud environment, utilizing a government-owned, private, federated cloud. The paper explores the functioning of a distributed cloud broker that assists in provisioning services to geographically distributed data centers. The data centers have volunteered to federate and expose their utilization metrics to each other through the cloud broker. The experimental infrastructure utilizes the closed schema for federation. The cloud broker is responsible for matchmaking and for bundling and provisioning services from multiple private cloud providers, through volunteer and federated data centers. This has been tested under multiple load conditions. The proposed distributed cloud broker handles the load on the cloud ecosystem through a strictly controlled mechanism in a private cloud ecosystem, custom-routing all overload conditions (cloud bursting scenarios) on specific private clouds through a common interface visible to Amazon Web Services (AWS). The proposed broker mechanism shows higher efficiency and fewer cloud bursting instances compared with a pure AWS-based ecosystem. The work also analyzes real-world issues faced by organizations handling cloud brokerage frameworks in a distributed manner. The research asserts that it is possible to create customized distributed cloud brokers, and that a hybrid cloud approach using a distributed broker in federated clouds is feasible, albeit in a tightly integrated and fine-tuned cloud environment.
Full Text
Title:
SEASONS: A scalable 4-tier cloud architecture for handling heterogeneity of mobile devices in live multimedia streaming
Author(s):
Preetha Evangeline D. and Anandhakumar P.
Abstract:
Cloud-based live media streaming is becoming a prominent research area. The growing popularity of the cloud has moved multimedia technology from being traditional to highly modernized, and mobile devices have replaced the traditional way of viewing videos through PCs. According to a recent survey, there is an exponential hike in mobile video consumption: approximately 90% of consumers use a mobile device to watch online videos, compared with 61% in 2012. Dealing with device heterogeneity is one of the biggest challenges for live media streaming, as mobile devices come with various resolutions, operating systems, and audio and video codecs, and it is hard to adapt live streaming content to the specification of the device on the fly. This paper proposes SEASONS, a novel 4-tier architecture that handles device heterogeneity and concentrates on providing an efficient and scalable media dissemination system. The proposed system takes 9 seconds on average for transcoding on the fly and 14 seconds on average for end-to-end delivery of videos when heterogeneous devices are considered.
Full Text
Title:
Effect of big data characteristics on security - leveraging existing security mechanisms for protection
Author(s):
K. V. S. N. Rama Rao, M. Pranava and A. Mounika
Abstract:
Big Data is the surge of data caused by growing technology and the increase in online computing. The characteristics that define Big Data are Volume, Velocity and Variety. The inherent nature of these characteristics introduces several vulnerabilities and threats to the data, and these security concerns in Big Data must be addressed. The security concerns of traditional systems have been addressed by several strong security mechanisms, all of which have proved efficient and well-functioning. In this paper, we discuss the security issues that arise from Big Data characteristics such as Volume and Variety. Further, we focus on leveraging existing security mechanisms to overcome the effects of these characteristics.
Full Text
Title:
Radial basis function neural network for software engineering measures - A survey
Author(s):
Umamaheswari E. and D. K. Ghosh
Abstract:
In software quality, software reliability is an essential part: the capability of the software to perform its functions under various conditions. Nowadays, software measurements depend entirely on techniques such as fuzzy logic, neural networks and genetic algorithms. This paper reviews the application of SVM (Support Vector Machine) and RBFN (Radial Basis Function Network) to software measurement problems, in order to increase correctness as well as performance. RBF and SVM have a close relationship, and both are used in many applications such as face verification, optical character recognition, text categorization and object detection. The results examine the performance of both RBFN and the SVM with a Gaussian radial basis kernel function. This paper also compares RBFN and SVM with respect to the MRE parameter.
Full Text
Title:
Semantic retrieval of spatial objects on location based services for everyday essentials
Author(s):
R. Jeberson Retna Raj and T. Sasipraba
Abstract:
The omnipresence of the internet and advances in technology help users locate and access various socio-physical services. A Geospatial Information System (GIS) integrates GPS data and location information to provide spatial objects to the user. A location based service for everyday essentials is an information system that can provide the desired services to the user in day-to-day life. In a city like Chennai, one may wander around looking for services like hospitals, insurance, community certificates, licenses for running shops, educational institutions and other government services. This is a tiresome process, as no system is available to fulfill these needs. Therefore, the need of the hour is a system that can provide the details of the day-to-day needs of a user. The system covers 600 sq. km of Chennai city, and a large amount of data was collected for the implementation.
Full Text
Title:
Towards greater customer experience: role of network parameters on key business drivers
Author(s):
Joshi Sujata, Bhatia Sanjay, Raikar Kiran and Athnikar Rohan
Abstract:
The purpose of the present study is to examine the importance of network experience for customer experience and customer behavioral intentions for cellular service providers. This research paper examines customers' feedback on experience across various stages of the customer lifecycle and interactions with their telecom operator, and attempts to identify the determinants of network experience and their impact on the customer behavioral intentions of churn, advocacy and purchasing more. Primary research was conducted, surveying more than 5000 respondents spread over 36 centres all over the country. A questionnaire was used as the primary research instrument, along with personal interviews. This paper establishes that network experience has the highest impact among the six determinants of customer experience for cellular service providers. It also establishes the relation between network experience parameters and customer behavioral intentions (churn, advocacy and intention to purchase more) with statistical backing from EFA and logistic regression tests. This is an initial paper identifying the determinants of network experience and customer experience in the telecom industry. With the growth of data usage, further research is required to drill down into data experience parameters as well, to establish the overall impact of the network on customer experience and customer behavioral intentions. This paper helps to establish the tangible and intangible parameters of network experience and customer experience, which in turn helps in understanding the impact on customer behavioral intentions (churn, advocacy, purchase more, complaints). Cellular service providers can use this relation between network experience and customer behavior to strategize their investments and customer offers. The paper identifies the determinants of network experience and their direct impact on customer experience through a measurement yardstick. The method adopted incorporates various determinants across the customer lifecycle which are sufficient to define customer experience holistically. The paper also establishes the relation between network experience parameters and the customer behavioral intentions of churn, advocacy and purchasing more.
Full Text
Title:
Evaluating metrics at class and method level for Java programs using knowledge based systems
Author(s):
Umamaheswari E., N. Bhalaji and D. K. Ghosh
Abstract:
Software metrics are considered among the most important tools in software process management; a metric is a measure of a property of a specific piece of software. Metrics also serve as a resource to anticipate and avoid problems. Since only a few measurement tools are available, the need for a metrics tool for testing software is increasing. Although many metrics have been proposed by researchers, they are used in isolation or ignored because they receive little attention. Therefore, an open source tool called "JAM (Java Metrics)" is to be developed to calculate various metrics for Java code and display them in graphical form. This learning tool allows software engineers to measure their code and improve their software quality. It calculates metrics at the class level and the method level, and also provides basic information about the metrics calculated.
Full Text
Title:
Two-factor authentication for secured login in support of effective information preservation and network security
Author(s):
S. Vaithyasubramanian, A. Christy and D. Saravanan
Abstract:
In the present digital age, with remarkable development in the computing sector, single-factor authentication, e.g. passwords, is no longer considered secure on the World Wide Web, and securing systems and remote access has never been more difficult. Simple, obvious and easy-to-guess passwords, such as names and ages, are effortlessly discovered by automated password-gathering programs. The security and privacy threats posed by malware are constantly growing in both quantity and quality, and expanded access to information increases vulnerability to hacking, password cracking and online fraud. In this context, conventional login/password authentication is considered inadequately secure for many security-critical applications such as logging in to mail accounts, social networks, gadgets, financial accounts, official secured networks and commercial websites. Requiring more than one independent factor increases the difficulty of providing false credentials, and two-factor authentication guarantees a higher protection level by extending the single authentication factor. This paper focuses on the implementation of a two-factor authentication method using both a user-friendly traditional alphanumeric password and a graphical password as the gateway for authentication. We describe the design and implementation of the two-factor authentication system. Affording an additional password thus adds an extra layer of security.
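The two-factor check described above, an alphanumeric password plus a graphical password, can be sketched as requiring both independent factors to verify. The stored credentials, the 4x4 click-grid encoding and the helper names are illustrative assumptions, not the paper's design:

```python
import hashlib
import hmac

def h(s):
    """SHA-256 hex digest of a string (illustrative credential hashing)."""
    return hashlib.sha256(s.encode()).hexdigest()

# Stored credentials (illustrative): hash of the alphanumeric password and
# hash of the user's graphical click sequence on a 4x4 grid.
STORED = {"pwd": h("S3cret!"), "graphical": h("0,3|2,1|3,3")}

def login(pwd, clicks):
    """Both independent factors must match for the login to succeed."""
    ok_pwd = hmac.compare_digest(h(pwd), STORED["pwd"])
    ok_gfx = hmac.compare_digest(
        h("|".join(f"{r},{c}" for r, c in clicks)), STORED["graphical"])
    return ok_pwd and ok_gfx
```

The point of the second factor is visible in the failure cases: knowing the password alone, or the click sequence alone, is not enough to authenticate.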
Full Text
Title:
Open platform cloud infrastructure model with enhanced virtualization
Author(s):
Abilash Rajasekaran, Sountharrajan Sehar, Elangovan Manickasundaram, Ezhilan Elangovan and Gowtham Kumar Thangaraj
Abstract:
The evolution of the Internet has led to growth in cloud computing technology. Cost-efficient personal computers with vast resources have been made possible only by the evolution of cloud computing. The recent trend in cloud computing emerges from the advancement of computing resources and hardware on a single platform, provided to the user on demand. Cloud computing has led to major growth in the field of computer science and has had a major impact on business, with the widespread adoption of virtualization and Service Oriented Architecture (SOA). In this paper, Infrastructure as a Service (IaaS) is provided on demand to the user with an enhanced virtualization technique to utilize third-party resources. This may eventually lead third-party users to pay only additional charges to their cloud vendor. A bare-metal hypervisor maintains the third-party resources through an Application Program Interface (API) during failures and handles data replication according to the Service Level Agreement (SLA). This cloud setup would urge business enterprises to extend their resources across the network and attract more users to the cloud. Implementation of this setup brings openness to the world of cloud computing.
Full Text
Title:
Enhanced quality of service in visualizing the malaria data using cloud computing
Author(s):
A. Vijayalakshmi and B. Rajesh Kanna
Abstract:
Malaria is a major public health problem that has spread across various tropical countries. Malaria forecasting has been conducted in many tropical countries and typically uses data on environmental and human factors. The incidence of malaria varies over time and across geographical regions. The goal of the proposed work is to develop a cloud computing based application that gives insight into the root causes of malaria. It provides enhanced quality of service features such as data filtering, statistical data modelling, data views and data relations, which help to forecast the environmental risk of malaria. Such forecasting is essential because malaria prediction data are independent of each other, static in nature, hierarchical and complex. The proposed malaria forecasting application has been programmed using an 'R' script and integrated with a built-in IBM Bluemix container. Finally, it has been deployed as a platform as a service on the IBM cloud. The developed application assists malaria predictors with a seamless view of dense graphical informatics.
Full Text
Title:
Banking on big data: A case study
Author(s):
Arti Chandani, Mita Mehta, B. Neeraja and Om Prakash
Abstract:
How big is big data? Bigger than what traditional applications can handle, which gives a feel for the quantum of data being talked about. Technology changes every day, and everybody is trying to cope with the changes in the macro technological environment. Banks generate a huge amount of data in their ordinary course of business, data which, barely a decade ago, was simply dumped in the books. Today the same data is processed, analyzed and used for the benefit of banks and customers. The data so generated can be used to customize services to the customer, to understand customer needs, and to design the most appealing marketing strategy, to name a few uses. Big data, at petabyte scale, can be used efficiently to analyze the financial behavior of a customer. A customer who has defaulted on a loan may relocate, making it difficult for the bank to trace him, but he might still be active on social media, which can be used to trace the customer. This is one notable benefit that big data has to offer. All said and done, there are challenges in implementing big data technology for any bank. The biggest constraint comes from the finance front, where any new technology requires a huge outlay of cash in the form of infrastructure, training and development costs, and data warehouse and storage costs. The researchers have taken a hypothetical, yet practical, example to demonstrate the possible benefits of adopting big data in a bank by calculating the net present value of the project. The researchers have used multiple rates instead of a single rate, so that users can take the net present value at the rate applicable to them. The internal rate of return has also been calculated to understand the return the project generates by itself; users can compare this with their own internal rate of return to judge the viability of the project.
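The NPV and IRR calculations the case study rests on are standard and easy to sketch. The cashflow figures below are illustrative, not the paper's hypothetical example; NPV discounts each cashflow to time zero, and IRR is the rate at which NPV crosses zero, found here by bisection:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time 0 (the outlay)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-9):
    """Internal rate of return by bisection.
    Assumes NPV changes sign on [lo, hi], true for a conventional
    project (initial outlay followed by positive inflows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid  # rate too low, NPV still positive
        else:
            hi = mid
    return (lo + hi) / 2
```

Evaluating `npv` at several discount rates mirrors the paper's use of multiple rates: each user reads off the NPV at the rate applicable to them, and compares the IRR with their own hurdle rate.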
Full Text
Title:
Electronic medical records using NFC technology
Author(s):
A. Devendran, R. Jayam and P. Sindhuja
Abstract:
Electronic Medical Records (EMR) are replacing paper medical records, which is now considered a key initiative in the healthcare industry. Paper medical records are easily lost or damaged and can disappear during emergencies; they are often incomplete, with incorrect or missing information. Doctors therefore end up duplicating tests, making uninformed decisions and delaying care. But are electronic charts really any better? Only if they are available to providers at the right time.
Full Text
Title:
Analysis of the effectiveness of image compression for cloud storage for various image formats
Author(s):
Dasaradha Ramaiah K. and T. Venugopal
Abstract:
Digital image compression technology is of special interest for the fast transmission and real-time processing of digital images. Although image compression has been a research trend for a long time, and there are multiple sophisticated approaches that significantly improve the compression rate and reduce computation time, a basic comparison from the perspective of storage in a cloud environment is still required. This work analyzes the background of image compression, including when image compression is needed, the categories of techniques and their properties. Compression uses algorithms that store an exact representation or an approximation of the original image in a smaller number of bytes, which can be expanded back to the uncompressed form with a corresponding decompression algorithm. This work also analyzes the performance of multiple image formats under multiple compression algorithms across multiple cloud storage services.
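The compress/expand round trip described above, fewer stored bytes that decompress back to the original, can be demonstrated with a generic lossless codec. This uses `zlib` rather than any image-specific format from the paper, and the repetitive payload standing in for raw image bytes is an illustrative assumption:

```python
import zlib

def ratio(data, level):
    """Compression ratio achieved by zlib at the given effort level (1-9)."""
    return len(data) / len(zlib.compress(data, level))

# Illustrative payload standing in for raw image bytes: a repetitive
# pattern compresses well; noisy data would not.
payload = b"\x10\x20\x30\x40" * 4096

for level in (1, 6, 9):
    print(f"level {level}: ratio {ratio(payload, level):.1f}x")
```

Because the codec is lossless, `zlib.decompress(zlib.compress(payload))` returns the payload exactly, which is the "expanded back to its uncompressed form" property the abstract mentions; comparing ratios across levels is a miniature of the paper's format-by-algorithm comparison.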
Full Text
Title:
Heterogeneous information management using ontology mapping
Author(s):
Kaladevi Ramar and T. T. Mirnalinee
Abstract:
The growth of web and information technologies has made available a large number of independently created and managed information systems. These systems include similar information from disparate sources, causing information heterogeneity. To achieve interoperability between heterogeneous information systems, and a unified integration of those systems, the heterogeneities between them need to be reduced. Information heterogeneity mostly occurs at three levels: syntactic, structural and semantic. The semantic heterogeneity issue has not yet been completely addressed. In this research, syntactic, structural, data and semantic heterogeneities between information systems are considered, and a novel ontology mapping technique is developed to resolve semantic heterogeneity, achieving semantic interoperability between ontologies. Background knowledge is used as the reference ontology in this work. The Ontology Mapping For Information Management (OMFIM) algorithm is evaluated with the OAEI (Ontology Alignment Evaluation Initiative) benchmark dataset, and its performance is compared against the S-Match algorithm. Results show that the proposed method outperforms S-Match in resolving semantic heterogeneity and is also best suited to systems with insufficient lexical overlap and poor structural correspondence.
Full Text
Title:
Cloud based search engine
Author(s):
Nithya G., Engels M. S., Gayathri S. and Ganesh Kumar D.
Abstract:
With the advancement of science and technology, the problem of managing and maintaining expensive computing resources has become more complicated. To overcome this burden, the recent trend is to make effective use of cloud computing, which supports resource sharing through many services. The goal is to aggregate idle network capacity and preserve resources such as CPU cycles and storage space to facilitate effective utilization and computing. For this, we need an efficient method for identifying services based on the results from a cloud based search engine. The assignment and selection of a cloud service should facilitate efficient problem solving and promote optimal use of resources. Such a solution must be applicable to a large array of information processing units. This project proposes to address this problem using a multi-agent brokering approach for the identification of services from the results of a cloud based search engine in a cloud environment. The multi-agent approach ensures that agents can specialize in identifying the services needed to process the requests. It also orders the search results by various criteria such as CPU speed, memory and storage. To achieve this, proper representation of provider capabilities and ontology relationships is essential.
Full Text
Title:
Analysis of "air-moving on schedule" big data based on CRISP-DM methodology
Author(s):
Man-Seok Ha, Jung-Il Namgung and Soo-Hyun Park
Abstract:
Punctuality of air traffic is one of the most important criteria when choosing an air service. In this paper, we develop and implement an experimental model based on the CRISP-DM (Cross Industry Standard Process for Data Mining) methodology applied to big data mining. We chose the ASA (American Statistical Association) air traffic data for the experiment and analyzed it using the Hadoop Distributed File System, Hive and RStudio. Using this analysis, optimal airports can be proposed with respect to arrival delay. Among the ways to leverage the results, we obtained the best results when applying an ANN (Artificial Neural Network) model.
Full Text
Title:
Adaptive modulation with multi-level security using sparse matrices
Author(s):
Navaneethan C. and K. Helen Prabha
Abstract:
In wireless sensor network systems, a wide range of techniques has been developed for securing data before transferring it to the intended destination. Cryptography and modulation are distinct techniques that are widely used to protect information from attackers. In this work we propose "Adaptive Modulation with Multi-Level Security for Networks", in which the plain text is encrypted using the newly proposed Encryption and Decryption Based on Sparse Matrices algorithm. This algorithm performs multi-stage encryption and decryption. By deploying the encryption algorithm at the sender side, the message is encrypted before the modulation and demodulation algorithms operate on it. The encrypted message is then modulated through a modulator. At the receiver side, demodulation is performed, followed by decryption. This approach results in the secure adoption of modulation with effective cryptography over unsecured channels.
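The abstract does not spell out the sparse-matrix algorithm, so the following is only a Hill-cipher-style sketch of matrix-based encryption: an invertible key matrix modulo 256 scrambles byte pairs, and its modular inverse recovers them. The key matrix is illustrative and the scheme is not secure; it only shows the encrypt-before-modulate / demodulate-then-decrypt pipeline's cipher stage:

```python
M    = [[1, 3], [2, 7]]      # key matrix, det = 1, invertible mod 256
MINV = [[7, 253], [254, 1]]  # inverse of M modulo 256

def apply(mat, pair):
    """Multiply a 2x2 matrix by a byte pair, modulo 256."""
    a, b = pair
    return ((mat[0][0] * a + mat[0][1] * b) % 256,
            (mat[1][0] * a + mat[1][1] * b) % 256)

def encrypt(data):
    if len(data) % 2:
        data += b"\x00"  # pad to an even length
    out = bytearray()
    for i in range(0, len(data), 2):
        out.extend(apply(M, (data[i], data[i + 1])))
    return bytes(out)

def decrypt(data):
    out = bytearray()
    for i in range(0, len(data), 2):
        out.extend(apply(MINV, (data[i], data[i + 1])))
    return bytes(out)
```

In the proposed system the ciphertext produced at the sender would be handed to the modulator, and the receiver would demodulate before running the inverse matrix stage.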
Full Text
Title:
Enhanced hybrid framework of reliability analysis for safety critical network infrastructure
Author(s):
Chandana Priyanka G. H., Aarthi R. S., Chakaravarthi S., Selvamani K. and Kannan A.
Abstract:
This work proposes an enhanced hybrid framework for reliability analysis of Safety Critical Network Infrastructure. The proposed framework enables the design and development of a web service for safety critical systems to identify component failures in the network infrastructure. The enhanced hybrid framework for multilayer reliability analysis in Safety Critical Network Infrastructure accurately identifies component failures and failure modes, making the system more reliable and reducing the error rate in the network infrastructure. Safety Critical Network Infrastructure requires a monitoring mechanism, usable in the public sector, that can detect failures as early as possible in the layers of the network. A web service was designed to act as a service provider for the critical components to the user, and the failure results are stored in a database for efficient utilization of the data.
Full Text
Title:
Big data analysis based on mathematical model: A comprehensive survey
Author(s):
Vijaylakshmi S. and Priyadarshini J.
Abstract:
Web services are increasing day by day, and the volume of data is growing exponentially. Processing large amounts of data efficiently can be a substantial problem; the current approach is to adopt parallel computing. Big data is an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process them using traditional data processing applications. The challenges comprise analysis, capture, creation, search, sharing, storage, transfer, visualization, and privacy violations. Pervasive sensors continuously collecting and storing enormous amounts of information lead to a data flood. Learning from these large volumes of data is expected to bring significant advances in science and engineering, along with improvements in quality of life. However, with such a big blessing come big challenges. Billions of Internet users and machine-to-machine connections are producing huge data growth, and utilizing big data requires transforming the information infrastructure into a more flexible, distributed, and open environment. In this paper, a survey has been prepared of the techniques available for optimization in big data using swarm intelligence. Using mathematical model based algorithms for optimization (swarm intelligence) in big data yields better performance when handling dynamic data in non-stationary and dynamic environments.
|
|
Full Text |
|
|
Title: |
uCLUST- A new
algorithm for clustering unstructured data |
Author (s): |
D. Venkatavara Prasad, Sathya Madhusudanan and
Suresh Jaganathan |
Abstract: |
Data that resides in a fixed field within a record or file is called
structured data and has a defined schema. Unstructured data refers to
information that either does not have a pre-defined data model or does
not fit well into relational tables. Clustering is important in fields
such as libraries (book ordering), insurance (identifying groups and
detecting fraud) and the WWW (document classification and clustering of
weblog data). Available clustering algorithms work only with structured
data and use medoids as the clustering parameter. Clustering big data
with them is not feasible, as big data is mostly unstructured: it is not
possible to label large collections of objects, and identifying subspace
clusters in unstructured data is difficult because of time complexity.
In this paper, we propose and design a new algorithm called uCLUST,
which identifies clusters in unstructured data, since traditional
distance functions cannot capture the pattern similarity among the
objects. The proposed algorithm is applied to six different datasets and
the results are tabulated. |
|
|
Full Text |
|
|
Title: |
Group search optimizer
algorithm for localization in wireless sensor networks |
Author (s): |
Harikrishnan R., Jawahar
Senthil Kumar V. and Sridevi Ponmalar P. |
Abstract: |
In a wireless sensor network, sensor nodes are deployed randomly
depending on the application. The sensor node location is essential for
making meaningful sense of the data gathered by the sensor network. The
intelligence of the environment is assisted by the wireless sensor
network through the location information of the sensor nodes. In this
paper a novel algorithm named the group search optimizer localization
algorithm is proposed for detecting sensor node location information.
The algorithm is based on the producer-scrounger model of animal
behavior. Location information detection is required to increase the
performance and reliability of wireless sensor networks (WSN). It also
increases the lifetime of the network by guiding the network away from
unwanted routing. |
|
|
Full Text |
|
|
Title: |
Nature inspired flower
pollen algorithm for WSN localization problem |
Author (s): |
Harikrishnan R., Jawahar
Senthil Kumar V. and Sridevi Ponmalar P. |
Abstract: |
The location of a sensor node is required for improving the efficiency
and performance of node management. An accurate localization algorithm
with better efficiency and lower computing time is required. Moreover,
the complexity of the algorithm and the memory required should be low,
and the algorithm should be fast enough for use in sensor node
self-localization. In this paper a novel nature-inspired algorithm
called the flower pollen localization algorithm is introduced for the
sensor node localization problem. Flower pollination is the reproduction
process of plant species and follows survival of the fittest. |
|
|
Full Text |
|
|
Title: |
ASE noise analysis in
cascaded EDFA-EYCDFA |
Author (s): |
S. Semmalar and S.
Malarkkan |
Abstract: |
The scope of this paper is to analyze ASE (Amplified Spontaneous
Emission) noise power using a simulation model of an EDFA (Erbium Doped
Fiber Amplifier) cascaded with an EYCDFA (erbium-ytterbium co-doped
fiber amplifier). The outputs of 4 to 16 transmitter channels are
combined by an optical multiplexer and sent to the EDFA in series with
the EYCDFA under single backward pumping at a wavelength of 980 nm. The
performance of the simulation model was analyzed in terms of gain,
forward output signal power and ASE noise, and the measured values are
tabulated. In the model, the outputs of 2 to 16 RZ transmitter channels
and 2 to 16 NRZ transmitter channels are multiplexed, and the
multiplexed signal is sent to the cascaded erbium amplifiers pumped by a
CW (continuous wave) laser source at 980 nm, followed by a filter. The
resulting model accurately represents the EDFA gain, output signal power
and ASE noise. Simulation results show that carefully choosing a fiber
length of 20 m and a pump power of 1 mW in single pumping gives an ASE
noise of 0.005 mW for the EDFA, while the EYCDFA gives zero milliwatts. |
|
|
Full Text |
|
|
Title: |
Design and analysis of
silicon diaphragm of a MEMS pressure sensor |
Author (s): |
S. Maflin Shaby |
Abstract: |
Pressure measurements in industry, biomedicine and the marine
environment are of utmost importance for better understanding process
stability and ocean processes. The influence of in-plane stresses on
silicon plates of square, rectangular and circular shape has been
investigated. The areas of the square, rectangular and circular elastic
elements are approximately equal, and the thickness is about 1 μm. It
was shown that in-plane stresses can have a great influence on plate
deflection and stress distributions, which should be taken into account
when designing piezoresistive pressure sensors. The finite element
method (FEM) is adopted to optimize the sensor parameters, such as the
membrane shape; the deflection and stress caused by the different
elastic membranes were analysed to achieve higher sensitivity, larger
full scale span and better linearity. |
|
|
Full Text |
|
|
Title: |
Implementation of message
authentication scheme with elliptic curve cryptography |
Author (s): |
G. Indumathi and T.
Kiragapushpam |
Abstract: |
Transmission of private information over public channels requires
security or data protection against unauthorized access. Elliptic Curve
Cryptography (ECC) is an efficient encryption technique that can be used
to secure private data. The high security requirement of the Restricted
Services of the Indian Regional Navigation Satellite System (IRNSS),
which transmit navigation data through a wireless channel, can be
achieved by ECC with a minimum key size. ECC is based on Elliptic Curve
Scalar Multiplication (ECSM), the process of multiplying a point on an
elliptic curve by a scalar value. The operations have been performed on
National Institute of Standards and Technology (NIST) recommended
elliptic curves over the binary field GF(2^233). The performance of the
ECC algorithm is influenced by the implementation of the elliptic curve
finite field operations; therefore, field operations play a vital role
in ECC. Among the finite field operations of squaring, multiplication
and inversion, multiplication is the most important in the cryptosystem.
The Karatsuba algorithm with polynomial multiplication is more efficient
for large numbers. The encryption algorithm, point operations and field
operations have been implemented on a Xilinx Virtex-5 FPGA board. |
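As an illustrative sketch (not the paper's GF(2^233) implementation), the divide-and-conquer idea behind Karatsuba multiplication can be shown over ordinary integers; the binary-field variant used in ECC replaces additions with carry-free XORs:

```python
def karatsuba(x, y):
    """Karatsuba multiplication: three half-size products instead of four."""
    if x < 16 or y < 16:                       # small operands: multiply directly
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    xh, xl = x >> half, x & mask               # split x into high/low halves
    yh, yl = y >> half, y & mask
    a = karatsuba(xh, yh)                      # high * high
    b = karatsuba(xl, yl)                      # low * low
    c = karatsuba(xh + xl, yh + yl) - a - b    # cross terms from one product
    return (a << (2 * half)) + (c << half) + b
```

For 233-bit operands this reduces the number of word-level multiplications from roughly n^2 to n^1.585.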
|
|
Full Text |
|
|
Title: |
A fully differential
read-decoupled 7-T SRAM cell to reduce dynamic power consumption |
Author (s): |
Soumitra Pal and Shahnawaz
Arif |
Abstract: |
To improve the performance of an SRAM cell and reduce area consumption,
researchers are scaling down the MOSFET technology node. But power
consumption has not improved below the 65-nm technology node: since
then, VDD (the supply voltage) has remained more or less constant and
dynamic power consumption improvement has almost stagnated, while
leakage current increases exponentially. Hence, the prime concern of
present-day circuit design is to reduce power consumption with minimum
device size. In this article a fully differential read-decoupled 7T SRAM
cell is proposed that consumes substantially less read and write power.
It also shows 4% (10.57×) shorter read (write) delay and a 4×/9.24%
improvement in RSNM/WSNM (read static noise margin/write static noise
margin) at 700 mV. |
|
|
Full Text |
|
|
Title: |
Ontogeny smart bulletin
board |
Author (s): |
S. Karthikeyan |
Abstract: |
At present every advertisement is going digital. Some large shopping
malls and shopping centers are using digital moving displays. In railway
stations and bus stations, ticket information, platform numbers, etc.
are shown on digital moving message displays. But in these cases, anyone
who wants to change the information has to go there, connect the display
to a PC or laptop, and then change it. If the same information is to be
displayed in the main centers of a city or at distant places, a person
has to travel there with a laptop and change the information by
connecting to the PC. In this application we implement wireless
communication to change the information displayed on a VGA monitor. We
apply this idea to a college notice board with the help of a VGA
monitor. This paper displays color images and text using an ARM7
(LPC2148). |
|
|
Full Text |
|
|
Title: |
Tone mapping and image
enhancement using recursive mean separate histogram equalization (RMSHE)
technique |
Author (s): |
J. Kanimozhi, P. Vasuki
and S. P. Shamilee |
Abstract: |
This work aims to develop a novel image enhancement technique to enhance
the contrast and tone of digital imagery. Contrast enhancement and white
balancing are used for image enhancement. Contrast enhancement is
achieved by Recursive Mean Separate Histogram Equalization (RMSHE),
while white balancing is used for tonal correction. Parameters such as
PSNR, MSE and MAE are calculated to identify the better histogram
equalization method for contrast enhancement. |
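The three quality metrics named above have standard definitions; a generic sketch for 8-bit grayscale images (not code from the paper):

```python
import math

def quality_metrics(original, enhanced):
    """MSE, MAE and PSNR between two equal-size 8-bit images,
    each given as a flat sequence of pixel values (0-255)."""
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, enhanced)) / n
    mae = sum(abs(a - b) for a, b in zip(original, enhanced)) / n
    # PSNR uses the peak value 255 for 8-bit images
    psnr = float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)
    return mse, mae, psnr
```

A higher PSNR (lower MSE/MAE) indicates the enhanced image stays closer to the reference.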
|
|
Full Text |
|
|
Title: |
Design methodology for the
field orientation control of a non-linear vector controlled sinusoidal
permanent magnet AC motor |
Author (s): |
P. Ramana, M. Surya
Kalavathi, K. Alice Mary and V. Dinesh Gowri Kumar |
Abstract: |
Nearly all of the electrical power used throughout the world is
generated by synchronous machines driven by hydro or steam turbines or
by combustion engines. Synchronous machines are generally dedicated to
high-power applications because they have a controllable power factor
and a higher efficiency than an induction motor of corresponding rating.
In recent years, control schemes using synchronous motors operating from
static power converters have become real competitors to both DC and
induction motor drives, especially in the high-power, low-speed range.
Among them, field oriented control employing vector control strategies
has become quite popular. A disadvantage of the scheme when applied to a
synchronous motor drive is that the motor always operates at a lagging
power factor. In this work a generalized design strategy is suggested
for the speed control loop of an inverter-fed synchronous motor drive,
in which its inherent flexibility to generate the same torque for
different combinations of currents is exploited. The closed loop system
for the permanent magnet AC motor is simulated using MATLAB, and
performance figures are obtained for typical cases such as unity power
factor control, torque angle control and internal angle control. |
|
|
Full Text |
|
|
Title: |
Analysis of MEL based
features for audio retrieval |
Author (s): |
R. Christopher Praveen
Kumar and S. Suguna |
Abstract: |
Nowadays electronic gadgets can store large amounts of music
information, so an efficient retrieval system is necessary to choose the
required data. The important task in an audio retrieval system is
feature extraction: in this stage, features that give relevant
information about the music have to be extracted. In this paper, various
Mel-based features, including the Mel Frequency Cepstral Coefficient
(MFCC), Delta MFCC (DMFCC), Double Delta MFCC (DDMFCC) and a hybrid
feature (MFCC+DMFCC+DDMFCC), have been analyzed for an audio retrieval
system. It was found that an audio retrieval system using the hybrid
feature provides better results than the other features. |
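Delta coefficients such as DMFCC are typically obtained from the static MFCCs by a regression over neighbouring frames; a minimal sketch of that standard regression formula (an illustration, not the paper's code):

```python
def delta(frames, N=2):
    """First-order delta coefficients for a sequence of per-frame
    feature vectors, using the standard regression over +/- N frames
    (edge frames are padded by repetition)."""
    denom = 2 * sum(n * n for n in range(1, N + 1))
    T = len(frames)
    out = []
    for t in range(T):
        vec = []
        for d in range(len(frames[0])):
            # weighted difference of trailing and leading neighbours
            num = sum(n * (frames[min(t + n, T - 1)][d] - frames[max(t - n, 0)][d])
                      for n in range(1, N + 1))
            vec.append(num / denom)
        out.append(vec)
    return out
```

Applying the same operator to the deltas yields the double-delta (DDMFCC) features.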
|
|
Full Text |
|
|
Title: |
Energy efficient wireless
classroom and bus monitoring system |
Author (s): |
Vijaya baskar V. and
Sakthivel E. |
Abstract: |
The main objective of this work is to develop a system for monitoring a
classroom and automatically updating the student strength in the room.
Student entry and exit are tracked through an IR transceiver pair. The
system also uses an RF reader to find the active staff present in the
class. Finally, the desired information is sent to the control centre,
which may be the principal's or HOD's room. The control centre and
classroom module communicate through ZigBee, and the monitored
information is stored on the centre PC. The purpose of monitoring is to
find the number of students present in the class each hour. This system
minimizes energy consumption and human intervention. It can also be used
to monitor bus arrival times on the university campus: the 'IN' time and
'OUT' time of a bus and the number of persons boarding can be monitored,
and these data can be sent to the admin location. |
|
|
Full Text |
|
|
Title: |
Development of real time
monitoring system under smart grid environment |
Author (s): |
M. Krishna Paramathma, D.
Devaraj and Malaikannan R. |
Abstract: |
Real time monitoring of a power system is essential for its continuous
and reliable operation. This paper presents a low-cost, low-power system
that can be used for quick and accurate monitoring of power system
parameters in a smart grid environment. The designed system continuously
measures, processes and displays power system parameters such as
voltage, current, phase difference, power factor and power consumption
using an ultra-low-power microcontroller. Power system parameters of
resistive and inductive loads are monitored using a PIC16F877A
microcontroller. The sampling theorem is used to calculate the phase
difference between voltage and current using zero crossing detectors.
Simulations of the voltage, current and phase difference were done using
NI Multisim software. |
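A hedged sketch of the zero-crossing idea (not the PIC firmware itself): the phase angle follows from the time gap between corresponding positive-going zero crossings of the two waveforms:

```python
def phase_difference_deg(t_zero_v, t_zero_i, frequency_hz):
    """Phase angle in degrees from the time gap between the voltage and
    current positive-going zero crossings; positive means current lags."""
    return 360.0 * frequency_hz * (t_zero_i - t_zero_v)
```

For a 50 Hz supply, for example, a 5 ms lag of the current zero crossing corresponds to a 90 degree phase angle.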
|
|
Full Text |
|
|
Title: |
DCT based partial transmit
sequence technique for PAPR reduction in OFDM transmission |
Author (s): |
R. Jayashri, S. Sujatha
and P. Dananjayan |
Abstract: |
OFDM is used as a resource-sharing multiple access technique in 4G
technology and has several advantages: high data rate, good spectral
efficiency, robustness against frequency selective fading, etc. The
major disadvantage of this system is a high peak-to-average power ratio
(PAPR), which degrades the performance of the power amplifier. To
overcome this problem, several distortion and distortion-less algorithms
have been proposed. Partial Transmit Sequence (PTS) is a distortion-less
technique which divides the input sequence into subsequences and chooses
the phase-optimized minimum-PAPR signal for transmission. To minimize
the PAPR further, signal energy compaction and reduced autocorrelation
of the input data sequences are required, which are provided by the
Discrete Cosine Transform (DCT). The proposed method combines the DCT
with the PTS technique, applying the DCT either before or after PTS.
Simulation results show that DCT before PTS gives better PAPR reduction
performance than DCT after PTS, conventional PTS and the conventional
OFDM system. |
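PAPR itself has a concrete definition that the comparison rests on; an illustrative computation over a block of complex baseband samples (the generic definition, not the paper's simulation):

```python
import math

def papr_db(samples):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) * len(powers) / sum(powers))
```

A constant-envelope block gives 0 dB; PTS searches phase rotations of the sub-blocks to push this figure down.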
|
|
Full Text |
|
|
Title: |
On the metric dimension of
silicate stars |
Author (s): |
F. Simon Raj and A. George |
Abstract: |
A minimum resolving set, or metric basis, M for a graph G(V, E) is a
smallest subset of vertices of V such that for every pair of vertices x
and y of V \ M, there exists at least one vertex m in M such that the
distance between x and m is not equal to the distance between y and m.
The number of elements of the metric basis M of G is called the metric
dimension, and the elements of a metric basis are called landmarks. The
metric dimension problem for a graph G is to find a metric basis for G.
In this paper a new silicate graph called Silicate Stars, or the Star of
Silicate Networks SSL(n), has been derived from the Star of David
Networks SD(n). The metric dimension problem has been solved for SSL(n),
the single oxide chain and the single silicate chain. Finding the metric
dimension of a general graph is an NP-complete problem. |
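For intuition, the definition can be checked directly by brute force on small graphs; a hypothetical helper (exponential time, toy examples only, unrelated to the paper's analytical results for SSL(n)):

```python
from itertools import combinations

def distances(adj, src):
    """BFS distances from src in an unweighted graph (adjacency lists)."""
    dist = {src: 0}
    queue = [src]
    for u in queue:
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def metric_dimension(adj):
    """Smallest resolving set size, found by exhaustive search."""
    verts = list(adj)
    dist = {v: distances(adj, v) for v in verts}
    for k in range(1, len(verts) + 1):
        for M in combinations(verts, k):
            # each vertex must get a unique vector of distances to M
            sigs = {tuple(dist[m][v] for m in M) for v in verts}
            if len(sigs) == len(verts):
                return k
    return len(verts)
```

A path graph has metric dimension 1 (one endpoint resolves every vertex), while a cycle needs two landmarks.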
|
|
Full Text |
|
|
Title: |
Implementation of signed
VEDIC multiplier targeted at FPGA architectures |
Author (s): |
Paldurai K. and K.
Hariharan |
Abstract: |
Signed multiplications are very expensive and are used in many Digital
Signal Processing (DSP) applications such as multiply-accumulate units
and Fast Fourier Transforms (FFT). The performance of DSP computational
blocks is often dominated by the speed at which a multiplication can be
executed, so a high speed signed multiplier is highly desirable for high
speed DSPs. This paper proposes the design and implementation of a novel
high speed signed multiplier based on Vedic mathematics. The proposed
architecture has the advantages of reduced delay and less area over a
conventional Booth radix-2 multiplier. It uses an unsigned multiplier
based on Urdhva Tiryakbhyam and a 2's complement circuit. The proposed
signed multiplier and a conventional Booth multiplier are coded in
Verilog, then synthesized and simulated using the ISE simulator. It is
implemented on the iWave Systems Unified Learning Kit Spartan-6 family
xc6slx25t-2fgg484 FPGA. The area and maximum combinational path delay of
the proposed signed multiplier and the conventional Booth multiplier are
compared. |
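The Urdhva Tiryakbhyam ("vertically and crosswise") pattern that the unsigned core implements in hardware can be mimicked in software; a decimal sketch for illustration (the FPGA design works on binary partial products):

```python
def urdhva_multiply(a_digits, b_digits, base=10):
    """Urdhva Tiryakbhyam multiplication of two equal-length digit
    lists, most significant digit first."""
    n = len(a_digits)
    a = a_digits[::-1]           # least significant digit first
    b = b_digits[::-1]
    cols = [0] * (2 * n - 1)
    for i in range(n):           # crosswise partial products per column
        for j in range(n):
            cols[i + j] += a[i] * b[j]
    result, carry = [], 0
    for c in cols:               # resolve carries column by column
        carry, digit = divmod(c + carry, base)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        result.append(digit)
    return result[::-1]
```

All column products are independent, which is why the hardware version generates them in parallel and only the carry chain is sequential.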
|
|
Full Text |
|
|
Title: |
Multi-user MIMO
transmission modes for transmit diversity and spatial multiplexing in
long term evolution |
Author (s): |
Patteti Krishna, Kalithkar
Kishan Rao and Tipparti Anil Kumar |
Abstract: |
Third Generation Partnership Project (3GPP) Long Term Evolution (LTE)
downlink multiuser Multiple Input Multiple Output (MIMO) systems are
analyzed in this paper, considering two multiuser MIMO schemes: spatial
multiplexing and transmit diversity. To achieve the high throughput
required by the downlink LTE system, Adaptive Modulation and Coding
(AMC) has to maintain a target Block Error Rate (BLER). In this paper,
we make a comprehensive study to evaluate the performance of open loop
spatial multiplexing (OLSM) and transmit diversity (TxD) in the downlink
LTE system for different transmission modes. |
|
|
Full Text |
|
|
Title: |
Performance of four phase
switched reluctance motor drive using single pulse width modulation
technique under constant turn off angle and random turn off angle |
Author (s): |
J. Uma and A. Jeevanandham |
Abstract: |
This paper describes a comparative analysis of electronic switching
control schemes to minimize the torque ripple and speed oscillation of
an 8/6 Sensorless Switched Reluctance Motor (SSRM) drive, and the
development of a fuzzy supervisory control scheme to control the speed
of the drive. Fuzzy logic is used to adjust the classical PI controller
parameters online. The electronic control schemes include single pulse
width modulation with both-level switching and random turn-off angle
generation. The switching methods and speed controller have been
developed and tested using Matlab/Simulink. To demonstrate their
effectiveness, the switching methods were implemented in a PI based
speed control system drive and its performance was evaluated. The
performance of the PI-fuzzy speed controller was compared with a
conventional PI controller. |
|
|
Full Text |
|
|
Title: |
Traffic sign detection and
recognition in driver support system for safety precaution |
Author (s): |
Y. Mary Reeja, T. Latha
and A. Mary Ansalin Shalini |
Abstract: |
In this paper, an efficient algorithm for the detection and recognition
of traffic signs is presented. The proposed system identifies candidate
regions through interest region extraction, which offers robustness to
variations in lighting conditions, together with the wave equation
(WaDe) algorithm. A HOG based Support Vector Machine (SVM) is used to
classify the traffic signs. The methodology is evaluated on videos under
changing weather conditions and poor illumination. Image preprocessing
based on red color channel enhancement improves the detection rate, and
the SVM classifier also achieves a high classification rate. |
|
|
Full Text |
|
|
Title: |
Design of CSRR embedded
metamaterial monopole antenna for WIMAX applications |
Author (s): |
Anandhi Meena B.,
Thiruvalar Selvan P., Raghavan S. and Suganthi S. |
Abstract: |
A novel design and development of a simple monopole antenna based on
composite metamaterial resonators for multiband operation is presented.
The antenna has a frequency-notched function since composite CPW
(Co-Planar Waveguide) metamaterial resonators, CSRR (closed-ring
resonator and SRR), are embedded on the planar monopole, which resonates
over multiple frequency bands. The antenna resonates at the Wi-MAX
frequencies of 3.4 to 3.5 GHz and 5.725 to 5.875 GHz with good impedance
matching and radiation performance. |
|
|
Full Text |
|
|
Title: |
Tensile properties of
reinforced plastic material composites with natural fiber and filler
material |
Author (s): |
Rakshit Agarwal, M.
Ramachandran and Stanly Jones Ratnam |
Abstract: |
The usage of natural fibers in reinforced plastic composites is a
positive approach to the development of green composites for everyday
life. In this paper we study the tensile properties of a polymer
composite reinforced with woven bidirectional bamboo natural fiber and
coconut shell powder filler in micro and nano sizes, at a 0°/90° fiber
orientation. The tensile properties were studied before and after a
water absorption test on the specimens. The water absorption test shows
the deviation in the tensile properties of the natural fiber reinforced
composites before and after water absorption. These deviations can be
reduced by various chemical treatments of the natural bamboo fiber. It
is analyzed and shown that bamboo fiber absorbs less water than other
natural fibers. The bonding between the matrix and the natural fibers is
shown in the SEM analysis report. |
|
|
Full Text |
|
|
Title: |
Study on pressure drop
characteristics of single phase turbulent flow in pipe bend for high
Reynolds number |
Author (s): |
P. Dutta and N. Nandi |
Abstract: |
The pressure drop characteristics of turbulent flow through 90 degree
pipe bends are numerically investigated by computational fluid dynamics
simulation using the k-ε RNG turbulence model with the standard wall
function. After validating the present model against existing
experimental results, a detailed study has been performed on the
pressure distribution and pressure drop characteristics over a wide
range of Reynolds numbers (Re = 1×10^5 to 10×10^5) and curvature ratios
(Rc/D = 1 to 5), expressing the pressure loss coefficient in terms of
Reynolds number and curvature ratio to provide cost effective solutions
for the design of pipe bends. A number of important results show the
distribution of pressure at different locations throughout the bend for
different Re and Rc/D. The numerical results show the dependence of the
pressure distribution and pressure loss coefficient on Reynolds number
and curvature ratio throughout the bend. |
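A pressure loss coefficient of this kind is conventionally normalised by the dynamic pressure; a one-line sketch of that definition (an assumption about the exact normalisation used in the paper):

```python
def loss_coefficient(delta_p_pa, density_kg_m3, velocity_m_s):
    """Bend loss coefficient k = dp / (0.5 * rho * v**2),
    the pressure drop normalised by the dynamic pressure."""
    return delta_p_pa / (0.5 * density_kg_m3 * velocity_m_s ** 2)
```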
|
|
Full Text |
|
|
Title: |
FE analysis of knuckle
joint pin used in tractor trailer |
Author (s): |
Dinesh Shinde and Kanak
Kalita |
Abstract: |
A tractor trailer is useful equipment in agriculture for carrying heavy
goods. To connect the trailer to the tractor flexibly, a knuckle joint
is used, which consists of two forks and a pin: one fork is attached
rigidly to the tractor and the other fork is attached to the trailer by
the pin. During acceleration of the tractor the force acting on the
joint is tensile, and during deceleration it is compressive. The force
acting on the joint is calculated using Newton's second law of motion.
When carrying heavy weights, the pin is subjected to high fluctuating
stresses. As the pin is a flexible element that can easily be replaced,
it is considered separately for the analysis, and finite element
analysis is carried out on it. |
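As background for such a stress check, the pin of a knuckle joint is conventionally treated as being in double shear; a textbook-style sketch (the numbers in the example are hypothetical, not from the paper):

```python
import math

def pin_shear_stress(force_n, pin_diameter_m):
    """Average shear stress (Pa) in a knuckle pin in double shear:
    tau = F / (2 * (pi/4) * d**2), since two cross sections carry
    the load."""
    shear_area = 2 * math.pi / 4 * pin_diameter_m ** 2
    return force_n / shear_area
```

For example, a hypothetical 10 kN pull on a 20 mm pin gives an average shear stress of about 15.9 MPa.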
|
|
Full Text |
|
|
Title: |
A numerical study of SCF
convergence using ANSYS |
Author (s): |
Mohit Thirumump, Kanak
Kalita, M. Ramachandran and Ranjan Ghadai |
Abstract: |
In this paper a metallic steel plate with an elliptical hole, having a
fixed long radius and a variable short radius, is pressure loaded. A
comparison is made between the results obtained from analytical
equations (from reference 16) for a plate with an elliptical hole and
the results obtained from FEA. To show that increasing the order of the
element is one way to improve FEA results, two different finite elements
(4-node and 8-node plane elements) were used on the model, comparing the
length of the element at the tip of the ellipse. It is also possible to
produce a more accurate FEA model by increasing the number of elements
in the mesh; to show this, the number of elements used to mesh the model
was recorded and compared for each ellipse size. |
|
|
Full Text |
|
|
Title: |
Urban
green cover assessment and site analysis in Chennai, Tamil Nadu - a
remote sensing and GIS approach |
Author (s): |
Meera Gandhi. G, Nagaraj
Thummala and Christy. A |
Abstract: |
Green space distribution plays an imperative role in urban planning,
since green spaces contribute significantly to enhancing the ecological
quality of metropolitan areas: improving air quality and urban health,
conserving biodiversity, reducing noise, etc. Removal of vegetation
cover is one of the worst effects of urbanization, so proper
distribution of green spaces in urban environments is essential for
sustainable development and healthy living. Hence, it is necessary to
identify the green space requirement quantitatively and spatially. To
achieve this goal, high resolution Cartosat-1 satellite data were used
to analyse the spatial pattern. Spatial features, both point and polygon
features, were demarcated from the imagery: individual trees, groups of
trees, bushes, built-up area (covering both residential and industrial
areas), water bodies (lakes, ponds, reservoirs, streams, rivers, etc.),
parks and temples were considered. Of the 464 sq. km area of the Chennai
municipality, tree cover occupies 72.82 sq. km, buildings 241 sq. km,
parks 9.28 sq. km, water bodies 35.73 sq. km and other areas 104.40
sq. km. Subsequently, the green spaces required to be created are
calculated with respect to the WHO standard of green space per capita
for healthy living (9.5 m2/person), and a methodology is developed to
spatially define appropriate areas to establish them. |
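The per capita comparison reduces to a simple area budget; a hedged sketch (the population figure in the example is hypothetical, not taken from the paper):

```python
def green_space_deficit_km2(population, existing_green_km2, per_capita_m2=9.5):
    """Extra green space (sq. km) needed to meet a per-capita target,
    e.g. the WHO guideline of 9.5 m2 per person."""
    required_km2 = population * per_capita_m2 / 1e6   # m2 -> km2
    return max(0.0, required_km2 - existing_green_km2)
```

For a hypothetical population of 1,000,000 with 5 sq. km of green space, the deficit works out to 4.5 sq. km.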
|
|
Full Text |
|
|
Title: |
Energy efficient dynamic
adaptive reclustering protocol for heterogeneous wireless sensor
networks |
Author (s): |
C. P. Subha and S.
Malarkkan |
Abstract: |
Wireless sensor networks are composed of a large number of sensor nodes
with limited energy resources. One critical issue in wireless sensor
networks is how to gather sensed information in an energy efficient way.
Clustering, the process of choosing a set of wireless sensor nodes to be
cluster heads for a given network, is a well-received technique for
reducing energy consumption; it can improve the scalability and lifetime
of a wireless sensor network, since data traffic generated at each
sensor node can be sent to the base station via the cluster heads. In
this paper, we introduce an adaptive clustering protocol for wireless
sensor networks called the Adaptive Decentralized Re-Clustering
Heterogeneous Protocol (ADRHP). In ADRHP, the cluster heads and next
heads are elected based on the residual energy of each node and the
average energy of each cluster, so the selection is weighted by the
remaining energy of the sensor nodes and the average energy of the
cluster. ADRHP is an adaptive clustering protocol: cluster heads rotate
over time to balance the energy dissipation of the sensor nodes. The
simulation results show that ADRHP achieves a longer lifetime and more
data message transmissions than a current artificial neural network
(ANN) based clustering protocol, Residual Energy Based Clustering
Self-organizing map (R-EBCS), in wireless sensor networks. |
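The election rule described above, weighting candidates by residual energy against the cluster average, can be sketched as follows (an illustrative reading of ADRHP, not the authors' code):

```python
def elect_heads(cluster_energy):
    """Elect a cluster head and a next head from a mapping of
    node id -> residual energy (J), favouring nodes whose energy
    is high relative to the cluster average; returns (head, next_head)."""
    avg = sum(cluster_energy.values()) / len(cluster_energy)
    ranked = sorted(cluster_energy,
                    key=lambda n: cluster_energy[n] / avg, reverse=True)
    return ranked[0], ranked[1]
```

Re-running the election each round rotates the head role toward whichever node currently has the most remaining energy.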
|
|
Full Text |
|
|
Title: |
Dynamic analysis of
rotating composite cantilever blades with piezoelectric layers |
Author (s): |
Abir Dutta, Kanak
Kalita and Dinesh Shinde |
Abstract: |
Rotating plates in the form of turbine blades or machinery parts are
often encountered in industrial engineering, and the dynamic
characteristics of these plates are useful information from a design
point of view. This paper deals with the vibration analysis of skew
composite plates with piezoelectric layers. An attempt has been made to
study the influence of skew angle and rotational velocity on the free
vibration frequencies of a cantilever composite plate with piezoelectric
layers. The commercial finite element package ANSYS is used as the
solver for the problem. The obtained results are compared with the
existing literature and good convergence of the results is seen. |
|
|
Full Text |
|
|
Title: |
A comparative analysis of different color
spaces for recognizing orange fruits on tree |
Author (s): |
R. Thendral and A. Suhasini |
Abstract: |
Segmenting ripe fruit regions in foliage is an important step in
agricultural applications such as yield measurement, robot harvesting
and fruit grading. In this paper, we present a fundamental study of the
RGB, HSV, L*a*b and YIQ color spaces, with the motivation of analyzing
which color space is most convenient for recognizing ripe fruits against
the background. The results show that the 'I' component of the YIQ color
space gives the best criterion for recognizing ripe fruits from the
other regions. |
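RGB to YIQ is a fixed linear transform; a sketch with one common set of NTSC coefficients (the exact constants vary slightly between references):

```python
def rgb_to_yiq(r, g, b):
    """Convert RGB (components in [0, 1]) to YIQ using common NTSC
    coefficients; the 'I' channel spans the orange-cyan axis that
    helps separate ripe orange fruit from green foliage."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q
```

An orange pixel yields a strongly positive 'I' while green foliage yields a negative one, which is why thresholding 'I' works as a recognition criterion.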
|
|
Full Text |
|
|
Title: |
Clustering
with shared nearest neighbor-unscented transform based
estimation |
Author (s): |
M. Ravichandran and A. Shanmugam |
Abstract: |
Subspace clustering developed from grouping cluster objects in all
subspaces of a dataset. When clustering high dimensional objects, the
accuracy and efficiency of traditional clustering algorithms are poor,
because data objects may belong to diverse clusters in different
subspaces comprised of different combinations of dimensions. To
overcome this issue, we implement a new technique termed the
Opportunistic Subspace and Estimated Clustering (OSEC) model on high
dimensional data to improve accuracy in search retrieval. Hubness is a
phenomenon in vector-space data, related to distance concentration, in
which certain data points (the hubs) lie at a small distance to
numerous other data points in high dimensional spaces. Hubness has a
damaging impact on many machine learning tasks, namely classification,
nearest neighbor search, outlier detection and clustering, and it is a
relatively unexplored problem of machine learning in high dimensional
data spaces, where it prevents automatic determination of the number
of clusters in the data. Subspace clustering provides efficient
cluster validation, but the hubness problem is not handled
effectively. To overcome the hubness problem in clustering with
subspacing, the high dimensionality of the data is addressed with
nearest neighbor machine learning methods: a Shared Nearest Neighbor
Clustering based on Unscented Transform (SNNC-UT) estimation method is
developed to overcome the hubness problem while determining the
cluster data. The core objective of SNNC is to find cluster points
such that the points within a cluster are more similar to each other
than to points in a different cluster. SNNC-UT estimates the relative
density, i.e., the probability density, in a nearest-neighbor region
to obtain a more robust definition of density, and it handles
overlapping situations based on the unscented transform by calculating
the statistical distance of a random variable that undergoes a
nonlinear transformation. The experimental performance of SNNC-UT and
k-nearest neighbor hubness in clustering is evaluated in terms of
clustering quality, distance measurement ratio, clustering time and
energy consumption. |
|
|
Full Text |
|
|
Title: |
Computation and
optimisation of electroless Ni-Cu-P coating using evolutionary
algorithms |
Author (s): |
J. De, N. Biswas, P.
Rakshit, R. S. Sen, B. Oraon and G. Majumdar |
Abstract: |
Electroless Ni-Cu-P
coating was deposited on a pure copper substrate. The purpose of the
present study is to analyze the variation in mass deposition with
respect to three parameters of the chemical bath, namely Ni-ion
concentration, Cu-ion concentration and reducing-agent concentration. A
central composite design of experiments has been used as the statistical
analysis tool. The mass deposition is treated as the response in
Student's t-test, and it has been found that the concentration of the
Ni-ion source and all the other interactions significantly influence the
mass deposition at the 0.05 level of significance. A mathematical model
has been developed using response surface methodology. The optimum
concentrations of the Ni-ion source, Cu-ion source and reducing agent
are obtained using evolutionary algorithms to maximize the mass
deposition per unit area. The coating is then deposited with the optimum
concentrations of the parameters, and the maximum mass deposition is
observed. XRD of the coatings has revealed that the coating is
amorphous in nature. |
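The optimisation step can be sketched as a minimal real-coded genetic algorithm maximizing a quadratic response-surface model over the three bath concentrations. The coefficients below are invented for illustration; they are not the fitted model from the paper.

```python
import random

random.seed(1)

def response(x):
    """Toy RSM model: mass deposition vs. (Ni, Cu, reducer) levels.
    Coefficients are hypothetical, for demonstration only."""
    ni, cu, red = x
    return (10 + 4*ni + 2*cu + 3*red
            - 1.5*ni*ni - 1.0*cu*cu - 2.0*red*red + 0.5*ni*cu)

def evolve(pop_size=40, gens=60, lo=0.0, hi=3.0):
    """Real-coded GA: truncation selection, mean crossover, Gaussian
    mutation, concentrations clamped to the bath's feasible range."""
    pop = [[random.uniform(lo, hi) for _ in range(3)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=response, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            j = random.randrange(3)                       # mutation
            child[j] = min(hi, max(lo, child[j] + random.gauss(0, 0.2)))
            children.append(child)
        pop = parents + children
    return max(pop, key=response)

best = evolve()
print([round(v, 2) for v in best], round(response(best), 2))
```

In the paper's setting the surrogate `response` would be the regression model fitted from the central composite design, so each GA evaluation is cheap and no further plating experiments are needed until the optimum is verified.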
|
|
Full Text |
|
|
Title: |
Modified K-best detection
algorithm for MIMO systems |
Author (s): |
Shirly Edward A. and
Malarvizhi S. |
Abstract: |
This paper presents a
VLSI implementation of a reduced hardware-complexity, reconfigurable
signal detector for MIMO (Multiple-Input Multiple-Output) systems. In
recent wireless communication systems, the MIMO technique is being
adopted to meet the rapidly increasing demand for multimedia services
and to achieve better QoS (Quality of Service). Maximum-likelihood (ML)
detection is the optimal hard-decision detection for MIMO systems, but
FPGA implementation of an ML detector becomes infeasible as its
complexity grows exponentially with the number of antennas. Therefore,
we propose a modified K-best detector algorithm that employs a parallel
and distributed sorting strategy and delivers constant throughput with a
near-ML detection solution. The proposed MIMO detector was implemented
targeting a Xilinx Virtex-6 device for a 2x2, 4-QAM system and achieves
a throughput of 12.23 Mbps. The resource utilization results are listed
and compared with the existing algorithm. The total on-chip power is
estimated at 1.57 W. |
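The K-best principle behind such detectors can be sketched in software (our sketch, not the paper's RTL): a breadth-first search down the antenna layers of an upper-triangular system R·s = y, keeping only the K best partial candidates per layer. Here R is a toy 2x2 real-valued matrix standing in for the QR-decomposed MIMO channel, and the per-antenna alphabet is the real component set {-1, +1} of 4-QAM.

```python
ALPHABET = (-1, +1)

def k_best_detect(R, y, K=2):
    """Best symbol vector found by K-best breadth-first tree search."""
    n = len(y)
    # Each candidate: (accumulated metric, partial symbols bottom-up).
    survivors = [(0.0, [])]
    for layer in range(n - 1, -1, -1):          # detect s[n-1] first
        expanded = []
        for metric, syms in survivors:
            for s in ALPHABET:
                # interference from already-decided (lower) layers
                interf = sum(R[layer][n - 1 - j] * syms[j]
                             for j in range(len(syms)))
                e = y[layer] - R[layer][layer] * s - interf
                expanded.append((metric + e * e, syms + [s]))
        expanded.sort(key=lambda t: t[0])       # keep the K best only
        survivors = expanded[:K]
    _, syms = survivors[0]
    return syms[::-1]                           # reorder as s[0..n-1]

R = [[2.0, 0.5],
     [0.0, 1.5]]
s_true = [1, -1]
y = [R[0][0]*s_true[0] + R[0][1]*s_true[1], R[1][1]*s_true[1]]
print(k_best_detect(R, y))  # recovers [1, -1]
```

Because every layer expands and sorts exactly K·|alphabet| candidates, the work per received vector is fixed, which is what gives hardware K-best detectors their constant throughput, in contrast to depth-first sphere decoding.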
|
|
Full Text |
|
|
Title: |
Distance based reordering
for test data compression |
Author (s): |
Muthiah M. A. and E.
Logashanmugam |
Abstract: |
The system-on-chip (SoC)
revolution poses a power-dissipation threat that challenges both the
design and the testing process. Basically, a circuit or a system
consumes more power in test mode than in normal mode. This increase in
test power is due to the increased switching activity in the device
caused by the test patterns used for testing. The extra power
consumption poses severe hazards to circuit reliability and can even
provoke instant circuit damage. Many techniques are available for test
data compression. The proposed Tanimoto-distance-based reordering
technique is a modification of the earlier proposed “Hamming Distance
based Reordering - Columnwise Bit Filling and Difference vector”
technique. |
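The two ingredients of such a scheme can be sketched as follows (an illustrative sketch with our own function names, not the paper's method): the Tanimoto distance between binary test vectors, and a greedy reordering that places each next vector closest to the previous one, reducing the bit transitions that drive test power before compression.

```python
def tanimoto_distance(a, b):
    """1 - |a AND b| / |a OR b| over bit-vectors (0 when both empty)."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return 0.0 if either == 0 else 1.0 - both / either

def reorder(vectors):
    """Greedy nearest-neighbor ordering starting from the first vector."""
    remaining = list(vectors[1:])
    ordered = [vectors[0]]
    while remaining:
        nxt = min(remaining,
                  key=lambda v: tanimoto_distance(ordered[-1], v))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

tests = [(1, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 0), (0, 0, 0, 1)]
for v in reorder(tests):
    print(v)
```

Unlike Hamming distance, the Tanimoto measure normalizes by the number of positions where either vector is set, so sparse test cubes with few specified 1-bits are not judged artificially close merely because they share many 0s.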
|
|
Full Text |
|
|
|
|
|
|