DHS-IEEE Projects: Over 15 Years of Experience

8. IEEE 2017: Attribute-Based Storage Supporting Secure Deduplication of Encrypted Data in Cloud

Abstract: Attribute-based encryption (ABE) has been widely used in cloud computing where a data provider outsources his/her encrypted data to a cloud service provider, and can share the data with users possessing specific credentials (or attributes). However, the standard ABE system does not support secure deduplication, which is crucial for eliminating duplicate copies of identical data in order to save storage space and network bandwidth. In this paper, we present an attribute-based storage system with secure deduplication in a hybrid cloud setting, where a private cloud is responsible for duplicate detection and a public cloud manages the storage. Compared with the prior data deduplication systems, our system has two advantages. Firstly, it can be used to confidentially share data with users by specifying access policies rather than sharing decryption keys. Secondly, it achieves the standard notion of semantic security for data confidentiality while existing systems only achieve it by defining a weaker security notion. In addition, we put forth a methodology to modify a ciphertext over one access policy into ciphertexts of the same plaintext but under other access policies without revealing the underlying plaintext.
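
As a rough illustration of the hybrid-cloud division of labor described above (not the paper's ABE construction), the Python sketch below shows a private cloud detecting duplicates via a deterministic tag while the public cloud only ever stores ciphertext; all class and method names here are hypothetical:

```python
# Illustrative sketch only: a tag-based duplicate check that a private cloud
# might perform before forwarding ciphertext to the public cloud. The paper
# builds this on attribute-based encryption; names here are hypothetical.
import hashlib

class PublicCloud:
    """Manages bulk storage; only ever sees ciphertext."""
    def __init__(self):
        self.blobs = {}

    def store(self, ciphertext: bytes) -> str:
        obj_id = f"obj-{len(self.blobs)}"
        self.blobs[obj_id] = ciphertext
        return obj_id

class PrivateCloud:
    """Detects duplicates by comparing deterministic tags of the plaintext."""
    def __init__(self):
        self.tag_index = {}   # tag -> object id held by the public cloud

    def tag(self, plaintext: bytes) -> str:
        # A deterministic tag derived from the data enables duplicate
        # detection without revealing the plaintext to the public cloud.
        return hashlib.sha256(plaintext).hexdigest()

    def deduplicate(self, plaintext: bytes, ciphertext: bytes, public_cloud) -> str:
        t = self.tag(plaintext)
        if t in self.tag_index:                    # duplicate: reuse stored copy
            return self.tag_index[t]
        obj_id = public_cloud.store(ciphertext)    # new data: store ciphertext
        self.tag_index[t] = obj_id
        return obj_id

pub, priv = PublicCloud(), PrivateCloud()
a = priv.deduplicate(b"report", b"<ciphertext-1>", pub)
b = priv.deduplicate(b"report", b"<ciphertext-2>", pub)
assert a == b and len(pub.blobs) == 1   # identical data stored only once
```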

 

9. IEEE 2017: Secure Big Data Storage and Sharing Scheme for Cloud Tenants

Abstract: The Cloud is increasingly being used to store and process big data for its tenants, yet classical security mechanisms based on encryption are neither sufficiently efficient nor well suited to protecting big data in the Cloud. In this paper, we present an alternative approach that divides big data into sequenced parts and stores them among multiple Cloud storage service providers. Instead of protecting the big data itself, the proposed scheme protects the mapping of the various data elements to each provider using a trapdoor function. Analysis, comparison and simulation show that the proposed scheme is efficient and secure for the big data of Cloud tenants.
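
To make the split-and-scatter idea concrete, here is a minimal Python sketch that divides data into sequenced parts and spreads them across providers. A keyed HMAC stands in for the paper's trapdoor function, so this is an assumption-laden illustration rather than the actual scheme:

```python
# Minimal sketch: split data into sequenced parts and scatter them across
# providers. A keyed HMAC is used here as a stand-in for the paper's trapdoor
# function; the real construction differs.
import hashlib
import hmac

def scatter(data: bytes, providers: list, key: bytes, part_size: int = 4):
    """Split data into sequenced parts and spread them across providers.
    Only the keyed mapping (kept by the tenant) allows ordered reassembly."""
    parts = [data[i:i + part_size] for i in range(0, len(data), part_size)]
    mapping = []
    for seq, part in enumerate(parts):
        # Keyed pseudorandom provider choice: unguessable without the key.
        digest = hmac.new(key, str(seq).encode(), hashlib.sha256).digest()
        p = digest[0] % len(providers)
        providers[p].append(part)
        mapping.append((seq, p, len(providers[p]) - 1))
    return mapping

def gather(providers, mapping):
    """Reassemble the original data from the secret mapping."""
    return b"".join(providers[p][idx] for _, p, idx in sorted(mapping))

providers = [[], [], []]                      # three cloud storage services
m = scatter(b"tenant big data payload", providers, key=b"secret")
assert gather(providers, m) == b"tenant big data payload"
```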

 

10. IEEE 2016: On Traffic-Aware Partition and Aggregation in MapReduce for Big Data Applications

Abstract: The MapReduce programming model simplifies large-scale data processing on commodity clusters by exploiting parallel map tasks and reduce tasks. Although many efforts have been made to improve the performance of MapReduce jobs, they ignore the network traffic generated in the shuffle phase, which plays a critical role in performance enhancement. Traditionally, a hash function is used to partition intermediate data among reduce tasks, which, however, is not traffic-efficient because network topology and the data size associated with each key are not taken into consideration. In this paper, we study how to reduce the network traffic cost of a MapReduce job by designing a novel intermediate data partition scheme. Furthermore, we jointly consider the aggregator placement problem, where each aggregator can reduce merged traffic from multiple map tasks. A decomposition-based distributed algorithm is proposed to deal with the large-scale optimization problem for big data applications, and an online algorithm is also designed to adjust data partition and aggregation in a dynamic manner. Finally, extensive simulation results demonstrate that our proposals can significantly reduce network traffic cost in both offline and online cases.
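
The motivation is easy to see in code. The toy Python sketch below contrasts blind hash partitioning with a greedy size-aware assignment of keys to reducers; it deliberately omits the network topology modeling and aggregator placement that the paper's scheme handles:

```python
# Toy contrast between hash partitioning and a size-aware greedy partition.
# Illustrates the motivation only; the paper's scheme also models network
# topology and aggregator placement, which are omitted here.
import heapq

def hash_partition(key_sizes: dict, n_reducers: int) -> dict:
    """Conventional scheme: reducer = hash(key) mod n, blind to data volume."""
    return {k: hash(k) % n_reducers for k in key_sizes}

def size_aware_partition(key_sizes: dict, n_reducers: int) -> dict:
    """Greedy: give each key (largest first) to the currently lightest
    reducer, balancing the shuffle traffic arriving at each reduce task."""
    loads = [(0, r) for r in range(n_reducers)]  # (bytes assigned, reducer id)
    heapq.heapify(loads)
    assignment = {}
    for key, size in sorted(key_sizes.items(), key=lambda kv: -kv[1]):
        load, r = heapq.heappop(loads)
        assignment[key] = r
        heapq.heappush(loads, (load + size, r))
    return assignment

sizes = {"a": 900, "b": 500, "c": 450, "d": 80, "e": 70}
print(size_aware_partition(sizes, 2))
# {'a': 0, 'b': 1, 'c': 1, 'd': 0, 'e': 1}: 980 vs. 1020 bytes per reducer
```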

11. IEEE 2016: The SP Theory of Intelligence: Distinctive Features and Advantages

Abstract: This paper aims to highlight distinctive features of the SP theory of intelligence, realized in the SP computer model, and its apparent advantages compared with some AI-related alternatives. Perhaps most importantly, the theory simplifies and integrates observations and concepts in AI-related areas, and has the potential to simplify and integrate structures and processes in computing systems. Unlike most other AI-related theories, the SP theory is itself a theory of computing, which can be the basis for new architectures for computers. Fundamental in the theory is information compression via the matching and unification of patterns and, more specifically, via a concept of multiple alignment. The theory promotes transparency in the representation and processing of knowledge, and unsupervised learning of natural structures via information compression. It provides an interpretation of aspects of mathematics and of phenomena in human perception and cognition. Concepts in the theory may be realized in terms of neurons and their interconnections (SP-neural). These features and advantages of the SP system are discussed in relation to AI-related alternatives: the concept of minimum length encoding and related concepts; how computational and energy efficiency in computing may be achieved; deep learning in neural networks; unified theories of cognition and related research; universal search; Bayesian networks and some other models for AI; IBM's Watson; solving problems associated with big data and with the development of intelligence in autonomous robots; pattern recognition and vision; the learning and processing of natural language; exact and inexact forms of reasoning; the representation and processing of diverse forms of knowledge; and software engineering. In conclusion, the SP system can provide a firm foundation for the long-term development of AI and related areas and, at the same time, may deliver useful results on relatively short timescales.
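
For readers unfamiliar with the central idea, the toy Python sketch below conveys only the compression intuition behind "matching and unification of patterns": recurring patterns are matched and unified into single stored rules. The SP model's multiple alignment is far richer than this pair-merging toy:

```python
# Toy illustration of information compression via the matching and
# unification of patterns: repeated patterns are unified into one stored
# rule plus references. This is only the compression intuition, not the
# SP model's multiple alignment.
def unify(symbols: list):
    """Match recurring bigrams and unify each into a single named rule."""
    rules, out = {}, list(symbols)
    changed = True
    while changed:
        changed = False
        counts = {}
        for a, b in zip(out, out[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
        pair = max(counts, key=counts.get, default=None)
        if pair and counts[pair] > 1:             # a pattern worth unifying
            name = f"R{len(rules)}"
            rules[name] = pair
            i, merged = 0, []
            while i < len(out):
                if i + 1 < len(out) and (out[i], out[i + 1]) == pair:
                    merged.append(name); i += 2   # replace match with reference
                else:
                    merged.append(out[i]); i += 1
            out, changed = merged, True
    return out, rules

print(unify(list("abcabcabd")))
# shorter sequence plus a small grammar of unified patterns
```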

 

12. IEEE 2016: A Parallel Patient Treatment Time Prediction Algorithm and Its Applications in Hospital Queuing-Recommendation in a Big Data Environment

Abstract: Effective patient queue management to minimize patient wait delays and patient overcrowding is one of the major challenges faced by hospitals. Unnecessary and annoying waits for long periods result in substantial human resource and time wastage and increase the frustration endured by patients. For each patient in the queue, the total treatment time of all the patients before him is the time that he must wait. It would be convenient and preferable if the patients could receive the most efficient treatment plan and know the predicted waiting time through a mobile application that updates in real time. Therefore, we propose a Patient Treatment Time Prediction (PTTP) algorithm to predict the waiting time for each treatment task for a patient. We use realistic patient data from various hospitals to obtain a patient treatment time model for each task. Based on this large-scale, realistic dataset, the treatment time for each patient in the current queue of each task is predicted. Based on the predicted waiting time, a Hospital Queuing-Recommendation (HQR) system is developed. HQR calculates and predicts an efficient and convenient treatment plan recommended for the patient. Because of the large-scale, realistic dataset and the requirement for real-time response, the PTTP algorithm and HQR system mandate efficiency and low-latency response. We use an Apache Spark-based cloud implementation at the National Supercomputing Center in Changsha to achieve the aforementioned goals. Extensive experimentation and simulation results demonstrate the effectiveness and applicability of our proposed model to recommend an effective treatment plan for patients to minimize their wait times in hospitals.
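
A minimal sketch of the queuing logic may help. In the Python toy below, a per-task mean stands in for the paper's trained treatment-time model, and the numbers are invented placeholders:

```python
# Minimal sketch of the queuing-recommendation idea. A per-task mean stands
# in for the paper's trained treatment-time model, so this is illustration
# only; all figures are invented placeholders.
from statistics import mean

historical = {                 # past treatment durations in minutes (made up)
    "blood_test": [6, 8, 7, 9],
    "ct_scan":    [15, 18, 16],
    "ultrasound": [12, 10, 11],
}
predicted = {task: mean(times) for task, times in historical.items()}

queues = {"blood_test": 12, "ct_scan": 3, "ultrasound": 6}  # patients waiting

def predicted_wait(task: str) -> float:
    """Wait = predicted treatment time of every patient ahead in the queue."""
    return queues[task] * predicted[task]

def recommend(pending_tasks: list) -> list:
    """Order a patient's remaining tasks so the shortest waits come first."""
    return sorted(pending_tasks, key=predicted_wait)

print({t: predicted_wait(t) for t in queues})
print(recommend(["blood_test", "ct_scan", "ultrasound"]))
# ['ct_scan', 'ultrasound', 'blood_test']
```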

 

13. IEEE 2016: Protection of Big Data Privacy

Abstract: In recent years, big data have become a hot research topic. The increasing amount of big data also increases the chance of breaching the privacy of individuals. Since big data require high computational power and large storage, distributed systems are used. As multiple parties are involved in these systems, the risk of privacy violation is increased. There have been a number of privacy-preserving mechanisms developed for privacy protection at different stages (e.g., data generation, data storage, and data processing) of a big data life cycle. The goal of this paper is to provide a comprehensive overview of the privacy preservation mechanisms in big data and present the challenges for existing mechanisms. In particular, in this paper, we illustrate the infrastructure of big data and the state-of-the-art privacy-preserving mechanisms in each stage of the big data life cycle. Furthermore, we discuss the challenges and future research directions related to privacy preservation in big data.

 

14. IEEE 2016: Sentiment Analysis of Top Colleges in India Using Twitter Data

Abstract: In today’s world, opinions and reviews accessible to us are one of the most critical factors in formulating our views and influencing the success of a brand, product or service. With the advent and growth of social media in the world, stakeholders often take to expressing their opinions on popular social media, namely Twitter. While Twitter data is extremely informative, it presents a challenge for analysis because of its humongous and disorganized nature. This paper is a thorough effort to dive into the novel domain of performing sentiment analysis of people’s opinions regarding top colleges in India. Besides taking additional preprocessing measures like the expansion of net lingo and removal of duplicate tweets, a probabilistic model based on Bayes’ theorem was used for spelling correction, which is overlooked in other research studies. This paper also highlights a comparison between the results obtained by exploiting the following machine learning algorithms: Naïve Bayes and Support Vector Machine and an Artificial Neural Network model: Multilayer Perceptron. Furthermore, a contrast has been presented between four different kernels of SVM: RBF, linear, polynomial and sigmoid.
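
As a sketch of the kind of model comparison described, the following Python snippet pits Naïve Bayes, a multilayer perceptron, and the four SVM kernels against one another. scikit-learn is an assumed toolkit (the paper does not name one) and the training tweets are invented placeholders:

```python
# Sketch of the model comparison described above, assuming scikit-learn.
# The example tweets are placeholders; real experiments would use a
# labeled Twitter corpus with the paper's preprocessing steps.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

train_texts = ["great campus and faculty", "worst placement cell ever",
               "amazing labs love it", "terrible hostel food"]
train_labels = ["pos", "neg", "pos", "neg"]

models = {
    "NaiveBayes": MultinomialNB(),
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500),
    # The four SVM kernels contrasted in the paper:
    **{f"SVM-{k}": SVC(kernel=k) for k in ["rbf", "linear", "poly", "sigmoid"]},
}

for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(train_texts, train_labels)
    print(name, pipe.predict(["the faculty is great"]))
```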

 

15. IEEE 2016: FiDoop: Parallel Mining of Frequent Itemsets Using MapReduce

Abstract: Existing parallel mining algorithms for frequent itemsets lack a mechanism that enables automatic parallelization, load balancing, data distribution, and fault tolerance on large clusters. As a solution to this problem, we design a parallel frequent itemsets mining algorithm called FiDoop using the MapReduce programming model. To achieve compressed storage and avoid building conditional pattern bases, FiDoop incorporates the frequent items ultrametric tree rather than conventional FP-trees. In FiDoop, three MapReduce jobs are implemented to complete the mining task. In the crucial third MapReduce job, the mappers independently decompose itemsets, the reducers perform combination operations by constructing small ultrametric trees, and the actual mining of these trees is performed separately. We implement FiDoop on our in-house Hadoop cluster. We show that FiDoop on the cluster is sensitive to data distribution and dimensions, because itemsets of different lengths have different decomposition and construction costs. To improve FiDoop's performance, we develop a workload balance metric to measure load balance across the cluster's computing nodes. We develop FiDoop-HD, an extension of FiDoop, to speed up mining performance for high-dimensional data analysis. Extensive experiments using real-world celestial spectral data demonstrate that our proposed solution is efficient and scalable.
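
To illustrate the MapReduce style of the mining jobs (not FiDoop's ultrametric-tree decomposition itself), here is a plain-Python skeleton of itemset counting with a mapper and a reducer; the transactions are invented placeholders:

```python
# Plain-Python skeleton of MapReduce-style itemset counting, to illustrate
# the style of FiDoop's jobs. FiDoop's actual third job decomposes itemsets
# and builds frequent-items ultrametric trees, which this sketch omits.
from collections import Counter
from itertools import combinations

transactions = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread", "eggs"}]

def mapper(transaction: set, k: int):
    """Emit (itemset, 1) for every k-itemset found in one transaction."""
    for itemset in combinations(sorted(transaction), k):
        yield itemset, 1

def reducer(pairs, min_support: int):
    """Sum the counts per itemset and keep only the frequent ones."""
    counts = Counter()
    for itemset, one in pairs:
        counts[itemset] += one
    return {s: c for s, c in counts.items() if c >= min_support}

pairs = (kv for t in transactions for kv in mapper(t, k=2))
print(reducer(pairs, min_support=2))
# {('bread', 'milk'): 2, ('bread', 'eggs'): 2}
```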
